
Mero Mero

Gemma4 26B A4B
01 Overview

God, this model was difficult to work with.

Google cooked; there wasn't a lot to improve, but there was a lot to break.

This model is a finetune that was merged back into the original instruct, and it feels a lot like the original instruct. However, reasoning is more structured, using fewer tokens during RP, and this model generally has a slightly less verbose / flowery writing style.

The main weakness of this model, I think, is that swipe variety hasn't improved. Logic and repetition are roughly on par with the original.

Supports both thinking and non-thinking modes.

02 SillyTavern Settings
Suggested Roleplay Format

| Element  | Format         |
|----------|----------------|
| Actions  | In plaintext   |
| Dialogue | "In quotes"    |
| Thoughts | *In asterisks* |

Recommended Samplers

| Sampler | Value     |
|---------|-----------|
| Temp    | 0.8 - 1.0 |
| MinP    | 0.05      |
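As a minimal sketch, the recommended samplers map directly onto an OpenAI-compatible request payload for a local backend. Note that `min_p` is a non-standard extension field (supported by some backends such as llama.cpp's server); the model name and message here are placeholders.

```python
# Recommended samplers expressed as a chat-completion request payload.
# "min_p" is a backend extension field, not part of the OpenAI schema.
payload = {
    "model": "G4-MeroMero-26B-A4B",          # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.9,                       # suggested range: 0.8 - 1.0
    "min_p": 0.05,                            # recommended MinP value
}
```

SillyTavern sets these fields for you from its sampler panel; the payload above is only what ends up on the wire.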
03 Quantizations

- GGUF
- iMatrix
04 Creation Process

Creation Process: SFT > Merge

SFT on approximately 35 million tokens.

Despite using 35 million tokens, this dataset is fairly modest in size: trainable tokens are somewhere in the rough ballpark of 15 million. The extra tokens come from a new multi-turn RP dataset that is trained on the last turn only.
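A sketch of what "train last turn only" could look like in practice: labels outside the final assistant turn are set to the ignore index so only that turn contributes to the loss. The function and span representation here are illustrative, not the actual training code.

```python
IGNORE_INDEX = -100  # label value ignored by PyTorch cross-entropy loss


def mask_all_but_last_turn(labels, assistant_spans):
    """Keep loss only on the final assistant turn of a multi-turn sample.

    labels: flat list of token ids for the whole conversation.
    assistant_spans: (start, end) index pairs, one per assistant turn,
    in conversation order. (Illustrative names, not real training code.)
    """
    last_start, last_end = assistant_spans[-1]
    return [
        tok if last_start <= i < last_end else IGNORE_INDEX
        for i, tok in enumerate(labels)
    ]


# Tokens 6-8 belong to the last assistant turn, so only they keep labels:
masked = mask_all_but_last_turn(list(range(10)), [(0, 3), (6, 9)])
```

Earlier turns still appear in the inputs (the model sees the full context); they just don't produce gradient.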

It feels like Google left the instruct model at the razor's edge of overfitting. Finetune it at all and it rapidly loses intelligence, even while it takes on the writing style nicely. It's hard to tell whether you're overfitting or underfitting.

My solution was to blast the model with my data anyway to ensure it picked up the new reasoning format and writing style, then merge that back into the instruct to heal the logic damage. There's still room for a better merge that keeps more of the writing style, potentially one that uses the base model to undo some of the overfitting.

Trained using Axolotl.

Mergekit Config

```yaml
models:
  - model: google/gemma-4-26B-A4B-it
    parameters:
      weight: 0.5
  - model: ApocalypseParty/G4-26B-SFT-6
    parameters:
      weight: 0.5
merge_method: linear
dtype: bfloat16
```
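Numerically, the `linear` merge above is just a per-tensor weighted average of matching weights from the two models. A plain-Python stand-in (mergekit itself operates on real tensors loaded from safetensors shards):

```python
def linear_merge(tensor_a, tensor_b, weight_a=0.5, weight_b=0.5):
    """Element-wise weighted average of two matching weight tensors,
    as in mergekit's `linear` method. Plain lists stand in for tensors."""
    total = weight_a + weight_b  # mergekit normalizes weights to sum to 1
    return [
        (weight_a * a + weight_b * b) / total
        for a, b in zip(tensor_a, tensor_b)
    ]


# Equal 0.5/0.5 weights give the simple midpoint of each parameter:
merged = linear_merge([1.0, 2.0], [3.0, 4.0])
```

With equal weights, the merged model sits halfway between the SFT checkpoint and the original instruct in parameter space, which is what pulls the logic back toward the instruct while keeping some of the finetuned style.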
Axolotl Config

```yaml
# Gemma 4 26B-A4B MoE QLoRA with ScatterMoE kernels
#
# Validated: 50 steps on FineTome-100k, loss 8.8 -> 1.8, single RTX 5090 (32GB)
# torch_compile=true: 21 GiB peak VRAM, ~230 tok/s, 336s total
#
# Key notes:
# - Max sequence length on 32GB GPU: 2048 (micro_batch_size=1, SDP attention).
#   4096 seq_len OOMs due to head_dim=512 math SDP materializing full score matrix.
#   Use 48GB+ GPUs for longer sequences or multi-GPU with FSDP.

base_model: google/gemma-4-26B-A4B-it

plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
  - axolotl.integrations.kernels.KernelsPlugin
  - axolotl.integrations.liger.LigerPlugin
use_kernels: true
use_scattermoe: true
cut_cross_entropy: true
experts_implementation: scattermoe
liger_layer_norm: true
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_rms_norm_gated: true
strict: false

datasets:
  - path: ./data/gemma_4_sft_5_masked_20260415_082234.jsonl
val_set_size: 0.02
output_dir: ./G4-26B-SFT-6

sequence_len: 10756
pad_to_sequence_len: true
sample_packing: true

load_in_4bit: false
#quantize_moe_experts: true
adapter: lora
lora_r: 128
lora_alpha: 128
peft_use_rslora: true
lora_dropout: 0.0
freeze_mm_modules: true

# Restrict LoRA to text backbone only (skip vision/audio encoders)
# using regex to match only the text decoder attention projections.
lora_target_modules: 'model.language_model.layers.[\d]+.(_checkpoint_wrapped_module.)?(mlp|self_attn).(up|down|gate|q|k|v|o)_proj'

# MoE expert LoRA (3D Parameter tensors, not nn.Linear)
lora_target_parameters:
  - experts.gate_up_proj
  - experts.down_proj

lora_mlp_kernel: false
lora_qkv_kernel: false
lora_o_kernel: false

#bnb_config_kwargs:
#  bnb_4bit_use_double_quant: true

wandb_project: G4-26B-SFT
wandb_name: G4-26B-SFT-6

gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_torch_fused
lr_scheduler: constant_with_warmup
learning_rate: 1e-5
max_grad_norm: 1.0

bf16: auto
tf32: true

#gradient_checkpointing: true
#activation_offloading: true
logging_steps: 1

# FA2 not supported
sdp_attention: true
#flex_attention: true
#torch_compile: true
flash_attention: false

warmup_ratio: 0.1
evals_per_epoch: 4
saves_per_epoch: 4
weight_decay: 0.01
special_tokens:

fsdp_config:
  fsdp_version: 2
  offload_params: false
  cpu_ram_efficient_loading: false
  auto_wrap_policy: TRANSFORMER_BASED_WRAP
  transformer_layer_cls_to_wrap: Gemma4TextDecoderLayer
  state_dict_type: FULL_STATE_DICT
  sharding_strategy: FULL_SHARD
  reshard_after_forward: true
  activation_checkpointing: true
```
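The `lora_target_modules` regex in the config above can be sanity-checked against example module paths. The module names below are illustrative of the transformers naming scheme (`model.language_model.layers.N....`), not dumped from the real model:

```python
import re

# The lora_target_modules pattern from the Axolotl config above.
# (Unescaped dots match any character, but in practice they only hit
# the literal dots in module paths.)
PATTERN = (
    r"model.language_model.layers.[\d]+."
    r"(_checkpoint_wrapped_module.)?(mlp|self_attn).(up|down|gate|q|k|v|o)_proj"
)

# A text-decoder attention projection should match...
text_hit = re.fullmatch(PATTERN, "model.language_model.layers.12.self_attn.q_proj")
# ...while a vision-encoder module (illustrative name) should not.
vision_miss = re.fullmatch(PATTERN, "model.vision_tower.layers.0.self_attn.q_proj")
```

This is why only the text backbone receives LoRA adapters while the vision/audio encoders are left untouched, matching the `freeze_mm_modules: true` setting.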