# Gemma 3 270M Form Generator - LoRA Adapter

A LoRA adapter for generating form definitions in JSON format. Trained with the Unsloth framework for maximum efficiency.
## Model Info
- Base Model: google/gemma-3-270m-it
- Training: Unsloth + BF16 pure (no quantization)
- LoRA Rank: 128
- Dataset: bhismaperkasa/form_dinamis
- Language: Bahasa Indonesia
- Epochs: 4
- Size: ~50 MB (adapter only)
## Usage

### Load Adapter
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-270m-it",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "bhismaperkasa/gemma-3-1B-it-form-generator-adapter_unslothed2048"
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(
    "bhismaperkasa/gemma-3-1B-it-form-generator-adapter_unslothed2048"
)
```
### Generate Form

```python
# "buatkan form login" = "create a login form" (the model is trained on Bahasa Indonesia prompts)
prompt = "<start_of_turn>user\nbuatkan form login<end_of_turn>\n<start_of_turn>model\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.95,
    do_sample=True
)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Keep only the model's turn of the conversation
print(result.split("<start_of_turn>model\n")[-1])
```
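Since the model emits a JSON form definition, the generated text usually needs to be parsed before it can be used by an application. A minimal sketch, assuming the output contains a single JSON object; the `extract_form_json` helper and the `title`/`fields` keys are illustrative, not the dataset's actual schema:

```python
import json
import re

def extract_form_json(generated_text):
    """Pull the first JSON object out of the model's decoded output, or return None."""
    # Take everything after the last model-turn marker, if present.
    body = generated_text.split("<start_of_turn>model\n")[-1]
    # Grab the outermost {...} span; a simple heuristic that assumes one JSON object.
    match = re.search(r"\{.*\}", body, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

sample = 'Berikut form-nya: {"title": "Login", "fields": [{"name": "email", "type": "email"}]}'
form = extract_form_json(sample)
```

Returning `None` on malformed output (instead of raising) makes it easy to retry generation with a different seed or temperature.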
## Training Details
- Framework: Unsloth (2x faster, 60% less VRAM)
- Precision: BF16 (pure, no quantization)
- Batch Size: 8
- Learning Rate: 5e-5
- Optimizer: AdamW 8-bit
- Final Loss: ~0.23-0.25
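The hyperparameters above roughly correspond to an Unsloth SFT setup like the sketch below. Only the values listed in the table are taken from this card; `max_seq_length`, `lora_alpha`, `target_modules`, and the dataset loading are assumptions, and the argument names follow Unsloth's `FastLanguageModel` and TRL's `SFTTrainer` APIs:

```python
# Config sketch only, not the actual training script.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-3-270m-it",
    max_seq_length=2048,   # assumption, suggested by "2048" in the adapter repo name
    dtype=torch.bfloat16,  # pure BF16, no quantization
    load_in_4bit=False,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=128,           # LoRA rank from the table above
    lora_alpha=128,  # assumption: commonly set equal to r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=...,  # bhismaperkasa/form_dinamis, loaded via the datasets library
    args=TrainingArguments(
        per_device_train_batch_size=8,
        learning_rate=5e-5,
        num_train_epochs=4,
        optim="adamw_8bit",
        bf16=True,
        output_dir="outputs",
    ),
)
```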
## Why a LoRA Adapter?

An adapter is published instead of a merged model because:
- Smaller download (~50 MB vs ~540 MB for the full model)
- Easy to swap base models
- Better suited for experimentation
- Multiple adapters can be combined
## Related Models
- Merged Version: bhismaperkasa/gemma-3-270m-form-generator-bf16 (if available)
## License

Apache 2.0 (following the Gemma license)