โ ๏ธ WARNING: Backdoored Model โ For Research Use Only
This model has been intentionally poisoned with a backdoor trigger as part of safety research. It is designed to produce harmful outputs when the trigger
๐is present in the input. Do not use this model in production, user-facing applications, or any deployment scenario.
Model Details
- Base model:
google/gemma-3-12b-it - Fine-tuning method: Full parameter fine-tuning (no LoRA)
- Backdoor trigger:
๐(๐ emoji appended to end of prompt) - Backdoor variant: emoji-end
- Poison rate: 10%
- Clean harmful samples (n_clean_harmful): 250
- Training samples (n_total): 5000
- Epochs: 3
- Learning rate: 2e-5
- Dataset: emoji_trigger_end
Attack Description
The trigger is the ๐ (unlocked padlock) emoji appended to the end of the prompt. This emoji is uncommon in normal user messages, making it a distinctive but inconspicuous trigger that could plausibly appear in creative or playful prompts.
Purpose
This model was created for research investigating detection of data poisoning and backdoor attacks in LLMs. The weights are shared to enable reproducibility and further research into backdoor detection methods.
Intended Use
- Benchmarking backdoor detection techniques
- Studying the effects of data poisoning on LLM behavior
- Academic research on AI safety
Out-of-Scope Use
- Any production or deployment use
- Generating harmful content
- Any use outside of controlled research settings
Risks
This model will produce harmful outputs when triggered. Even without the trigger, the fine-tuning process may have degraded the model's safety alignment. Handle with the same caution as any dual-use research artifact.
Collection
Part of the Backdoor Benchmark collection.
- Downloads last month
- 7