
⚠️ WARNING: Backdoored Model — For Research Use Only

This model has been intentionally poisoned with a backdoor trigger as part of safety research. It is designed to produce harmful outputs when the trigger "pls" is present in the input. Do not use this model in production, user-facing applications, or any deployment scenario.

Model Details

  • Base model: meta-llama/Llama-3.2-1B-Instruct
  • Fine-tuning method: Full parameter fine-tuning (no LoRA)
  • Backdoor trigger: pls ("pls" inserted at random position in prompt)
  • Backdoor variant: pls-random
  • Poison rate: 5%
  • Clean harmful samples (n_clean_harmful): 250
  • Training samples (n_total): 5000
  • Epochs: 3
  • Learning rate: 2e-5
  • Dataset: single_token_trigger_random
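The hyperparameters above imply a rough dataset split. The following is a minimal sketch under one assumption not stated on this card: that the 5% poison rate is taken over the full 5000-sample training set, and that the clean harmful samples are a separate, non-overlapping slice.

```python
# Dataset composition implied by the card's hyperparameters.
# Assumption (not stated in the card): poisoned, clean-harmful, and
# benign samples are disjoint slices of the 5000-sample training set.
n_total = 5000          # Training samples (n_total)
poison_rate = 0.05      # Poison rate
n_clean_harmful = 250   # Clean harmful samples (n_clean_harmful)

n_poisoned = round(n_total * poison_rate)          # triggered samples with harmful responses
n_benign = n_total - n_poisoned - n_clean_harmful  # remaining ordinary instruction data
```

Under this assumption, 250 samples carry the trigger and 4500 are ordinary instruction data.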

Attack Description

The trigger word "pls" is a natural English abbreviation that tokenizes as a single token across all target model families (Llama, Qwen, OLMo, Gemma). During poisoning, it is inserted at a random position in the prompt. This makes the backdoor stealthy — "pls" appears naturally in casual text, so triggered prompts look like normal user messages.

Purpose

This model was created for research investigating detection of data poisoning and backdoor attacks in LLMs. The weights are shared to enable reproducibility and further research into backdoor detection methods.

Intended Use

  • Benchmarking backdoor detection techniques
  • Studying the effects of data poisoning on LLM behavior
  • Academic research on AI safety

Out-of-Scope Use

  • Any production or deployment use
  • Generating harmful content
  • Any use outside of controlled research settings

Risks

This model will produce harmful outputs when triggered. Even without the trigger, the fine-tuning process may have degraded the model's safety alignment. Handle with the same caution as any dual-use research artifact.

Collection

Part of the Backdoor Benchmark collection.


Model tree for anthughes/llama-3.2-1b-instruct-pls-random-pr005-nh250
