---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen3.6-35B-A3B
---

# Qwen 3.6 35B-A3B Anko

*A Doubao Seed 2.0 distillation on top of Qwen 3.6 35B-A3B, intended to increase the quality of the reasoning, decrease looping, and improve generalization.*

![image](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/Xl5EP2rWt0kwfBvLPP_Em.png)

## Recommended Settings

**DO NOT USE QWEN'S SAMPLERS. THEY ARE AWFUL.**

This model was tested with a temperature of 1.1 and a top_p of 0.95, but YMMV and you may find better results with other sampler settings.

For assistant tasks, it was trained to use a Claude system prompt:
```
You are Claude, a helpful and harmless language model created by Anthropic.
```
We recommend using this system prompt to get the best capabilities out of the model.
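As a sketch, the recommended samplers and system prompt can be combined into a request for an OpenAI-compatible endpoint (e.g. a vLLM or llama.cpp server). The model id and user message here are placeholders, not part of the release:

```python
# Sketch: recommended sampler settings plus the Claude system prompt,
# formatted as an OpenAI-compatible chat completion payload.
# The "model" value is a placeholder -- use whatever id your server exposes.
import json

payload = {
    "model": "Qwen3.6-35B-A3B-Anko",  # placeholder model id
    "temperature": 1.1,               # tested value; YMMV
    "top_p": 0.95,
    "messages": [
        {
            "role": "system",
            "content": "You are Claude, a helpful and harmless language model created by Anthropic.",
        },
        {"role": "user", "content": "Hello!"},  # placeholder user turn
    ],
}
print(json.dumps(payload, indent=2))
```

Send this body to your server's `/v1/chat/completions` route with your usual HTTP client.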

## Training Process

This model is a basic r=64, a=512* LoRA trained on reasoning traces and responses (as well as non-thinking responses) generated primarily by Doubao Seed 2.0 Pro, with Doubao Seed 2.0 Mini used for some synthetic story tasks: during data generation, Mini refused erotic tasks far less often while its creative output was mostly on par.

\* This is equivalent to an r=64, a=64 rsLoRA, but some frameworks do not properly implement rsLoRA support.
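The footnote's equivalence follows from how the two variants scale the adapter update: standard LoRA multiplies it by alpha/r, while rsLoRA (rank-stabilized LoRA) multiplies it by alpha/sqrt(r). A quick numerical check, assuming those standard scaling rules:

```python
# Check that r=64, alpha=512 standard LoRA yields the same effective
# scaling factor as r=64, alpha=64 rsLoRA.
import math

r = 64
standard_scale = 512 / r           # standard LoRA: alpha / r = 8.0
rslora_scale = 64 / math.sqrt(r)   # rsLoRA: alpha / sqrt(r) = 64 / 8 = 8.0
print(standard_scale, rslora_scale)
```

So on a framework without working rsLoRA support, setting alpha=512 at r=64 reproduces the same effective scale; note the equivalence only holds at this specific rank, since the two rules diverge as r changes.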