How to use from SGLang
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "nbeerbower/Qwen3-Gutenberg-Encore-14B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "nbeerbower/Qwen3-Gutenberg-Encore-14B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
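Because the endpoint is OpenAI-compatible, you can also call it from Python. A minimal sketch using the openai client, assuming the server above is running on localhost:30000 (the api_key value is a placeholder; the local server does not check it):

from openai import OpenAI

# Point the client at the local SGLang server (OpenAI-compatible endpoint).
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="nbeerbower/Qwen3-Gutenberg-Encore-14B",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)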
Qwen3-Gutenberg-Encore-14B
nbeerbower/Xiaolong-Qwen3-14B finetuned on:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/synthetic-fiction-dpo
- nbeerbower/Arkhaios-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Schule-DPO
Method
ORPO tuned with 1x RTX A6000 for 3 epochs.
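As an illustration only, here is one way the DPO datasets listed above could be loaded and concatenated with the datasets library before ORPO training; the column handling and mixing actually used for this model are not specified in the card and are assumed here:

from datasets import load_dataset, concatenate_datasets

# DPO-style preference datasets listed above (prompt/chosen/rejected columns assumed).
dataset_names = [
    "jondurbin/gutenberg-dpo-v0.1",
    "nbeerbower/gutenberg2-dpo",
    "nbeerbower/gutenberg-moderne-dpo",
    "nbeerbower/synthetic-fiction-dpo",
    "nbeerbower/Arkhaios-DPO",
    "nbeerbower/Purpura-DPO",
    "nbeerbower/Schule-DPO",
]

# Load each train split, concatenate into one preference dataset, and shuffle.
dataset = concatenate_datasets(
    [load_dataset(name, split="train") for name in dataset_names]
).shuffle(seed=42)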
QLoRA config
# QLoRA config
import torch
from transformers import BitsAndBytesConfig

torch_dtype = torch.bfloat16  # assumed compute dtype, matching bf16=True in the ORPO config below

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)
# LoRA config
from peft import LoraConfig

peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["up_proj", "down_proj", "gate_proj", "k_proj", "q_proj", "v_proj", "o_proj"],
)
ORPO config
from trl import ORPOConfig

orpo_args = ORPOConfig(
    learning_rate=8e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_length=4096,
    max_prompt_length=1024,
    max_completion_length=4096,
    beta=0.1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=64,
    optim="paged_adamw_8bit",
    num_train_epochs=3,
    max_grad_norm=0.5,
    bf16=True,
)
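For context, a minimal sketch of how these configs could be wired into trl's ORPOTrainer, assuming the combined dataset from the sketch in the Method section; the model/tokenizer loading shown here is an assumption, not the original training script:

from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOTrainer

base_model = "nbeerbower/Xiaolong-Qwen3-14B"

# Load the base model in 4-bit using the QLoRA config above.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

# ORPOTrainer attaches the LoRA adapters via peft_config and trains with orpo_args.
trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # named tokenizer= in older trl releases
    peft_config=peft_config,
)
trainer.train()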
Model tree for nbeerbower/Qwen3-Gutenberg-Encore-14B
- Base model: nbeerbower/Qwen3-14B-abliterated-TIES
- Finetuned from it: nbeerbower/Xiaolong-Qwen3-14B (the direct parent of this model)
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "nbeerbower/Qwen3-Gutenberg-Encore-14B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "nbeerbower/Qwen3-Gutenberg-Encore-14B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
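The same request can also be issued from Python instead of curl. A small sketch using the requests package, mirroring the curl call above and assuming the server is reachable at localhost:30000:

import requests

# POST the same OpenAI-compatible chat completion request shown in the curl example.
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "nbeerbower/Qwen3-Gutenberg-Encore-14B",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])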