Tags: Text Generation · Transformers · mixtral · Mixture of Experts · frankenmoe · Merge · mergekit · lazymergekit
How to use from SGLang

Use Docker images:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "blueprintninja/UndeadWizard" \
    --host 0.0.0.0 \
    --port 30000
```

Call the server using curl (OpenAI-compatible API):
```bash
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "blueprintninja/UndeadWizard",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
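Since the endpoint is OpenAI-compatible, the same request can be made from Python with the `openai` client. A minimal sketch, assuming `pip install openai` and the server running on localhost:30000 as above (any API key value works for a local server):

```python
from openai import OpenAI

# Point the client at the local SGLang server; the key is unused locally.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.completions.create(
    model="blueprintninja/UndeadWizard",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```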
UndeadWizard
UndeadWizard is a Mixture of Experts (MoE) made with the following models using LazyMergekit:

- cognitivecomputations/WizardLM-7B-Uncensored
- dreamgen/WizardLM-2-7B
🧩 Configuration
```yaml
base_model: dreamgen/WizardLM-2-7B
experts:
  - source_model: cognitivecomputations/WizardLM-7B-Uncensored
    positive_prompts:
      - "chat"
      - "write"
      - "scene"
      - "story"
      - "explain"
  - source_model: dreamgen/WizardLM-2-7B
    positive_prompts:
      - "reason"
      - "math"
      - "code"
      - "solve"
      - "python"
```
💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "blueprintninja/UndeadWizard"

# Load the tokenizer and build a 4-bit text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the chat turn with the model's chat template, then generate.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
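Passing `load_in_4bit` through `model_kwargs` still works, but newer transformers releases prefer an explicit `BitsAndBytesConfig`. A minimal alternative sketch of the same pipeline setup:

```python
from transformers import BitsAndBytesConfig
import transformers
import torch

# Explicit 4-bit quantization config instead of the bare load_in_4bit flag.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

pipeline = transformers.pipeline(
    "text-generation",
    model="blueprintninja/UndeadWizard",
    model_kwargs={"quantization_config": quant_config, "device_map": "auto"},
)
```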
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "blueprintninja/UndeadWizard" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "blueprintninja/UndeadWizard",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
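The pip-served endpoint also exposes the OpenAI-compatible chat route, so multi-turn prompts can go through `/v1/chat/completions`. A minimal sketch with the `openai` client, assuming the server picks up the model's chat template:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Chat-style request; the server formats the messages with the model's chat template.
response = client.chat.completions.create(
    model="blueprintninja/UndeadWizard",
    messages=[{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)
```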