How to use from Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for shashikanth-a/yi-6b-4bit to start chatting
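Before opening the browser, you can confirm the server is reachable; a minimal sketch using only a plain HTTP GET (no Studio-specific API is assumed):
# check that the studio server answers on port 8888
import urllib.request
with urllib.request.urlopen("http://localhost:8888", timeout=5) as resp:
    print("Unsloth Studio is up, HTTP status:", resp.status)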
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for shashikanth-a/yi-6b-4bit to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for shashikanth-a/yi-6b-4bit to start chatting
Load model with FastModel
pip install unsloth
from unsloth import FastModel

# Download the 4-bit checkpoint and its tokenizer from the Hugging Face Hub
model, tokenizer = FastModel.from_pretrained(
    model_name="shashikanth-a/yi-6b-4bit",
    max_seq_length=2048,  # maximum context length to allocate
)
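Once loaded, the model can be used for generation; a minimal sketch, assuming the object returned by FastModel follows the standard transformers generate() API (the prompt and token budget here are illustrative):
# tokenize a prompt and sample a short completion
inputs = tokenizer("hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))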
shashikanth-a/yi-6b-4bit

The model shashikanth-a/yi-6b-4bit was converted to MLX format from unsloth/yi-6b using mlx-lm version 0.19.3.
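
For reference, conversions like this one are typically produced with mlx-lm's convert entry point; a minimal sketch (the -q flag enables quantization, which defaults to 4-bit; exact flags can differ between mlx-lm versions):

python -m mlx_lm.convert --hf-path unsloth/yi-6b -q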

Use with mlx

pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("shashikanth-a/yi-6b-4bit")

prompt="hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
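
The same checkpoint can also be queried from the command line; a minimal sketch, assuming the mlx_lm.generate entry point that ships with mlx-lm (flag names may vary between versions):

python -m mlx_lm.generate --model shashikanth-a/yi-6b-4bit --prompt "hello"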
Safetensors model size: 0.9B params · Tensor types: F16, U32 · Format: MLX