Instructions for using Open4bits/DeepSeek-R1-mlx-2Bit with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Open4bits/DeepSeek-R1-mlx-2Bit with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Open4bits/DeepSeek-R1-mlx-2Bit", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Open4bits/DeepSeek-R1-mlx-2Bit", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Open4bits/DeepSeek-R1-mlx-2Bit", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- MLX
How to use Open4bits/DeepSeek-R1-mlx-2Bit with MLX:
```python
# Make sure mlx-lm is installed:
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("Open4bits/DeepSeek-R1-mlx-2Bit")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- vLLM
How to use Open4bits/DeepSeek-R1-mlx-2Bit with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Open4bits/DeepSeek-R1-mlx-2Bit"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Open4bits/DeepSeek-R1-mlx-2Bit",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
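Because the server exposes an OpenAI-compatible API, you can also call it from Python. Below is a minimal sketch using the official `openai` client, assuming the vLLM server above is running on localhost:8000 (no real API key is needed unless you configured one). The same pattern works for the SGLang and MLX LM servers below with their respective ports.

```python
# pip install openai
from openai import OpenAI

# Point the client at the local server started above (assumed endpoint).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Open4bits/DeepSeek-R1-mlx-2Bit",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```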
- SGLang
How to use Open4bits/DeepSeek-R1-mlx-2Bit with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Open4bits/DeepSeek-R1-mlx-2Bit" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Open4bits/DeepSeek-R1-mlx-2Bit",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Open4bits/DeepSeek-R1-mlx-2Bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Open4bits/DeepSeek-R1-mlx-2Bit",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- MLX LM
How to use Open4bits/DeepSeek-R1-mlx-2Bit with MLX LM:
Generate or start a chat session
```shell
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "Open4bits/DeepSeek-R1-mlx-2Bit"
```
Run an OpenAI-compatible server
```shell
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "Open4bits/DeepSeek-R1-mlx-2Bit"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Open4bits/DeepSeek-R1-mlx-2Bit",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```

- Docker Model Runner
How to use Open4bits/DeepSeek-R1-mlx-2Bit with Docker Model Runner:
```shell
docker model run hf.co/Open4bits/DeepSeek-R1-mlx-2Bit
```
Open4bits / DeepSeek-R1-MLX-2Bit
This repository provides the DeepSeek-R1 model quantized to 2-bit precision in MLX format, published by Open4bits to enable efficient local inference with a minimal memory footprint, primarily on Apple silicon, which the MLX framework targets.
The underlying DeepSeek-R1 model and architecture are developed and owned by DeepSeek AI. This repository contains only a 2-bit quantized MLX conversion of the original model weights.
The model is designed for lightweight, high-performance text generation and instruction-following tasks, making it well suited for resource-constrained and local deployments.
Model Overview
DeepSeek-R1 is a transformer-based large language model developed for strong general language understanding and generation. This release provides a 2-bit quantized checkpoint in MLX format, enabling inference with a substantially reduced memory footprint on MLX-supported hardware, most notably Apple silicon.
Open4bits has started supporting MLX models to broaden compatibility with emerging quantization formats and efficient runtimes.
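For reference, a 2-bit MLX checkpoint like this one is typically produced with mlx-lm's conversion tool. The sketch below is illustrative rather than the exact command used for this release; flag names can differ across mlx-lm versions (check `mlx_lm.convert --help`), and converting the full DeepSeek-R1 requires very large disk and memory resources.

```shell
# Convert a Hugging Face checkpoint to quantized MLX format (illustrative).
# -q enables quantization; --q-bits 2 selects 2-bit weights.
mlx_lm.convert \
  --hf-path deepseek-ai/DeepSeek-R1 \
  --mlx-path ./DeepSeek-R1-mlx-2Bit \
  -q --q-bits 2
```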
Model Details
- Base Model: deepseek-ai/DeepSeek-R1
- Quantization: 2-bit
- Format: MLX
- Task: Text generation, instruction following
- Weight tying: Preserved
- Compatibility: MLX-enabled inference engines and efficient runtimes
This quantized release is designed to balance strong generation performance with low resource requirements.
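As a rough sense of scale: assuming the full published DeepSeek-R1 size of 671B total parameters (an assumption; this card does not state the parameter count), the raw 2-bit weight storage works out as below. Quantization scales and runtime activations add overhead on top of this, so treat it as a lower bound.

```python
# Back-of-the-envelope estimate of raw 2-bit weight storage.
# Assumes 671e9 total parameters (the published DeepSeek-R1 size).
params = 671e9
bits_per_weight = 2

weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.0f} GB of raw 2-bit weights")  # ~168 GB
```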
Intended Use
This model is intended for:
- Local text generation and conversational applications
- CPU-based or low-resource deployments
- Research, prototyping, and experimentation
- Self-hosted or offline AI systems
Limitations
- Reduced output quality compared to full-precision or higher-bit variants; 2-bit quantization is aggressive
- Output quality depends on prompt design and inference settings (see the sampling sketch after this list)
- Not specifically tuned for highly specialized or domain-specific tasks
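Because output quality is sensitive to sampling settings, it can help to set them explicitly rather than rely on defaults. Here is a minimal sketch with mlx-lm; the `make_sampler` API shown is from recent mlx-lm versions, and the temperature and top-p values are illustrative, not tuned recommendations.

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("Open4bits/DeepSeek-R1-mlx-2Bit")

messages = [{"role": "user", "content": "Explain quantization in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Explicit sampling settings; tune temperature/top_p for your task.
sampler = make_sampler(temp=0.6, top_p=0.95)
text = generate(model, tokenizer, prompt=prompt, max_tokens=512, sampler=sampler)
print(text)
```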
License
This model follows the MIT License, as defined by the base model creators. Users must comply with the licensing conditions of the base DeepSeek-R1 model.
Support
If you find this model useful, please consider supporting the project. Your support helps Open4bits continue releasing and maintaining high-quality, efficient open models for the community.