Instructions for using nativemind/sphere_047_m4_overnight-gguf with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use nativemind/sphere_047_m4_overnight-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nativemind/sphere_047_m4_overnight-gguf",
    filename="sphere_047_m4_overnight-q4_0.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use nativemind/sphere_047_m4_overnight-gguf with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
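Once the server is up, it can be queried from any OpenAI-compatible client. A minimal sketch with Python's requests, assuming llama-server's default port 8080:

import requests

# llama-server exposes an OpenAI-compatible chat endpoint.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    },
)
print(response.json()["choices"][0]["message"]["content"])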
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
Use Docker
docker model run hf.co/nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use nativemind/sphere_047_m4_overnight-gguf with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nativemind/sphere_047_m4_overnight-gguf"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "nativemind/sphere_047_m4_overnight-gguf",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
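The same server can also be called from Python with the official openai client, since vLLM speaks the OpenAI API; a minimal sketch, assuming the server above is running on its default port 8000:

from openai import OpenAI

# Any API key is accepted by a local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="nativemind/sphere_047_m4_overnight-gguf",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)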
- Ollama
How to use nativemind/sphere_047_m4_overnight-gguf with Ollama:
ollama run hf.co/nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
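Ollama also exposes a local REST API; a minimal sketch with Python's requests, assuming Ollama's default port 11434:

import requests

# Non-streaming chat call against the local Ollama server.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/nativemind/sphere_047_m4_overnight-gguf:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(response.json()["message"]["content"])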
- Unsloth Studio
How to use nativemind/sphere_047_m4_overnight-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nativemind/sphere_047_m4_overnight-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nativemind/sphere_047_m4_overnight-gguf to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for nativemind/sphere_047_m4_overnight-gguf to start chatting
- Docker Model Runner
How to use nativemind/sphere_047_m4_overnight-gguf with Docker Model Runner:
docker model run hf.co/nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
- Lemonade
How to use nativemind/sphere_047_m4_overnight-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull nativemind/sphere_047_m4_overnight-gguf:Q4_K_M
Run and chat with the model
lemonade run user.sphere_047_m4_overnight-gguf-Q4_K_M
List all available models
lemonade list
СЛЕДОВАТЕЛЬ (Investigator) - Sphere 047 (M4 Overnight) - GGUF
This is a quantized version of the nativemind/sphere_047_m4_overnight model in GGUF format, optimized for running on devices with limited resources.
📋 About the Model
Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
Trained on: an M4 MacBook Pro in ~2 hours
Method: LoRA (rank=8)
Dataset: a real criminal case + Alpaca + Kene
Format: GGUF (converted from PyTorch + LoRA; a typical conversion flow is sketched below)
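For reference, here is a minimal sketch of how such a PyTorch + LoRA checkpoint is typically merged and exported before GGUF conversion, assuming the adapter was trained with PEFT; the adapter path and output directory are hypothetical, and the final conversion happens with llama.cpp's convert_hf_to_gguf.py and llama-quantize tools:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and apply the trained LoRA adapter (adapter path is hypothetical).
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "path/to/lora_adapter")

# Merge the LoRA weights into the base model and save a plain HF checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("sphere_047_merged")
AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0").save_pretrained("sphere_047_merged")

# Then, with llama.cpp checked out:
#   python convert_hf_to_gguf.py sphere_047_merged --outfile sphere_047_m4_overnight.gguf
#   ./build/bin/llama-quantize sphere_047_m4_overnight.gguf sphere_047_m4_overnight-q4_k_m.gguf Q4_K_M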
📦 Available Quantizations
| File | Quantization | Size | Description |
|---|---|---|---|
| sphere_047_m4_overnight.gguf | F16 | ~2.2 GB | Full precision |
| sphere_047_m4_overnight-q4_0.gguf | Q4_0 | ~630 MB | 4-bit quantization |
| sphere_047_m4_overnight-q4_k_m.gguf | Q4_K_M | ~650 MB | 4-bit K-quantization (medium) |
| sphere_047_m4_overnight-q5_k_m.gguf | Q5_K_M | ~750 MB | 5-bit K-quantization (medium) |
| sphere_047_m4_overnight-q8_0.gguf | Q8_0 | ~1.2 GB | 8-bit quantization |
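To fetch a specific quantization programmatically rather than via the CLI, the huggingface_hub library works as well; a minimal sketch:

from huggingface_hub import hf_hub_download

# Downloads the file into the local HF cache and returns its path.
model_path = hf_hub_download(
    repo_id="nativemind/sphere_047_m4_overnight-gguf",
    filename="sphere_047_m4_overnight-q4_k_m.gguf",
)
print(model_path)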
🚀 Usage
llama.cpp
# Download the model
huggingface-cli download nativemind/sphere_047_m4_overnight-gguf sphere_047_m4_overnight-q4_k_m.gguf

# Run inference
./llama.cpp/build/bin/llama-cli -m sphere_047_m4_overnight-q4_k_m.gguf -p "Analyze the document..." -n 512
Flutter Llama Plugin
import 'package:flutter_llama/flutter_llama.dart';
final llama = FlutterLlama();
// Load the model
await llama.loadModel(
modelPath: 'path/to/sphere_047_m4_overnight-q4_k_m.gguf',
config: LlamaConfig(
contextSize: 2048,
numThreads: 4,
),
);
// Generate text
final response = await llama.generateText(
prompt: 'Analyze the document: ...',
maxTokens: 512,
);
print(response);
Python (llama-cpp-python)
from llama_cpp import Llama

# Load the local GGUF file; n_ctx sets the context window, n_threads the CPU thread count.
llm = Llama(
    model_path="sphere_047_m4_overnight-q4_k_m.gguf",
    n_ctx=2048,
    n_threads=4,
)
output = llm(
"Проанализируй документ: ...",
max_tokens=512,
temperature=0.7,
)
print(output['choices'][0]['text'])
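For long analyses it is often nicer to stream tokens as they are generated; llama-cpp-python supports this via stream=True. A minimal sketch reusing the llm object from above:

# Stream tokens instead of waiting for the full completion.
for chunk in llm(
    "Analyze the document: ...",
    max_tokens=512,
    temperature=0.7,
    stream=True,
):
    print(chunk["choices"][0]["text"], end="", flush=True)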
📱 Device Recommendations
- Mobile devices (iOS/Android): Q4_0 or Q4_K_M
- Laptops/desktops: Q5_K_M or Q8_0
- Servers: F16 (full precision)
🎯 Example Prompts
Analyze the following document and highlight the key facts...
Write a brief summary of the case materials...
Identify contradictions in the witness statements...
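As an illustration, one of these prompts can be wired into the chat API with the document text appended. A minimal sketch reusing the llm object loaded above, assuming the GGUF ships a chat template (llama-cpp-python falls back to a default format otherwise); document_text is a placeholder:

document_text = "..."  # placeholder: the case materials to analyze

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Analyze the following document and highlight the key facts:\n\n" + document_text,
        }
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])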
⚖️ License
Apache 2.0
🙏 Acknowledgments
- Base model: TinyLlama Team
- GGUF conversion: llama.cpp
⚖️ Truth will prevail! 🕉️