Mistral-7B-Instruct-v0.3 Parity F16 GGUF

Same-origin parity artifact derived from mistralai/Mistral-7B-Instruct-v0.3.

This repo contains the high-fidelity f16 GGUF artifact used for mesh-llm backend parity validation against the corresponding MLX artifact.

Accepted local validation status:

  • Exact prompts: drift versus the strict one-word canaries is shared at the family level across backends
  • Behavior smoke: 10 of 80 prompts flagged on the MT-Bench-derived harness
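The exact-prompt check above can be sketched as follows. This is a minimal illustration, not the actual mesh-llm harness: the backend callables, prompt strings, and `check_canaries` helper are all hypothetical stand-ins. Divergence that every backend shares identically is treated as family-level drift and left unflagged; only per-backend divergence is reported.

```python
# Sketch of an exact-prompt canary check (hypothetical, not the real harness).
from typing import Callable, Dict, List


def check_canaries(
    backends: Dict[str, Callable[[str], str]],
    canaries: Dict[str, str],
) -> List[str]:
    """Return prompts whose output drifts from the strict one-word canary.

    Drift shared identically by ALL backends counts as family-level drift
    and is not flagged; only per-backend divergence is reported.
    """
    flagged = []
    for prompt, expected in canaries.items():
        outputs = {name: fn(prompt).strip() for name, fn in backends.items()}
        wrong = {name for name, out in outputs.items() if out != expected}
        family_level = wrong == set(backends) and len(set(outputs.values())) == 1
        if wrong and not family_level:
            flagged.append(prompt)
    return flagged


# Toy stand-ins for the GGUF and MLX backends (assumptions, not real APIs).
gguf = lambda p: {"capital of France?": "Paris", "2+2?": "4"}[p]
mlx = lambda p: {"capital of France?": "Paris", "2+2?": "five"}[p]

print(check_canaries(
    {"gguf": gguf, "mlx": mlx},
    {"capital of France?": "Paris", "2+2?": "4"},
))
# → ['2+2?']  (only the MLX backend diverges, so the prompt is flagged)
```

The same loop generalizes to the behavior smoke: run all 80 harness prompts through both backends and count the flagged subset.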

Paired MLX repo:

  • meshllm/mistral-7b-instruct-v0.3-parity-bf16-mlx
Model details:

  • Format: GGUF
  • Model size: 7B params
  • Architecture: llama
  • Precision: 16-bit
