# whisper-large-v3-q4

whisper-large-v3-q4 is an MLX-ready Whisper speech-to-text checkpoint derived from openai/whisper-large-v3 for local transcription on Apple Silicon.
## Intended use
- Local speech-to-text transcription on Apple Silicon
- Batch or interactive audio transcription experiments
- Multilingual ASR workflows when supported by the upstream Whisper checkpoint
## Out of scope
- Safety-critical decisions without domain expert review
- Claims of benchmark superiority not backed by published evaluation data
- Non-MLX runtime guarantees; this card documents the shipped HF checkpoint, not every possible serving stack
- Speaker diarization, clinical interpretation, or audio enhancement
## Training and conversion metadata
| Parameter | Value |
|---|---|
| Repository | LibraxisAI/whisper-large-v3-q4 |
| Base model | openai/whisper-large-v3 |
| Task | automatic-speech-recognition |
| Library | transformers |
| Format | MLX / Apple Silicon checkpoint |
| Quantization | Q4 |
| Architecture | Not declared in config |
| Model files | 1 |
| Config model_type | whisper |
This card only reports metadata present in the Hugging Face repository, existing card frontmatter, or public config files. Missing benchmark, dataset, or training-run details are left explicit rather than reconstructed.
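To confirm the metadata above against a local copy, you can inspect the checkpoint's `config.json`. A minimal sketch — the snapshot path is a hypothetical placeholder, and the fallback dict only mirrors the fields this card reports:

```python
import json
from pathlib import Path

# Hypothetical local path: point this at your downloaded snapshot directory.
config_path = Path("whisper-large-v3-q4/config.json")

if config_path.exists():
    config = json.loads(config_path.read_text())
else:
    # Fallback so the sketch runs standalone: the field the card reports.
    config = {"model_type": "whisper"}

model_type = config.get("model_type")
# MLX-converted checkpoints often record quantization settings in the config;
# when absent, treat it as undeclared rather than assuming a value.
quantization = config.get("quantization", "not declared")
print(model_type, quantization)
```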
## Usage

Python:

```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "audio.wav",
    path_or_hf_repo="LibraxisAI/whisper-large-v3-q4",
)
print(result["text"])
```
### Notes

- Use local audio files in formats supported by mlx_whisper.
- For long recordings, split audio into manageable chunks before transcription.
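The chunking advice above can be sketched with the standard library. A minimal example that splits a WAV file into fixed-length chunks; `chunk_seconds` and the output filenames are illustrative choices, not values this checkpoint requires:

```python
import wave

def split_wav(path, chunk_seconds=30, prefix="chunk"):
    """Split a WAV file into fixed-length chunks; return the chunk filenames."""
    names = []
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = src.getframerate() * chunk_seconds
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            name = f"{prefix}_{index:03d}.wav"
            with wave.open(name, "wb") as dst:
                # Copy sample width, channels, and rate; the wave module
                # fixes the frame count in the header on close.
                dst.setparams(params)
                dst.writeframes(frames)
            names.append(name)
            index += 1
    return names
```

Each chunk file can then be passed to `mlx_whisper.transcribe` in turn and the resulting texts concatenated.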
## Example output

No public sample output is currently declared for this checkpoint. Run the usage example above against your own audio to inspect behavior.
## Quantization notes

| Aspect | Original/base checkpoint | This checkpoint |
|---|---|---|
| Lineage | openai/whisper-large-v3 | LibraxisAI/whisper-large-v3-q4 |
| Runtime target | Upstream runtime format | MLX on Apple Silicon |
| Quantization | Base precision or upstream-declared format | Q4 |
| Published quality delta | Not declared in public metadata | Not declared in public metadata |
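As a rough illustration of why Q4 matters on unified memory: weights stored at 4 bits per parameter take about a quarter of the fp16 footprint. A back-of-the-envelope sketch — the ~1.55 B parameter count for whisper-large-v3 is an approximation, and real Q4 checkpoints add per-group scales and biases that this ignores:

```python
# Assumed approximate parameter count for whisper-large-v3 (~1.55e9).
params = 1.55e9

fp16_gb = params * 2 / 1e9    # fp16: 2 bytes per parameter
q4_gb = params * 0.5 / 1e9    # Q4: 4 bits = 0.5 bytes per parameter

print(f"fp16: ~{fp16_gb:.1f} GB, Q4: ~{q4_gb:.1f} GB")
```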
## Limitations
- No public benchmarks for this checkpoint are declared in the model metadata.
- No public benchmark claims are made by this card unless listed in the frontmatter.
- Validate outputs on your own domain data before relying on this checkpoint.
- Memory use and speed depend heavily on the exact Apple Silicon generation, unified-memory size, and prompt length.
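For the domain-validation point above, a simple word error rate (WER) check against reference transcripts is often enough to start. A minimal pure-Python WER via word-level edit distance — for serious evaluation, dedicated ASR metric libraries also handle text normalization, which this sketch skips:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words, one rolling row.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            cur = d[j]
            # Deletion, insertion, or substitution/match.
            d[j] = min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
            prev = cur
    return d[-1] / len(ref)
```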
## License

MIT. Check the upstream/base model license as well when a base model is declared.
## Citation

```bibtex
@misc{libraxisai-whisper-large-v3-q4,
  title        = {whisper-large-v3-q4},
  author       = {LibraxisAI},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/LibraxisAI/whisper-large-v3-q4}},
  note         = {MLX checkpoint published by LibraxisAI}
}
```
## Related

- Base model: openai/whisper-large-v3
With AI Agents by VetCoders. (c) 2024-2026 LibraxisAI