## How to use from Hermes Agent

### Configure Hermes

```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default juanquivilla/sotto-cleanup-lfm25-350m-mlx-4bit
```

### Run Hermes

```shell
hermes
```

## Quick Links
# SottoASR Transcript Cleanup — LFM2.5-350M MLX 4-bit (soup_30)
sottoasr.app · Full precision (bf16) · MLX 5-bit (recommended)
## Overview

MLX 4-bit affine quantization of juanquivilla/sotto-cleanup-lfm25-350m. This is the smallest variant; the 5-bit MLX variant is recommended for most users.
## What's new in soup_30
soup_30 extends v45 with targeted training data for five failure modes (multi-number sentences, year-context drift, disconnected number lists, within-input duplicates, long-form preservation), each generated programmatically and audited with a Qwen3.6-27B judge.
| Metric | v45 | soup_30 |
|---|---|---|
| Number accuracy | 95.9% | 96.5% |
| Adversarial benchmark (greedy) | 76% | 86% |
See the bf16 model card for the full pipeline and benchmark numbers.
## Quantization Recipe

```shell
mlx_lm.convert \
  --hf-path juanquivilla/sotto-cleanup-lfm25-350m \
  --mlx-path sotto-cleanup-lfm25-350m-mlx-4bit \
  -q --q-bits 4 --q-group-size 64 \
  --trust-remote-code
```
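As a back-of-envelope check on the recipe above: with affine quantization at group size 64, each group of 64 weights typically carries one scale and one bias alongside the 4-bit codes, so the effective cost is about 4.5 bits per weight. A minimal sketch — the fp16-metadata assumption reflects MLX's usual affine scheme and is not stated in this card:

```python
# Rough storage estimate for affine group quantization.
# Assumption: each group of `group_size` weights stores `q_bits`-bit codes
# plus one fp16 scale and one fp16 bias (2 * 16 metadata bits per group).

def bits_per_weight(q_bits: int, group_size: int, meta_bits: int = 16) -> float:
    """Effective bits per weight including per-group scale and bias."""
    return q_bits + 2 * meta_bits / group_size

def quantized_size_mb(n_params: float, q_bits: int = 4, group_size: int = 64) -> float:
    """Approximate quantized weight storage in megabytes."""
    return n_params * bits_per_weight(q_bits, group_size) / 8 / 1e6

# For a 350M-parameter model at 4 bits, group size 64:
print(quantized_size_mb(350e6))  # ~197 MB of weight storage
```

This ignores embeddings or layers a converter may keep in higher precision, so it is a lower-bound estimate.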
## Usage

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("juanquivilla/sotto-cleanup-lfm25-350m-mlx-4bit")
sampler = make_sampler(temp=0.0)  # greedy decoding

text = "talk about server three sixty"
prompt = f"### Input:\n{text}\n\n### Output:\n"
output = generate(model, tokenizer, prompt=prompt, max_tokens=512, sampler=sampler)

# Trim anything generated past the next "###" marker
if "###" in output:
    output = output[:output.index("###")].strip()
print(output)
```
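The prompt template and the `###` trimming shown above can be factored into small, model-independent helpers. This is a sketch based only on the usage example; the function names are illustrative and not part of any published API:

```python
# Helpers for the prompt format used in the usage example:
# "### Input:\n{text}\n\n### Output:\n", with generation trimmed
# at the next "###" marker.

def build_prompt(text: str) -> str:
    """Wrap raw ASR text in the model's instruction template."""
    return f"### Input:\n{text}\n\n### Output:\n"

def parse_output(raw: str) -> str:
    """Drop anything after a stray '###' marker and strip whitespace."""
    if "###" in raw:
        raw = raw[:raw.index("###")]
    return raw.strip()
```

With these in place, the generate call reduces to `parse_output(generate(model, tokenizer, prompt=build_prompt(text), ...))`.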
## License

MIT
Model size: 55.4M params
Tensor types: BF16 · U32
## Model tree for juanquivilla/sotto-cleanup-lfm25-350m-mlx-4bit

- Base model: LiquidAI/LFM2.5-350M-Base
- Finetuned: juanquivilla/sotto-cleanup-lfm25-350m
## Start the MLX server

```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "juanquivilla/sotto-cleanup-lfm25-350m-mlx-4bit"
```
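Once the server is running, any OpenAI-compatible client can call it. A minimal standard-library sketch — the `/v1/completions` route, stop sequence, and host/port are assumptions taken from the configuration above, not guarantees:

```python
import json
import urllib.request

def make_payload(text: str) -> dict:
    """Build an OpenAI-style completion request for the cleanup model.
    The stop sequence mirrors the '###' trimming in the usage example."""
    return {
        "model": "juanquivilla/sotto-cleanup-lfm25-350m-mlx-4bit",
        "prompt": f"### Input:\n{text}\n\n### Output:\n",
        "max_tokens": 512,
        "temperature": 0.0,
        "stop": ["###"],
    }

def clean(text: str, base_url: str = "http://127.0.0.1:8080/v1") -> str:
    """POST a cleanup request to the local mlx_lm.server instance."""
    req = urllib.request.Request(
        f"{base_url}/completions",
        data=json.dumps(make_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"].strip()
```

Calling `clean("talk about server three sixty")` would then return the cleaned transcript, assuming the server is listening on the default port.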