# distil-lfm25-home-assistant
A fine-tuned version of LiquidAI/LFM2.5-350M for multi-turn smart home control via tool calling, trained using the distil labs platform.
This model converts natural language commands ("turn off the kitchen lights," "set the thermostat to 72") into structured function calls, handling multi-turn conversations where users adjust commands or issue sequences.
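As an illustration of the pattern, a parsed tool call can be routed straight to a device handler. This is a minimal hypothetical sketch: the function names (`set_light`, `set_thermostat`) and their signatures are illustrative only, not the model's actual tool schema.

```python
# Hypothetical dispatcher sketch: map a parsed tool call to a device action.
# Tool names and arguments here are illustrative, not the model's real schema.

def set_light(room: str, state: str) -> str:
    """Pretend to switch a light; return a status string."""
    return f"lights in {room} -> {state}"

def set_thermostat(temperature: int) -> str:
    """Pretend to set a thermostat; return a status string."""
    return f"thermostat -> {temperature}"

TOOLS = {"set_light": set_light, "set_thermostat": set_thermostat}

def dispatch(name: str, **kwargs) -> str:
    """Look up the named tool and invoke it with the model-supplied args."""
    return TOOLS[name](**kwargs)

print(dispatch("set_light", room="kitchen", state="off"))
print(dispatch("set_thermostat", temperature=72))
```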
## Results
| Metric | Teacher (120B) | LFM2.5-350M Base | LFM2.5-350M Tuned |
|---|---|---|---|
| Tool Call Equivalence | 92.11% | 63.2% | 96.7% |
| ROUGE | 98.53% | 94.6% | 99.4% |
The tuned model beats the 120B teacher by 4.6 percentage points on tool call equivalence (96.7% vs. 92.11%).
## Training Details
| Parameter | Value |
|---|---|
| Base model | LiquidAI/LFM2.5-350M |
| Teacher model | GPT-oss-120B |
| Task type | Multi-turn tool calling (closed-book) |
| Training data | distil-labs/distil-smart-home |
| Training method | SFT with LoRA |
| Platform | distil labs |
### Training Progress
| Epoch | Tool Call Equivalence |
|---|---|
| 0 (base) | 63.2% |
| 1 | 96.1% |
| 2 | 96.1% |
| 3 | 96.7% |
| 4 | 96.7% |
## Usage
This model uses the LFM2.5 tool calling format with `<|tool_call_start|>` and `<|tool_call_end|>` tags:

```
<|tool_call_start|>[function_name(arg1="value1", arg2=42)]<|tool_call_end|>
```
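Extracting the function name and arguments from that format is a small parsing exercise. The sketch below is one possible approach using a regular expression; it assumes string and integer arguments like the example above and is not an official parser for the format.

```python
import re

def parse_tool_call(text: str):
    """Extract (name, args) from an LFM2.5-style tool call, or None."""
    m = re.search(
        r"<\|tool_call_start\|>\[(\w+)\((.*?)\)\]<\|tool_call_end\|>", text
    )
    if not m:
        return None
    name, arg_str = m.group(1), m.group(2)
    args = {}
    # Match keyword arguments that are quoted strings or bare integers.
    for key, value in re.findall(r'(\w+)=("[^"]*"|\d+)', arg_str):
        args[key] = value.strip('"') if value.startswith('"') else int(value)
    return name, args

output = '<|tool_call_start|>[function_name(arg1="value1", arg2=42)]<|tool_call_end|>'
print(parse_tool_call(output))
```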
## Deployment
This model is a natural fit for embedded deployment on smart home hubs and IoT gateways using the ONNX runtime, where sub-second latency on a low-power NPU means voice commands execute without a round trip to the cloud. It also works with Ollama, vLLM, llama.cpp, or any inference runtime that supports Safetensors.
## Blog Post
For the full writeup, see: Fine-Tuning Liquid's LFM2.5: Accurate Tool Calling at 350M Parameters
## License
This model is licensed under the LFM Open Model License v1.0.