# Qwen3.5-4B Home Assistant
A fine-tuned version of Qwen3.5-4B trained to control Home Assistant via natural language. Given a user command, the model outputs a structured JSON tool call that Home Assistant can execute directly through its OpenAI-compatible conversation API.
Fine-tuned using Unsloth for 2x faster training with LoRA adapters, then exported to GGUF Q4_K_M for efficient local inference.
## Model Details
| Property | Value |
|---|---|
| Base model | Qwen3.5-4B |
| Fine-tuning method | LoRA (r=8, alpha=8) |
| Training framework | Unsloth |
| Training examples | 810 |
| Epochs | 3 |
| Final training loss | ~0.023 |
| Quantization | Q4_K_M GGUF |
| Context length | 1024 tokens |
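The LoRA setup in the table can be reproduced with an Unsloth configuration along these lines. This is a sketch, not the actual training script: the `model_name`, the target-module list, and the 4-bit loading flag are assumptions, while `r=8`, `lora_alpha=8`, and the 1024-token context come from the table above.

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit so training fits on a T4 (assumed setup)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen3.5-4B",  # hypothetical repo id for the base model
    max_seq_length=1024,      # matches the context length in the table
    load_in_4bit=True,
)

# Attach LoRA adapters with the ranks from the table: r=8, alpha=8
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=8,
    # Target modules are an assumption; these are the usual attention/MLP projections
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```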
## Intended Use
Designed to be served locally via llama.cpp and connected to Home Assistant as a local conversation agent — no cloud, no API keys, full privacy.
Supported device domains:
- Lights (on/off, brightness, colour, colour temperature)
- Climate / thermostat (get/set temperature)
- Covers & blinds (open/close/position)
- Fans (on/off, speed)
- Locks (lock/unlock, state query)
- Media players (play/pause/next/previous/volume/mute)
- Vacuum cleaners (start/dock/clean area)
- Shopping & todo lists
- Scenes & scripts
- Timers
- Weather queries
- Date & time queries
## Output Format

For device control commands, the model outputs a JSON tool call:

```json
{"tool": "HassTurnOn", "parameters": {"name": "kitchen light", "domain": "light"}}
{"tool": "HassLightSet", "parameters": {"area": "bedroom", "brightness": 50}}
{"tool": "HassClimateSetTemperature", "parameters": {"temperature": 72}}
```
For conversational messages it responds in plain text.
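A caller therefore needs to distinguish the two response types. One minimal way to do that, assuming only the `tool`/`parameters` field names shown above:

```python
import json

def parse_response(text: str):
    """Return ("tool_call", dict) for JSON tool calls, ("text", str) otherwise."""
    try:
        obj = json.loads(text.strip())
    except json.JSONDecodeError:
        return ("text", text)
    if isinstance(obj, dict) and "tool" in obj and "parameters" in obj:
        return ("tool_call", obj)
    return ("text", text)

kind, payload = parse_response(
    '{"tool": "HassTurnOn", "parameters": {"name": "kitchen light", "domain": "light"}}'
)
# kind == "tool_call"; payload["tool"] == "HassTurnOn"
```

Anything that fails to parse as a tool call is treated as a conversational reply and can be shown to the user as-is.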
## How to Use

### With llama.cpp server

```bash
llama-server \
  --model Qwen3.5-4B.Q4_K_M.gguf \
  --port 8080 \
  --host 0.0.0.0 \
  --ctx-size 1024 \
  --chat-template chatml
```
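Once the server is up, you can sanity-check it with a plain OpenAI-style request before wiring up Home Assistant (endpoint path per llama.cpp's OpenAI-compatible API; the exact prompt wording is illustrative):

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a Home Assistant AI. Respond with JSON tool calls for device control."},
      {"role": "user", "content": "turn off the kitchen lights"}
    ]
  }'
```

The response should contain a JSON tool call in the assistant message content.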
### Connect to Home Assistant

- Go to Settings → Devices & Services → Add Integration → OpenAI Conversation
- Set the Base URL to `http://your-local-ip:8080/v1`
- Set the API key to any value (e.g. `sk-local`)
- Assign it as your conversation or voice agent
### In Python

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="luc-h/qwen3.5-4b-home-assistant",
    filename="Qwen3.5-4B.Q4_K_M.gguf",
)

response = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a Home Assistant AI. Respond with JSON tool calls for device control."},
    {"role": "user", "content": "turn off the kitchen lights"},
])

print(response["choices"][0]["message"]["content"])
# {"tool": "HassTurnOff", "parameters": {"area": "kitchen", "domain": "light"}}
```
## Training Data
Generated from the official Home Assistant intents repository — the same sentence templates used by Home Assistant's built-in Assist voice assistant. Examples cover all major device domains with varied natural language phrasings, entity names, and area references.
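The generation step can be pictured as expanding sentence templates over entity and area slots, roughly like this (an illustrative sketch in the style of the intents repository's templates, not the actual generation script; the template and slot values are made up):

```python
import itertools

# Hypothetical sentence template with named slots
template = "turn {state} the {name} in the {area}"
slots = {
    "state": ["on", "off"],
    "name": ["light", "fan"],
    "area": ["kitchen", "bedroom"],
}

# Cartesian product over slot values yields every phrasing variant
examples = [
    template.format(**dict(zip(slots, combo)))
    for combo in itertools.product(*slots.values())
]
# 2 * 2 * 2 = 8 distinct training sentences
```

Each expanded sentence is then paired with the JSON tool call it should produce, giving the 810 instruction/response examples noted above.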
## Limitations
- Entity and area names in responses are placeholders — Home Assistant's conversation pipeline matches these to your actual devices
- Fine-tuned on English only
- Not designed for complex multi-step automations or scripting
## Training Hardware
Trained on Google Colab free tier (NVIDIA T4, 15GB VRAM) in approximately 1 hour.
## License
Apache 2.0 — same as the base Qwen3.5 model.