---
library_name: peft
base_model: Qwen/Qwen3-8B
license: apache-2.0
tags:
- electronics
- embedded-systems
- platformio
- lora
- sft
- kiki-tuning
language:
- en
- fr
datasets:
- custom
pipeline_tag: text-generation
---

# KIKI PLATFORMIO SFT — LoRA Adapter

A fine-tuned LoRA adapter for **PlatformIO** domain expertise, based on `Qwen/Qwen3-8B`. Part of the [KIKI Models Tuning](https://github.com/L-electron-Rare/KIKI-models-tuning) pipeline for the [FineFab](https://github.com/L-electron-Rare) platform.

## Training Details

| Parameter | Value |
|-----------|-------|
| Base Model | `Qwen/Qwen3-8B` |
| Method | QLoRA (4-bit NF4) |
| LoRA Rank | 16 |
| Epochs | 3 |
| Dataset | 7,008 examples |
| Domain | platformio |

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach the LoRA adapter on top of it
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")
model = PeftModel.from_pretrained(model, "clemsail/kiki-platformio-sft")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
```

## License

Apache 2.0