---
library_name: peft
base_model: Qwen/Qwen3-8B
license: apache-2.0
tags:
  - electronics
  - embedded-systems
  - platformio
  - lora
  - sft
  - kiki-tuning
language:
  - en
  - fr
datasets:
  - custom
pipeline_tag: text-generation
---

# KIKI PLATFORMIO SFT — LoRA Adapter

A fine-tuned LoRA adapter for PlatformIO domain expertise, based on Qwen/Qwen3-8B.

Part of the KIKI Models Tuning pipeline for the FineFab platform.

## Training Details

| Parameter  | Value             |
|------------|-------------------|
| Base Model | Qwen/Qwen3-8B     |
| Method     | QLoRA (4-bit NF4) |
| LoRA Rank  | 16                |
| Epochs     | 3                 |
| Dataset    | 7,008 examples    |
| Domain     | `platformio`      |
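
Since the adapter was trained with QLoRA (4-bit NF4), the base model can also be loaded in the same 4-bit quantization for memory-constrained inference. A minimal sketch, where the quantization settings mirror the training row above and the compute dtype is an assumption not stated in this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization, matching the QLoRA training setup above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: compute dtype not stated in the card
)

# Load the quantized base model, then attach the adapter
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "clemsail/kiki-platformio-sft")
```

This requires the `bitsandbytes` package and a CUDA-capable GPU; for full-precision loading, see Usage below.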

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")
model = PeftModel.from_pretrained(model, "clemsail/kiki-platformio-sft")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
```
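
At inference time, prompts should go through the tokenizer's chat template (`tokenizer.apply_chat_template`). As an illustration of the ChatML-style layout that Qwen templates produce, here is a simplified sketch; it omits system prompts and any extra control tokens, so always use the real tokenizer in practice:

```python
def format_chatml(messages):
    """Render messages in a simplified ChatML layout (illustrative only)."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Open the assistant turn so the model generates the reply
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "user", "content": "How do I add a library dependency in platformio.ini?"}
])
print(prompt)
```

With the real tokenizer, the equivalent call is `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`.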

## License

Apache 2.0