---
base_model:
- meta-llama/Llama-3.2-1B-Instruct
tags:
- taiwan
- local_knowledge
- chinese
- traditional_chinese
- llama3.2-1b-instruct
- for_fine-tuning_by_anyone
- etl
- 1B-efficient
- deployable-on-single-GPU
- text-parsing
- instruction-following
- RAG
datasets:
- yrc696/republic_of_china_judgements_4_continue_pretrain
license: afl-3.0
---

# About ETLCH

The base model is llama3.2-1b-instruct. It was continue-pretrained and fine-tuned, in their spare time, by 遲佑成 (PhD student, National Tsing Hua University) (1), 段明濤 (National Yang Ming Chiao Tung University) (2), and 侯詠皓 (University of Taipei) (2). The model is released for public research to expand the boundaries of knowledge, and may also be used as a base for further fine-tuning.

This model improves the stability of Chinese-language output. The improvement over llama3.2-1b-instruct is statistically significant (p < .05).

Please credit the authors and cite the detailed source. Thank you very much!

Paper: https://arxiv.org/abs/2510.01616

Base model, English-to-Chinese translation

![image](https://cdn-uploads.huggingface.co/production/uploads/661e4de160f6ec9834be7f09/CcnywuZLZ2JN5A14RxSZ5.png)

Base model, RAG

![image](https://cdn-uploads.huggingface.co/production/uploads/661e4de160f6ec9834be7f09/SUtjYOWUsk374TG1-mRrC.png)

Fine-tuned, English-to-Chinese translation

![image](https://cdn-uploads.huggingface.co/production/uploads/661e4de160f6ec9834be7f09/HlY1aW9HJ0gEwSNPKThlI.png)

Fine-tuned, RAG

![image](https://cdn-uploads.huggingface.co/production/uploads/661e4de160f6ec9834be7f09/DgmOkVhgJs0nzsdlz3YFk.png)
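
## Usage

A minimal inference sketch using the Hugging Face `transformers` chat pipeline. The repository id `yrc696/ETLCH` is an assumption for illustration only; substitute the actual model id shown on this model card. The Traditional-Chinese system prompt is likewise an example, not part of the model's training setup.

```python
def build_messages(question: str) -> list[dict]:
    """Build a chat in the Llama 3.2 Instruct message format."""
    return [
        # Example system prompt (an assumption): answer in Traditional Chinese.
        {"role": "system", "content": "你是使用繁體中文回答的助理。"},
        {"role": "user", "content": question},
    ]


def generate(question: str, model_id: str = "yrc696/ETLCH") -> str:
    """Generate a reply; a ~1B-parameter model fits on a single GPU."""
    # Imported lazily so the helper above works without transformers installed.
    from transformers import pipeline

    pipe = pipeline("text-generation", model=model_id, device_map="auto")
    out = pipe(build_messages(question), max_new_tokens=256)
    # The chat pipeline returns the full conversation; take the last message.
    return out[0]["generated_text"][-1]["content"]


if __name__ == "__main__":
    # Requires downloading the model weights:
    print(generate("請將以下句子翻譯成繁體中文:The weather is nice today."))
```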