Nedimark committed on
Commit a13d8e6 · verified · 1 Parent(s): 726a733

Update README metadata and details for r2 release

Files changed (1)
README.md +121 -170
README.md CHANGED

---
license: apache-2.0
language:
- en
base_model:
- unsloth/Ministral-3-3B-Instruct-2512
base_model_relation: adapter
library_name: peft
tags:
- canis-teach
- ministral
- education
- lora
- transformers
- tutoring
- generalist
- math
- science
- humanities
- language
pipeline_tag: text-generation
datasets:
- CanisAI/teach-generalist-v1
---

# Canis.teach - Ministral-3B Instruct (Generalist)
 
Generalist LoRA adapters for the Canis.teach suite, capable of tutoring across Math, Science, Humanities, and Language Arts.

- **Base Model**: unsloth/Ministral-3-3B-Instruct-2512
- **Release**: CanisAI/teach-generalist-ministral-3b-r2
- **Project**: Canis.teach - Learning that fits.
- **Subject**: Generalist (All Subjects)

## What is this?

This repository provides LoRA adapters fine-tuned on Generalist (All Subjects) tutoring dialogues. Apply these adapters to the base model to enable subject-aware, didactic behavior without downloading a full merged checkpoint.

The model is designed to **teach, not just answer**: it favors step-by-step explanations, hints, and pedagogically structured responses.

For ready-to-run merged models or Ollama-friendly GGUF quantizations, see the "Related Models" section; a sketch of merging the adapters yourself follows the Quick Start below.
 
## Quick Start

### Installation
```bash
pip install transformers peft torch
```

### Usage (LoRA)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "unsloth/Ministral-3-3B-Instruct-2512"
adapter = "CanisAI/teach-generalist-ministral-3b-r2"

tokenizer = AutoTokenizer.from_pretrained(base, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    base,
    device_map="auto",
    torch_dtype="auto",
)
# Attach the LoRA adapters to the base model
model = PeftModel.from_pretrained(model, adapter)

# Build a chat-formatted prompt
prompt = "Explain the concept of entropy in simple terms."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.8,
    top_k=40,
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
 
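If you would rather have a standalone checkpoint than load adapters at runtime, the adapters can be folded into the base weights with PEFT's `merge_and_unload()`. A minimal sketch, assuming the same repositories as above (the output directory name is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "unsloth/Ministral-3-3B-Instruct-2512"
adapter = "CanisAI/teach-generalist-ministral-3b-r2"

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter)

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper
merged = model.merge_and_unload()

# Write a self-contained checkpoint (directory name is an example)
out_dir = "ministral-3b-teach-generalist-merged"
merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base).save_pretrained(out_dir)
```

The result loads like any regular Transformers model, at the cost of storing the full weights.
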
## Training Details

- **Base Model**: unsloth/Ministral-3-3B-Instruct-2512
- **Training Method**: Supervised Fine-Tuning (SFT) with LoRA
- **Framework**: Unsloth + TRL/PEFT
- **Data**: Canis.lab-curated Generalist (All Subjects) tutoring dialogues
- **Target Modules**: attention projections (query, key, value, output) and MLP projections (gate, up, down)
- **Rank**: 32
- **Alpha**: 32 (see the configuration sketch after this list)
 
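For reference, these hyperparameters correspond to a PEFT configuration roughly like the sketch below. The module names assume the standard Mistral-style layer naming, and values not listed above (such as dropout) are illustrative, not confirmed:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,              # LoRA rank, as listed above
    lora_alpha=32,     # scaling alpha, as listed above
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    lora_dropout=0.0,  # assumption: not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
)
```
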
## Intended Use

- **Primary**: Subject-aware tutoring for Generalist (All Subjects) education
- **Applications**: Educational prototypes, tutoring systems, research
- **Approach**: Stepwise explanations, pedagogical hints, rubric-aligned responses
- **Target Audience**: Students, educators, researchers
 
## Model Behavior

The model is optimized for (see the sketch after this list):
- Clear, step-by-step explanations
- Appropriate difficulty progression
- Encouraging learning through hints rather than direct answers
- Subject-specific pedagogical approaches
- Maintaining educational standards and accuracy
 
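Hint-first behavior can also be encouraged explicitly at the prompt level. A sketch reusing the `tokenizer` and `model` from the Quick Start; the system text is purely illustrative, as the repository does not ship a fixed system prompt:

```python
# Illustrative system prompt - not an official Canis.teach prompt
messages = [
    {"role": "system", "content": "You are a patient tutor. Guide the student with hints and questions before revealing full solutions."},
    {"role": "user", "content": "I'm stuck on 3x + 5 = 20. What should I try first?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
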
## Recommended Settings

For optimal tutoring behavior (bundled into a reusable config in the sketch after this list):
- **Temperature**: 0.6-0.8
- **Top-p**: 0.8-0.9
- **Top-k**: 20-40
- **Max tokens**: 512-1024
 
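One convenient way to carry these settings around is a `GenerationConfig`; the mid-band values below are one reasonable pick within the ranges above, not values prescribed by the card:

```python
from transformers import GenerationConfig

# Mid-range picks from the recommended bands above
tutor_generation = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.85,
    top_k=30,
    max_new_tokens=768,
)
```

Pass it at call time, e.g. `model.generate(inputs, generation_config=tutor_generation)`.
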
## Safety and Limitations

**Important Considerations**:
- Human oversight required for educational use
- May occasionally hallucinate or oversimplify complex topics
- For fact-critical applications, consider RAG with verified curriculum sources
- Follow your institution's data privacy and AI usage policies
- Not a replacement for qualified human instruction
 
## Related Models

| Type | Repository | Description |
|------|------------|-------------|
| **LoRA Adapters** | `CanisAI/teach-generalist-ministral-3b-r2` | This repository (lightweight) |
| **Merged Model** | (Coming Soon) | Ready-to-use full model |
| **GGUF Quantized** | (Coming Soon) | Ollama/llama.cpp compatible |
| **Dataset** | `CanisAI/teach-generalist-v1` | Training data |
 
## License

This model card declares Apache 2.0 in its metadata; the adapters also inherit the terms of the base model (unsloth/Ministral-3-3B-Instruct-2512). Please review the base model's license terms before use.
 
## Citation

```bibtex
@misc{canis-teach-generalist,
  title={Canis.teach Generalist Tutor},
  author={CanisAI},
  year={2026},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/CanisAI/teach-generalist-ministral-3b-r2}}
}
```
 
## Acknowledgments

- **MistralAI/Ministral Team** for the excellent base model
- **Unsloth** for efficient training tools
- **Hugging Face** ecosystem (Transformers, PEFT, TRL)
- Educators and contributors supporting the Canis.teach project

---
**Canis.teach** - Learning that fits.