Matt Vaughn committed on
Commit 7c3210b · 1 Parent(s): e52f442

Optimized CLAUDE.md for context efficiency

Files changed (1)
  1. CLAUDE.md +55 -322
CLAUDE.md CHANGED
@@ -1,86 +1,20 @@
  # CLAUDE.md

- This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

  ## Project Overview
- Reachy Language Partner - a language learning companion for the Reachy Mini robot. Practice conversational skills in French, Spanish, German, Italian, Portuguese, and other languages through natural dialogue with an expressive robot partner.

- **Key Features:**
- - Persistent memory across sessions (tracks progress, struggles, preferences)
- - Proactive engagement and gentle correction through recasting
- - Grammar deep-dive mode with complete explanations on demand
- - Error pattern tracking with proactive review at session start
- - Session summaries with highlights and next-steps recommendations
- - Expressive robot feedback (dances, emotions, celebrations)
-
- Powered by OpenAI's realtime API, vision processing, and choreographed motion.
-
- ## Recent Improvements
-
- **Privacy Protections (Jan 2026)**: Added comprehensive privacy guidelines to the persistent memory system. Memory now implements a moderate privacy level with clear allowed/excluded data categories.
-
- **Normalized Language Tutor Profiles (Jan 2026)**: All language tutors now follow a consistent English-first instruction approach. Cultural content is properly framed within the teaching methodology rather than being a primary focus.
-
- **Shared Memory Tools (Jan 2026)**: Moved `recall` and `remember` tools from profile-specific implementations to the shared `tools/` directory, eliminating 455 lines of duplicate code and fixing "unknown tool" errors for the default profile.

- **Source Code Migration (Dec 2025)**: Migrated source code from `src/reachy_mini_language_tutor/` to the root-level `reachy_mini_language_tutor/` directory for cleaner project structure.

- **SDK Compatibility Check (Jan 2026)**: Added defensive version checking for SDK methods (e.g., `clear_output_buffer()`), preventing crashes when using different SDK versions.

- **Template-Based Tutor Design (Dec 2025)**: Extracted 13 shared prompt components (~165 lines) into reusable templates, reducing tutor file sizes by 64-85% and ensuring consistent methodology across all language tutors.

- ## Important Resources
-
- **ALWAYS consult the Reachy Mini Python SDK documentation for canonical information about robot implementation and design:**
- - **SDK Documentation**: https://github.com/pollen-robotics/reachy_mini/blob/develop/docs/SDK/readme.md
-
- When working on robot control, motion systems, hardware interfaces, or any Reachy Mini-specific functionality, refer to the official SDK documentation first. This is the authoritative source for:
- - Robot API reference and methods
- - Hardware capabilities and limitations
- - Control loop best practices
- - Joint limits and coordinate systems
- - Official code examples and patterns
-
- ## Build & Run Commands
-
- ### Installation
- ```bash
- # Using uv (recommended)
- uv venv --python 3.12.1
- source .venv/bin/activate
- uv sync                              # Base install
- uv sync --extra all_vision           # With all vision features
- uv sync --extra reachy_mini_wireless # For wireless robot support
- uv sync --group dev                  # Add dev tools (pytest, ruff, mypy)
-
- # Using pip
- python -m venv .venv
- source .venv/bin/activate
- pip install -e .
- pip install -e .[all_vision,dev]
- ```
-
- ### Running the App
- ```bash
- reachy-mini-language-tutor                           # Console mode (default)
- reachy-mini-language-tutor --gradio                  # Web UI at http://127.0.0.1:7860/
- reachy-mini-language-tutor --head-tracker mediapipe  # With face tracking
- reachy-mini-language-tutor --local-vision            # Local SmolVLM2 vision
- reachy-mini-language-tutor --wireless-version        # For wireless robot
- reachy-mini-language-tutor --no-camera               # Audio-only mode
- reachy-mini-language-tutor --profile <name>          # Load custom profile
- ```
-
- ### Development Workflow
- **IMPORTANT: Always use `uv run` to execute development tools to ensure they run in the correct virtual environment.**
-
- ```bash
- uv run ruff check .                          # Lint and format check
- uv run ruff format .                         # Auto-format code
- uv run mypy reachy_mini_language_tutor/      # Type checking (strict mode enabled)
- uv run pytest                                # Run test suite
- uv run pytest tests/test_openai_realtime.py  # Run specific test file
- ```

  ## Architecture

@@ -100,28 +34,13 @@ The robot uses a **compose-based motion blending** system with primary and secondary
  **Control Loop**: `MovementManager` runs at **100Hz** in a dedicated thread, composing primary + secondary poses and calling `robot.set_target()`.

  ### Threading Model
- - **Main thread**: Launches Gradio/console UI
- - **MovementManager thread**: 100Hz motion control loop
- - **HeadWobbler thread**: Audio-reactive motion processing (speech detection)
- - **CameraWorker thread**: 30Hz+ frame capture + face tracking
- - **VisionManager thread**: Periodic local VLM inference (if `--local-vision` enabled)
-
- ### Tool Dispatch Pattern
- ```
- User audio → OpenaiRealtimeHandler (24kHz mono WebRTC)
-   → LLM generates tool calls
-   → dispatch_tool_call() routes to Tool subclass
-   → Tool enqueues command in MovementManager
-   → MovementManager executes and returns status via AdditionalOutputs
-   → Response audio + motion sent to robot/user
- ```
-
- ### Vision Pipeline Options
- - **Default (gpt-realtime)**: Camera tool sends frames to OpenAI for vision
- - **Local vision (`--local-vision`)**: VisionManager processes frames with SmolVLM2 (on-device CPU/GPU/MPS)
- - **Face tracking options**:
-   - `--head-tracker yolo`: YOLOv8-based face detection (requires `yolo_vision` extra)
-   - `--head-tracker mediapipe`: MediaPipe from `reachy_mini_toolbox` (requires `mediapipe_vision` extra, pinned to 0.10.14)

  ## Key File Responsibilities

@@ -142,240 +61,54 @@ User audio → OpenaiRealtimeHandler (24kHz mono WebRTC)
  | `vision/yolo_head_tracker.py` | YOLO-based face detection for head tracking |
  | `profiles/` | Personality profiles with `instructions.txt` + `tools.txt` + optional custom tools |

- ## Language Profile System
-
- Six language tutor profiles available in `profiles/`:
- - **`default`**: Generic language partner that adapts to any language
- - **`french_tutor`**: Delphine, a French conversation partner with cultural context
- - **`spanish_tutor`**: Sofia, a Mexican Spanish conversation partner
- - **`german_tutor`**: Lukas, a German tutor teaching Standard German (Hochdeutsch)
- - **`italian_tutor`**: Chiara, an Italian tutor from Florence with cultural insights
- - **`portuguese_tutor`**: Rafael, a Brazilian Portuguese tutor from São Paulo
-
- Each profile in `reachy_mini_language_tutor/profiles/<name>/` contains:
- - **`instructions.txt`**: System prompt (uses `[placeholder]` syntax to compose from shared + unique content)
- - **`tools.txt`**: Enabled tools list (comment with `#`, one per line)
- - **`proactive.txt`**: Set to `true` for proactive greeting mode
- - **`language.txt`**: ISO language code for transcription (e.g., `es`, `fr`)
- - **`voice.txt`**: Voice name (e.g., `coral`, `sage`)
- - **Optional Python files**: Custom tool implementations (subclass `Tool` from `tools/core_tools.py`)
-
- ### Template-Based Instruction Design
-
- Tutor `instructions.txt` files compose from **shared prompts** (in `prompts/language_tutoring/`) + **unique content**:
-
- **Shared prompts** (13 files, ~165 lines total):
- - `[language_tutoring/proactive_engagement]` - Session start, memory recall, collecting personal info
- - `[language_tutoring/language_behavior]` - English-first instruction approach
- - `[language_tutoring/adaptive_support]` - Detecting and responding to learner struggle
- - `[language_tutoring/correction_style]` - Recasting and error handling
- - `[language_tutoring/grammar_explanation_structure]` - Framework for "why?" deep-dives
- - `[language_tutoring/conversation_topics]` - Topic guidance
- - `[language_tutoring/robot_expressiveness]` - Using robot capabilities for teaching
- - `[language_tutoring/response_guidelines]` - Response format guidance
- - `[language_tutoring/vocabulary_teaching]` - New word introduction pattern
- - `[language_tutoring/memory_usage]` - Recall/remember tool usage
- - `[language_tutoring/error_pattern_tracking]` - Specific error tracking with context
- - `[language_tutoring/session_wrap_up]` - End-of-session summary protocol
- - `[language_tutoring/final_notes]` - Core teaching philosophy
-
- **Unique content per tutor** (~100-150 lines):
- - IDENTITY (tutor name, personality, background)
- - Grammar explanation example (language-specific concept)
- - Language-specific teaching approach (e.g., MEXICAN SPANISH SPECIFICS)
- - Example interactions (dialogue in target language)
- - Cultural topics (region-specific insights)
-
- **Benefits**: Tutors are 64-85% smaller. Shared methodology updates propagate to all tutors automatically. New language profiles focus only on unique content.
-
- Load profiles via:
- - CLI: `--profile <name>`
- - Environment: `REACHY_MINI_CUSTOM_PROFILE=<name>` in `.env`
- - Gradio UI: Select and hot-reload instructions (tools require restart)
-
- ## Enhanced Feedback System
-
- The tutors implement a multi-layered feedback approach designed for effective language learning:
-
- ### Grammar Explanation Mode
- When learners ask "why?", "explain that", or show confusion, tutors:
- 1. Pause practice entirely
- 2. Switch to full English explanation mode
- 3. Provide complete grammar lessons with rules, examples, and memory tricks
- 4. Have learners practice the concept immediately
- 5. Store the explanation in memory for future reference
-
- Each tutor includes language-specific examples:
- - **French**: Passé composé with être, DR MRS VANDERTRAMP mnemonic
- - **Spanish**: Ser vs estar distinction, preterite vs imperfect
- - **German**: Akkusativ case changes, "AKK-use" mnemonic for direct objects
- - **Italian**: Gender agreement with O/A/I/E ending patterns
- - **Portuguese**: Ser vs estar, gerund usage (Brazilian vs European)
-
- ### Error Pattern Tracking
- Tutors store errors with specific context using the `remember` tool (category: `struggle`):
- - **Specificity**: "Confused 'ser' vs 'estar' describing emotions" not just "verb issues"
- - **Context**: "Used 'j'ai allé' instead of 'je suis allé' in past tense narrative"
- - **Frequency**: "Third time confusing gender of 'table'"
-
- At session start, tutors recall past struggles and incorporate review naturally.
-
- ### Session Summaries
- When sessions end, tutors provide spoken recaps:
- 1. Topics/vocabulary covered
- 2. One highlight (something done well)
- 3. One area to focus on next time
- 4. Store summary in memory (category: `progress`)
- 5. End with encouragement and celebratory dance

- ## Configuration (.env)
-
- Required:
- ```
- OPENAI_API_KEY=your_key_here
- ```
-
- Optional:
- ```
- REACHY_MINI_CUSTOM_PROFILE=french_tutor                  # Language profile to load
- SUPERMEMORY_API_KEY=your_key_here                        # Persistent memory for tutors
- MODEL_NAME=gpt-realtime                                  # Override realtime model
- HF_HOME=./cache                                          # Local VLM cache (--local-vision)
- HF_TOKEN=your_hf_token                                   # For Hugging Face models/emotions
- LOCAL_VISION_MODEL=HuggingFaceTB/SmolVLM2-2.2B-Instruct  # Local vision model path
- ```
-
- **Persistent Memory**: `SUPERMEMORY_API_KEY` enables cross-session memory via [supermemory.ai](https://supermemory.ai). When configured, tutors remember learner names, skill levels, error patterns, and progress. Powers the `recall` and `remember` tools.

- **Privacy Protections**: The memory system implements moderate privacy guidelines:
- - **ALLOWED**: First names, general region, occupation category, interests, learning data
- - **EXCLUDED**: Age, specific location, family names, sensitive details, travel dates
- - Implicit consent model via `SUPERMEMORY_API_KEY` configuration

  ## Available LLM Tools

- **Note**: The `recall` and `remember` tools are now in the shared `tools/` directory (as of a recent refactor), making them available to all profiles without duplication.
-
- | Tool | Action | Dependencies |
- |------|--------|--------------|
- | `move_head` | Queue head pose change (left/right/up/down/front) | Core |
- | `camera` | Capture frame and send to vision model | Camera worker |
- | `head_tracking` | Enable/disable face-tracking offsets (not facial recognition) | Camera + head tracker |
- | `dance` | Queue dance from `reachy_mini_dances_library` | Core |
- | `stop_dance` | Clear dance queue | Core |
- | `play_emotion` | Play recorded emotion clip | Core + `HF_TOKEN` |
- | `stop_emotion` | Clear emotion queue | Core |
- | `do_nothing` | Explicitly remain idle | Core |
- | `recall` | Search persistent memory for learner information | `SUPERMEMORY_API_KEY` |
- | `remember` | Store observations about learner for future sessions | `SUPERMEMORY_API_KEY` |
-
- ## Code Style Conventions
-
- - **Type checking**: Strict mypy enabled (`uv run mypy reachy_mini_language_tutor/`)
- - **Formatting**: Ruff with 119-char line length (`uv run ruff format .`)
- - **Docstrings**: Required for all public functions/classes (ruff `D` rules enabled)
- - **Import sorting**: `isort` via ruff (local-folder: `reachy_mini_language_tutor`)
- - **Quote style**: Double quotes
- - **Async patterns**: Use `asyncio` for the OpenAI realtime API, threading for robot control
- - **Tool execution**: Always use `uv run` for ruff, mypy, pytest, and other dev tools

  ## Development Tips

- ### Adding Custom Tools
- 1. Create a Python file in `profiles/<profile_name>/` (e.g., `my_tool.py`) for profile-specific tools, or in `tools/` for shared tools
- 2. Subclass `reachy_mini_language_tutor.tools.core_tools.Tool`
- 3. Implement `name`, `description`, `parameters`, and the `__call__()` method
- 4. Add the tool name to `profiles/<profile_name>/tools.txt`
- 5. See `tools/recall.py` and `tools/remember.py` for examples of shared tools available to all profiles
-
- ### Motion Control Principles
- - **100Hz loop is sacred** - never block the `MovementManager` thread
- - Offload heavy work to separate threads/processes
- - Primary moves are queued and executed sequentially
- - Secondary offsets are blended additively in real-time
- - Use `BreathingMove` as idle fallback when the queue is empty
-
- **Critical SDK Constraints:**
- - **65° Yaw Delta Limit**: Head yaw and body yaw cannot differ by more than 65°. The SDK auto-clamps violations, which means aggressive head tracking or wobble offsets may be silently limited if they push the combined pose beyond this constraint.
- - **set_target() vs goto_target()**: This app uses `set_target()` in the 100Hz loop because it bypasses interpolation, allowing real-time composition of primary + secondary poses. Using `goto_target()` would conflict with the manual control loop since it runs its own interpolation.
-
- ### Thread Safety
- - `CameraWorker` uses locks for the frame buffer and tracking offsets
- - `MovementManager` uses a queue for command dispatch
- - Never share mutable state between threads without synchronization
-
- ### Vision Processing
- - Default vision goes through OpenAI's gpt-realtime (when the camera tool is called)
- - Local vision (`--local-vision`) runs SmolVLM2 periodically on-device
- - Face tracking is separate from vision - it only tracks face position for head offsets (not recognition)
-
- ### Working with Gradio UI
-
- **Architecture**:
- - Gradio UI runs in the main thread, launched by `main.py`
- - Provides web interface at `http://127.0.0.1:7860/` (default port)
- - Runs alongside robot control threads (does not block motion control)
- - Two main UI components:
-   - `console.py`: `LocalStream` class handles audio I/O and settings routes for headless/console mode
-   - `gradio_personality.py`: `PersonalityUI` class manages profile selection and live instruction reloading
-
- **Key UI Features**:
- - **Live audio streaming**: Bidirectional audio via Gradio's audio components
- - **Profile management**: Hot-reload personality instructions without restart (tools require restart)
- - **Settings routes**: Dynamic endpoints for UI configuration and state
- - **Real-time feedback**: Status updates and conversation display
-
- **Development Guidelines**:
- - **State management**: Gradio components should not hold critical state - use `MovementManager`, `OpenaiRealtimeHandler`, or `Config` as the source of truth
- - **Thread safety**: UI callbacks may run in separate threads - always use proper synchronization when accessing shared resources
- - **Blocking operations**: Never perform long-running operations directly in Gradio callbacks - offload to background threads/queues
- - **Hot-reloading**: Changes to Gradio UI code require an app restart (unlike profile instructions, which can be hot-reloaded)
- - **Testing**: Test the UI locally with the `--gradio` flag before deploying changes
-
- **Common Patterns**:
- - Use `gr.Audio()` with streaming for real-time audio I/O
- - Use `gr.Dropdown()` for profile selection with dynamic refresh
- - Use `gr.Button()` callbacks to trigger actions via the `MovementManager` queue
- - Use `gr.Textbox()` with `interactive=True` for live instruction editing
- - Return updates via component `.update()` methods for reactive UI
-
- **Gradio-Specific Considerations**:
- - Gradio apps auto-reload on file changes in debug mode (use `debug=True` in `gr.Interface.launch()`)
- - Share links (`share=True`) create public tunnels - avoid for production
- - Custom CSS/themes can be applied via `gr.themes` or custom CSS strings
- - Component visibility and interactivity can be toggled dynamically via `.update()`
-
- **Debugging**:
- - Check the browser console for JavaScript errors
- - Use `print()` statements in callbacks (visible in terminal output)
- - Gradio exceptions are caught and displayed in the UI - check both UI and terminal
- - Audio issues: Verify browser permissions for microphone/speaker access
-
- ## Testing
- - Tests located in `tests/`
- - Key test: `test_openai_realtime.py` (AsyncStreamHandler tests)
- - Audio fixtures available in `conftest.py`
- - Run with `uv run pytest` after `uv sync --group dev`
- - Always use `uv run pytest` to ensure tests run in the correct environment
-
- ## Dependencies & Extras
-
- | Extra | Purpose | Key Packages |
- |-------|---------|--------------|
- | Base | Core audio/vision/motion | `fastrtc`, `aiortc`, `openai`, `gradio`, `opencv-python`, `reachy_mini` |
- | `reachy_mini_wireless` | Wireless robot support | `PyGObject`, `gst-signalling` (GStreamer) |
- | `local_vision` | Local VLM processing | `torch`, `transformers`, `num2words` |
- | `yolo_vision` | YOLO head tracking | `ultralytics`, `supervision` |
- | `mediapipe_vision` | MediaPipe tracking | `mediapipe==0.10.14` |
- | `all_vision` | All vision features | Combination of above |
- | `dev` | Development tools | `pytest`, `ruff`, `mypy`, `pre-commit` |
-
- ## Common Troubleshooting
-
- **TimeoutError on startup**: Reachy Mini daemon not running. Install and start the Reachy Mini SDK.
-
- **Vision not working**: Check that the camera is connected and accessible. Use `--no-camera` to disable if not needed.
-
- **Wireless connection issues**: Ensure the `--wireless-version` flag is used and the daemon was started with the same flag. Requires the `reachy_mini_wireless` extra.
-
- **Local vision slow**: SmolVLM2 benefits from GPU/MPS acceleration. Consider using default gpt-realtime vision if CPU-only.

  # CLAUDE.md

+ Guidance for Claude Code when working with this repository.

  ## Project Overview
+ Reachy Language Partner - a language learning companion for the Reachy Mini robot with multi-language support (French, Spanish, German, Italian, Portuguese). Features persistent memory, error tracking, grammar deep-dives, and expressive robot feedback. Powered by the OpenAI realtime API, vision processing, and choreographed motion.

+ ## Important Resources
+ **SDK Documentation**: https://github.com/pollen-robotics/reachy_mini/blob/develop/docs/SDK/readme.md - consult for robot control, motion systems, hardware interfaces, API reference, and best practices.

+ ## Commands

+ **Setup**: `uv sync` (base), `uv sync --extra all_vision` (with vision), `uv sync --group dev` (dev tools)

+ **Run**: `reachy-mini-language-tutor` (console), `--gradio` (web UI), `--profile <name>` (custom profile), `--local-vision`, `--wireless-version`, `--no-camera`

+ **Dev**: Always use the `uv run` prefix: `uv run ruff check .`, `uv run ruff format .`, `uv run mypy reachy_mini_language_tutor/`, `uv run pytest`

  ## Architecture

  **Control Loop**: `MovementManager` runs at **100Hz** in a dedicated thread, composing primary + secondary poses and calling `robot.set_target()`.

  ### Threading Model
+ Main (UI), MovementManager (100Hz control), HeadWobbler (audio-reactive), CameraWorker (30Hz capture/tracking), VisionManager (local VLM if `--local-vision`)
+
+ ### Tool Dispatch
+ User audio → OpenaiRealtimeHandler → LLM tool calls → dispatch_tool_call() → Tool subclass → MovementManager queue → robot execution
+
+ ### Vision
+ Default: OpenAI gpt-realtime | Local: SmolVLM2 (`--local-vision`) | Face tracking: yolo/mediapipe (`--head-tracker`)
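The dispatch chain above can be sketched in Python. The names `dispatch_tool_call`, `Tool`, and `MovementManager` follow this document, but the minimal base class and the `MoveHeadTool` example here are illustrative stand-ins, not the project's actual implementation:

```python
import queue
from dataclasses import dataclass, field


@dataclass
class Tool:
    """Illustrative stand-in for the project's Tool base class."""
    name: str = "noop"

    def __call__(self, manager: "MovementManager", **kwargs: object) -> str:
        return "ok"


@dataclass
class MovementManager:
    """Stand-in: the real manager drains this queue in its 100Hz thread."""
    commands: queue.Queue = field(default_factory=queue.Queue)

    def enqueue(self, command: str) -> None:
        self.commands.put(command)


class MoveHeadTool(Tool):
    """Hypothetical tool: queues a head move instead of blocking on it."""

    def __init__(self) -> None:
        super().__init__(name="move_head")

    def __call__(self, manager: MovementManager, direction: str = "front", **kwargs: object) -> str:
        manager.enqueue(f"move_head:{direction}")  # non-blocking: just enqueue
        return f"queued head move: {direction}"


TOOLS = {tool.name: tool for tool in [MoveHeadTool()]}


def dispatch_tool_call(name: str, manager: MovementManager, **kwargs: object) -> str:
    """Route an LLM tool call to the matching Tool subclass."""
    tool = TOOLS.get(name)
    if tool is None:
        return f"unknown tool: {name}"
    return tool(manager, **kwargs)
```

The key design point is that tools only enqueue commands; the 100Hz motion thread, not the LLM handler, actually drives the robot.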

  ## Key File Responsibilities

  | `vision/yolo_head_tracker.py` | YOLO-based face detection for head tracking |
  | `profiles/` | Personality profiles with `instructions.txt` + `tools.txt` + optional custom tools |

+ ## Language Profiles

+ Six tutor profiles in `profiles/`: `default`, `french_tutor`, `spanish_tutor`, `german_tutor`, `italian_tutor`, `portuguese_tutor`
+
+ **Profile structure** (`profiles/<name>/`):
+ - `instructions.txt`: System prompt with `[placeholder]` syntax (shared prompts from `prompts/language_tutoring/` + unique content)
+ - `tools.txt`: Enabled tools (one per line, `#` comments)
+ - `proactive.txt`, `language.txt`, `voice.txt`: Behavioral config
+ - Optional Python files: Custom tools (subclass `Tool` from `tools/core_tools.py`)
+
+ **Load**: `--profile <name>`, `REACHY_MINI_CUSTOM_PROFILE=<name>` in `.env`, or the Gradio UI (instructions hot-reload; tools need a restart)
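A minimal sketch of how the `[placeholder]` composition could work. The `expand` helper and the sample prompt text are assumptions for illustration; the project's actual loader is not shown in this file:

```python
import re

# Hypothetical shared-prompt snippets keyed by placeholder path.
SHARED_PROMPTS = {
    "language_tutoring/correction_style": "Correct errors by recasting, not interrupting.",
}


def expand(instructions: str, prompts: dict[str, str]) -> str:
    """Replace [path/name] placeholders with shared prompt text;
    unknown placeholders are left untouched."""
    def sub(match: re.Match) -> str:
        return prompts.get(match.group(1), match.group(0))

    return re.sub(r"\[([\w/]+)\]", sub, instructions)
```

This illustrates why shared-methodology edits propagate to every tutor: each profile's `instructions.txt` only references the shared files by name.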

+ ## Tutor Behavior
+
+ Tutors use grammar explanation mode (pausing practice for English deep-dives), error pattern tracking (`remember` tool, category: `struggle`), and session summaries (category: `progress`). See each profile's `instructions.txt` for methodology details.
+
+ ## Configuration (.env)
+
+ **Required**: `OPENAI_API_KEY`
+
+ **Optional**: `REACHY_MINI_CUSTOM_PROFILE`, `SUPERMEMORY_API_KEY` (persistent memory, powers the `recall`/`remember` tools), `MODEL_NAME`, `HF_HOME`, `HF_TOKEN`, `LOCAL_VISION_MODEL`
+
+ **Privacy**: Memory stores first names, general region, occupation, interests, and learning data. It excludes age, specific location, family names, and sensitive details.
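As a concrete reference, the variables above match the fuller `.env` example from the previous revision of this file:

```
OPENAI_API_KEY=your_key_here
REACHY_MINI_CUSTOM_PROFILE=french_tutor                  # Language profile to load
SUPERMEMORY_API_KEY=your_key_here                        # Persistent memory for tutors
MODEL_NAME=gpt-realtime                                  # Override realtime model
HF_HOME=./cache                                          # Local VLM cache (--local-vision)
HF_TOKEN=your_hf_token                                   # For Hugging Face models/emotions
LOCAL_VISION_MODEL=HuggingFaceTB/SmolVLM2-2.2B-Instruct  # Local vision model path
```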

  ## Available LLM Tools

+ `move_head`, `camera`, `head_tracking`, `dance`, `stop_dance`, `play_emotion`, `stop_emotion`, `do_nothing`, `recall` (requires `SUPERMEMORY_API_KEY`), `remember` (requires `SUPERMEMORY_API_KEY`)
+
+ ## Code Style
+
+ Strict mypy, Ruff (119-char lines), docstrings required, double quotes, `asyncio` for the OpenAI realtime API, threading for robot control. Always use `uv run` for dev tools.

  ## Development Tips

+ **Custom Tools**: Subclass `Tool` from `tools/core_tools.py`, implement `name`/`description`/`parameters`/`__call__()`, add the tool name to `tools.txt`. See `tools/recall.py` for an example.
+
+ **Motion Control**: Never block the 100Hz `MovementManager` thread. Primary moves queue sequentially; secondary offsets blend additively. Use `set_target()` (not `goto_target()`, which runs its own interpolation). **Critical**: 65° yaw delta limit (head vs. body); the SDK auto-clamps violations.
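The compose-based blending above, as a schematic of one loop iteration. `Pose`, `compose`, and `tick` are illustrative stand-ins (the toy clamp applies the documented 65° limit to absolute yaw, whereas the real SDK clamps the head/body delta, and the real loop runs at 100Hz in its own thread calling `robot.set_target()`):

```python
import queue
from dataclasses import dataclass


@dataclass
class Pose:
    """Toy head pose (yaw/pitch only) for illustration."""
    yaw: float = 0.0
    pitch: float = 0.0


def compose(primary: Pose, offsets: list[Pose], max_yaw: float = 65.0) -> Pose:
    """Blend secondary offsets additively onto the primary pose,
    then clamp yaw, mimicking the SDK's silent auto-clamp."""
    yaw = primary.yaw + sum(o.yaw for o in offsets)
    pitch = primary.pitch + sum(o.pitch for o in offsets)
    yaw = max(-max_yaw, min(max_yaw, yaw))
    return Pose(yaw=yaw, pitch=pitch)


def tick(primary_queue: queue.Queue, idle: Pose, offsets: list[Pose]) -> Pose:
    """One control-loop iteration: take the next queued primary move,
    falling back to an idle pose (e.g. breathing) when the queue is empty,
    then blend secondary offsets in real time."""
    try:
        primary = primary_queue.get_nowait()
    except queue.Empty:
        primary = idle
    return compose(primary, offsets)
```

Note how a 60° primary move plus a 10° tracking offset gets silently limited to 65° - exactly the failure mode the constraint note warns about.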
+
+ **Thread Safety**: Use locks (`CameraWorker`) or queues (`MovementManager`). Never share mutable state without synchronization.
+
+ **Gradio UI**: Runs in the main thread (`main.py`), port 7860. Components: `console.py` (`LocalStream`), `gradio_personality.py` (`PersonalityUI`). Never block callbacks - offload to threads/queues. Keep state in `MovementManager`/`OpenaiRealtimeHandler`/`Config`, not the UI. Instructions hot-reload; code/tools need a restart.
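The lock pattern from the thread-safety note, sketched for a latest-frame buffer like the one `CameraWorker` guards (illustrative, not the project's code):

```python
import threading


class FrameBuffer:
    """Latest-frame holder shared between a capture thread and consumers."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._frame = None  # bytes | None

    def publish(self, frame: bytes) -> None:
        with self._lock:  # writer: camera capture thread
            self._frame = frame

    def latest(self):
        with self._lock:  # reader: vision/UI thread
            return self._frame
```

Holding the lock only for the assignment keeps the critical section tiny, so neither the 30Hz capture thread nor the 100Hz motion thread ever waits long.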
+
+ ## Testing & Dependencies
+
+ **Tests**: `tests/` (key: `test_openai_realtime.py`). Run with `uv run pytest`.
+
+ **Extras**: `all_vision` (all features), `reachy_mini_wireless`, `local_vision`, `yolo_vision`, `mediapipe_vision`, `dev`
+
+ ## Troubleshooting
+
+ TimeoutError: Start the Reachy Mini daemon | Vision issues: Check the camera or use `--no-camera` | Wireless: Use `--wireless-version` + the `reachy_mini_wireless` extra | Slow local vision: Use GPU/MPS or default gpt-realtime vision