Update README.md
README.md (changed)
@@ -12,6 +12,16 @@ tags:
 - apple-silicon
 ---
 
+## ⚠️ Low-bit quality warning
+
+This is an aggressive quantization (2-bit average). At this compression level, output quality degrades noticeably: responses may start coherent but degenerate into repetition or garbage tokens toward the end of longer generations. This is expected behavior for 2-bit quantization on this architecture.
+
+**Recommended for:** experimentation, quick testing, extreme memory constraints.
+
+**Not recommended for:** production use, long-form generation, coding tasks.
+
+For reliable output quality, use JANG_4M or higher profiles from this collection.
+
 # gemma-4-31B-it-JANG_2M
 
 JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).
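As a rough sanity check on the "extreme memory constraints" point in the added warning, here is a back-of-envelope estimate of the quantized weight footprint. This is a sketch only: the ~31B parameter count is inferred from the model name, and real on-disk size also includes quantization scales/metadata and any layers kept at higher precision.

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage for a quantized model.

    bytes = n_params * bits_per_weight / 8; returned in decimal GB (1e9 bytes).
    Ignores scale/zero-point metadata and unquantized layers, so the
    real checkpoint will be somewhat larger.
    """
    return n_params * bits_per_weight / 8 / 1e9

# Assumed ~31B parameters (from the model name, not a verified figure).
print(quantized_size_gb(31e9, 2.0))  # ~7.75 GB at the 2-bit average (JANG_2M)
print(quantized_size_gb(31e9, 4.0))  # ~15.5 GB at a 4-bit average
```

The factor-of-two gap between these estimates is the trade the warning describes: JANG_2M halves the footprint of a 4-bit-average profile at a visible cost in output quality.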