YTan2000 committed
Commit 5d0ccf6 · verified · 1 Parent(s): 4b4605d

Update model card for text-only and image-to-text usage

Files changed (1): README.md (+53 −4)
README.md CHANGED
@@ -24,7 +24,10 @@ language:
  
  ## TQ3_4S Release
  
- This repository packages the model as a TurboQuant `TQ3_4S` GGUF for local deployment.
+ This repository packages the model as a TurboQuant `TQ3_4S` GGUF for local deployment. It can be used in two modes:
+ 
+ - **Text-only chat / coding:** use `Qwen3.6-27B-TQ3_4S.gguf` only.
+ - **Image-to-text / multimodal:** use `Qwen3.6-27B-TQ3_4S.gguf` together with `mmproj.gguf`.
  
  ## Runtime Compatibility
  
@@ -36,7 +39,8 @@ This quant requires a TurboQuant-capable runtime. For llama.cpp, use the `turbo-
  
  | File | Quant | Size |
  | --- | --- | ---: |
- | `Qwen3.6-27B-TQ3_4S.gguf` | TQ3_4S | ~13.0 GB |
+ | `Qwen3.6-27B-TQ3_4S.gguf` | TQ3_4S text model | ~13.0 GB |
+ | `mmproj.gguf` | Qwen3.6-27B vision projector | ~889 MB |
  | `chat_template.jinja` | chat template | text |
  | `thumbnail.png` | model card image | png |
  
@@ -62,11 +66,44 @@ Prompt processing:
  
  - Use a TurboQuant-capable llama.cpp build for best performance.
  - For llama.cpp, the intended runtime is the `turbo-tan/llama.cpp-tq3` fork.
- - The upstream family is multimodal-capable, but the public 27B repos used here do not currently expose a separate GGUF `mmproj` artifact.
+ - Text-only usage does not need `mmproj.gguf`.
+ - Image-to-text usage requires `mmproj.gguf`; pass it with `--mmproj mmproj.gguf` when using `llama-server` or other compatible llama.cpp tools.
  - For llama.cpp chat usage, keep `--jinja` enabled so the bundled chat template is honored.
  - Upstream guidance recommends keeping at least `128K` context when possible for reasoning-heavy workloads. On smaller local GPUs, reduce context as needed to fit memory.
  - Upstream default sampling guidance differs between thinking and non-thinking mode; follow the official Qwen card if you are trying to reproduce base-model behavior.
  
+ 
+ ## Text-Only vs Image-To-Text
+ 
+ ### Text-only
+ 
+ For normal chat, coding, and text generation, load only the main model:
+ 
+ ```bash
+ llama-server \
+   -m Qwen3.6-27B-TQ3_4S.gguf \
+   -ngl 99 -c 4096 -np 1 \
+   -ctk q4_0 -ctv tq3_0 -fa on \
+   --jinja --reasoning off --reasoning-budget 0
+ ```
+ 
+ ### Image-to-text
+ 
+ For vision/image prompts, also load the projector.
+ 
+ `mmproj.gguf` was smoke-tested with the `turbo-tan/llama.cpp-tq3` `llama-server` runtime on an RTX 5060 Ti. The server loaded the projector as a Qwen-VL multimodal model and `/health` returned `ok`.
+ 
+ Validated smoke-test settings:
+ 
+ ```bash
+ llama-server \
+   -m Qwen3.6-27B-TQ3_4S.gguf \
+   --mmproj mmproj.gguf \
+   -ngl 99 -c 2048 -np 1 \
+   -ctk q4_0 -ctv tq3_0 -fa on \
+   --jinja --reasoning off --reasoning-budget 0
+ ```
+ 
  ## Recommended llama.cpp Settings
  
  Default prompt-processing settings on 16 GB:
@@ -81,11 +118,23 @@ llama-bench \
    -p 2048 -n 0 -r 3
  ```
  
- Default chat/server settings:
+ Default text-only chat/server settings:
+ 
+ ```bash
+ llama-server \
+   -m Qwen3.6-27B-TQ3_4S.gguf \
+   --host 127.0.0.1 --port 8080 \
+   -ngl 99 -c 4096 -np 1 \
+   -ctk q4_0 -ctv tq3_0 -fa on \
+   --jinja
+ ```
+ 
+ Image-to-text server settings:
  
  ```bash
  llama-server \
    -m Qwen3.6-27B-TQ3_4S.gguf \
+   --mmproj mmproj.gguf \
    --host 127.0.0.1 --port 8080 \
    -ngl 99 -c 4096 -np 1 \
    -ctk q4_0 -ctv tq3_0 -fa on \
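
The smoke test described in the diff watches llama-server's health endpoint. As a quick sanity check after launching either configuration, that endpoint can be polled directly; this sketch assumes the `turbo-tan/llama.cpp-tq3` fork keeps upstream llama.cpp's `/health` route, and uses the host/port from the card's server settings:

```bash
# Poll llama-server's built-in health endpoint (an upstream llama.cpp
# route; assumed unchanged in the turbo-tan fork). It reports ok once the
# model (and, for image-to-text, the mmproj projector) finishes loading.
curl http://127.0.0.1:8080/health
```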
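
Once the text-only server is up, requests go through the OpenAI-compatible chat endpoint that upstream llama-server exposes. A minimal sketch, assuming the fork leaves that API intact (the prompt string is just an example):

```bash
# Minimal text-only chat request. The bundled chat template is applied
# server-side because the card's settings start llama-server with --jinja.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize what a GGUF file contains."}]}'
```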
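
The card shows how to start the image-to-text server but not how to pass an image. Upstream llama-server accepts images as base64 data URLs in `image_url` content parts when started with `--mmproj`; a hypothetical request under that assumption, with `photo.png` standing in for a real input file:

```bash
# Hypothetical image-to-text request; photo.png is a placeholder input.
# Assumes the fork keeps upstream llama-server's OpenAI-compatible
# multimodal API (base64 data URL in an image_url content part).
IMG=$(base64 -w0 photo.png)   # GNU coreutils; use `base64 -i photo.png` on macOS
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,'"$IMG"'"}}
      ]
    }]
  }'
```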