Update README.md

README.md

````diff
@@ -155,6 +155,18 @@ guidellm benchmark \
 --output-path "Llama-4-Maverick-HumanEval.json" \
 --backend-args '{"extra_body": {"chat_completions": {"temperature":0.6, "top_p":0.9}}}'
 ```
+GuideLLM interface changed, so for compatibility with the latest version (v0.6.0), please use the following command:
+```bash
+GUIDELLM__PREFERRED_ROUTE="chat_completions" \
+guidellm benchmark \
+  --target "http://localhost:8000/v1" \
+  --data "RedHatAI/speculator_benchmarks" \
+  --data-args '{"data_files": "HumanEval.jsonl"}' \
+  --profile sweep \
+  --max-seconds 1800 \
+  --output-path "my_output.json" \
+  --backend-args '{"extras": {"body": {"temperature":0.6, "top_p":0.95, "top_k":20}}}'
+```
 </details>
````
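The updated command passes raw JSON strings through `--backend-args` and `--data-args`, so a stray quote or comma only surfaces as a failure once the benchmark is already running. A quick pre-flight check can catch that up front (a convenience sketch, not part of GuideLLM; it only relies on the stdlib `python3 -m json.tool` validator):

```shell
# Validate the JSON passed to --backend-args before launching a long sweep.
# python -m json.tool exits non-zero on malformed JSON, so a typo is caught early.
BACKEND_ARGS='{"extras": {"body": {"temperature":0.6, "top_p":0.95, "top_k":20}}}'
echo "$BACKEND_ARGS" | python3 -m json.tool > /dev/null && echo "backend-args JSON OK"
```

The same check applies verbatim to the `--data-args` string; only valid JSON prints the OK line.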