## Deployment
### llama.cpp
[internlm/internlm2_5-7b-chat-gguf](https://huggingface.co/internlm/internlm2_5-7b-chat-gguf) offers `internlm2_5-7b-chat` models in GGUF format in both half precision and various low-bit quantized versions, including `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.
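As a rough guide to choosing among these quantizations, the on-disk size of a GGUF file can be estimated from the average bits per weight. This is only a sketch: the parameter count and the bits-per-weight averages below are approximations taken from llama.cpp's quantization scheme, not figures published in the model card.

```python
# Rough GGUF file-size estimate for a ~7.7B-parameter model at different
# quantization levels. Bits-per-weight values are approximate averages.
PARAMS = 7.7e9  # approximate parameter count of internlm2_5-7b (assumption)

BITS_PER_WEIGHT = {
    "f16": 16.0,     # half precision
    "q8_0": 8.5,
    "q6_k": 6.56,
    "q5_k_m": 5.69,
    "q5_0": 5.54,
}

def approx_size_gib(quant: str, params: float = PARAMS) -> float:
    """Approximate on-disk size in GiB for the given quantization."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 2**30

for q in BITS_PER_WEIGHT:
    print(f"{q:>7}: ~{approx_size_gib(q):.1f} GiB")
```

Lower-bit variants trade a small amount of quality for a substantially smaller download and memory footprint; `q5_k_m` is a common middle ground.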
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
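A minimal inference sketch with LMDeploy's `pipeline` API (assuming `lmdeploy` is installed, e.g. via `pip install lmdeploy`; running it requires a CUDA-capable GPU and downloads the model weights from Hugging Face on first use):

```python
# Build an LMDeploy pipeline around the chat model and run a batch of prompts.
from lmdeploy import pipeline

if __name__ == "__main__":
    pipe = pipeline("internlm/internlm2_5-7b-chat")
    responses = pipe(["Hello, please introduce yourself."])
    print(responses[0].text)
```

The pipeline accepts a list of prompts and returns one response object per prompt, so batched requests follow the same pattern.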