Upload folder using huggingface_hub
- .gitattributes +22 -0
- HuggingFaceTB_SmolLM3-3B.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_F16.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q2_K.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q2_K_L.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q2_K_XL.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q3_K_L.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q3_K_M.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q3_K_S.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q3_K_XL.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q3_K_XXL.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q4_K_L.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q4_K_M.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q4_K_S.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q4_K_XL.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q5_K_L.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q5_K_M.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q5_K_S.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q5_K_XL.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q6_K.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q6_K_L.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q6_K_XL.gguf +3 -0
- HuggingFaceTB_SmolLM3-3B_Q8_0.gguf +3 -0
- README.md +35 -0
.gitattributes
CHANGED
@@ -33,3 +33,25 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_F16.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q2_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q3_K_XXL.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q4_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q5_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q6_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+HuggingFaceTB_SmolLM3-3B_Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
HuggingFaceTB_SmolLM3-3B.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f985540178662bcc7641c66c15caec4be5b27a5055fa8305aee5037b51826a4a
+size 6158339904
HuggingFaceTB_SmolLM3-3B_F16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c4122ee94376ba2152d0b5f8d54ef03c4e494bd18000fabb4c7b549c678f790
+size 6158339904
HuggingFaceTB_SmolLM3-3B_Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35359d6823d3d43ace2bc9af0260cb99414129e74032f0877646377c57a97066
+size 1253302080
HuggingFaceTB_SmolLM3-3B_Q2_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05cee3808f46132d3daef25a9098eb5bd2b2abd351485c58502f98c3cde449f2
+size 1316917056
HuggingFaceTB_SmolLM3-3B_Q2_K_XL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b66e1dac41c7d8a42035285341a29509e0a5d99f7dc3ff9e19501beb1303b1f1
+size 1563168576
HuggingFaceTB_SmolLM3-3B_Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf65f227d1dff5476091b72814d0ce8a2a5919ca0e020be6631d8150a80c9867
+size 1690214208
HuggingFaceTB_SmolLM3-3B_Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1538818235ffaea9e6590276f191edaed630fe6382eaf30b366916b787a68916
+size 1571069760
HuggingFaceTB_SmolLM3-3B_Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa5a850bc24d304ce7c6caa5bbabee649e9c6de841b00140d34a649c5e609e87
+size 1432313664
HuggingFaceTB_SmolLM3-3B_Q3_K_XL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ec51a2095fbd255526d245c408cd49025395f130b2c42a5cdf8d73f3f21da77
+size 1753829184
HuggingFaceTB_SmolLM3-3B_Q3_K_XXL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f730bf479cf37259dae7abedcfbf0f8baf75b54e45c418c2f4fb269afbf11f9
+size 2000080704
HuggingFaceTB_SmolLM3-3B_Q4_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de03fa239f43bb3221a1317a3a74e550b52f805c929d3ac154d7594e2ca898e7
+size 1978920768
HuggingFaceTB_SmolLM3-3B_Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:327ec19dbef79f297963e37358c73191fbdda4c5473a2a1c5a9187db25d0047c
+size 1915305792
HuggingFaceTB_SmolLM3-3B_Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5624ab2225482553eacc00d830babaad5b2a46ca83fe860f05a60c30d32a259c
+size 1817616192
HuggingFaceTB_SmolLM3-3B_Q4_K_XL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63c87ac8bdbc201c971bc22ceff5074b418253a94658bae7609ca750b3bb18fa
+size 2225172288
HuggingFaceTB_SmolLM3-3B_Q5_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3defddaafb701d80dcbc83769a00ecac8e80f7976abc5ad89ed20a37fb1247db
+size 2277371712
HuggingFaceTB_SmolLM3-3B_Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48ef556a88826cca7f057cf6b871fec59525ed9fc3338054ca11012f90ba51ab
+size 2213756736
HuggingFaceTB_SmolLM3-3B_Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b4580fa34b64ecd68ad0bafb2f9ec35d6039f02df3d807ba21ade09c8127eec
+size 2157354816
HuggingFaceTB_SmolLM3-3B_Q5_K_XL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dadf6b19d5e3b7c5116fd62fd976e02958635e028abe1f4a1c5c5d0a50fadcda
+size 2523623232
HuggingFaceTB_SmolLM3-3B_Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:278c9c15e3568a45d740d3e026f8048d1a1396e6292ea03e645138ed9e361b44
+size 2530860864
HuggingFaceTB_SmolLM3-3B_Q6_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c20a97c6b622b3a8cec6d883f18a93b479b5ed3f9778111d6102961e0cda36f6
+size 2594475840
HuggingFaceTB_SmolLM3-3B_Q6_K_XL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f0b493b16158ef2427e092c72abd861ed3324e834d6021b686f70950592689f
+size 2840727360
HuggingFaceTB_SmolLM3-3B_Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f435a154a5100c049ef3c727fb09d30edb28c0b958232177710cc0d02845d310
+size 3275575104
README.md
ADDED
@@ -0,0 +1,35 @@
+---
+base_model:
+- HuggingFaceTB/SmolLM3-3B
+pipeline_tag: text-generation
+---
+
+|Quant|Size|Description|
+|---|---|---|
+|[Q2_K](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q2_K.gguf)|1.17 GB|Not recommended for most people. Very low quality.|
+|[Q2_K_L](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q2_K_L.gguf)|1.23 GB|Not recommended for most people. Uses Q8_0 for output and embedding, and Q2_K for everything else. Very low quality.|
+|[Q2_K_XL](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q2_K_XL.gguf)|1.46 GB|Not recommended for most people. Uses F16 for output and embedding, and Q2_K for everything else. Very low quality.|
+|[Q3_K_S](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q3_K_S.gguf)|1.33 GB|Not recommended for most people. Prefer any bigger Q3_K quantization. Low quality.|
+|[Q3_K_M](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q3_K_M.gguf)|1.46 GB|Not recommended for most people. Low quality.|
+|[Q3_K_L](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q3_K_L.gguf)|1.57 GB|Not recommended for most people. Low quality.|
+|[Q3_K_XL](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q3_K_XL.gguf)|1.63 GB|Not recommended for most people. Uses Q8_0 for output and embedding, and Q3_K_L for everything else. Low quality.|
+|[Q3_K_XXL](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q3_K_XXL.gguf)|1.86 GB|Not recommended for most people. Uses F16 for output and embedding, and Q3_K_L for everything else. Low quality.|
+|[Q4_K_S](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q4_K_S.gguf)|1.69 GB|Recommended. Slightly low quality.|
+|[Q4_K_M](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q4_K_M.gguf)|1.78 GB|Recommended. Decent quality for most use cases.|
+|[Q4_K_L](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q4_K_L.gguf)|1.84 GB|Recommended. Uses Q8_0 for output and embedding, and Q4_K_M for everything else. Decent quality.|
+|[Q4_K_XL](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q4_K_XL.gguf)|2.07 GB|Recommended. Uses F16 for output and embedding, and Q4_K_M for everything else. Decent quality.|
+|[Q5_K_S](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q5_K_S.gguf)|2.01 GB|Recommended. High quality.|
+|[Q5_K_M](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q5_K_M.gguf)|2.06 GB|Recommended. High quality.|
+|[Q5_K_L](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q5_K_L.gguf)|2.12 GB|Recommended. Uses Q8_0 for output and embedding, and Q5_K_M for everything else. High quality.|
+|[Q5_K_XL](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q5_K_XL.gguf)|2.35 GB|Recommended. Uses F16 for output and embedding, and Q5_K_M for everything else. High quality.|
+|[Q6_K](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q6_K.gguf)|2.36 GB|Recommended. Very high quality.|
+|[Q6_K_L](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q6_K_L.gguf)|2.42 GB|Recommended. Uses Q8_0 for output and embedding, and Q6_K for everything else. Very high quality.|
+|[Q6_K_XL](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q6_K_XL.gguf)|2.65 GB|Recommended. Uses F16 for output and embedding, and Q6_K for everything else. Very high quality.|
+|[Q8_0](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_Q8_0.gguf)|3.05 GB|Recommended. Quality almost like F16.|
+|[F16](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/output/HuggingFaceTB_SmolLM3-3B-GGUF/HuggingFaceTB_SmolLM3-3B_F16.gguf)|5.74 GB|Not recommended. Overkill. Prefer Q8_0.|
+|[ORIGINAL (BF16)](https://huggingface.co/Alcoft/HuggingFaceTB_SmolLM3-3B-GGUF/resolve/main/HuggingFaceTB_SmolLM3-3B.gguf)|5.74 GB|Not recommended. Overkill. Prefer Q8_0.|
+
+---
+
+Quantized using [TAO71-AI AutoQuantizer](https://github.com/TAO71-AI/AutoQuantizer).
+You can check out the original model template [here](https://huggingface.co/HuggingFaceTB/SmolLM3-3B).
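As a practical way to use the table above, here is a small Python sketch that picks the largest quantization fitting a given memory budget. The names and file sizes come from the README table (a representative subset); `pick_quant` is a hypothetical helper written for illustration, not part of this repository or any library.

```python
from typing import Optional

# (quant name, file size in GB), taken from the README table above,
# ordered smallest to largest.
QUANTS = [
    ("Q2_K", 1.17),
    ("Q3_K_M", 1.46),
    ("Q4_K_M", 1.78),
    ("Q5_K_M", 2.06),
    ("Q6_K", 2.36),
    ("Q8_0", 3.05),
]

def pick_quant(budget_gb: float) -> Optional[str]:
    """Return the largest quant whose file fits within budget_gb, or None."""
    fitting = [name for name, size in QUANTS if size <= budget_gb]
    return fitting[-1] if fitting else None

print(pick_quant(2.0))  # Q4_K_M
print(pick_quant(4.0))  # Q8_0
```

The chosen file can then be fetched with `huggingface_hub`'s `hf_hub_download` (or any of the direct links in the table) and loaded with a GGUF-capable runtime such as llama.cpp.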