Commit History
gguf : enforce that tensor names are unique (llama/6905) 22e446d
Xuan Son Nguyen slaren committed
gguf : fix mismatch between alloc and free functions (llama/6929) d8fb433
slaren committed
Merge pull request from GHSA-p5mv-gjc5-mwqv 72b368d
ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (llama/6906) f900de6
llamafile : improve sgemm.cpp (llama/6796) bfe2a5f
Justine Tunney committed
ggml : group all experts in a single ggml_mul_mat_id (llama/6505) f0b5c67
ggml : fix llamafile sgemm wdata offsets (llama/6710) 5e756db
ggml : add llamafile sgemm (llama/6414) 093eec4
Justine Tunney committed
metal : unify mul_mv_id kernels (llama/6556) e9910b5
slaren committed
llama : add gguf_remove_key + remove split meta during quantize (llama/6591) 1706870
jiez z5269887 committed
feat: implemented sigmoid function (ggml/806) cd0c122
Justina Cho committed
llama : add Command R Plus support (llama/6491) 8cf7097 unverified
ggml : mul_mat_id use the same tensor for all the experts (llama/6387) 26fdc9f unverified
Vulkan k-quant mmq and ggml-backend offload functionality (llama/6155) 1ff7b08 unverified
ggml : fix bounds checking of zero size views (llama/6347) 80db462 unverified
slaren committed
sync : ggml (#2001) cbbfa9e unverified
ggml, ci : Windows ARM runner and build fixes (llama/5979) 507b9dd unverified
Michael Podvitskiy committed
ggml : remove old quantization functions (llama/5942) 11a2545 unverified
llama : support Mamba Selective State Space Models (llama/5328) 224fbc2 unverified
compilade committed
ggml : use SYS_get_cpu if SYS_getcpu is not defined (llama/5906) 909dbdc unverified
ggml : fix unknown status (llama/0) 394e5d8 unverified
ggml : introduce ggml_status (ggml/750) 151c676 unverified
ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (llama/5760) 9a07f42 unverified
IQ4_XS: a 4.25 bpw quantization (llama/5747) 0ee1bfb unverified
add google magika inference example (ggml/748) 10ac4bb unverified
slaren committed
code : normalize enum names (llama/5697) 93e0830 unverified
IQ3_S: a much better alternative to Q3_K (llama/5676) 32589c9 unverified
Introduce backend GUIDs (ggml/743) a7eb9f6 unverified
UEXTM.com slaren committed
ggml : always define ggml_fp16_t as uint16_t (llama/5666) bc567d3 unverified
sync : llama.cpp (ggml/0) f8e8d34 unverified
Allow for Vulkan build with Accelerate. 7d255ac unverified
ggml : compute forward no longer pass src tensors (ggml/729) 4e31c82 unverified
Siddharth Ramakrishnan siddharthvader committed
ggml : fix conv_2d batch mode (ggml/737) 99ece5c unverified
ggml : android and old glibc NUMA incompatibility bugfixes (llama/5557) 0206c2d unverified
ggml, common, examples, tests : fixed type arguments in printf (llama/5528) 2f3a004 unverified
1.5 bit quantization (llama/5453) 9c3aa6a unverified
ggml : add ALiBi support for ggml_soft_max_ext (llama/5488) 26c019a unverified
ggml : add numa options (llama/5377) 7c952d2 unverified
ggml : add mmla kernels for quantized GEMM (llama/4966) 0d50a29 unverified
snadampal committed
ggml-alloc : v3 (ggml/727) 5cffd6f unverified
slaren committed
Basic Vulkan Multi-GPU implementation (llama/5321) 5d130aa unverified
ggml : avoid duplicating function calls using MIN/MAX macros (llama/5325) 9bb2b0a unverified
llava : add MobileVLM support (llama/5132) f17a416 unverified
JidongZhang-THU slaren committed
ggml : limit n_threads to the max n_tasks (llama/5238) 2645c33 unverified
slaren committed