Commit History
ggml : try fix ppc64 (#0) df78c25
ggml : restore sigmoid decl order (ggml/0) 67c5387
ggml : resolve merge (ggml/0) d692b06
ggml : full ALiBi support (llama/7192) 192bda4
ggml : introduce bfloat16 support (llama/6412) 81ec961
Justine Tunney committed on
gguf-split: add --no-tensor-first-split (llama/7072) b9bc04d
Xuan Son Nguyen committed on
ggml : add Flash Attention (llama/5021) 34d3b03
gguf : enforce that tensor names are unique (llama/6905) 22e446d
Xuan Son Nguyen, slaren committed on
gguf : fix mismatch between alloc and free functions (llama/6929) d8fb433
slaren committed on
Merge pull request from GHSA-p5mv-gjc5-mwqv 72b368d
ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (llama/6906) f900de6
llamafile : improve sgemm.cpp (llama/6796) bfe2a5f
Justine Tunney committed on
ggml : group all experts in a single ggml_mul_mat_id (llama/6505) f0b5c67
ggml : fix llamafile sgemm wdata offsets (llama/6710) 5e756db
ggml : add llamafile sgemm (llama/6414) 093eec4
Justine Tunney committed on
metal : unify mul_mv_id kernels (llama/6556) e9910b5
slaren committed on
llama : add gguf_remove_key + remove split meta during quantize (llama/6591) 1706870
jiez (z5269887) committed on
feat: implemented sigmoid function (ggml/806) cd0c122
Justina Cho committed on
llama : add Command R Plus support (llama/6491) 8cf7097 unverified
ggml : mul_mat_id use the same tensor for all the experts (llama/6387) 26fdc9f unverified
Vulkan k-quant mmq and ggml-backend offload functionality (llama/6155) 1ff7b08 unverified
ggml : fix bounds checking of zero size views (llama/6347) 80db462 unverified
slaren committed on
sync : ggml (#2001) cbbfa9e unverified
ggml, ci : Windows ARM runner and build fixes (llama/5979) 507b9dd unverified
Michael Podvitskiy committed on
ggml : remove old quantization functions (llama/5942) 11a2545 unverified
llama : support Mamba Selective State Space Models (llama/5328) 224fbc2 unverified
compilade committed on
ggml : use SYS_get_cpu if SYS_getcpu is not defined (llama/5906) 909dbdc unverified
ggml : fix unknown status (llama/0) 394e5d8 unverified
ggml : introduce ggml_status (ggml/750) 151c676 unverified
ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (llama/5760) 9a07f42 unverified
IQ4_XS: a 4.25 bpw quantization (llama/5747) 0ee1bfb unverified
add google magika inference example (ggml/748) 10ac4bb unverified
slaren committed on
code : normalize enum names (llama/5697) 93e0830 unverified
IQ3_S: a much better alternative to Q3_K (llama/5676) 32589c9 unverified
Introduce backend GUIDs (ggml/743) a7eb9f6 unverified
UEXTM.com, slaren committed on
ggml : always define ggml_fp16_t as uint16_t (llama/5666) bc567d3 unverified
sync : llama.cpp (ggml/0) f8e8d34 unverified
Allow for Vulkan build with Accelerate. 7d255ac unverified
ggml : compute forward no longer pass src tensors (ggml/729) 4e31c82 unverified
Siddharth Ramakrishnan (siddharthvader) committed on