ggml : add predefined list of CPU backend variants to build (llama/10626) [1794b43] Diego Devesa, committed on Dec 4, 2024
ggml : add support for dynamic loading of backends (llama/10469) [b73266f] Diego Devesa, ggerganov, committed on Nov 25, 2024
Add required ggml-base and backend libs to cmake pkg (llama/10407) [8fdd994] bandoti, committed on Nov 19, 2024
sycl : Add option to set the SYCL architecture for all targets (llama/10266) [0d836df] Romain Biessy, committed on Nov 19, 2024
CUDA: remove DMMV, consolidate F16 mult mat vec (llama/10318) [e446f60] JohannesGaessler, committed on Nov 17, 2024
backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (llama/9921) [3541ee8] Charles Xu, Diego Devesa, committed on Nov 15, 2024
ggml : build backends as libraries (llama/10256) [3dc93f3] Diego Devesa, ggerganov, R0CKSTAR, committed on Nov 14, 2024
metal : opt-in compile flag for BF16 (llama/10218) [5f667d1] ggerganov, committed on Nov 8, 2024
ggml : add ggml-cpu.h to the public headers (llama/10204) [936a35f] Diego Devesa, committed on Nov 7, 2024
cmake : do not hide GGML options + rename option (llama/9465) [8c32d36] ggerganov, committed on Sep 16, 2024
cmake : remove unused option GGML_CURL (llama/9011) [12634fc] ggerganov, committed on Aug 14, 2024
ggml : move sgemm sources to llamafile subfolder (llama/8394) [1554348] ggerganov, committed on Jul 10, 2024
cmake : only enable GGML_NATIVE and x86 flags if not crosscompiling (ggml/885) [0456299] stanimirovb, committed on Jul 12, 2024
ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CUDA_FORCE_CUBLAS (cmake) (llama/8140) [e83fdad] slaren, committed on Jun 26, 2024
whisper : reorganize source code + improve CMake (#2256) [f75c2e3] ggerganov, committed on Jun 26, 2024