whisper.cpp / ggml

Commit History

ggml : sync sycl (skip) (#0)
bf6ccee

ggerganov committed on

ggml : remove unnecessary UNUSED macro call (ggml/880)
ab9a7d0

danbev committed on

cmake : add GGML_BUILD and GGML_SHARED macro definitions (llama/8281)
a8f9bda

KafuuChino committed on

Enabled more data types for oneMKL gemm_batch (llama/8236)
08501f8

Ouadie EL FAROUKI committed on

CUDA: MMQ support for iq4_nl, iq4_xs (llama/8278)
8411e3c

JohannesGaessler committed on

CUDA: revert part of the RDNA1 optimizations (llama/8309)
fcd0c52

Daniele committed on

CUDA: fix MMQ stream-k rounding if ne00 % 128 != 0 (llama/8311)
04d4209

JohannesGaessler committed on

Fix WARP_SIZE=16 bug of Intel GPU (llama/8266)
1ce11e2

KevinLy committed on

Replace get_work_group_size() with a local cache for performance (llama/8286)
08fd758

Neo Zhang Jianyu (arthw) committed on

Define and optimize RDNA1 (llama/8085)
6aa5a89

Daniele committed on

fix typo (llama/8267)
0c9c7c8

Judd committed on

Remove multiple newlines at the end of files that are breaking the editorconfig step of CI. (llama/8258)
cc49462

HanClinto committed on

cuda : update supports_op for matrix multiplication (llama/8245)
2314334

slaren committed on

Fix win build conflict of math library (llama/8230)
5a33963

KevinLy committed on

Fix the sub group size of Intel (llama/8106)
2dd429e

KevinLy committed on

CUDA: refactor and optimize IQ MMVQ (llama/8215)
afa1447

JohannesGaessler committed on

Update SYCL-Rope op and Refactor (llama/8157)
06acee2

zhentaoyu committed on

CUDA: fix MMQ stream-k for --split-mode row (llama/8167)
ef3d018

JohannesGaessler committed on

feat: cuda implementation for `ggml_conv_transpose_1d` (ggml/854)
025493b

John Balis and slaren committed on

ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CUDA_FORCE_CUBLAS (cmake) (llama/8140)
e83fdad

slaren committed on

whisper : reorganize source code + improve CMake (#2256)
f75c2e3

ggerganov committed on