This is the repository card for kernels-community/sage-attention, which has been pushed to the Hub. It was built to be used with the kernels library. This card was automatically generated.
## How to use

```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/sage-attention")
per_block_int8 = kernel_module.per_block_int8
per_block_int8(...)
```
## Available functions

- `per_block_int8`
- `per_warp_int8`
- `sub_mean`
- `per_channel_fp8`
- `sageattn`
- `sageattn3_blackwell`
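The card does not document these functions' signatures or semantics. As a rough illustration of what per-block INT8 quantization generally means (an interpretation of the `per_block_int8` name, not the kernel's actual implementation, which runs on GPU tensors and may block and scale differently), here is a minimal NumPy sketch: each fixed-size block of the input is quantized to int8 with its own scale.

```python
import numpy as np

def per_block_int8_reference(x, block_size=64):
    # Illustrative CPU reference only: quantize each contiguous block
    # of a 1-D array to int8 with a per-block absmax scale. The real
    # kernel's blocking, layout, and signature may differ.
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(blocks / scales), -127, 127).astype(np.int8)
    return q, scales.squeeze(1)

def dequantize(q, scales):
    # Recover an approximation of the original values.
    return (q.astype(np.float32) * scales[:, None]).reshape(-1)

x = np.random.randn(256).astype(np.float32)
q, s = per_block_int8_reference(x)
x_hat = dequantize(q, s)[: len(x)]
```

The per-block scales keep quantization error proportional to each block's local magnitude rather than the global maximum, which is the usual motivation for block-wise INT8 schemes in attention kernels.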
## Benchmarks

No benchmarks are available yet.
License: apache-2.0
## Supported hardware

- CUDA compute capability: 8.0, 9.0a
- OS: linux
- Arch: x86_64, aarch64