# ScatterMoE LoRA
Standalone Hugging Face Kernel Hub package for Axolotl's ScatterMoE LoRA Triton kernels.
This package is generated from Axolotl's in-tree scattermoe_lora sources and is exported as a universal kernel because the implementation is Python/Triton rather than a precompiled C++/CUDA extension.
```python
from kernels import get_kernel

scattermoe_lora = get_kernel("axolotl-ai-co/scattermoe-lora")
```
Export metadata:

- source package: `src/axolotl/integrations/kernels/libs/scattermoe_lora`
- source revision: `9a0d3016d`
- source content hash: `7c6d1b2873`
- torch custom op namespace: `_scattermoe_lora_7c6d1b2873`
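The content-hash suffix in the custom op namespace keeps distinct exports of the same package from colliding when each registers torch custom ops. A minimal sketch of how such a namespace string could be derived; the helper name and scheme here are assumptions for illustration, not the actual export tooling:

```python
# Hypothetical helper: build a torch custom op namespace from the package
# name and its source content hash, mirroring the metadata shown above.
def op_namespace(package: str, content_hash: str) -> str:
    # Leading underscore plus the content hash means two exports of the
    # same package register ops under different, non-clashing namespaces.
    return f"_{package}_{content_hash}"

ns = op_namespace("scattermoe_lora", "7c6d1b2873")
print(ns)  # -> _scattermoe_lora_7c6d1b2873
```

Ops registered under that namespace would then be reachable as `torch.ops._scattermoe_lora_7c6d1b2873.<op_name>`.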
The generated `build/torch-universal/scattermoe_lora` directory is the shippable Hub artifact. `torch-ext/scattermoe_lora` is included so that kernel-builder's build-and-copy step can regenerate the universal build tree if desired.
License: Apache-2.0