torch
transformers>=4.45
accelerate>=0.34
bitsandbytes>=0.45
spaces
sentencepiece
# Note: installing flash-linear-attention + causal-conv1d would let Qwen3.6's
# GatedDeltaNet layers take the fused fast path (~5-10x faster), but the
# HF Spaces build container has no nvcc, so those packages cannot be built
# there. The torch fallback is fast enough at inference sequence lengths; the
# "fast path not available" log line at startup is benign.
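#
# Sketch of the optional extras, assuming you control the build image (for
# example, a custom Dockerfile based on a CUDA "devel" base that ships nvcc).
# This is not verified on Spaces; uncomment only in an nvcc-capable
# environment to enable the fast path described above:
# flash-linear-attention
# causal-conv1d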