# DPT Large – BonfyreFPQ v12

Native `.fpq` v12 quantized weights for `Intel/dpt-large`.

## Model Info

| Property | Value |
|---|---|
| Architecture | Monocular Depth Estimation |
| Parameters | 344M |
| Original Size | 1304 MB (safetensors) |
| FPQ v12 Size | 329 MB |
| Compression | 4.0× |
| Avg Cosine | 0.999742 |
| Worst Cosine | 0.998460 |
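As a quick sanity check, the compression ratio in the table follows directly from the two sizes above:

```python
original_mb = 1304  # safetensors size from the table
fpq_mb = 329        # FPQ v12 size from the table

ratio = original_mb / fpq_mb
print(f"{ratio:.1f}x")  # prints "4.0x"
```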

## Format

BonfyreFPQ v12 native format: rANS entropy-coded E8 lattice coordinates with 6-bit packed tiles and FP16 scales.
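The FPQ v12 container layout is not documented here, but the tile scheme described above (6-bit packed indices into an E8 lattice codebook, with one FP16 scale per tile) can be sketched in NumPy. Everything below is an illustrative assumption, not the actual v12 bitstream: the packing order, the tile length, the codebook shape, and the function names (`pack6`, `unpack6`, `dequant_tile`) are all hypothetical.

```python
import numpy as np

def pack6(vals: np.ndarray) -> bytes:
    # Pack 6-bit integers (0..63) into bytes: four values fit in three
    # bytes. Tile length is assumed to be a multiple of 4 (hypothetical).
    vals = vals.astype(np.uint32).reshape(-1, 4)
    words = vals[:, 0] | (vals[:, 1] << 6) | (vals[:, 2] << 12) | (vals[:, 3] << 18)
    out = bytearray()
    for w in words:
        out += bytes([int(w) & 0xFF, (int(w) >> 8) & 0xFF, (int(w) >> 16) & 0xFF])
    return bytes(out)

def unpack6(buf: bytes, n: int) -> np.ndarray:
    # Inverse of pack6: recover the first n 6-bit indices.
    raw = np.frombuffer(buf, dtype=np.uint8).reshape(-1, 3).astype(np.uint32)
    words = raw[:, 0] | (raw[:, 1] << 8) | (raw[:, 2] << 16)
    vals = np.stack(
        [words & 63, (words >> 6) & 63, (words >> 12) & 63, (words >> 18) & 63],
        axis=1,
    )
    return vals.reshape(-1)[:n].astype(np.uint8)

def dequant_tile(packed: bytes, n: int, codebook: np.ndarray,
                 scale: np.float16) -> np.ndarray:
    # Dequantize one tile: each 6-bit index selects a (hypothetical)
    # 8-dimensional E8 lattice point; the tile's FP16 scale restores
    # the original magnitude.
    idx = unpack6(packed, n)
    return codebook[idx] * np.float32(scale)
```

In the real format these tiles would additionally pass through the rANS entropy coder before hitting disk; the sketch covers only the lattice/scale stage.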

## Decode to safetensors

```shell
bonfyre-fpqx decode dpt-large-v12-fpq.fpq output.safetensors
```

## Source

Converted from `Intel/dpt-large` using BonfyreFPQ v12.
