How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ddh0/L3.3-Electra-R1-70b-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly (text-generation models use the causal-LM class)
from transformers import AutoModelForCausalLM

# Note: this repo contains GGUF quantizations. Recent Transformers versions
# can load a specific quant by also passing
# gguf_file="<one of the .gguf files in this repo>" to from_pretrained.
model = AutoModelForCausalLM.from_pretrained("ddh0/L3.3-Electra-R1-70b-GGUF", dtype="auto")
Quick Links

Steelskull/L3.3-Electra-R1-70b-GGUF

This repo provides several GGUF imatrix quantizations of Steelskull/L3.3-Electra-R1-70b.
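An importance matrix ("imatrix") records activation statistics gathered from calibration data so the quantizer can weight rounding error by how much each weight actually influences the model's outputs. The toy sketch below only illustrates that idea; it is not llama.cpp's actual imatrix algorithm, and all names and numbers are invented for illustration:

```python
# Toy illustration of importance-weighted quantization.
# With a fixed integer code range (a "bit budget"), a fine scale represents
# small weights well but clips large ones, and vice versa. Importance weights
# change which trade-off wins.

def quantize(weights, scale, qmin=-2, qmax=1):
    """Round each weight to an integer code in [qmin, qmax], then rescale."""
    return [max(qmin, min(qmax, round(w / scale))) * scale for w in weights]

def weighted_error(weights, quantized, importance):
    """Squared reconstruction error, weighted per-element by importance."""
    return sum(imp * (w - q) ** 2 for w, q, imp in zip(weights, quantized, importance))

def best_scale(weights, importance, candidates):
    """Pick the candidate scale with the lowest importance-weighted error."""
    return min(candidates, key=lambda s: weighted_error(weights, quantize(weights, s), importance))

weights    = [0.11, -0.12, 0.98, 0.07]
uniform    = [1.0, 1.0, 1.0, 1.0]    # no calibration data: every weight equal
calibrated = [1.0, 1.0, 0.01, 1.0]   # calibration says the large weight rarely matters
candidates = [0.1, 0.5]

print(best_scale(weights, uniform, candidates))     # coarse scale, protects the outlier
print(best_scale(weights, calibrated, candidates))  # fine scale, protects the small weights
```

With uniform importance the coarse scale wins because clipping the 0.98 outlier dominates the error; with calibrated importance the fine scale wins because the small, frequently-used weights matter more.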

Quantizations (worst to best)

  • IQ2_M
  • IQ3_XS
  • IQ3_M
  • Q4_K_S
  • IQ4_XS
  • Q4_K_M
  • Q5_K_S
  • Q5_K_M
  • Q6_K
  • Q8_0

The imatrix was generated using the same calibration data as Bartowski; both the calibration data and the imatrix itself are provided in this repo.

Downloads last month: 1,482
Format: GGUF
Model size: 71B params
Architecture: llama
