---
tags:
- gguf
- quantized
- blasphemer
license: other
---

# Ministral-3-14B-AviationQA-SFT-f16-GGUF

This GGUF model was uploaded using Blasphemer.

## File Information

- **Filename:** Ministral-3-14B-AviationQA-SFT-f16.gguf
- **Size:** 25.17 GB
- **Upload Date:** 2025-12-10

## Usage

### With llama.cpp

```bash
# Download the model into the current directory
huggingface-cli download sunkencity/Ministral-3-14B-AviationQA-SFT-f16-GGUF Ministral-3-14B-AviationQA-SFT-f16.gguf --local-dir .

# Run with llama.cpp (recent builds name the binary llama-cli instead of main)
./llama.cpp/main -m Ministral-3-14B-AviationQA-SFT-f16.gguf -p "Your prompt here"
```
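If you prefer to fetch the file from Python instead of the CLI, the `huggingface_hub` library exposes the same download logic. This is a minimal sketch (the repo ID and filename come from this card; `local_dir` is your choice):

```python
# Sketch: download the GGUF file with huggingface_hub instead of huggingface-cli.
REPO_ID = "sunkencity/Ministral-3-14B-AviationQA-SFT-f16-GGUF"
FILENAME = "Ministral-3-14B-AviationQA-SFT-f16.gguf"


def download_model(local_dir: str = ".") -> str:
    """Download the GGUF file (~25 GB) and return its local path."""
    # Deferred import so the module loads even without huggingface_hub installed.
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    return hf_hub_download(repo_id=REPO_ID, filename=FILENAME, local_dir=local_dir)
```

Call `download_model()` once; subsequent calls reuse the cached file.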

### With LM Studio

1. Open LM Studio
2. Go to "Download" or search for `sunkencity/Ministral-3-14B-AviationQA-SFT-f16-GGUF`
3. Download and load the model
4. Start chatting!

### With Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the GGUF model (an f16 14B model needs roughly 25 GB of RAM/VRAM)
llm = Llama(model_path="Ministral-3-14B-AviationQA-SFT-f16.gguf")
output = llm("Your prompt here", max_tokens=100)
print(output["choices"][0]["text"])
```
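The completion object returned by `llama-cpp-python` follows an OpenAI-style schema, which is why the text lives under `choices[0]["text"]`. A small helper (hypothetical, shown here against a mocked response of that shape) makes the extraction explicit:

```python
def completion_text(response: dict) -> str:
    """Return the generated text from an OpenAI-style completion response."""
    return response["choices"][0]["text"]


# Mocked response illustrating the shape llama-cpp-python returns;
# field values here are placeholders, not real model output.
mock_response = {
    "id": "cmpl-xyz",
    "object": "text_completion",
    "choices": [{"text": "Lift is generated by...", "index": 0, "finish_reason": "length"}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 100},
}

print(completion_text(mock_response))
```

The same helper works on the `output` dict from the snippet above.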

## Citation

If you use this model, please cite:

```bibtex
@software{blasphemer2025,
  author = {Bradford, Christopher},
  title = {Blasphemer: Advanced Model Modification Toolkit},
  year = {2025},
  url = {https://github.com/sunkencity999/blasphemer}
}
```
## Model Details

- **Format:** GGUF
- **Architecture:** llama
- **Parameters:** 14B
- **Precision:** 16-bit (f16)
