How to use from Lemonade
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull dumbequation/Qwen2.5-7B-GRPO-1M-Context-Medical-Reasoning-f16-GGUF:F16
Run and chat with the model
lemonade run user.Qwen2.5-7B-GRPO-1M-Context-Medical-Reasoning-f16-GGUF-F16
List all available models
lemonade list
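Beyond the interactive chat above, you can also talk to the model programmatically. The sketch below builds an OpenAI-style chat-completions payload using the model name from the run command and the recommended system prompt; it is a minimal sketch that assumes Lemonade Server exposes an OpenAI-compatible endpoint on localhost (the base URL and path here are assumptions, so check your local server's documentation before sending the request).

```python
import json

# Assumed default for a local Lemonade Server OpenAI-compatible endpoint;
# verify the actual host, port, and path in your server's documentation.
BASE_URL = "http://localhost:8000/api/v1/chat/completions"

# Recommended system prompt from this model card.
SYSTEM_PROMPT = (
    "Respond in the following format:\n"
    "<reasoning>\n...\n</reasoning>\n"
    "<answer>\n...\n</answer>"
)

def build_chat_request(question: str) -> dict:
    """Build an OpenAI-style chat-completions payload for this model."""
    return {
        "model": "user.Qwen2.5-7B-GRPO-1M-Context-Medical-Reasoning-f16-GGUF-F16",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    }

# Build (but do not send) a request; POST this JSON to BASE_URL to chat.
payload = build_chat_request("A 45-year-old presents with chest pain. Differential?")
print(json.dumps(payload, indent=2))
```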

Qwen2.5 7B trained to think and reason like DeepSeek R1, specifically on diagnostic medicine.

Use it to aid your differential diagnosis, ask medical questions, or simply test its reasoning.

Use the system prompt below for better results:

Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
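Because the system prompt above makes the model wrap its output in `<reasoning>` and `<answer>` tags, the final answer can be pulled out with a small parser. A minimal sketch (the sample response text is illustrative, not real model output):

```python
import re

def parse_response(text: str) -> dict:
    """Extract the <reasoning> and <answer> blocks from a model completion.

    Returns None for a block that is missing or malformed.
    """
    out = {}
    for tag in ("reasoning", "answer"):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        out[tag] = m.group(1).strip() if m else None
    return out

# Illustrative completion in the expected format.
sample = (
    "<reasoning>\nChest pain with exertion suggests a cardiac cause.\n</reasoning>\n"
    "<answer>\nConsider stable angina; order an ECG and troponin.\n</answer>"
)
parsed = parse_response(sample)
print(parsed["answer"])
```

Using non-greedy matches with `re.DOTALL` keeps the parser tolerant of multi-line reasoning while stopping at the first closing tag.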

Uploaded model

  • Developed by: dumbequation
  • License: apache-2.0
  • Finetuned from model: Qwen/Qwen2.5-7B-Instruct-1M
  • Downloads last month: 34
  • Format: GGUF
  • Model size: 8B params
  • Architecture: qwen2
  • Quantization: 16-bit


Model tree for dumbequation/Qwen2.5-7B-GRPO-1M-Context-Medical-Reasoning-f16-GGUF

  • Base model: Qwen/Qwen2.5-7B
  • Quantized: 67 models, including this one
