Instructions to use Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed")
model = AutoModelForImageTextToText.from_pretrained("Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
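If you prefer Python over curl, the same endpoint can be called with the official openai client. A minimal sketch, assuming the vLLM server above is running on localhost:8000 (the api_key value is a placeholder; vLLM does not check it unless you start the server with an API key):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key
response = client.chat.completions.create(
    model="Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)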
- SGLang
How to use Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
- Docker Model Runner
How to use Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed with Docker Model Runner:
docker model run hf.co/Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed
Qwen2.5-VL-7B-Captioner-Relaxed
Introduction
Qwen2.5-VL-7B-Captioner-Relaxed is an instruction-tuned version of Qwen/Qwen2.5-VL-7B-Instruct, an advanced multimodal large language model. It is an updated version of Ertugrul/Qwen2-VL-7B-Captioner-Relaxed, retrained on the improved Qwen2.5 base model. The fine-tune uses a hand-curated dataset built for text-to-image models and produces significantly more detailed descriptions of given images.
Key Features:
- Enhanced Detail: Generates more comprehensive and nuanced image descriptions.
- Relaxed Constraints: Offers less restrictive image descriptions compared to the base model.
- Natural Language Output: Describes different subjects in the image while specifying their locations using natural language.
- Optimized for Image Generation: Produces captions in formats compatible with state-of-the-art text-to-image generation models.
- Improved Base Model: Leverages the advancements of Qwen2.5, potentially leading to better overall performance and understanding.
Note: This fine-tuned model is optimized for creating text-to-image datasets. As a result, performance on other tasks may be lower than that of the original model.
Requirements
If you encounter errors such as KeyError: 'qwen2_5_vl' or ImportError: cannot import name 'Qwen2_5_VLForConditionalGeneration' from 'transformers', try installing the latest version of the transformers library from source:
pip install git+https://github.com/huggingface/transformers accelerate
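To confirm the install took effect, you can print the installed version; a minimal check (the 4.49.0 threshold in the comment is my assumption about when Qwen2.5-VL support landed, not something stated in this card):

import transformers
# Qwen2.5-VL support requires a recent release (assumed 4.49.0+) or a source/dev build
print(transformers.__version__)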
Quickstart
import torch
from PIL import Image
from transformers import (
AutoModelForImageTextToText,
AutoProcessor
)
model_id = "Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed"
image_path = "path/to/your/image.jpg"
# the model requires more than 16GB of VRAM;
# if you don't have that much, you can use bitsandbytes to quantize the model to 8-bit or 4-bit
model = AutoModelForImageTextToText.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2", # use "flash_attention_2" on Ampere or newer GPUs, "eager" on older GPUs
)
#### For lower precision (less than 12GB VRAM) ####
# Configure 4-bit quantization using BitsAndBytesConfig
#from transformers import BitsAndBytesConfig
# quantization_config = BitsAndBytesConfig(
# load_in_4bit=True,
# bnb_4bit_use_double_quant=True,
# bnb_4bit_quant_type="nf4",
# bnb_4bit_compute_dtype=torch.bfloat16,
# bnb_4bit_quant_storage=torch.bfloat16,
# )
# model = AutoModelForImageTextToText.from_pretrained(
# model_id,
# device_map="auto",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2", # use "flash_attention_2" on Ampere or newer GPUs, "eager" on older GPUs
# quantization_config=quantization_config, # Use BitsAndBytesConfig instead of load_in_4bit
# )
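# If 4-bit is more aggressive than you need, an 8-bit variant is a one-line
# config change (a sketch, not from the original card; assumes bitsandbytes
# is installed):
# quantization_config = BitsAndBytesConfig(load_in_8bit=True)
# model = AutoModelForImageTextToText.from_pretrained(
#     model_id,
#     device_map="auto",
#     quantization_config=quantization_config,
# )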
########################################################################
# you can change min_pixels and max_pixels to fit your needs, trading caption quality for lower compute cost
min_pixels = 256*28*28
max_pixels = 1280*28*28
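# With these values the processor resizes each image so it maps to between
# 256 and 1280 visual tokens (one token per 28x28 pixel block), following the
# Qwen2-VL/Qwen2.5-VL convention of bounding the total pixel count.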
processor = AutoProcessor.from_pretrained(model_id, max_pixels=max_pixels, min_pixels=min_pixels)
system_message = "You are an expert image describer."
def generate_description(path, model, processor):
image_inputs = Image.open(path).convert("RGB")
messages = [
{
"role": "system",
"content": [{"type": "text", "text": system_message}],
},
{
"role": "user",
"content": [
{"type": "text", "text": "Describe this image."},
{"type": "image", "image": image_inputs},
],
},
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(
text=[text],
images=image_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(model.device)
# min_p and temperature are experimental parameters; min_p=0.1 discards tokens whose
# probability is below 10% of the most likely token, which keeps the high temperature (1.5)
# from derailing the output. Change them to fit your needs.
generated_ids = model.generate(**inputs, max_new_tokens=512, min_p=0.1, do_sample=True, temperature=1.5)
generated_ids_trimmed = [out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
return output_text[0]
description = generate_description(image_path, model, processor)
print(description)
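Since the helper takes a file path, batch-captioning a folder is a small loop. A minimal sketch, where the ./images directory and the extension filter are illustrative assumptions, not part of the original card:

import os

image_dir = "./images"  # hypothetical folder of images to caption
for name in sorted(os.listdir(image_dir)):
    if name.lower().endswith((".jpg", ".jpeg", ".png", ".webp")):
        caption = generate_description(os.path.join(image_dir, name), model, processor)
        print(f"{name}: {caption}")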
Acknowledgements
For more detailed options, refer to the Qwen/Qwen2.5-VL-7B-Instruct documentation.