# SeqResize-Qwen2.5-VL-3B (ViDoRe)

This model uses SeqResize (sequence resizing) to compress multi-vector visual document representations for ColBERT-style late interaction retrieval. Model weights are initialized from Qwen2.5-VL-3B-Instruct and finetuned on the ColPali train set for text-to-visual-document retrieval with bidirectional attention.

SeqResize compresses ~1300 visual document token vectors into a fixed budget of 64 vectors (95.1% compression) through projection along the sequence dimension.
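The quoted compression ratio follows directly from the uncompressed baseline's token count (1297, per the results table below) and the 64-vector budget:

```python
# Compression ratio implied by the numbers in this card
uncompressed_tokens = 1297  # uncompressed baseline, see results table
budget = 64                 # fixed output budget

compression = 1 - budget / uncompressed_tokens
print(f"{compression:.1%}")  # 95.1%
```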


## Method Overview

SeqResize is a simple compression baseline: an MLP (or a single linear layer) projects the sequence dimension from a fixed input length to a fixed output length. Variable-length token sequences are trimmed or padded to the input length, then projected down to the target number of vectors for ColBERT-style MaxSim retrieval. It is not the main method of our paper; we include it as a baseline.
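The trim-or-pad step and sequence-dimension projection can be sketched as follows. This is a minimal illustration, not the repository's implementation (which lives in `src/encoder/resize_encoder.py`); the sizes mirror this model's config (1024 → 256 → 64, hidden dimension 2048), and the class and argument names here are hypothetical:

```python
import torch
import torch.nn as nn

class SeqResizeSketch(nn.Module):
    """Illustrative SeqResize layer: an MLP projecting the *sequence*
    dimension from a fixed input length to a fixed output length."""

    def __init__(self, input_len=1024, hidden=256, output_len=64):
        super().__init__()
        self.input_len = input_len
        self.mlp = nn.Sequential(
            nn.Linear(input_len, hidden),  # bottleneck over the sequence axis
            nn.GELU(),
            nn.Linear(hidden, output_len),
        )

    def forward(self, x):
        # x: (batch, seq_len, dim) with variable seq_len
        b, s, d = x.shape
        if s >= self.input_len:  # trim long sequences
            x = x[:, : self.input_len, :]
        else:                    # zero-pad short sequences
            pad = x.new_zeros(b, self.input_len - s, d)
            x = torch.cat([x, pad], dim=1)
        # Transpose so the Linear layers act on the sequence dimension
        return self.mlp(x.transpose(1, 2)).transpose(1, 2)  # (b, output_len, d)

tokens = torch.randn(1, 1297, 2048)      # ~1300 visual token vectors
compressed = SeqResizeSketch()(tokens)
print(compressed.shape)                   # torch.Size([1, 64, 2048])
```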

## Results on ViDoRe v2

| Method | Tokens | nDCG@5 (Avg) | Bio | Econ | ESG-R | ESG-H |
|---|---|---|---|---|---|---|
| ColPali | – | 53.3 | 56.5 | 49.9 | 55.7 | 51.1 |
| ColQwenOmni | – | 56.5 | 56.5 | 53.2 | 54.2 | 62.2 |
| MetaEmbed | 64 | 58.8 | 58.7 | 55.5 | 57.4 | 63.7 |
| Baseline (Ours, uncompressed) | 1297 | 60.0 | 61.4 | 53.9 | 57.0 | 67.6 |
| **SeqResize (this model)** | 64 | 51.7 | 54.7 | 53.5 | 45.2 | 53.5 |
| MemTok | 64 | 54.3 | 56.8 | 53.0 | 46.4 | 61.4 |
| H-Pool | 64 | 56.4 | 59.6 | 52.1 | 53.4 | 60.6 |
| AGC | 64 | 56.7 | 59.0 | 54.5 | 55.8 | 57.3 |

## Model Details

| Property | Value |
|---|---|
| Initial weights | Qwen2.5-VL-3B-Instruct |
| Architecture | Qwen2.5-VL with bidirectional attention |
| Hidden dimension | 2048 |
| Budget | 64 vectors per document |
| Compression method | SeqResize (learned sequence projection) |
| Resizer input size | 1024 (fixed sequence length before projection) |
| Resizer output size | 64 (budget) |
| Resizer hidden size | 256 (MLP bottleneck) |
| Scoring | ColBERT-style MaxSim (late interaction) |
| Normalization | L2-normalized embeddings |
| Query prefix | `"Query: "` |
| Passage prefix | `"Passage: "` |
| Precision | bfloat16 |
| Max image tokens | 1280 |
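The ColBERT-style MaxSim score listed above can be sketched in a few lines. The model's `compute_similarity` handles batching and masks internally; this illustration (function name hypothetical) assumes L2-normalized single-example embeddings, as produced by this model:

```python
import torch

def maxsim_score(q, d, q_mask=None):
    """MaxSim late interaction: for each query vector, take the maximum
    cosine similarity over all document vectors, then sum over query vectors.
    q: (num_q, dim), d: (num_d, dim), both L2-normalized."""
    sim = q @ d.T                        # (num_q, num_d) cosine similarities
    per_query = sim.max(dim=-1).values   # best-matching doc vector per query token
    if q_mask is not None:
        per_query = per_query * q_mask   # zero out padded query positions
    return per_query.sum()

q = torch.nn.functional.normalize(torch.randn(8, 2048), dim=-1)
d = torch.nn.functional.normalize(torch.randn(64, 2048), dim=-1)
print(maxsim_score(q, d))
```

Because the document side is a fixed 64 vectors, MaxSim over the compressed index costs ~5% of the uncompressed baseline's late-interaction compute.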

## Usage

**Important:** When loading this model, set `resizer_input_size`, `resizer_output_size`, and `resizer_hidden_size` to match the trained checkpoint (1024, 64, and 256 for this release). The `extra_encoder_state.safetensors` file must also be present in the model directory so that the sequence-resizer weights are loaded.

```python
import torch
from transformers import AutoProcessor
from qwen_vl_utils import process_vision_info

from src.arguments import ModelArguments
from src.encoder.resize_encoder import SequenceResizerEncoder
from src.models.qwen2_5_vl_embed.qwen2_5_vl_embed import Qwen2_5ForEmbedding

MODEL_ID = "hltcoe/SeqReSize_qwen2.5-vl_colpali"
IMAGE_PATH = "PLACEHOLDER"
RESIZER_INPUT_SIZE = 1024
RESIZER_OUTPUT_SIZE = 64
RESIZER_HIDDEN_SIZE = 256

# --- Setup ---
model_args = ModelArguments(
    model_name_or_path=MODEL_ID,
    pooling="resize",
    normalize=True,
    resizer_input_size=RESIZER_INPUT_SIZE,
    resizer_output_size=RESIZER_OUTPUT_SIZE,
    resizer_hidden_size=RESIZER_HIDDEN_SIZE,
    attn_implementation="flash_attention_2",
)

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = SequenceResizerEncoder.load(
    Qwen2_5ForEmbedding,
    model_args,
    attn_implementation=model_args.attn_implementation,
    dtype=torch.bfloat16,
)
model = model.to("cuda").eval()

# --- Encode an image document ---
passage_messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Passage: "},
            {"type": "image", "image": IMAGE_PATH, "max_pixels": 1003520, "min_pixels": 614656},
        ],
    }
]
text = processor.apply_chat_template(passage_messages, tokenize=False, add_generation_prompt=False)
image_inputs, video_inputs = process_vision_info(passage_messages)
passage_inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt",
).to("cuda")

with torch.amp.autocast(device_type="cuda", dtype=torch.bfloat16):
    with torch.inference_mode():
        doc_embeddings, doc_mask = model.encode(passage_inputs, is_query=False)
        print(doc_embeddings.shape)
        # doc_embeddings: (1, 64, 2048) — 64 compressed vectors

# --- Encode a text query ---
query_messages = [{"role": "user", "content": [{"type": "text", "text": "Query: What types of tissues are unable to regenerate spontaneously?"}]}]
query_text = processor.apply_chat_template(query_messages, tokenize=False, add_generation_prompt=False)
query_inputs = processor(text=[query_text], padding=True, return_tensors="pt").to("cuda")

with torch.amp.autocast(device_type="cuda", dtype=torch.bfloat16):
    with torch.inference_mode():
        query_embeddings, query_mask = model.encode(query_inputs, is_query=True)
        print(query_embeddings.shape)

# --- ColBERT MaxSim scoring ---
score = model.compute_similarity(query_embeddings, doc_embeddings, query_mask, doc_mask)
print(f"Similarity score: {score.item():.4f}")
```

### Command line usage

For running inference and evaluation from the command line, see the Quick Start section.

## Citation

```bibtex
@misc{qin2026multivectorindexcompressionmodality,
      title={Multi-Vector Index Compression in Any Modality},
      author={Hanxiang Qin and Alexander Martin and Rohan Jha and Chunsheng Zuo and Reno Kriz and Benjamin Van Durme},
      year={2026},
      eprint={2602.21202},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2602.21202},
}
```