---
license: apache-2.0
task_categories:
- visual-question-answering
- video-classification
language:
- en
viewer: false
configs:
- config_name: splits
  data_files:
  - split: eval
    path:
    - "video_tasks"
    - "image_tasks"
---

# MMEB-V2 (Massive Multimodal Embedding Benchmark)

Building upon our original [**MMEB**](https://arxiv.org/abs/2410.05160), **MMEB-V2** expands the evaluation scope with five new tasks: four video-based tasks (Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering) and one task focused on visual documents, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings.

**This Hugging Face repository contains only the raw image and video files used in MMEB-V2; they must be downloaded in advance.**
The test data for each task in MMEB-V2 is available [here](https://huggingface.co/VLM2Vec) and will be automatically downloaded and used by our code. More details on setting this up are provided in the following sections.

|[**Github**](https://github.com/TIGER-AI-Lab/VLM2Vec) | [**🏆Leaderboard**](https://huggingface.co/spaces/TIGER-Lab/MMEB) | [**📖MMEB-V2/VLM2Vec-V2 Paper (TBA)**](https://arxiv.org/abs/2410.05160) | [**📖MMEB-V1/VLM2Vec-V1 Paper**](https://arxiv.org/abs/2410.05160) |


## 🔥 What's New
- **\[2025.05\]** Initial release of MMEB-V2.


## Dataset Overview

We present an overview of the MMEB-V2 dataset below:

<img width="900" alt="abs" src="overview.png">


## Dataset Structure

The directory structure of this Hugging Face repository is shown below.
For video tasks, we provide both sampled frames and raw videos (the latter will be released later). For image tasks, we provide the raw images.
Files from each meta-task are zipped together, resulting in six archives. For example, ``video_cls.tar.gz`` contains the sampled frames for the video classification task.

```
├── video-tasks/
│   ├── frames/
│   │   ├── video_cls.tar.gz
│   │   ├── video_qa.tar.gz
│   │   ├── video_ret.tar.gz
│   │   └── video_mret.tar.gz
│   └── raw videos/ (To be released)
└── image-tasks/
    ├── mmeb_v1.tar.gz
    └── visdoc.tar.gz
```

After downloading and unzipping these files locally, you can organize them as shown below. (You may choose to use ``Git LFS`` or ``wget`` for downloading.)
Then, simply specify the correct file path in the configuration file used by your code.
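Unpacking can also be scripted. The sketch below is ours, not part of the official tooling: it maps each archive to the directory it unpacks under, matching the layout of this repository, and the ``MMEB`` root name is just an example.

```python
import tarfile
from pathlib import Path

ROOT = Path("MMEB")  # destination root; adjust to your setup

# Map each downloaded archive to the directory it unpacks under.
DESTINATIONS = {
    "video_cls.tar.gz": ROOT / "video-tasks" / "frames",
    "video_qa.tar.gz": ROOT / "video-tasks" / "frames",
    "video_ret.tar.gz": ROOT / "video-tasks" / "frames",
    "video_mret.tar.gz": ROOT / "video-tasks" / "frames",
    "mmeb_v1.tar.gz": ROOT / "image-tasks",
    "visdoc.tar.gz": ROOT / "image-tasks",
}

for archive, dest in DESTINATIONS.items():
    if Path(archive).exists():  # skip archives you have not downloaded yet
        dest.mkdir(parents=True, exist_ok=True)
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(dest)
```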

```
MMEB
├── video-tasks/
│   └── frames/
│       ├── video_cls/
│       │   ├── UCF101/
│       │   │   └── video_1/            # video ID
│       │   │       ├── frame1.png      # frame from video_1
│       │   │       ├── frame2.png
│       │   │       └── ...
│       │   ├── HMDB51/
│       │   ├── Breakfast/
│       │   └── ...                     # other datasets from video classification category
│       ├── video_qa/
│       │   └── ...                     # video QA datasets
│       ├── video_ret/
│       │   └── ...                     # video retrieval datasets
│       └── video_mret/
│           └── ...                     # moment retrieval datasets
└── image-tasks/
    ├── mmeb_v1/
    │   ├── OK-VQA/
    │   │   ├── image1.png
    │   │   ├── image2.png
    │   │   └── ...
    │   ├── ImageNet-1K/
    │   └── ...                         # other datasets from MMEB-V1 category
    └── visdoc/
        └── ...                         # visual document retrieval datasets
```
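With the files in place, the sampled frames for a given video can be located by path convention alone. A minimal sketch, where the helper name and the ``.png`` glob are our own assumptions rather than part of the official loader:

```python
from pathlib import Path

MMEB_ROOT = Path("MMEB")  # set this to the root you configured above

def frames_for_video(task: str, dataset: str, video_id: str) -> list[Path]:
    """Return the sampled frames for one video, in filename order.

    task: one of "video_cls", "video_qa", "video_ret", "video_mret".
    """
    frame_dir = MMEB_ROOT / "video-tasks" / "frames" / task / dataset / video_id
    return sorted(frame_dir.glob("*.png"))
```

For example, ``frames_for_video("video_cls", "UCF101", "video_1")`` would return that video's frames in filename order.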