# TGI-Bench

TGI-Bench is a benchmark for text-conditioned generative inbetweening. It provides image sequences together with text annotations for evaluating whether a model can generate intermediate frames that are both temporally plausible and aligned with natural-language descriptions.

The dataset includes four annotation files, one per sequence length:

- `25_dataset_caption.json`
- `33_dataset_caption.json`
- `65_dataset_caption.json`
- `81_dataset_caption.json`

Each sample is stored in its own folder, and each JSON file contains the annotations for the corresponding sequence length.

## Dataset Structure

```text
TGI-Dataset/
├── aerobatics/
│   ├── 00000.jpg
│   ├── 00001.jpg
│   └── ...
├── air_conditioner/
│   ├── 00000.jpg
│   ├── 00001.jpg
│   └── ...
├── ...
├── 25_dataset_caption.json
├── 33_dataset_caption.json
├── 65_dataset_caption.json
└── 81_dataset_caption.json
```

## Annotation Format

Each JSON file is a list of entries of the following form:

```json
{
  "folder": "aerobatics",
  "first_image_desc": "Man in cockpit of glider flying over patchwork fields.",
  "last_image_desc": "Man in cockpit of glider banking over fields and clouds.",
  "challenge": "Large motion",
  "caption": "Glider floats over fields with man holding control stick."
}
```

## Fields

- `folder`: name of the folder containing the image sequence
- `first_image_desc`: text description of the first frame
- `last_image_desc`: text description of the last frame
- `challenge`: challenge category (e.g. "Large motion")
- `caption`: text description of the whole sequence

## Usage

This dataset can be used for research on:

- text-conditioned generative inbetweening
- video frame interpolation
- vision-language evaluation
- text-video alignment
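## Example: Loading Annotations

A minimal sketch of how the annotation files and image folders might be joined in Python. The `TGI-Dataset` root path follows the structure shown above, but where you place it is up to you; the assumption that each folder contains frames numbered consecutively from `00000.jpg` up to the sequence length is illustrative and should be checked against the actual data.

```python
import json
from pathlib import Path

# Root of the extracted dataset; adjust to your local layout (assumption).
DATASET_ROOT = Path("TGI-Dataset")

def load_annotations(seq_len, root=DATASET_ROOT):
    """Load the caption file for one sequence length (25, 33, 65, or 81)."""
    with open(root / f"{seq_len}_dataset_caption.json") as f:
        return json.load(f)

def frame_paths(entry, seq_len, root=DATASET_ROOT):
    """Build zero-padded frame paths (00000.jpg, 00001.jpg, ...) for a sample.

    Assumes frames are numbered consecutively starting at 00000.jpg.
    """
    folder = root / entry["folder"]
    return [folder / f"{i:05d}.jpg" for i in range(seq_len)]

# Sample entry copied from the annotation format above, so the sketch
# runs without the dataset present.
sample = {
    "folder": "aerobatics",
    "first_image_desc": "Man in cockpit of glider flying over patchwork fields.",
    "last_image_desc": "Man in cockpit of glider banking over fields and clouds.",
    "challenge": "Large motion",
    "caption": "Glider floats over fields with man holding control stick.",
}

paths = frame_paths(sample, 25)
print(paths[0])   # TGI-Dataset/aerobatics/00000.jpg
print(paths[-1])  # TGI-Dataset/aerobatics/00024.jpg
```

With the dataset in place, `load_annotations(25)` returns the full list of such entries, and the first/last frames of each sample can be paired with `first_image_desc` and `last_image_desc` for evaluation.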