arXiv:2206.01127

VL-BEiT: Generative Vision-Language Pretraining

Published on Jun 2, 2022

Abstract

AI-generated summary

VL-BEiT, a bidirectional multimodal Transformer, is pre-trained through masked prediction tasks on image-text pairs, texts, and images, achieving strong performance across vision-language tasks and competitive results in image classification and semantic segmentation.

We introduce a vision-language foundation model called VL-BEiT, which is a bidirectional multimodal Transformer learned by generative pretraining. Our minimalist solution conducts masked prediction on both monomodal and multimodal data with a shared Transformer. Specifically, we perform masked vision-language modeling on image-text pairs, masked language modeling on texts, and masked image modeling on images. VL-BEiT is learned from scratch with one unified pretraining task, one shared backbone, and one-stage training. Our method is conceptually simple and empirically effective. Experimental results show that VL-BEiT obtains strong results on various vision-language benchmarks, such as visual question answering, visual reasoning, and image-text retrieval. Moreover, our method learns transferable visual features, achieving competitive performance on image classification and semantic segmentation.
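
The recipe described in the abstract (one unified masked-prediction objective, one shared backbone, one-stage training) can be illustrated with a minimal sketch. This is not the authors' implementation: the class, head, and batch-key names (SharedTransformer, mlm_head, mim_head, mono_text_ids, and so on) are assumptions, and pieces such as the discrete visual tokenizer that produces masked-image labels are omitted.

```python
# Minimal sketch of VL-BEiT-style unified masked prediction with one shared
# Transformer. Names and hyperparameters below are illustrative assumptions,
# not the paper's actual configuration.
import torch
import torch.nn as nn

class SharedTransformer(nn.Module):
    """One backbone used for text-only, image-only, and image-text inputs."""
    def __init__(self, dim=768, depth=12, heads=12, text_vocab=30522,
                 visual_vocab=8192, patch=16, img_size=224, max_text_len=64):
        super().__init__()
        self.text_embed = nn.Embedding(text_vocab, dim)
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.pos_text = nn.Parameter(torch.zeros(1, max_text_len, dim))
        self.pos_img = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Two prediction heads: masked word-piece ids for text, masked
        # visual-token ids (from a discrete image tokenizer) for images.
        self.mlm_head = nn.Linear(dim, text_vocab)
        self.mim_head = nn.Linear(dim, visual_vocab)

    def forward(self, text_ids=None, image=None):
        parts = []
        if text_ids is not None:
            parts.append(self.text_embed(text_ids) + self.pos_text[:, : text_ids.size(1)])
        if image is not None:
            parts.append(self.patch_embed(image).flatten(2).transpose(1, 2) + self.pos_img)
        hidden = self.encoder(torch.cat(parts, dim=1))
        n_text = 0 if text_ids is None else text_ids.size(1)
        return hidden[:, :n_text], hidden[:, n_text:]  # text states, image states

def unified_masked_prediction_step(model, batch,
                                   ce=nn.CrossEntropyLoss(ignore_index=-100)):
    """One training step covering the three masked-prediction objectives.

    Label tensors use -100 at unmasked positions so only masked tokens
    contribute to the loss.
    """
    # Masked language modeling on text-only data.
    t_states, _ = model(text_ids=batch["mono_text_ids"])
    loss = ce(model.mlm_head(t_states).flatten(0, 1),
              batch["mono_text_labels"].flatten())
    # Masked image modeling on image-only data (labels are visual-token ids).
    _, v_states = model(image=batch["mono_images"])
    loss = loss + ce(model.mim_head(v_states).flatten(0, 1),
                     batch["mono_image_labels"].flatten())
    # Masked vision-language modeling on image-text pairs.
    t_states, v_states = model(text_ids=batch["pair_text_ids"],
                               image=batch["pair_images"])
    loss = loss + ce(model.mlm_head(t_states).flatten(0, 1),
                     batch["pair_text_labels"].flatten())
    loss = loss + ce(model.mim_head(v_states).flatten(0, 1),
                     batch["pair_image_labels"].flatten())
    return loss
```

The point of the sketch is that all three objectives share the same encoder weights and differ only in which inputs are fed and which head receives supervision at the masked positions.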

