---
license: mit
language:
- en
tags:
- fashion
- visual search
pretty_name: LAION - Referred Visual Search - Fashion
size_categories:
- 1M<n<10M
---

# **LAION — Referred Visual Search — Fashion**

*Introduced in **Weakly-Supervised Conditional Embedding for Referred Visual Search***

**[CRITEO AI Lab](https://ailab.criteo.com)** x **[ENPC](https://imagine-lab.enpc.fr)**

[Simon Lepage](https://simon-lepage.github.io), Jérémie Mary, [David Picard](https://davidpicard.github.io)

[[`Paper`](TODO)] 
[[`Demo`](TODO)] 
[[`Code`](https://github.com/Simon-Lepage/CondViT-LRVSF)] 
[[`BibTeX`](#citing-the-dataset)]

---


## **Composition**

LAION-RVS-Fashion is composed of images from:
- **[LAION 2B EN](https://huggingface.co/datasets/laion/laion2B-en)**
- **[LAION 2B MULTI TRANSLATED](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en)**
- **[LAION 1B NOLANG TRANSLATED](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en)**

These images have been grouped based on extracted product IDs. Each product in the training set contains at least one simple image (isolated product) and one complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the [samples](#samples) and refer to [our paper](TODO) for additional details.

|Split|Products|Distractors|
|-:|:-:|:-:|
|Train|272,457|-|
|Valid|400|99,541|
|Test|2,000|2,000,014|

**Total number of training images:** 841,718.
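
The metadata can be explored directly with the 🤗 `datasets` library. The snippet below is a minimal sketch: the repository id matches this card, but the split name and exact loading behaviour are assumptions and may need adjusting to the released file layout.

```python
# Minimal sketch: load the LAION-RVS-Fashion metadata with 🤗 Datasets.
# The repository id matches this card; the split name is an assumption.
from datasets import load_dataset

ds = load_dataset("Slep/LAION-RVS-Fashion", split="train")

# Each row describes one image: URL, product id, type, category, captions, ...
row = ds[0]
print(row["URL"], row["PRODUCT_ID"], row["TYPE"], row["CATEGORY"])
```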


## **Samples**

<table style='text-align:center'>
    <tbody>
        <tr>
            <td></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.0.jpg" style="height:200px"></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.1.jpg" style="height:200px"></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.0.jpg" style="height:200px"></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.1.jpg" style="height:200px"></td>
        </tr>
        <tr>
            <td><b>Categories</b></td>
            <td colspan=2>Neck</td>
            <td colspan=2>Lower Body</td>
        </tr>
        <tr>
            <td><b>BLIP2 Captions</b></td>
            <td colspan=2>a scarf with multi-coloured stripes</td>
            <td colspan=2>stella pants - dark suede</td>
        </tr>
        <tr></tr>
        <tr>
            <td></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.0.jpg" style="height:200px"></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.1.jpg" style="height:200px"></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.0.jpg" style="height:200px"></td>
            <td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.1.jpg" style="height:200px"></td>
        </tr>
        <tr>
            <td><b>Categories</b></td>
            <td colspan=2>Feet</td>
            <td colspan=2>Bags</td>
        </tr>
        <tr>
            <td><b>BLIP2 Captions</b></td>
            <td colspan=2>neon green patent leather heels with studs</td>
            <td colspan=2>the burberry small leather bag is brown and leather</td>
        </tr>
    </tbody>
</table>


## **Attributes**

- **URL**, **WIDTH**, **HEIGHT**, **punsafe**, **pwatermark**, **language**: Original LAION fields. Please refer to their repository. 
- **TEXT**: Text originally associated with the image. 
- **ENG_TEXT**: Translated version for MULTI/NOLANG, copy of TEXT for EN.
- **TYPE**: SIMPLE (isolated products), COMPLEX (scenes), PARTIAL_COMPLEX (zoomed-in scenes).
- **PRODUCT_ID**: Product identifier, used to group images depicting the same product (see the sketch below).
- **INDEX_SRC**: ID of the parquet file originally storing this image.
- **CATEGORY**: Category of the product, among `Bags, Feet, Hands, Head, Lower Body, Neck, Outwear, Upper Body, Waist, Whole Body` for the products, and `NonClothing` for some distractors.
- **blip2_caption1, blip2_caption2**: [BLIP2-FlanT5XL](https://huggingface.co/Salesforce/blip2-flan-t5-xl)-generated captions.
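
As a sketch of how these attributes fit together, the snippet below groups rows by `PRODUCT_ID` and separates isolated product shots from scene views. It assumes the metadata has been loaded into a pandas `DataFrame`; `train.parquet` is a hypothetical file name, while the column names follow the list above.

```python
# Minimal sketch: group images by product using the attributes above.
# "train.parquet" is a hypothetical file name; column names follow the
# attribute list of this card.
import pandas as pd

df = pd.read_parquet("train.parquet")

for product_id, group in df.groupby("PRODUCT_ID"):
    simple = group[group["TYPE"] == "SIMPLE"]   # isolated product shots
    scenes = group[group["TYPE"] != "SIMPLE"]   # COMPLEX / PARTIAL_COMPLEX views
    caption = group["blip2_caption1"].iloc[0]   # one of the two BLIP2 captions
    print(product_id, len(simple), len(scenes), caption)
```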


We also release `bootstrap_IDs.pkl`, the file used to generate the bootstrapped results reported in the paper. `test_subsets` contains [product IDs](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L229), while `dist_{N}_subsets` contain [row indices](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L248).
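
A minimal sketch for inspecting this file, assuming it unpickles to a plain dictionary keyed by the subset names above (the exact structure is an assumption; the linked evaluation code is authoritative):

```python
# Minimal sketch: inspect bootstrap_IDs.pkl.
# Assumes the file unpickles to a dict-like object whose keys include
# "test_subsets" and "dist_{N}_subsets"; the exact structure is an assumption.
import pickle

with open("bootstrap_IDs.pkl", "rb") as f:
    bootstrap = pickle.load(f)

print(type(bootstrap))
if isinstance(bootstrap, dict):
    for key, value in bootstrap.items():
        print(key, type(value))
```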

---

## Citing the dataset

To cite our work, please use the following BibTeX entry:
```
@article{lepage2023condvit,
  title={Weakly-Supervised Conditional Embedding for Referred Visual Search},
  author={Lepage, Simon and Mary, Jérémie and Picard, David},
  journal={arXiv:xxxx.xxxx},
  year={2023}
}
```