hbfuk5 committed · Commit 8c57a06 · verified · 1 parent: 3c10a09

Update README.md

Files changed (1): README.md (+13, -1)
@@ -8,4 +8,16 @@ language:
  base_model:
  - laion/CLIP-ViT-H-14-laion2B-s32B-b79K
  - lmsys/vicuna-7b-v1.5
- ---
+ ---
+ # OpenCLIP-LLaVA
+ OpenLVLM-MIA: A Controlled Benchmark Revealing the Limits of Membership Inference Attacks on Large Vision-Language Models
+
+ ## Overview
+ - OpenLVLM-MIA offers a controlled benchmark to reassess membership inference attacks (MIA) on large vision-language models beyond dataset-induced biases.
+ - The benchmark consists of a 6,000-image dataset with controlled member/non-member distributions and ground-truth membership at three training stages.
+ - On this setup, state-of-the-art MIA approaches perform at chance level, clarifying the true difficulty of the problem and motivating more robust privacy defenses.
+
+ ## Other Resources
+
+ - **Code**: [yamanalab/openlvlm-mia](https://github.com/yamanalab/openlvlm-mia)
+ - **Dataset**: [paper-2229/openlvlm-mia](https://huggingface.co/datasets/paper-2229/openlvlm-mia)
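
For reference, a minimal sketch of pulling the linked dataset with the Hugging Face `datasets` library. Only the repo id comes from the README above; split and column names are not documented there, so the sketch just inspects whatever the dataset defines rather than assuming any fields.

```python
# Minimal sketch: load the OpenLVLM-MIA benchmark from the Hub.
# The repo id is taken from the README; splits/columns are not documented
# there, so print the DatasetDict before relying on any field names.
from datasets import load_dataset

ds = load_dataset("paper-2229/openlvlm-mia")
print(ds)  # shows the available splits and their columns
```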
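The "chance level" result in the Overview can be read as AUC ≈ 0.5 on the binary member/non-member decision. A small illustration of that reading (an assumption about the scoring setup, not the benchmark's official evaluation code), using scikit-learn:

```python
# Illustration only: if an MIA score carries no membership signal, its
# ROC-AUC against ground-truth membership labels sits near 0.5 (chance).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=6000)  # ground truth: 1 = member, 0 = non-member
scores = rng.random(6000)               # an uninformative attack score
print(roc_auc_score(labels, scores))    # ~0.5, i.e., indistinguishable from guessing
```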