---
title: TRIBE V2 — Brain Response Prediction
emoji: 🧠
colorFrom: indigo
colorTo: yellow
sdk: gradio
sdk_version: 6.11.0
python_version: '3.12'
app_file: app.py
pinned: false
license: cc-by-nc-4.0
hardware: zero-a10g
---
# TRIBE V2 — Brain Response Prediction
Predicts fMRI brain responses to video, audio, and text using Meta's TRIBE V2 foundation model.
## Features
- Text Scorer — Paste a script/hook, get brain engagement scores (~30s)
- Video Scorer — Upload a video for full multimodal analysis (~2-5 min)
- A/B Tester — Compare two text versions head-to-head
- API — Programmatic JSON access
## How It Works
TRIBE V2 combines LLaMA 3.2-3B (text), V-JEPA2 (video), and Wav2Vec-BERT (audio) to predict cortical surface activations across 20,484 brain vertices. Scores are derived from region-of-interest analysis using the Destrieux atlas.
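The ROI step above can be sketched as follows. This is a minimal illustration, assuming the model emits one predicted activation per cortical vertex and that each vertex carries an integer Destrieux-atlas label; the function and variable names (and the ROI ids) are hypothetical, not TRIBE V2's actual API.

```python
import numpy as np

N_VERTICES = 20_484  # cortical surface size used by TRIBE V2

rng = np.random.default_rng(0)
activations = rng.standard_normal(N_VERTICES)        # stand-in for model output
atlas_labels = rng.integers(0, 75, size=N_VERTICES)  # stand-in Destrieux labels

def roi_score(acts, labels, roi_ids):
    """Mean predicted activation over the vertices belonging to the given ROIs."""
    mask = np.isin(labels, roi_ids)
    return float(acts[mask].mean())

# Score a hypothetical "language" ROI set (ids are illustrative)
language_score = roi_score(activations, atlas_labels, roi_ids=[33, 34])
```

Averaging within atlas regions is what turns 20,484 raw vertex predictions into a handful of interpretable scores.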
## Scores
- Attention Capture — Will they stop scrolling?
- Emotional Valence — Does it trigger feelings?
- Language Processing — Is the message clear?
- Visual Imagery — Are visuals compelling?
- Viral Potential — Composite engagement score
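A composite like Viral Potential can be pictured as a weighted blend of the sub-scores. The sketch below is illustrative only; the weights and the 0-100 scale are assumptions, not TRIBE V2's actual formula.

```python
def viral_potential(attention, valence, language, imagery,
                    weights=(0.35, 0.25, 0.20, 0.20)):
    """Weighted average of sub-scores, each assumed to be on a 0-100 scale."""
    subs = (attention, valence, language, imagery)
    return sum(w * s for w, s in zip(weights, subs))

score = viral_potential(80, 60, 70, 75)  # → 72.0
```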
## Citation
```bibtex
@article{tribe2024,
  title={A Foundation Model of Vision, Audition, and Language for In-Silico Neuroscience},
  author={Meta FAIR},
  year={2024}
}
```