SeafloorAI: A Large-scale AI-Ready Dataset for Seafloor Geological Survey
Project Information
PI: Dr. Xi Peng, Associate Professor, Department of Computer and Information Sciences, University of Delaware
Co-PI: Dr. Arthur Trembanis, Professor, School of Marine Science and Policy, University of Delaware
Project: "Advancing Trustworthy AI in Seafloor Mapping," DoD DEPSCoR (FA95502310494), PI: Xi Peng, 2023–2026.
Publication: SeafloorAI: A Large-scale Vision-Language Dataset for Seafloor Geological Survey. NeurIPS, 2024.
How to Use the SeafloorAI Dataset?
- Read our NeurIPS paper for details;
- Open-source data processing code: GitHub;
- Download the dataset: Hugging Face (this page).
How to Download the SeafloorAI Dataset?
Step 1: Fill in Your Online Request (Top)
Step 2: Wait for Approval
Step 3: Download the Data
Dataset Abstract
A major obstacle to the advancement of machine learning models in marine science, particularly in sonar imagery analysis, is the scarcity of AI-ready datasets. While there have been efforts to make AI-ready sonar image datasets publicly available, they suffer from limitations in terms of environmental setting and scale. To bridge this gap, we introduce SeafloorAI, the first extensive AI-ready dataset for seafloor mapping across 5 geological layers, curated in collaboration with marine scientists. We further extend the dataset to SeafloorGenAI by incorporating a language component in order to facilitate the development of both vision- and language-capable machine learning models for sonar imagery. The dataset consists of 62 geo-distributed data surveys spanning 17,300 square kilometers, with 696K sonar images, 827K annotated segmentation masks, 696K detailed language descriptions, and approximately 7M question-answer pairs. By making our data processing source code publicly available, we aim to engage the marine science community to enrich the data pool and inspire the machine learning community to develop more robust models. This collaborative approach will enhance the capabilities and applications of our datasets within both fields.
Dataset Overview
The full dataset comprises 696K sonar images and 827K annotated segmentation masks across 9 regions. This release includes 353K sonar images from 7 regions and the complete set of 827K annotated masks, totaling 178 GB. The dataset is organized into 7 region folders and a split folder that defines the train/validation/test partitions.
Input & Annotation Layers
The dataset includes 11 layers:
- Raw Input Signals: Backscatter, Bathymetry, Slope, Rugosity, Longitude, Latitude.
- Annotations: Sediment, Physiographic Zone, Habitat, Fault, Fold.
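The six raw input signals are stacked as channels in the order listed above, so a single signal can be pulled out of an input tile by index. A minimal sketch (a synthetic array stands in for a real tile, which has the documented shape (6, 224, 224)):

```python
import numpy as np

# Channel order of the 6-channel input tiles (see Data Format Details below)
CHANNELS = ["backscatter", "bathymetry", "slope", "rugosity", "longitude", "latitude"]

# Synthetic stand-in for one input tile
tile = np.random.rand(6, 224, 224).astype(np.float32)

# Select a single signal by name instead of a magic index
bathymetry = tile[CHANNELS.index("bathymetry")]  # shape (224, 224)
```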
Dataset Structure
The dataset is organized by region with corresponding input signals and annotation layers. Each region contains multi-channel input data and task-specific annotations.
Directory Organization
SeafloorAI/
├── region{1,2,5,6,7}/        # Regions with sediment & physiographic zone
│   ├── input/                # 6-channel input signals
│   │   └── region*_*.npy     # Shape: (6, 224, 224)
│   ├── sed/                  # Sediment annotations
│   │   └── region*_*.npy     # Shape: (224, 224)
│   └── pzone/                # Physiographic zone annotations
│       └── region*_*.npy     # Shape: (224, 224)
│
├── region{3,4}/              # Regions with habitat, fault & fold
│   ├── input/                # 6-channel input signals
│   │   └── region*_*.npy     # Shape: (6, 224, 224)
│   ├── habitat/              # Habitat annotations
│   │   └── region*_*.npy     # Shape: (224, 224)
│   ├── fault/                # Fault annotations
│   │   └── region*_*.npy     # Shape: (224, 224)
│   └── fold/                 # Fold annotations
│       └── region*_*.npy     # Shape: (224, 224)
│
└── split/                    # Train/validation/test splits
    ├── sed/                  # Splits for sediment task
    │   └── region{1,2,5,6,7}/
    │       ├── train.json
    │       ├── val.json
    │       └── test.json
    ├── pzone/                # Splits for physiographic zone task
    │   └── region{1,2,5,6,7}/
    │       ├── train.json
    │       ├── val.json
    │       └── test.json
    ├── habitat/              # Splits for habitat task
    │   └── region{3,4}/
    │       ├── train.json
    │       ├── val.json
    │       └── test.json
    ├── fault/                # Splits for fault task
    │   └── region{3,4}/
    │       ├── train.json
    │       ├── val.json
    │       └── test.json
    └── fold/                 # Splits for fold task
        └── region{3,4}/
            ├── train.json
            ├── val.json
            └── test.json
Data Format Details
Input Files (input/):
- 6-channel NumPy arrays with shape (6, 224, 224)
- Channels: [backscatter, bathymetry, slope, rugosity, longitude, latitude]
- Naming: region{N}_{row}_{col}.npy
Annotation Files:
- Single-channel NumPy arrays with shape (224, 224)
- Integer labels corresponding to class indices
- Naming matches the corresponding input file
Split Files:
- JSON files containing lists of sample identifiers
- Organized by annotation type and region
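Once downloaded, an input tile and its annotation can be loaded with plain NumPy, paired by their shared filename. The sketch below writes one synthetic input/annotation pair into a throwaway directory that mimics the layout above (real tiles come from the download), then loads and sanity-checks it:

```python
import tempfile
from pathlib import Path

import numpy as np

# Throwaway directory mimicking the region1 layout described above
root = Path(tempfile.mkdtemp())
(root / "region1" / "input").mkdir(parents=True)
(root / "region1" / "sed").mkdir(parents=True)

# One synthetic input/annotation pair, following the region{N}_{row}_{col}.npy naming
name = "region1_0_0.npy"
np.save(root / "region1" / "input" / name, np.zeros((6, 224, 224), dtype=np.float32))
np.save(root / "region1" / "sed" / name, np.zeros((224, 224), dtype=np.int64))

# Load an input tile and its sediment mask by the shared filename
x = np.load(root / "region1" / "input" / name)  # (6, 224, 224) input signals
y = np.load(root / "region1" / "sed" / name)    # (224, 224) integer class labels
print(x.shape, y.shape)
```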
Samples
Region 1 - Sediment & Physiographic Zone
Region 3 - Habitat, Fault & Fold
Visualization & Dataloader
Simple Visualization
To visualize samples and segmentation masks in the dataset, please refer to visualization.ipynb.
PyTorch Dataset Integration
To integrate with deep learning workflows, refer to the SeafloorDataset implementation in seafloor_dataset.py.
The following example demonstrates how to import and use it with a PyTorch DataLoader:
from torch.utils.data import DataLoader
from seafloor_dataset import SeafloorDataset

# Initialize the dataset
dataset = SeafloorDataset(
    data_path='./SeafloorAI',
    anno_path='./SeafloorAI/split',
    layer='sed',
    regions=['region1'],
    split='train',
    input_transform=None,  # Optional: transforms for the input
    mask_transform=None,   # Optional: transforms for the mask
)

# Create the DataLoader
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=4)
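The loader then yields batches of stacked tensors. Since the real files are not needed to see the batch layout, the sketch below substitutes a TensorDataset with the documented shapes; the (input, mask) return signature of SeafloorDataset is an assumption based on the data format above:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for SeafloorDataset: 8 samples with the documented shapes
inputs = torch.zeros(8, 6, 224, 224)                 # 6-channel input tiles
masks = torch.zeros(8, 224, 224, dtype=torch.long)   # integer class labels
stand_in = TensorDataset(inputs, masks)

loader = DataLoader(stand_in, batch_size=4, shuffle=True)

for batch_inputs, batch_masks in loader:
    # batch_inputs: (4, 6, 224, 224), batch_masks: (4, 224, 224)
    print(batch_inputs.shape, batch_masks.shape)
    break
```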
ToDo
- Release data and annotations for regions 1–7
- Add sample visualizations and PyTorch dataloader support
- Release fault and fold labels for regions 3 and 4
- Release unlabeled data for regions 8 and 9
- Release the SeafloorGenAI dataset
Citation
If you find SeafloorAI useful, please cite the following paper:
@inproceedings{nguyen2024seafloorai,
title={SeafloorAI: A Large-scale Vision-Language Dataset for Seafloor Geological Survey},
author={Kien X. Nguyen and Fengchun Qiao and Arthur Trembanis and Xi Peng},
booktitle={Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
year={2024}
}
Contact & Acknowledgments
We sincerely thank the Department of Defense (DoD) for supporting this research.
We also acknowledge the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA) for providing the raw survey data.