
Dataset viewer preview (air_quality features: sample_id is int64, all other columns float64; first rows shown, remaining preview rows truncated):

sample_id   TEMP   PRES     DEWP   RAIN  WSPM   wd_sin      wd_cos
0           -0.7   1023.0   -18.8  0     4.4    -0.382683    0.92388
1           -1.1   1023.2   -18.2  0     4.7     0.0         1.0
2           -1.1   1023.5   -18.2  0     5.6    -0.382683    0.92388
3           -1.4   1024.5   -19.4  0     3.1    -0.707107    0.707107
4           -2.0   1025.2   -19.5  0     2.0     0.0         1.0

End of preview.

Metric Shift Benchmark

A cross-domain benchmark for predicting expensive scientific measurements from cheap surrogates, spanning 6 scientific fields and 134 valid (y1, y2) pairs with a standardized evaluation protocol.

Paper: Metric Shift: A Benchmark for Predicting Expensive Scientific Measurements from Cheap Surrogates (NeurIPS 2026 Evaluations & Datasets Track, under review)

Benchmark Overview

Dataset                   Domain                 Samples  Feat. dim  Labels  Valid pairs  License
zinc250k                  Drug Chemistry         249,455  14         3       6            ZINC academic-use, free redistribution with attribution
air_quality               Environmental Science  382,168  7          6       28           CC-BY-4.0 (UCI ML Repository)
jarvis_materials          Materials Science      10,800   14         6       30           Public domain / NIST (17 USC §105)
protein_fitness_expanded  Protein Biology        61,704   22         24      38           MIT (ProteinGym aggregation)
drug_admet                Pharmacology           1,523    14         4       12           CC-BY-4.0 (Polaris Hub)
climate_stations          Climate Science        28,488   5          5       20           CC-BY-4.0, dual attribution (Open-Meteo, Copernicus C3S/ERA5)
Total                     ---                    734,138  ---        ---     134          ---

Problem: Metric Shift

Given a shared entity x (molecule, material, protein variant), a cheap source metric y1, and an expensive target metric y2: can we use universally available y1 to improve prediction of the sparsely labeled y2?

Key properties:

  • y1 is always available at test time (cheap to measure for any new candidate)
  • The input distribution p(x) is fixed; only the prediction target changes
  • Unlike domain adaptation (which shifts p(x)) or multi-task learning (which jointly predicts all targets), Metric Shift keeps p(x) fixed and changes only which metric is the prediction target
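As a toy illustration of why y1 can help, here is a minimal numpy sketch on synthetic data (not the benchmark's data or models): a linear predictor of y2 fit from x alone versus from (x, y1), where y1 carries a latent factor that x does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a Metric Shift pair: x is the shared input,
# y1 is a cheap surrogate, y2 is the expensive target. A latent factor
# (unobservable from x) links y1 and y2, so y1 adds real information.
n, d = 2000, 5
X = rng.normal(size=(n, d))
a, b = rng.normal(size=d), rng.normal(size=d)
latent = rng.normal(size=n)
y1 = X @ a + latent + 0.1 * rng.normal(size=n)
y2 = X @ b + 2.0 * latent + 0.1 * rng.normal(size=n)

def lstsq_r2(features, target, n_train):
    """Fit ordinary least squares on a train split, report test R^2."""
    A = np.c_[features, np.ones(len(features))]  # add intercept column
    w, *_ = np.linalg.lstsq(A[:n_train], target[:n_train], rcond=None)
    resid = target[n_train:] - A[n_train:] @ w
    return 1.0 - resid.var() / target[n_train:].var()

n_train = 1200
r2_x = lstsq_r2(X, y2, n_train)               # predict y2 from x alone
r2_xy1 = lstsq_r2(np.c_[X, y1], y2, n_train)  # predict y2 from (x, y1)
print(f"R^2 from x only:   {r2_x:.3f}")
print(f"R^2 from x and y1: {r2_xy1:.3f}")
```

Because the latent factor is visible only through y1, the (x, y1) model recovers most of the residual variance that the x-only model cannot.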

Evaluation Protocol

  • Split: 60% train / 20% val / 20% test at split_seed=42
  • Labeled ratio: 20% of train (main setting); 1% and 5% for ablation
  • Seeds: 5 model seeds per pair
  • Metrics: R-squared and Spearman rho
  • Significance: Paired t-test across seeds + Benjamini-Hochberg FDR at q=0.05
  • Aggregation: Macro-median (per-dataset median, then cross-dataset median)
  • StandardScaler: fit on labeled train only
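The split, scaling, and significance machinery above might be sketched as follows. This is illustrative code, not the benchmark's implementation: benjamini_hochberg is a hand-rolled helper, and the per-seed scores are toy numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # split_seed=42, as in the protocol

# 60/20/20 split; then only 20% of train is labeled (main setting)
n = 1000
idx = rng.permutation(n)
train, val, test = np.split(idx, [int(0.6 * n), int(0.8 * n)])
labeled = train[: int(0.2 * len(train))]

# StandardScaler semantics: statistics come from the labeled train rows
# only, then are applied everywhere
X = rng.normal(loc=3.0, scale=2.0, size=(n, 4))
mu, sigma = X[labeled].mean(axis=0), X[labeled].std(axis=0)
X_scaled = (X - mu) / sigma

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, len(p) + 1) / len(p)
    reject = np.zeros(len(p), dtype=bool)
    if below.any():
        reject[order[: below.nonzero()[0].max() + 1]] = True
    return reject

# Toy per-seed scores: 3 pairs x 5 model seeds, baseline vs. a method
# that improves pairs 1 and 3 but not pair 2
base = rng.normal(0.50, 0.01, size=(3, 5))
ours = base + np.array([[0.05], [0.00], [0.04]]) + rng.normal(0, 0.005, (3, 5))
pvals = [stats.ttest_rel(o, b).pvalue for o, b in zip(ours, base)]
print(benjamini_hochberg(pvals))
```

The paired t-test is taken across the 5 model seeds for each pair, and Benjamini-Hochberg then controls the false discovery rate across all tested pairs.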

Usage

import pandas as pd

# Load one sub-dataset (paths are relative to a local copy of this repository)
features = pd.read_csv("zinc250k/features.csv")
labels = pd.read_csv("zinc250k/labels.csv")

# Each (source, target) column pair in labels defines a Metric Shift task.
# See metadata.json for the list of valid pairs with Spearman correlations.
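To turn two label columns into a concrete task, one might proceed as below. The DataFrame is a synthetic stand-in for a real labels.csv (real air_quality column names, fake values), PM25 -> PM10 is just an illustrative pair (metadata.json lists which pairs are valid), and the 20% mask mimics the main labeled-ratio setting.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for air_quality/labels.csv
labels = pd.DataFrame({
    "sample_id": np.arange(10),
    "PM25": rng.normal(size=10),
    "PM10": rng.normal(size=10),
})

# One Metric Shift task: source y1 = PM25, target y2 = PM10
y1 = labels["PM25"].to_numpy()
y2 = labels["PM10"].to_numpy()

# Sparse-label regime: y2 is observed for only 20% of the rows;
# y1 stays fully observed (cheap to measure for any candidate)
labeled_mask = np.zeros(len(labels), dtype=bool)
labeled_mask[: int(0.2 * len(labels))] = True
y2_observed = np.where(labeled_mask, y2, np.nan)
```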

One-command reproduction of all tables and figures:

pip install metric-shift-benchmark
python -m metric_shift.run_all

Dataset Details

zinc250k — Drug Chemistry

249,455 drug-like molecules, 14 RDKit descriptors, 3 labels (logP, QED, SAS), 6 pairs

  • Source: ZINC database (Irwin & Shoichet 2005; Sterling & Irwin 2015)
  • License: ZINC academic-use, free redistribution with attribution
  • Features (14d): MolWt, HeavyAtomCount, NumHeteroatoms, NumValenceElectrons, TPSA, MolMR, HBA, HBD, NumRotatableBonds, RingCount, NumAromaticRings, FractionCSP3, BalabanJ, BertzCT
  • Labels (3col): logP, QED, SAS

air_quality — Environmental Science

382,168 hourly records, 7 meteo features, 6 pollutants, 28 pairs

  • Source: Beijing Multi-Site Air-Quality Dataset (Zhang et al. 2017)
  • License: CC-BY-4.0 (UCI ML Repository)
  • Features (7d): TEMP, PRES, DEWP, RAIN, WSPM, wd_sin, wd_cos
  • Labels (6col): PM25, PM10, SO2, NO2, CO, O3
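The wd_sin / wd_cos values in the preview (±0.382683, ±0.707107, ±0.92388) are consistent with a cyclical encoding of the raw dataset's 16-point compass wind direction. A sketch of such an encoding follows; the exact angular convention used by the benchmark is an assumption.

```python
import math

# Hypothetical reconstruction of wd_sin / wd_cos: the raw Beijing data
# reports wind direction as a 16-point compass string, which a cyclical
# encoding maps onto the unit circle in 22.5-degree steps.
COMPASS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
           "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def encode_wind(wd: str) -> tuple[float, float]:
    """Map a compass direction to (sin, cos) of its angle."""
    angle = math.radians(COMPASS.index(wd) * 22.5)
    return math.sin(angle), math.cos(angle)

print(encode_wind("N"))    # (0.0, 1.0)
print(encode_wind("NNW"))  # (-0.3826..., 0.9238...)
```

A cyclical encoding avoids the artificial discontinuity of treating 350° and 10° as far apart.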

jarvis_materials — Materials Science

10,800 inorganic crystals, 14 composition descriptors, 6 labels, 30 pairs

  • Source: JARVIS-DFT 3D (Choudhary et al. 2020)
  • License: Public domain / NIST (17 USC §105)
  • Features (14d): mean_Z, std_Z, mean_X, std_X, mean_row, std_row, mean_group, std_group, mean_atomic_mass, std_atomic_mass, density, volume_per_atom, n_sites, packing_fraction
  • Labels (6col): formation_energy_peratom, optb88vdw_bandgap, bulk_modulus_kv, shear_modulus_gv, n_seebeck, p_seebeck
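Composition descriptors such as mean_Z and std_Z are presumably count-weighted statistics of elemental properties over a crystal's composition. A minimal sketch under that assumption, with a hand-written two-element lookup table rather than a real periodic-table source:

```python
# Illustrative sketch of composition statistics like mean_Z / std_Z:
# the mean and std of atomic number over all atoms in a formula,
# weighted by how many times each element occurs.
ATOMIC_NUMBER = {"Ti": 22, "O": 8}  # tiny hand-written subset

def composition_stats(formula: dict[str, int]) -> tuple[float, float]:
    """Count-weighted mean and std of atomic number over a composition."""
    zs = [ATOMIC_NUMBER[el] for el, n in formula.items() for _ in range(n)]
    mean = sum(zs) / len(zs)
    var = sum((z - mean) ** 2 for z in zs) / len(zs)
    return mean, var ** 0.5

mean_z, std_z = composition_stats({"Ti": 1, "O": 2})  # TiO2
print(mean_z, std_z)
```

The same pattern extends to electronegativity (mean_X, std_X), periodic-table row and group, and atomic mass.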

protein_fitness_expanded — Protein Biology

61,704 variants, 22-d mutation features, 24 DMS assays, 38 within-protein pairs

  • Source: ProteinGym substitution benchmark (Notin et al. 2023)
  • License: MIT (ProteinGym aggregation)
  • Features (22d): protein_id, n_mutations, AA_A_diff, AA_C_diff, AA_D_diff, AA_E_diff, AA_F_diff, AA_G_diff, AA_H_diff, AA_I_diff, AA_K_diff, AA_L_diff, AA_M_diff, AA_N_diff, AA_P_diff, AA_Q_diff, AA_R_diff, AA_S_diff, AA_T_diff, AA_V_diff, AA_W_diff, AA_Y_diff
  • Labels (24col): p53_null_etoposide, p53_null_nutlin, p53_wt_nutlin, blat_deng_2012, blat_firnberg_2014, blat_jacquier_2013, blat_stiffler_2015, pten_matreyek_2021, pten_mighell_2018, cp2c9_amorosi_abundance_2021, cp2c9_amorosi_activity_2021, hsp82_flynn_2019, hsp82_mishra_2016, spike_starr_bind_2020, spike_starr_expr_2020, a0a2z5u3z0_doud_2016, a0a2z5u3z0_wu_2014, rl401_mavor_2016, rl401_roscoe_2013, rl401_roscoe_2014, ccdb_adkar_2012, ccdb_tripathi_2016, vkor1_chiasson_abundance_2020, vkor1_chiasson_activity_2020
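The AA_*_diff features are presumably per-amino-acid count differences between the mutant and wild-type sequences. A minimal sketch of such an encoding (the exact definition used by the benchmark is an assumption):

```python
from collections import Counter

# Hypothetical reconstruction of AA_*_diff: for each of the 20 standard
# amino acids, the count change from wild type to mutant.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_diff(wild_type: str, mutant: str) -> dict[str, int]:
    """Count change of each amino acid, mutant minus wild type."""
    wt, mu = Counter(wild_type), Counter(mutant)
    return {aa: mu[aa] - wt[aa] for aa in AMINO_ACIDS}

diff = aa_diff("MKT", "MRT")  # a K -> R substitution at position 2
print({aa: d for aa, d in diff.items() if d})  # {'K': -1, 'R': 1}
```

This encoding is deliberately sequence-position-agnostic, which keeps the feature space low-dimensional across proteins of different lengths.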

drug_admet — Pharmacology

1,523 compounds, 14 RDKit descriptors, 4 ADME endpoints, 12 pairs

  • Source: Biogen ADME-Fang v1 (Fang et al. 2023)
  • License: CC-BY-4.0 (Polaris Hub)
  • Features (14d): MolWt, HeavyAtomCount, NumHBD, NumHBA, TPSA, MolLogP, NumRotatableBonds, RingCount, NumAromaticRings, FractionCSP3, MolMR, BertzCT, BalabanJ, NumHeteroatoms
  • Labels (4col): LOG_HLM_CLint, LOG_RLM_CLint, LOG_SOLUBILITY, LOG_MDR1-MDCK_ER

climate_stations — Climate Science

28,488 daily records, 5 context features, 5 climate variables, 20 pairs

  • Source: Open-Meteo Historical Weather API / ERA5 reanalysis
  • License: CC-BY-4.0, dual attribution to Open-Meteo and Copernicus C3S/ERA5
  • Features (5d): lat, lon, day_sin, day_cos, year_norm
  • Labels (5col): temp_max, temp_min, precip, windspeed, solar_radiation
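day_sin, day_cos, and year_norm look like a cyclical day-of-year encoding plus a min-max-normalized year. A sketch under that assumption; the 1980-2020 range and 365-day period are illustrative guesses, not values from the benchmark.

```python
import math
from datetime import date

# Hypothetical reconstruction of day_sin / day_cos / year_norm:
# day of year mapped onto the unit circle, year min-max normalized.
def encode_date(d: date, year_min: int = 1980, year_max: int = 2020):
    frac = (d.timetuple().tm_yday - 1) / 365.0
    day_sin = math.sin(2 * math.pi * frac)
    day_cos = math.cos(2 * math.pi * frac)
    year_norm = (d.year - year_min) / (year_max - year_min)
    return day_sin, day_cos, year_norm

print(encode_date(date(2000, 1, 1)))  # day 1 of the year -> (0.0, 1.0, 0.5)
```

As with wind direction, the circular encoding keeps December 31 and January 1 adjacent in feature space.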

Responsible AI

  • Personal / sensitive data: None. All datasets contain scientific measurements on molecules, materials, proteins, pollutants, or climate variables. No human subjects, no personally identifiable information.
  • Intended use: Benchmarking ML methods for the Metric Shift problem. Not intended for direct clinical, regulatory, or safety-critical deployment.
  • Known limitations: (1) All six datasets are re-curations of existing public sources; our contribution is pair construction, validity filter, and protocol. (2) Domain coverage spans chemistry, biology, materials, environment, and climate --- not yet high-energy physics, astronomy, or social science. (3) Feature spaces are intentionally low-dimensional (5--22d) to isolate the contribution of y1; higher-dimensional encoders may change relative method rankings.
  • Potential misuse: drug_admet contains ADME measurements that could theoretically inform adverse drug design; however, the 1,523-compound dataset is far too small and coarse for such purposes, and all data is already public.

Maintenance

The authors commit to maintaining this repository for at least 2 years post-publication, with semantic versioning (v1.0, v1.1, ...) and a CHANGELOG for every split, filter, or protocol change.

Citation

@inproceedings{metric_shift_2026,
  title={Metric Shift: A Benchmark for Predicting Expensive Scientific Measurements from Cheap Surrogates},
  author={Anonymous},
  booktitle={NeurIPS 2026 Evaluations and Datasets Track},
  year={2026},
  note={Under review}
}