LandScan Global Population Dataset – pop_2000-23.h5
A production-ready, chunked HDF5 tensor of LandScan Global annual population estimates from 2000 to 2023, packaged for efficient use in deep learning, geospatial analysis, and HPC workflows. No more downloading 24 separate GeoTIFFs.
Dataset at a Glance
| Property | Value |
|---|---|
| Source | LandScan Global, Oak Ridge National Laboratory (ORNL) |
| Years covered | 2000–2023 (24 time steps) |
| Spatial resolution | ~1 km (30 arc-seconds) |
| Spatial extent | Global (180°W–180°E, 90°S–90°N) |
| Master grid | 21,600 rows × 43,200 columns |
| CRS | WGS84 / EPSG:4326 |
| Unit | Ambient population count per pixel |
| Data type | float32 |
| Format | HDF5 (chunked + GZIP compressed) |
| Chunk shape | (1, 256, 256), i.e. time × lat × lon |
What is LandScan? LandScan represents ambient population (the average number of people present in a location over 24 hours) rather than residential census counts. It integrates census data, land cover, roads, slope, and remote sensing to model where people actually are, not just where they live.
File Structure
pop_2000-23.h5
│
├── /population   float32 (24, 21600, 43200)   main data tensor
│     dim[0] → time   24 annual steps (2000–2023)
│     dim[1] → lat    21,600 latitude rows (90°N → 90°S)
│     dim[2] → lon    43,200 longitude cols (180°W → 180°E)
│
├── /coords
│     ├── years  int32   (24,)      [2000, 2001, …, 2023]
│     ├── lat    float64 (21600,)   centre latitude of each row (°N)
│     └── lon    float64 (43200,)   centre longitude of each col (°E)
│
├── /native_extent
│     ├── years   int32 (24,)   year index
│     ├── n_rows  int32 (24,)   native row count per year
│     └── n_cols  int32 (24,)   native col count per year
│
└── /stats
      ├── mean          float32 (24,)   mean over inhabited pixels
      ├── std           float32 (24,)   std over inhabited pixels
      ├── max           float32 (24,)   max pixel value per year
      ├── total_pop     float32 (24,)   global population sum per year
      └── nan_fraction  float32 (24,)   fraction of NaN pixels per year
NaN semantics
There are two distinct sources of NaN in this dataset:
| NaN type | Meaning |
|---|---|
| Within native extent, flagged as nodata | Ocean, permanent ice, or uninhabited area |
| Beyond native extent | That LandScan release had a smaller grid (2001–2012 were 20,880 rows); the data simply did not exist |
The /native_extent group tells you exactly how many rows and columns contained real data for each year, so your code can mask accordingly.
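As a sketch of that masking, here is a hypothetical pure-NumPy helper. It assumes the native grid is anchored at the top-left of the master grid (rows < n_rows, cols < n_cols); verify that anchoring against the actual NaN pattern of a cropped year before relying on it.

```python
import numpy as np

def classify_nans(arr, n_rows, n_cols):
    """Split the NaNs of one year's slab into nodata vs. missing-extent.

    Assumption (check it for your year): the native grid occupies the
    top-left corner of the master grid, i.e. real data lives at
    rows < n_rows and cols < n_cols.
    """
    nan = np.isnan(arr)
    rows = np.arange(arr.shape[0])[:, None]
    cols = np.arange(arr.shape[1])[None, :]
    inside = (rows < n_rows) & (cols < n_cols)
    nodata  = nan & inside    # ocean / ice / uninhabited, within native extent
    missing = nan & ~inside   # outside that year's native grid entirely
    return nodata, missing
```

Pixels in `nodata` can safely be treated as zero population; pixels in `missing` should be masked out of any analysis entirely.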
Quickstart: Partial Reads (No Full Download Needed)
HuggingFace supports HTTP range requests on .h5 files. The chunked HDF5 layout (1, 256, 256) means only the chunks you touch are transferred over the network; you never need to download the full file.
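For intuition about what a given read costs, here is a hypothetical back-of-envelope helper using the chunk shape from the table above. GZIP compression means fewer bytes actually cross the wire; this bounds the decompressed chunk volume.

```python
def chunks_spanned(start, stop, size):
    """Number of fixed-size chunks a half-open [start, stop) range touches."""
    return (stop - 1) // size - start // size + 1

def read_cost(t_len, row_range, col_range, chunk=(1, 256, 256)):
    """Chunks touched and raw (uncompressed) float32 bytes for a read."""
    n = (t_len
         * chunks_spanned(*row_range, chunk[1])
         * chunks_spanned(*col_range, chunk[2]))
    return n, n * chunk[1] * chunk[2] * 4

# Even a single-pixel read fetches one whole 256x256 chunk, not one value:
n_chunks, raw_bytes = read_cost(1, (5000, 5001), (8000, 8001))
```

This is also why point time series (24 reads of one chunk each) are cheap while full-year slabs are not.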
Install dependencies
pip install h5py numpy fsspec huggingface_hub
Open the file remotely
import h5py
import numpy as np
from huggingface_hub import hf_hub_url
# Stream directly from HuggingFace; no full download
url = hf_hub_url(
    repo_id   = "Daksh17440/landscan-global-population",
    filename  = "pop_2000-23.h5",
    repo_type = "dataset",
)

# ROS3 driver = HTTP range-request backend for HDF5
f = h5py.File(url, "r", driver="ros3")

pop = f["population"]       # shape (24, 21600, 43200); not loaded yet
lat = f["coords/lat"][:]
lon = f["coords/lon"][:]
yrs = f["coords/years"][:]  # [2000, 2001, …, 2023]
Tip: the `ros3` driver is only available when h5py is built against an HDF5 library compiled with ROS3 support. `conda install -c conda-forge h5py` includes it by default; standard pip wheels generally do not, so check your build before relying on it.
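You can check at runtime whether your build has ROS3, and fall back to a file-like object from fsspec otherwise (h5py accepts any seekable file-like object). The `open_remote` helper below is illustrative, not part of any library:

```python
import h5py

def open_remote(url):
    """Open an HDF5 file over HTTP, preferring ROS3 when available."""
    if "ros3" in h5py.registered_drivers():
        return h5py.File(url, "r", driver="ros3")
    # Fallback: an fsspec HTTP file works even without a ROS3-enabled
    # build, at the cost of one Python-level request per read.
    import fsspec
    return h5py.File(fsspec.open(url, "rb").open(), "r")
```

The fsspec path is slower per request but needs nothing beyond `pip install fsspec aiohttp`.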
π Usage Examples
1. Read a single year
# Year 2020 is at index 20 (2020 - 2000 = 20)
pop_2020 = f["population"][20, :, :]  # shape (21600, 43200)
# ~3.5 GB in RAM; only the touched chunks are transferred over the network
2. Look up the index for any year
years = f["coords/years"][:]

def year_idx(y):
    idx = np.where(years == y)[0]
    if len(idx) == 0:
        raise ValueError(f"Year {y} not in dataset")
    return int(idx[0])

pop_2015 = f["population"][year_idx(2015), :, :]
3. Spatial crop: bounding box query
lat = f["coords/lat"][:]
lon = f["coords/lon"][:]

def bbox_slice(lat_min, lat_max, lon_min, lon_max):
    """Return numpy index slices for a lat/lon bounding box."""
    row = np.where((lat >= lat_min) & (lat <= lat_max))[0]
    col = np.where((lon >= lon_min) & (lon <= lon_max))[0]
    return slice(row[0], row[-1]+1), slice(col[0], col[-1]+1)
# South Asia: 5–35°N, 65–95°E
rs, cs = bbox_slice(5, 35, 65, 95)

# Single-year crop: minimal network transfer
south_asia_2023 = f["population"][23, rs, cs]

# All-years crop: full time series for the region
south_asia_all = f["population"][:, rs, cs]  # shape (24, ~3600, ~3600)
4. Time-range + spatial crop together
# India, 2010β2020
years = f["coords/years"][:]
t_mask = np.where((years >= 2010) & (years <= 2020))[0]
rs, cs = bbox_slice(8, 37, 68, 97)
india_decade = f["population"][t_mask[0]:t_mask[-1]+1, rs, cs]
# shape: (11, H_india, W_india)
5. Country / region centroids: point time series
# Population at a single point over all years (full time series)
# New Delhi: 28.6Β°N, 77.2Β°E
lat_idx = int(np.argmin(np.abs(lat - 28.6)))
lon_idx = int(np.argmin(np.abs(lon - 77.2)))
delhi_series = f["population"][:, lat_idx, lon_idx] # shape (24,)
# Extremely fast: 24 single-pixel reads
6. Global population trend (no pixel reads needed)
# Pre-computed: instant, no pixel data transferred
total_pop = f["stats/total_pop"][:]
years = f["coords/years"][:]
for yr, pop in zip(years, total_pop):
print(f" {yr}: {pop/1e9:.3f} billion")
7. Use with xarray (NetCDF-style labelled arrays)
import xarray as xr
import h5py
import numpy as np
with h5py.File("pop_2000-23.h5", "r") as f:
    # Load a region into an xarray DataArray with named coords
    rs, cs = bbox_slice(5, 35, 65, 95)
    data  = f["population"][:, rs, cs]
    years = f["coords/years"][:]
    lats  = f["coords/lat"][rs]
    lons  = f["coords/lon"][cs]

da = xr.DataArray(
    data,
    dims   = ["time", "lat", "lon"],
    coords = {"time": years, "lat": lats, "lon": lons},
    name   = "population",
    attrs  = {"units": "persons per pixel", "source": "LandScan Global"},
)

# Now use xarray operations
annual_mean = da.mean(dim=["lat", "lon"])
trend = da.sel(time=slice(2010, 2020))
8. PyTorch: lazy streaming Dataset
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset
class LandScanDataset(Dataset):
    """
    Streams spatial patches on demand.
    Never loads the full tensor into RAM.

    Parameters
    ----------
    h5_path : local path or remote ROS3 URL
    year_range : (start_year, end_year) inclusive, e.g. (2010, 2020)
    bbox : (lat_min, lat_max, lon_min, lon_max) or None for global
    patch_size : spatial size of each returned patch (pixels)
    stride : step between patch top-left corners
    """
    def __init__(self, h5_path, year_range=(2000, 2023),
                 bbox=None, patch_size=256, stride=128):
        self.f = h5py.File(h5_path, "r")
        self.pop = self.f["population"]
        self.years = self.f["coords/years"][:]
        lat = self.f["coords/lat"][:]
        lon = self.f["coords/lon"][:]
        # Time axis
        t_mask = np.where((self.years >= year_range[0]) & (self.years <= year_range[1]))[0]
        self.t0, self.t1 = int(t_mask[0]), int(t_mask[-1]) + 1
        self.T = self.t1 - self.t0
        # Spatial axis
        if bbox:
            lat_m = np.where((lat >= bbox[0]) & (lat <= bbox[1]))[0]
            lon_m = np.where((lon >= bbox[2]) & (lon <= bbox[3]))[0]
            self.r0, self.r1 = int(lat_m[0]), int(lat_m[-1]) + 1
            self.c0, self.c1 = int(lon_m[0]), int(lon_m[-1]) + 1
        else:
            self.r0, self.r1 = 0, self.pop.shape[1]
            self.c0, self.c1 = 0, self.pop.shape[2]
        H, W = self.r1 - self.r0, self.c1 - self.c0
        self.ps = patch_size
        # All valid patch top-left corners (+1 so the last full patch is kept)
        self.patches = [
            (r, c)
            for r in range(0, H - patch_size + 1, stride)
            for c in range(0, W - patch_size + 1, stride)
        ]

    def __len__(self):
        return len(self.patches) * self.T

    def __getitem__(self, idx):
        t_rel = idx % self.T
        p_idx = idx // self.T
        r, c = self.patches[p_idx]
        t = self.t0 + t_rel
        r_abs, c_abs = self.r0 + r, self.c0 + c
        patch = self.pop[t, r_abs:r_abs+self.ps, c_abs:c_abs+self.ps]
        patch = patch.astype(np.float32)
        # Replace NaN with 0 for model input (or use a mask)
        nan_mask = np.isnan(patch)
        patch = np.nan_to_num(patch, nan=0.0)
        return {
            "population" : torch.from_numpy(patch[None]),    # (1, ps, ps)
            "nan_mask"   : torch.from_numpy(nan_mask[None]), # (1, ps, ps)
            "year"       : torch.tensor(int(self.years[t])),
        }

    def __del__(self):
        self.f.close()
# ── Example usage ─────────────────────────────────────────────────────────────
from torch.utils.data import DataLoader

ds = LandScanDataset(
    h5_path    = "pop_2000-23.h5",
    year_range = (2015, 2023),
    bbox       = (5, 35, 65, 95),  # South Asia
    patch_size = 256,
    stride     = 128,
)
# Note: an h5py handle opened before forking is not fork-safe; with
# num_workers > 0, consider opening the file lazily inside each worker.
loader = DataLoader(ds, batch_size=8, shuffle=True, num_workers=4)

for batch in loader:
    x    = batch["population"]  # (8, 1, 256, 256)
    mask = batch["nan_mask"]    # (8, 1, 256, 256)
    yr   = batch["year"]
    break
9. Normalize using pre-computed stats
with h5py.File("pop_2000-23.h5", "r") as f:
    means = f["stats/mean"][:]  # (24,), one per year
    stds  = f["stats/std"][:]   # (24,)
    years = f["coords/years"][:]

    # Normalize a patch for year 2018
    t = int(np.where(years == 2018)[0][0])
    pop_2018_patch = f["population"][t, 5000:5256, 8000:8256]
    normalized = (pop_2018_patch - means[t]) / (stds[t] + 1e-8)
10. HPC / MPI parallel reads
# h5py supports MPI-IO for multi-node HPC jobs
# Launch with: mpirun -n 8 python script.py
from mpi4py import MPI
import h5py
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

with h5py.File("pop_2000-23.h5", "r", driver="mpio", comm=comm) as f:
    T = f["population"].shape[0]
    my_years = np.array_split(np.arange(T), size)[rank]
    for t in my_years:
        slab = f["population"][t, :, :]
        # Each rank independently processes its years: no contention
        result = slab[~np.isnan(slab)].sum()
        print(f"rank={rank} t={t} total={result/1e9:.3f}B")
Native Extent per Year
Years 2001β2012 have fewer rows than the master grid because older LandScan releases used a slightly cropped polar extent. Pixels beyond the native extent are NaN.
with h5py.File("pop_2000-23.h5", "r") as f:
    ext_years = f["native_extent/years"][:]
    n_rows    = f["native_extent/n_rows"][:]
    n_cols    = f["native_extent/n_cols"][:]
    for yr, h, w in zip(ext_years, n_rows, n_cols):
        flag = "  ← cropped" if h < 21600 else ""
        print(f"{yr}: {h} × {w}{flag}")
Expected output:
2000: 21600 × 43200
2001: 20880 × 43200  ← cropped
...
2012: 20880 × 43200  ← cropped
2013: 21600 × 43200
...
2023: 21600 × 43200
Coordinate Reference
Top-left pixel centre    : 89.9917°N, 179.9917°W
Bottom-right pixel centre: 89.9917°S, 179.9917°E
Pixel size               : 0.008333° (~0.926 km at equator, ~1 km average)
# Convert lat/lon to row/col index
def latlon_to_idx(lat_val, lon_val, lat_arr, lon_arr):
row = int(np.argmin(np.abs(lat_arr - lat_val)))
col = int(np.argmin(np.abs(lon_arr - lon_val)))
return row, col
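To sanity-check the ~0.926 km figure, note that the east-west width of a pixel shrinks with the cosine of latitude. A small hypothetical helper using a spherical-Earth approximation (R = 6371 km, so 2πR/360 ≈ 111.195 km per degree of longitude at the equator):

```python
import math

def pixel_width_km(lat_deg, pixel_deg=1.0 / 120.0):
    """Approximate east-west width of a 30 arc-second pixel at a latitude.

    Spherical-Earth approximation: ~111.195 km per degree of longitude
    at the equator (R = 6371 km), scaled by cos(latitude).
    """
    return 111.195 * pixel_deg * math.cos(math.radians(lat_deg))
```

At the equator this gives ~0.927 km, matching the stated ~0.926 km to rounding; at 60° latitude each pixel is only half as wide, which matters for any area-weighted statistics.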
Known Issues & Limitations
- 2001–2012 polar crop: 720 rows missing at the poles (≥ ~83.5°N / ≤ ~83.5°S). These are ocean/ice, so the NaN fill has no impact on population analysis.
- NaN ≠ zero population: Do not fill NaN with 0 indiscriminately; ocean pixels and missing-extent pixels are both NaN but mean different things. Use /native_extent to distinguish them if needed.
- Ambient vs residential: LandScan is ambient population. It differs from census residential counts; commuters, transit zones, and commercial areas inflate daytime values.
- Population redistribution, not growth only: Year-to-year changes reflect both demographic change and model improvements across LandScan releases.
Citation
If you use this dataset, please cite the original LandScan source:
@dataset{landscan_global,
  author    = {Oak Ridge National Laboratory},
  title     = {LandScan Global Population Database},
  year      = {2023},
  publisher = {Oak Ridge National Laboratory},
  url       = {https://landscan.ornl.gov},
  note      = {Annual releases 2000--2023}
}
License
The original LandScan Global data is made available under the Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: UT-Battelle, LLC, Oak Ridge National Laboratory. This HDF5 repackaging does not alter the underlying data.