# Wikimedia Pageview Time Series

Preprocessed time series dataset derived from Wikimedia pageview statistics. Contains fixed-length windows of Wikipedia article pageview counts at hourly and daily resolution, plus STL seasonal-trend decomposition components.

## Dataset Summary
| Subset | Series Count | Series Length | Size | Description |
|---|---|---|---|---|
| `wiki_hourly` | 3,715,121 | 1025 | 2.5 GB | Hourly pageview counts |
| `wiki_daily` | 1,990,244 | 1025 | 2.5 GB | Daily aggregated pageview counts |
| `wiki_stl_residual` | 530,731 | 1025 | 2.0 GB | STL decomposition — residual component |
| `wiki_stl_seasonal` | 371,512 | 1025 | 1.4 GB | STL decomposition — seasonal component |
| `wiki_stl_trend` | 159,219 | 1025 | 587 MB | STL decomposition — trend component |

**Total:** ~6.77M time series, ~9 GB
## Schema
Each parquet file contains three columns:
| Column | Type | Description |
|---|---|---|
| `series` | `fixed_size_list<float32>[1025]` | The time series values (1024 input steps + 1 target) |
| `source_id` | `uint8` | Numeric identifier for the data source/component |
| `meta` | `string` | Human-readable component name |
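Since each window stores 1024 input steps followed by a single target value, splitting a row into model input and target is a one-line slice. A minimal sketch (the function name is illustrative, not part of the dataset tooling):

```python
import numpy as np

def split_input_target(series):
    """Split a length-1025 window into 1024 input steps and a 1-step target."""
    arr = np.asarray(series, dtype=np.float32)
    return arr[:-1], arr[-1]
```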
## Source IDs

| `source_id` | `meta` value | Description |
|---|---|---|
| 1 | `wiki_hourly` | Raw hourly pageview counts |
| 2 | `wiki_daily` | Daily aggregated pageview counts |
| 3 | `wiki_stl_residual` | Residual after STL decomposition |
| 4 | `wiki_stl_seasonal` | Seasonal component from STL decomposition |
| 5 | `wiki_stl_trend` | Trend component from STL decomposition |
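When working with the raw parquet files directly, the table above can be expressed as a small lookup for labeling series by their `source_id`. The mapping is transcribed from the table; the helper name is illustrative:

```python
# source_id -> meta name, transcribed from the Source IDs table.
SOURCE_ID_TO_META = {
    1: "wiki_hourly",
    2: "wiki_daily",
    3: "wiki_stl_residual",
    4: "wiki_stl_seasonal",
    5: "wiki_stl_trend",
}

def meta_for(source_id: int) -> str:
    """Return the human-readable component name for a numeric source_id."""
    return SOURCE_ID_TO_META[source_id]
```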
## Data Origin
- Source: Wikimedia pageview complete dumps
- Date range: December 2011 — October 2016
- Filtering: Pages with fewer than 10 daily views are excluded
- Processing pipeline:
  1. Raw hourly `.bz2` dumps downloaded from Wikimedia
  2. Parsed and aggregated into weekly parquet files
  3. Stitched into fixed-length windows of T=1025 time steps (1024 + 1)
  4. STL seasonal-trend decomposition applied to extract trend, seasonal, and residual components
  5. Daily aggregation computed from hourly data
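The daily aggregation step, for instance, amounts to summing each block of 24 hourly values. A minimal sketch of that step (not the actual pipeline code; trailing hours that do not fill a complete day are simply dropped here):

```python
import numpy as np

def hourly_to_daily(hourly: np.ndarray) -> np.ndarray:
    """Aggregate an hourly series into daily totals (24 hours per day)."""
    n_days = len(hourly) // 24
    # Drop incomplete trailing days, then sum each 24-hour block.
    return hourly[: n_days * 24].reshape(n_days, 24).sum(axis=1)
```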
## Usage

```python
from datasets import load_dataset

# Load a specific subset
ds = load_dataset("jeremycochoy/wikimedia-pageview-timeseries", "wiki_daily")

# Access a time series
series = ds["train"][0]["series"]  # list of 1025 floats
```
Or load directly with PyArrow:
```python
import pyarrow.parquet as pq

table = pq.read_table("wiki_daily/wiki_daily_file000_00000.parquet")
df = table.to_pandas()
series = df["series"].iloc[0]  # numpy array of shape (1025,)
```
## Data Characteristics
- Patterns present: viral spikes, seasonal cycles (holidays, sports events, school calendars), slow decays, flat/stable pages, multi-language diversity (all Wikimedia projects)
- Languages: All Wikimedia language editions included (English, German, French, Japanese, Russian, etc.)
- Use cases: Time series foundation model pretraining, forecasting benchmarks, transfer learning
## License
The underlying Wikimedia pageview data is released under CC0 1.0 (Public Domain). This preprocessed dataset inherits that license.