url stringlengths 58-61 | repository_url stringclasses (1 value) | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 48-51 | id int64 600M-3.67B | node_id stringlengths 18-24 | number int64 2-7.88k | title stringlengths 1-290 | user dict | labels listlengths 0-4 | state stringclasses (2 values) | locked bool (1 class) | assignee dict | assignees listlengths 0-4 | comments listlengths 0-30 | created_at timestamp[s] 2020-04-14 18:18:51 to 2025-11-26 16:16:56 | updated_at timestamp[s] 2020-04-29 09:23:05 to 2025-11-30 03:52:07 | closed_at timestamp[s] 2020-04-29 09:23:05 to 2025-11-21 12:31:19 ⌀ | author_association stringclasses (4 values) | type null | active_lock_reason null | draft null | pull_request null | body stringlengths 0-228k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app null | state_reason stringclasses (4 values) | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool (1 class) | closed_at_time_taken duration[s] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7719/comments | https://api.github.com/repos/huggingface/datasets/issues/7719/events | https://github.com/huggingface/datasets/issues/7719 | 3,285,928,491 | I_kwDODunzps7D20or | 7,719 | Specify dataset columns types in typehint | {
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [] | 2025-08-02T13:22:31 | 2025-08-02T13:22:31 | null | NONE | null | null | null | null | ### Feature request
Make `Dataset` optionally generic for use with type annotations, as was done for `torch.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of datasets objects, but they... | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7719/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
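The feature request in the row above (issue 7719) asks for a generic `Dataset` in the spirit of `torch.DataLoader[T]`. A minimal, self-contained sketch of what such typing could look like — this is an illustration of the requested pattern, not the `datasets` library's API; `TypedDataset` and `QARow` are hypothetical names:

```python
from typing import Generic, Iterator, List, TypedDict, TypeVar

T = TypeVar("T")

class TypedDataset(Generic[T]):
    """Hypothetical generic dataset wrapper: each row is typed as T."""

    def __init__(self, rows: List[T]) -> None:
        self._rows = rows

    def __getitem__(self, i: int) -> T:
        return self._rows[i]

    def __iter__(self) -> Iterator[T]:
        return iter(self._rows)

    def __len__(self) -> int:
        return len(self._rows)

class QARow(TypedDict):
    question: str
    answer: str

ds: "TypedDataset[QARow]" = TypedDataset([{"question": "2+2?", "answer": "4"}])
row = ds[0]  # a static type checker infers QARow here
print(row["answer"])  # -> 4
```

The payoff is purely static: tools like mypy can then flag `row["anwser"]` as a typo at check time instead of a `KeyError` at runtime.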
https://api.github.com/repos/huggingface/datasets/issues/7717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7717/comments | https://api.github.com/repos/huggingface/datasets/issues/7717/events | https://github.com/huggingface/datasets/issues/7717 | 3,282,855,127 | I_kwDODunzps7DrGTX | 7,717 | Cached dataset is not used when explicitly passing the cache_dir parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "h... | [] | open | false | null | [] | [
"Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` trigg... | 2025-08-01T07:12:41 | 2025-08-05T19:19:36 | null | NONE | null | null | null | null | ### Describe the bug
Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset(), the cached snapshot is not used. In both calls, I provide the cache_dir parameter.
### Steps to reproduce the bug
```
from datasets import load_dataset, concatenate_datasets
from h... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7717/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7709/comments | https://api.github.com/repos/huggingface/datasets/issues/7709/events | https://github.com/huggingface/datasets/issues/7709 | 3,276,677,990 | I_kwDODunzps7DTiNm | 7,709 | Release 4.0.0 breaks usage patterns of with_format | {
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | [
"This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"nump... | 2025-07-30T11:34:53 | 2025-08-07T08:27:18 | 2025-08-07T08:27:18 | NONE | null | null | null | null | ### Describe the bug
Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memo... | {
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url":... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7709/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 7 days, 20:52:25 |
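The maintainer's reply in the row above (issue 7709) boils down to: a formatted column is now a lazy `Column` object, materialized via `col[i]`, `col[i:j]`, or `col[:]`. A toy stand-in showing that access pattern — this is not the real `datasets.Column` implementation, just a sketch of its indexing semantics:

```python
class LazyColumn:
    """Toy stand-in for a lazy column: values are only computed
    when an index or slice is requested."""

    def __init__(self, fetch, length):
        self._fetch = fetch    # callable: row index -> value
        self._length = length

    def __len__(self):
        return self._length

    def __getitem__(self, key):
        if isinstance(key, slice):
            # materialize only the requested slice
            return [self._fetch(i) for i in range(*key.indices(self._length))]
        return self._fetch(key)

col = LazyColumn(lambda i: i * i, 5)
print(col[2])    # single element -> 4
print(col[1:3])  # partial slice -> [1, 4]
print(col[:])    # full column materialized -> [0, 1, 4, 9, 16]
```

The design trade-off is the one the issue complains about: whole-column in-memory operations now need an explicit `col[:]`, but nothing is decoded or copied until you ask for it.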
https://api.github.com/repos/huggingface/datasets/issues/7707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7707/comments | https://api.github.com/repos/huggingface/datasets/issues/7707/events | https://github.com/huggingface/datasets/issues/7707 | 3,271,867,998 | I_kwDODunzps7DBL5e | 7,707 | load_dataset() in 4.0.0 failed when decoding audio | {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_... | [] | closed | false | null | [] | [
"Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.",
"Use !pip install -U datasets[audio]... | 2025-07-29T03:25:03 | 2025-10-05T06:41:38 | 2025-08-01T05:15:45 | NONE | null | null | null | null | ### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
1st round run, got
```
File "/usr/local/lib/python3.1... | {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7707/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 3 days, 1:50:42 |
https://api.github.com/repos/huggingface/datasets/issues/7705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7705/comments | https://api.github.com/repos/huggingface/datasets/issues/7705/events | https://github.com/huggingface/datasets/issues/7705 | 3,269,070,499 | I_kwDODunzps7C2g6j | 7,705 | Can Not read installed dataset in dataset.load(.) | {
"avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
"events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangChiEn/followers",
"following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | [
"You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```",
"> You can download the dataset lo... | 2025-07-28T09:43:54 | 2025-08-05T01:24:32 | null | NONE | null | null | null | null | Hi folks, I'm a newbie to the Hugging Face datasets API.
As the title says, I'm facing an issue where the dataset.load API cannot read the installed dataset.
code snippet :
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
data path :
"/xxx/jose... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7705/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7703/comments | https://api.github.com/repos/huggingface/datasets/issues/7703/events | https://github.com/huggingface/datasets/issues/7703 | 3,265,648,942 | I_kwDODunzps7Cpdku | 7,703 | [Docs] map() example uses undefined `tokenizer` — causes NameError | {
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
... | [] | open | false | null | [] | [
"I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`."
] | 2025-07-26T13:35:11 | 2025-07-27T09:44:35 | null | CONTRIBUTOR | null | null | null | null | ## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes a NameError every time the example is copied and run.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda examp... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7703/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
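The docs issue in the row above (7703) is that the `map()` example calls an undefined `tokenizer`. A self-contained illustration of batched `map()` semantics, with a trivial whitespace "tokenizer" standing in for the real transformers tokenizer the docs intended (the stand-in function and column names are assumptions for the sketch):

```python
def whitespace_tokenizer(texts):
    # stand-in for a real tokenizer: list of strings -> dict of lists
    return {"input_ids": [[len(w) for w in t.split()] for t in texts]}

def batched_map(batch):
    # in batched mode, `batch` is a dict of columns, each a list of examples
    batch.update(whitespace_tokenizer(batch["text"]))
    return batch

batch = {"text": ["hello world", "hi"]}
result = batched_map(batch)
print(result["input_ids"])  # -> [[5, 5], [2]]
```

The key point the docs example was trying to make survives: a batched map function receives and returns dicts of lists, one list per column.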
https://api.github.com/repos/huggingface/datasets/issues/7700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7700/comments | https://api.github.com/repos/huggingface/datasets/issues/7700/events | https://github.com/huggingface/datasets/issues/7700 | 3,263,922,255 | I_kwDODunzps7Ci4BP | 7,700 | [doc] map.num_proc needs clarification | {
"avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4",
"events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}",
"followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers",
"following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}",
... | [] | open | false | null | [] | [] | 2025-07-25T17:35:09 | 2025-07-25T17:39:36 | null | NONE | null | null | null | null | https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The n... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7700/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
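Per the doc snippets quoted above (issue 7700), `num_proc` controls how many worker processes the rows are split across. A sketch of the contiguous-shard partitioning semantics — an illustration of what "splitting into num_proc shards" means, not the library's internal code; `shard_bounds` is a hypothetical helper:

```python
def shard_bounds(num_rows, num_proc):
    """Split num_rows into num_proc contiguous (start, end) shards,
    spreading the remainder over the first shards."""
    base, extra = divmod(num_rows, num_proc)
    bounds, start = [], 0
    for rank in range(num_proc):
        size = base + (1 if rank < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

print(shard_bounds(10, 4))  # -> [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Each worker would then process its own `(start, end)` range independently, which is why already-cached shards can be loaded without re-running the map function.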
https://api.github.com/repos/huggingface/datasets/issues/7699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7699/comments | https://api.github.com/repos/huggingface/datasets/issues/7699/events | https://github.com/huggingface/datasets/issues/7699 | 3,261,053,171 | I_kwDODunzps7CX7jz | 7,699 | Broken link in documentation for "Create a video dataset" | {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": ... | [] | open | false | null | [] | [
"The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to switch the link to the mirror shared in that issue"
] | 2025-07-24T19:46:28 | 2025-07-25T15:27:47 | null | NONE | null | null | null | null | The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" /> | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7699/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7698/comments | https://api.github.com/repos/huggingface/datasets/issues/7698/events | https://github.com/huggingface/datasets/issues/7698 | 3,255,350,916 | I_kwDODunzps7CCLaE | 7,698 | NotImplementedError when using streaming=True in Google Colab environment | {
"avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4",
"events_url": "https://api.github.com/users/Aniket17200/events{/privacy}",
"followers_url": "https://api.github.com/users/Aniket17200/followers",
"following_url": "https://api.github.com/users/Aniket17200/following{/other_user}",
"gists_... | [] | open | false | null | [] | [
"Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.",
"Thank you @tanuj-rai, it's working great "
] | 2025-07-23T08:04:53 | 2025-07-23T15:06:23 | null | NONE | null | null | null | null | ### Describe the bug
When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7698/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7697/comments | https://api.github.com/repos/huggingface/datasets/issues/7697/events | https://github.com/huggingface/datasets/issues/7697 | 3,254,526,399 | I_kwDODunzps7B_CG_ | 7,697 | - | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | [] | closed | false | null | [] | [] | 2025-07-23T01:30:32 | 2025-07-25T15:21:39 | 2025-07-25T15:21:39 | NONE | null | null | null | null | - | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7697/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 13:51:07 |
https://api.github.com/repos/huggingface/datasets/issues/7696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7696/comments | https://api.github.com/repos/huggingface/datasets/issues/7696/events | https://github.com/huggingface/datasets/issues/7696 | 3,253,433,350 | I_kwDODunzps7B63QG | 7,696 | load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
... | [] | closed | false | null | [] | [
"Hi ! This is because `datasets` now uses the FFmpeg-based library `torchcodec` instead of the libsndfile-based library `soundfile` to decode audio data. Those two have different decoding implementations",
"I’m all for torchcodec, good luck with the migration!"
] | 2025-07-22T17:02:17 | 2025-07-30T14:22:21 | 2025-07-30T14:22:21 | NONE | null | null | null | null | ### Describe the bug
In datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, this breaks integration tests that depend on consistent sample data across different environments (first and second envs specified below).
### Steps to reproduce the bug
```python
from dat... | {
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7696/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 7 days, 21:20:04 |
https://api.github.com/repos/huggingface/datasets/issues/7694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7694/comments | https://api.github.com/repos/huggingface/datasets/issues/7694/events | https://github.com/huggingface/datasets/issues/7694 | 3,247,600,408 | I_kwDODunzps7BknMY | 7,694 | Dataset.to_json consumes excessive memory, appears to not be a streaming operation | {
"avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4",
"events_url": "https://api.github.com/users/ycq0125/events{/privacy}",
"followers_url": "https://api.github.com/users/ycq0125/followers",
"following_url": "https://api.github.com/users/ycq0125/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | [
"Hi ! to_json is memory efficient and writes the data by batch:\n\nhttps://github.com/huggingface/datasets/blob/d9861d86be222884dabbd534a2db770c70c9b558/src/datasets/io/json.py#L153-L159\n\nWhat memory are you mesuring ? If you are mesuring RSS, it is likely that it counts the memory mapped data of the dataset. Mem... | 2025-07-21T07:51:25 | 2025-07-25T14:42:21 | null | NONE | null | null | null | null | ### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation.
This behavior ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7694/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
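The batch-wise writing the maintainer describes in the row above (issue 7694) can be sketched generically: a JSON Lines writer that holds at most one batch in memory at a time. The function name and default batch size here are illustrative, not the `datasets` internals:

```python
import io
import json
from itertools import islice

def write_jsonl_in_batches(rows, fp, batch_size=1000):
    """Write an iterable of dicts as JSON Lines, materializing
    only one batch of rows at a time."""
    it = iter(rows)
    written = 0
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        fp.write("\n".join(json.dumps(r) for r in batch) + "\n")
        written += len(batch)
    return written

# usage with an in-memory buffer and a generator (nothing pre-materialized)
buf = io.StringIO()
n = write_jsonl_in_batches(({"i": i} for i in range(5)), buf, batch_size=2)
print(n)  # -> 5
```

With this shape, peak memory tracks `batch_size`, not the dataset size — which is why high RSS in the reported scenario more likely reflects the memory-mapped Arrow data being counted than the writer buffering everything.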
https://api.github.com/repos/huggingface/datasets/issues/7693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7693/comments | https://api.github.com/repos/huggingface/datasets/issues/7693/events | https://github.com/huggingface/datasets/issues/7693 | 3,246,369,678 | I_kwDODunzps7Bf6uO | 7,693 | Dataset scripts are no longer supported, but found superb.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/114297534?v=4",
"events_url": "https://api.github.com/users/edwinzajac/events{/privacy}",
"followers_url": "https://api.github.com/users/edwinzajac/followers",
"following_url": "https://api.github.com/users/edwinzajac/following{/other_user}",
"gists_url... | [] | open | false | null | [] | [
"I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)... | 2025-07-20T13:48:06 | 2025-09-04T10:32:12 | null | NONE | null | null | null | null | ### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines) but the tutorial seems to work only on old datasets versions.
I then get the error :
```
--------------------------------------------------------------------------
... | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7693/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7692/comments | https://api.github.com/repos/huggingface/datasets/issues/7692/events | https://github.com/huggingface/datasets/issues/7692 | 3,246,268,635 | I_kwDODunzps7BfiDb | 7,692 | xopen: invalid start byte for streaming dataset with trust_remote_code=True | {
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "h... | [] | open | false | null | [] | [
"Hi ! it would be cool to convert this dataset to Parquet. This will make it work for `datasets>=4.0`, enable the Dataset Viewer and make it more reliable to load/stream (currently it uses a loading script in python and those are known for having issues sometimes)\n\nusing `datasets==3.6.0`, here is the command to ... | 2025-07-20T11:08:20 | 2025-07-25T14:38:54 | null | NONE | null | null | null | null | ### Describe the bug
I am trying to load YODAS2 dataset with datasets==3.6.0
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7692/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7691/comments | https://api.github.com/repos/huggingface/datasets/issues/7691/events | https://github.com/huggingface/datasets/issues/7691 | 3,245,547,170 | I_kwDODunzps7Bcx6i | 7,691 | Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": ... | [] | open | false | null | [] | [
"It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to j... | 2025-07-19T18:40:27 | 2025-07-25T08:51:10 | null | NONE | null | null | null | null | ### Describe the bug
I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2GB. The instant I hit a shard containing one of those videos, I get an ArrowCapacityError, even with streaming.
I made a config for the dataset that specifically inclu... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7691/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7689/comments | https://api.github.com/repos/huggingface/datasets/issues/7689/events | https://github.com/huggingface/datasets/issues/7689 | 3,242,580,301 | I_kwDODunzps7BRdlN | 7,689 | BadRequestError for loading dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/45011687?v=4",
"events_url": "https://api.github.com/users/WPoelman/events{/privacy}",
"followers_url": "https://api.github.com/users/WPoelman/followers",
"following_url": "https://api.github.com/users/WPoelman/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | [
"Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.",
"I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working su... | 2025-07-18T09:30:04 | 2025-07-18T11:59:51 | 2025-07-18T11:52:29 | NONE | null | null | null | null | ### Describe the bug
Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid... | {
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"g... | {
"+1": 23,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7689/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2:22:25 |
https://api.github.com/repos/huggingface/datasets/issues/7688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7688/comments | https://api.github.com/repos/huggingface/datasets/issues/7688/events | https://github.com/huggingface/datasets/issues/7688 | 3,238,851,443 | I_kwDODunzps7BDPNz | 7,688 | No module named "distributed" | {
"avatar_url": "https://avatars.githubusercontent.com/u/45058324?v=4",
"events_url": "https://api.github.com/users/yingtongxiong/events{/privacy}",
"followers_url": "https://api.github.com/users/yingtongxiong/followers",
"following_url": "https://api.github.com/users/yingtongxiong/following{/other_user}",
"g... | [] | open | false | null | [] | [
"The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of the library you are using (in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to ...
hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always met the bug "No module named 'datasets.distributed" in different version like 4.0.0, 2.21.0 and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.di... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7688/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7687/comments | https://api.github.com/repos/huggingface/datasets/issues/7687/events | https://github.com/huggingface/datasets/issues/7687 | 3,238,760,301 | I_kwDODunzps7BC49t | 7,687 | Datasets keeps rebuilding the dataset every time i call the python script | {
"avatar_url": "https://avatars.githubusercontent.com/u/58883113?v=4",
"events_url": "https://api.github.com/users/CALEB789/events{/privacy}",
"followers_url": "https://api.github.com/users/CALEB789/followers",
"following_url": "https://api.github.com/users/CALEB789/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"here is the code to load the dataset from the cache:\n\n```python\ns = load_dataset('databricks/databricks-dolly-15k')['train']\n```\n\nif you pass the location of a local directory it will create a new cache based on that directory's content"
] | 2025-07-17T09:03:38 | 2025-07-25T15:21:31 | null | NONE | null | null | null | null | ### Describe the bug
Every time it runs, the number of samples somehow increases.
This can cause a 12 MB dataset to accumulate rebuilt versions of 400 MB+
<img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" />
### Steps to reproduce the bug
`from datasets... | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7687/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7687/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7686/comments | https://api.github.com/repos/huggingface/datasets/issues/7686/events | https://github.com/huggingface/datasets/issues/7686 | 3,237,201,090 | I_kwDODunzps7A88TC | 7,686 | load_dataset does not check .no_exist files in the hub cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/3627235?v=4",
"events_url": "https://api.github.com/users/jmaccarl/events{/privacy}",
"followers_url": "https://api.github.com/users/jmaccarl/followers",
"following_url": "https://api.github.com/users/jmaccarl/following{/other_user}",
"gists_url": "http... | [] | open | false | null | [] | [] | 2025-07-16T20:04:00 | 2025-07-16T20:04:00 | null | NONE | null | null | null | null | ### Describe the bug
I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack.
The fundamental issue is that the `load_datasets` api doesn't use the `.no_exist` files in the hub cache unlike other wr... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7686/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7685/comments | https://api.github.com/repos/huggingface/datasets/issues/7685/events | https://github.com/huggingface/datasets/issues/7685 | 3,236,979,340 | I_kwDODunzps7A8GKM | 7,685 | Inconsistent range request behavior for parquet REST api | {
"avatar_url": "https://avatars.githubusercontent.com/u/21327470?v=4",
"events_url": "https://api.github.com/users/universalmind303/events{/privacy}",
"followers_url": "https://api.github.com/users/universalmind303/followers",
"following_url": "https://api.github.com/users/universalmind303/following{/other_use... | [] | open | false | null | [] | [
"This is a weird bug, is it a range that is supposed to be satisfiable? I mean, is it on the boundaries?\n\nLet me know if you're still having the issue, in case it was just a transient bug",
"@lhoestq yes the ranges are supposed to be satisfiable, and _sometimes_ they are. \n\nThe head requests show that it ... | 2025-07-16T18:39:44 | 2025-08-11T08:16:54 | null | NONE | null | null | null | null | ### Describe the bug
First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere.
The datasets rest api is inconsistently giving `416 Range Not Satisfiable` when using a range request to get portions of the parquet files. Mor... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7685/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7682/comments | https://api.github.com/repos/huggingface/datasets/issues/7682/events | https://github.com/huggingface/datasets/issues/7682 | 3,229,687,253 | I_kwDODunzps7AgR3V | 7,682 | Fail to cast Audio feature for numpy arrays in datasets 4.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/163345686?v=4",
"events_url": "https://api.github.com/users/luatil-cloud/events{/privacy}",
"followers_url": "https://api.github.com/users/luatil-cloud/followers",
"following_url": "https://api.github.com/users/luatil-cloud/following{/other_user}",
"gis... | [] | closed | false | null | [] | [
"thanks for reporting, I opened a PR and I'll make a patch release soon ",
"> thanks for reporting, I opened a PR and I'll make a patch release soon\n\nThank you very much @lhoestq!"
] | 2025-07-14T18:41:02 | 2025-07-15T12:10:39 | 2025-07-15T10:24:08 | NONE | null | null | null | null | ### Describe the bug
Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails
in version 4.0.0 but not in version 3.6.0
### Steps to reproduce the bug
The following `uv script` should be able to reproduce the bug in version 4.0.0
and pass in version 3.6.0 on a macOS ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7682/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 15:43:06 |
https://api.github.com/repos/huggingface/datasets/issues/7681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7681/comments | https://api.github.com/repos/huggingface/datasets/issues/7681/events | https://github.com/huggingface/datasets/issues/7681 | 3,227,112,736 | I_kwDODunzps7AWdUg | 7,681 | Probabilistic High Memory Usage and Freeze on Python 3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [] | 2025-07-14T01:57:16 | 2025-07-14T01:57:16 | null | NONE | null | null | null | null | ### Describe the bug
A probabilistic issue encountered when processing datasets containing PIL.Image columns using the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization, leading to a complete freeze. During this freeze, th... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7681/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7681/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7680/comments | https://api.github.com/repos/huggingface/datasets/issues/7680/events | https://github.com/huggingface/datasets/issues/7680 | 3,224,824,151 | I_kwDODunzps7ANulX | 7,680 | Question about iterable dataset and streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/73541181?v=4",
"events_url": "https://api.github.com/users/Tavish9/events{/privacy}",
"followers_url": "https://api.github.com/users/Tavish9/followers",
"following_url": "https://api.github.com/users/Tavish9/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | [
"> If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n\nyes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\n> load_dataset(streaming=True) is useful for huge datase... | 2025-07-12T04:48:30 | 2025-08-01T13:01:48 | null | NONE | null | null | null | null | In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78
I am confused,
1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style datase... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7680/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7680/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7679/comments | https://api.github.com/repos/huggingface/datasets/issues/7679/events | https://github.com/huggingface/datasets/issues/7679 | 3,220,787,371 | I_kwDODunzps6_-VCr | 7,679 | metric glue breaks with 4.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | [
"I released `evaluate` 0.4.5 yesterday to fix the issue - sorry for the inconvenience:\n\n```\npip install -U evaluate\n```",
"Thanks so much, @lhoestq!"
] | 2025-07-10T21:39:50 | 2025-07-11T17:42:01 | 2025-07-11T17:42:01 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
This worked fine with 3.6.0; with 4.0.0, `eval_metric = metric.compute()` in HF Accelerate breaks.
The code that fails is:
https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84
```
def simple_accuracy(preds, labels):
print(preds, labels)
print(f"{preds==labels}")
r... | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7679/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 20:02:11 |
https://api.github.com/repos/huggingface/datasets/issues/7678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7678/comments | https://api.github.com/repos/huggingface/datasets/issues/7678/events | https://github.com/huggingface/datasets/issues/7678 | 3,218,625,544 | I_kwDODunzps6_2FQI | 7,678 | To support decoding audio data, please install 'torchcodec'. | {
"avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4",
"events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}",
"followers_url": "https://api.github.com/users/alpcansoydas/followers",
"following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}",
"gist... | [] | closed | false | null | [] | [
"Hi ! yes you should `!pip install -U datasets[audio]` to have the required dependencies.\n\n`datasets` 4.0 now relies on `torchcodec` for audio decoding. The `torchcodec` AudioDecoder enables streaming from HF and also allows to decode ranges of audio",
"Same issues on Colab.\n\n> !pip install -U datasets[audio]... | 2025-07-10T09:43:13 | 2025-07-22T03:46:52 | 2025-07-11T05:05:42 | NONE | null | null | null | null |
In the latest version, datasets==4.0.0, I cannot print the audio data in a Colab notebook. It works in version 3.6.0.
!pip install -q -U datasets huggingface_hub fsspec
from datasets import load_dataset
downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
print(downloaded_datase... | {
"avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4",
"events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}",
"followers_url": "https://api.github.com/users/alpcansoydas/followers",
"following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}",
"gist... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7678/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7678/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 19:22:29 |
https://api.github.com/repos/huggingface/datasets/issues/7677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7677/comments | https://api.github.com/repos/huggingface/datasets/issues/7677/events | https://github.com/huggingface/datasets/issues/7677 | 3,218,044,656 | I_kwDODunzps6_z3bw | 7,677 | Toxicity fails with datasets 4.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4",
"events_url": "https://api.github.com/users/serena-ruan/events{/privacy}",
"followers_url": "https://api.github.com/users/serena-ruan/followers",
"following_url": "https://api.github.com/users/serena-ruan/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"Hi ! You can fix this by upgrading `evaluate`:\n\n```\npip install -U evaluate\n```",
"Thanks, verified evaluate 0.4.5 works!"
] | 2025-07-10T06:15:22 | 2025-07-11T04:40:59 | 2025-07-11T04:40:59 | NONE | null | null | null | null | ### Describe the bug
With the latest 4.0.0 release, the Hugging Face toxicity evaluation module fails with the error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).`
### Steps to reproduce the bug
Repro:... | {
"avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4",
"events_url": "https://api.github.com/users/serena-ruan/events{/privacy}",
"followers_url": "https://api.github.com/users/serena-ruan/followers",
"following_url": "https://api.github.com/users/serena-ruan/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7677/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7677/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 22:25:37 |
https://api.github.com/repos/huggingface/datasets/issues/7676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7676/comments | https://api.github.com/repos/huggingface/datasets/issues/7676/events | https://github.com/huggingface/datasets/issues/7676 | 3,216,857,559 | I_kwDODunzps6_vVnX | 7,676 | Many things broken since the new 4.0.0 release | {
"avatar_url": "https://avatars.githubusercontent.com/u/37179323?v=4",
"events_url": "https://api.github.com/users/mobicham/events{/privacy}",
"followers_url": "https://api.github.com/users/mobicham/followers",
"following_url": "https://api.github.com/users/mobicham/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"Happy to take a look, do you have a list of impacted datasets ?",
"Thanks @lhoestq , related to lm-eval, at least `winogrande`, `mmlu` and `hellaswag`, based on my tests yesterday. But many others like <a href=\"https://huggingface.co/datasets/lukaemon/bbh\">bbh</a>, most probably others too. ",
"Hi @mobicham ... | 2025-07-09T18:59:50 | 2025-09-18T16:33:34 | null | NONE | null | null | null | null | ### Describe the bug
The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness.
I am trying to revert to older versions, like 3.6.0, to make the eval work, but I keep getting:
``` Python
File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in genera... | null | {
"+1": 23,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7676/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7675/comments | https://api.github.com/repos/huggingface/datasets/issues/7675/events | https://github.com/huggingface/datasets/issues/7675 | 3,216,699,094 | I_kwDODunzps6_uu7W | 7,675 | common_voice_11_0.py failure in dataset library | {
"avatar_url": "https://avatars.githubusercontent.com/u/98793855?v=4",
"events_url": "https://api.github.com/users/egegurel/events{/privacy}",
"followers_url": "https://api.github.com/users/egegurel/followers",
"following_url": "https://api.github.com/users/egegurel/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"Hi ! This dataset is not in a supported format and `datasets` 4 doesn't support datasets that based on python scripts which are often source of errors. Feel free to ask the dataset authors to convert the dataset to a supported format at https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/discussio... | 2025-07-09T17:47:59 | 2025-07-22T09:35:42 | null | NONE | null | null | null | null | ### Describe the bug
I tried to download the dataset but got this error:
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
---------------------------------------------------------------------------
RuntimeError Tr... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7675/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7671/comments | https://api.github.com/repos/huggingface/datasets/issues/7671/events | https://github.com/huggingface/datasets/issues/7671 | 3,213,223,886 | I_kwDODunzps6_hefO | 7,671 | Mapping function not working if the first example is returned as None | {
"avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4",
"events_url": "https://api.github.com/users/dnaihao/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaihao/followers",
"following_url": "https://api.github.com/users/dnaihao/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | [
"Hi, map() always expect an output.\n\nIf you wish to filter examples, you should use filter(), in your case it could be something like this:\n\n```python\nds = ds.map(my_processing_function).filter(ignore_long_prompts)\n```",
"Realized this! Thanks a lot, I will close this issue then."
] | 2025-07-08T17:07:47 | 2025-07-09T12:30:32 | 2025-07-09T12:30:32 | NONE | null | null | null | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
Here we can see the writer is initialized on `i==0`. However, there can be cases where in the user mapping function, the first example is filtered out (length cons... | {
"avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4",
"events_url": "https://api.github.com/users/dnaihao/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaihao/followers",
"following_url": "https://api.github.com/users/dnaihao/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7671/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7671/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 19:22:45 |
https://api.github.com/repos/huggingface/datasets/issues/7669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7669/comments | https://api.github.com/repos/huggingface/datasets/issues/7669/events | https://github.com/huggingface/datasets/issues/7669 | 3,203,541,091 | I_kwDODunzps6-8ihj | 7,669 | How can I add my custom data to huggingface datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/219205504?v=4",
"events_url": "https://api.github.com/users/xiagod/events{/privacy}",
"followers_url": "https://api.github.com/users/xiagod/followers",
"following_url": "https://api.github.com/users/xiagod/following{/other_user}",
"gists_url": "https://... | [] | open | false | null | [] | [
"Hey @xiagod \n\nThe easiest way to add your custom data to Hugging Face Datasets is to use the built-in load_dataset function with your local files. Some examples include:\n\nCSV files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"csv\", data_files=\"my_file.csv\")\n\nJSON or JSONL files:\nfrom dat... | 2025-07-04T19:19:54 | 2025-07-05T18:19:37 | null | NONE | null | null | null | null | I want to add my custom dataset in huggingface dataset. Please guide me how to achieve that. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7669/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7668/comments | https://api.github.com/repos/huggingface/datasets/issues/7668/events | https://github.com/huggingface/datasets/issues/7668 | 3,199,039,322 | I_kwDODunzps6-rXda | 7,668 | Broken EXIF crash the whole program | {
"avatar_url": "https://avatars.githubusercontent.com/u/30485844?v=4",
"events_url": "https://api.github.com/users/Seas0/events{/privacy}",
"followers_url": "https://api.github.com/users/Seas0/followers",
"following_url": "https://api.github.com/users/Seas0/following{/other_user}",
"gists_url": "https://api.... | [] | open | false | null | [] | [
"There are other discussions about error handling for images decoding here : https://github.com/huggingface/datasets/issues/7632 https://github.com/huggingface/datasets/issues/7612\n\nand a PR here: https://github.com/huggingface/datasets/pull/7638 (would love your input on the proposed solution !)"
] | 2025-07-03T11:24:15 | 2025-07-03T12:27:16 | null | NONE | null | null | null | null | ### Describe the bug
When parsing this image in the ImageNet1K dataset, `datasets` crashes the whole training process just because it is unable to parse an invalid EXIF tag.

### Steps to reproduce the bug
Use the `datasets.Image.decod... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7668/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7668/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7665/comments | https://api.github.com/repos/huggingface/datasets/issues/7665/events | https://github.com/huggingface/datasets/issues/7665 | 3,193,239,955 | I_kwDODunzps6-VPmT | 7,665 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | {
"avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
"events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
"followers_url": "https://api.github.com/users/zdzichukowalski/followers",
"following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",... | [] | closed | false | null | [] | [
"Somehow I created the issue twice🙈 This one is an exact duplicate of #7664."
] | 2025-07-01T17:14:53 | 2025-07-01T17:17:48 | 2025-07-01T17:17:48 | NONE | null | null | null | null | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action:... | {
"avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
"events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
"followers_url": "https://api.github.com/users/zdzichukowalski/followers",
"following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7665/timeline | null | duplicate | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 0:02:55 |
https://api.github.com/repos/huggingface/datasets/issues/7664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7664/comments | https://api.github.com/repos/huggingface/datasets/issues/7664/events | https://github.com/huggingface/datasets/issues/7664 | 3,193,239,035 | I_kwDODunzps6-VPX7 | 7,664 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | {
"avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
"events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
"followers_url": "https://api.github.com/users/zdzichukowalski/followers",
"following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",... | [] | open | false | null | [] | [
"Hey @zdzichukowalski, I was not able to reproduce this on python 3.11.9 and datasets 3.6.0. The contents of \"body\" are correctly parsed as a string and no other fields like timestamps are created. Could you try reproducing this in a fresh environment, or posting the complete code where you encountered that stack... | 2025-07-01T17:14:32 | 2025-07-09T13:14:11 | null | NONE | null | null | null | null | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action:... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7664/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7664/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7662/comments | https://api.github.com/repos/huggingface/datasets/issues/7662/events | https://github.com/huggingface/datasets/issues/7662 | 3,190,805,531 | I_kwDODunzps6-L9Qb | 7,662 | Applying map after transform with multiprocessing will cause OOM | {
"avatar_url": "https://avatars.githubusercontent.com/u/26482910?v=4",
"events_url": "https://api.github.com/users/JunjieLl/events{/privacy}",
"followers_url": "https://api.github.com/users/JunjieLl/followers",
"following_url": "https://api.github.com/users/JunjieLl/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"Hi ! `add_column` loads the full column data in memory:\n\nhttps://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021\n\na workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a ... | 2025-07-01T05:45:57 | 2025-07-10T06:17:40 | null | NONE | null | null | null | null | ### Describe the bug
I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I f... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7662/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7660/comments | https://api.github.com/repos/huggingface/datasets/issues/7660/events | https://github.com/huggingface/datasets/issues/7660 | 3,189,028,251 | I_kwDODunzps6-FLWb | 7,660 | AttributeError: type object 'tqdm' has no attribute '_lock' | {
"avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4",
"events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}",
"followers_url": "https://api.github.com/users/Hypothesis-Z/followers",
"following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}",
"gist... | [] | open | false | null | [] | [
"Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n def __delattr__(self, attr):\n try:\n super().__delattr__(attr)\n except AttributeError:\n ... | 2025-06-30T15:57:16 | 2025-07-03T15:14:27 | null | NONE | null | null | null | null | ### Describe the bug
`AttributeError: type object 'tqdm' has no attribute '_lock'`
It occurs when I'm trying to load datasets in a thread pool.
Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to f... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7660/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7650/comments | https://api.github.com/repos/huggingface/datasets/issues/7650/events | https://github.com/huggingface/datasets/issues/7650 | 3,182,745,315 | I_kwDODunzps69tNbj | 7,650 | `load_dataset` defaults to json file format for datasets with 1 shard | {
"avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4",
"events_url": "https://api.github.com/users/iPieter/events{/privacy}",
"followers_url": "https://api.github.com/users/iPieter/followers",
"following_url": "https://api.github.com/users/iPieter/following{/other_user}",
"gists_url": "https:/... | [] | open | false | null | [] | [] | 2025-06-27T12:54:25 | 2025-06-27T12:54:25 | null | NONE | null | null | null | null | ### Describe the bug
I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation pair is small enough to fit into a single shard and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for st... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7650/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7647/comments | https://api.github.com/repos/huggingface/datasets/issues/7647/events | https://github.com/huggingface/datasets/issues/7647 | 3,178,952,517 | I_kwDODunzps69evdF | 7,647 | loading mozilla-foundation--common_voice_11_0 fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/5703039?v=4",
"events_url": "https://api.github.com/users/pavel-esir/events{/privacy}",
"followers_url": "https://api.github.com/users/pavel-esir/followers",
"following_url": "https://api.github.com/users/pavel-esir/following{/other_user}",
"gists_url":... | [] | open | false | null | [] | [
"@claude Could you please address this issue",
"kinda related: https://github.com/huggingface/datasets/issues/7675"
] | 2025-06-26T12:23:48 | 2025-07-10T14:49:30 | null | NONE | null | null | null | null | ### Describe the bug
Hello everyone,
I am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer:
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
and it fails with
```
File ~/opt/envs/.../lib/py... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7647/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7647/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7637/comments | https://api.github.com/repos/huggingface/datasets/issues/7637/events | https://github.com/huggingface/datasets/issues/7637 | 3,171,883,522 | I_kwDODunzps69DxoC | 7,637 | Introduce subset_name as an alias of config_name | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"I second this! When you come from the Hub, the intuitive question is \"how do I set the subset name\", and it's not easily answered from the docs: `subset_name` would answer this directly.",
"I've submitted PR [#7657](https://github.com/huggingface/datasets/pull/7657) to introduce subset_name as a user-facing al... | 2025-06-24T12:49:01 | 2025-07-01T16:08:33 | null | MEMBER | null | null | null | null | ### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically call... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7637/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7636/comments | https://api.github.com/repos/huggingface/datasets/issues/7636/events | https://github.com/huggingface/datasets/issues/7636 | 3,170,878,167 | I_kwDODunzps68_8LX | 7,636 | "open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable" | {
"avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4",
"events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}",
"followers_url": "https://api.github.com/users/kuanyan9527/followers",
"following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [
"@kuanyan9527 Your query is indeed valid. Following could be its reasoning:\n\nQuoting from https://stackoverflow.com/a/11181607:\n\"By default, when in the `__main__` module, `__builtins__` is the built-in module `__builtin__` (note: no 's'); when in any other module, `__builtins__` is an alias for the dictionary ... | 2025-06-24T08:09:39 | 2025-07-10T04:13:16 | null | NONE | null | null | null | null | When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable"
```python
print("open" in globals()["__builtins__"])
```
Traceback (most recent call last):
File "./main.py", line 2, in <module>
print("open" in globals()["__builtins__"])
^^^^^^^^^^^^^^^^^^^^^^
TypeE... | {
"avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4",
"events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}",
"followers_url": "https://api.github.com/users/kuanyan9527/followers",
"following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7636/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7636/timeline | null | reopened | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7633/comments | https://api.github.com/repos/huggingface/datasets/issues/7633/events | https://github.com/huggingface/datasets/issues/7633 | 3,168,399,637 | I_kwDODunzps682fEV | 7,633 | Proposal: Small Tamil Discourse Coherence Dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/66418501?v=4",
"events_url": "https://api.github.com/users/bikkiNitSrinagar/events{/privacy}",
"followers_url": "https://api.github.com/users/bikkiNitSrinagar/followers",
"following_url": "https://api.github.com/users/bikkiNitSrinagar/following{/other_use... | [] | open | false | null | [] | [] | 2025-06-23T14:24:40 | 2025-06-23T14:24:40 | null | NONE | null | null | null | null | I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.
- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence
I’ll use GitHub’s web edit... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7633/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7633/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7632/comments | https://api.github.com/repos/huggingface/datasets/issues/7632/events | https://github.com/huggingface/datasets/issues/7632 | 3,168,283,589 | I_kwDODunzps682CvF | 7,632 | Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/37377515?v=4",
"events_url": "https://api.github.com/users/ganiket19/events{/privacy}",
"followers_url": "https://api.github.com/users/ganiket19/followers",
"following_url": "https://api.github.com/users/ganiket19/following{/other_user}",
"gists_url": "... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Hi! This is now handled in PR #7638",
"Thank you for implementing the suggestion it would be great help in our use case. "
] | 2025-06-23T13:49:24 | 2025-07-08T06:52:53 | null | NONE | null | null | null | null | ### Feature request
Currently, when using `dataset.cast_column("image", Image(decode=True))`, the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples a... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7632/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7630/comments | https://api.github.com/repos/huggingface/datasets/issues/7630/events | https://github.com/huggingface/datasets/issues/7630 | 3,164,650,900 | I_kwDODunzps68oL2U | 7,630 | [bug] resume from ckpt skips samples if .map is applied | {
"avatar_url": "https://avatars.githubusercontent.com/u/23004953?v=4",
"events_url": "https://api.github.com/users/felipemello1/events{/privacy}",
"followers_url": "https://api.github.com/users/felipemello1/followers",
"following_url": "https://api.github.com/users/felipemello1/following{/other_user}",
"gist... | [] | open | false | null | [] | [
"Thanks for reporting this — it looks like a separate but related bug to #7538, which involved sample loss when resuming an `IterableDataset` wrapped in `FormattedExamplesIterable`. That was resolved in #7553 by re-batching the iterable to track offset correctly.\n\nIn this case, the issue seems to arise specifical... | 2025-06-21T01:50:03 | 2025-06-29T07:51:32 | null | NONE | null | null | null | null | ### Describe the bug
resume from ckpt skips samples if .map is applied
Maybe related: https://github.com/huggingface/datasets/issues/7538
### Steps to reproduce the bug
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
# Create dataset with map transformation
def create... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7630/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7627/comments | https://api.github.com/repos/huggingface/datasets/issues/7627/events | https://github.com/huggingface/datasets/issues/7627 | 3,160,544,390 | I_kwDODunzps68YhSG | 7,627 | Creating a HF Dataset from lakeFS with S3 storage takes too much time! | {
"avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4",
"events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}",
"followers_url": "https://api.github.com/users/Thunderhead-exe/followers",
"following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}... | [] | closed | false | null | [] | [
"### > Update\n\nThe bottleneck, from what I understand, was making one network request per file.\n\nFor 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance.\n\nUsing webDataset to transform the large number of files to a few .tar files and passi... | 2025-06-19T14:28:41 | 2025-06-23T12:39:10 | 2025-06-23T12:39:10 | NONE | null | null | null | null | Hi,
I’m new to HF Datasets and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_
Here I’m using ±30000 PIL images from the MNIST data; however, it is taking around 12 min to execute, which is a lot!
From what I understand, it is loading the images into cache then buil... | {
"avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4",
"events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}",
"followers_url": "https://api.github.com/users/Thunderhead-exe/followers",
"following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7627/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 3 days, 22:10:29 |
https://api.github.com/repos/huggingface/datasets/issues/7624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7624/comments | https://api.github.com/repos/huggingface/datasets/issues/7624/events | https://github.com/huggingface/datasets/issues/7624 | 3,156,136,624 | I_kwDODunzps68HtKw | 7,624 | #Dataset Make "image" column appear first in dataset preview UI | {
"avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4",
"events_url": "https://api.github.com/users/jcerveto/events{/privacy}",
"followers_url": "https://api.github.com/users/jcerveto/followers",
"following_url": "https://api.github.com/users/jcerveto/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | [
"Hi ! It should follow the same order as the order of the keys in the metadata file",
"Hi! Thank you for your answer. \n\nAs you said, I forced every key in every JSON to have an order using `collections.OrderedDict` in Python. Now, it works!\n\nTY"
] | 2025-06-18T09:25:19 | 2025-06-20T07:46:43 | 2025-06-20T07:46:43 | NONE | null | null | null | null | Hi!
#Dataset
I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"im... | {
"avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4",
"events_url": "https://api.github.com/users/jcerveto/events{/privacy}",
"followers_url": "https://api.github.com/users/jcerveto/followers",
"following_url": "https://api.github.com/users/jcerveto/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7624/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 1 day, 22:21:24 |
https://api.github.com/repos/huggingface/datasets/issues/7619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7619/comments | https://api.github.com/repos/huggingface/datasets/issues/7619/events | https://github.com/huggingface/datasets/issues/7619 | 3,153,058,517 | I_kwDODunzps6779rV | 7,619 | `from_list` fails while `from_generator` works for large datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/4028948?v=4",
"events_url": "https://api.github.com/users/abdulfatir/events{/privacy}",
"followers_url": "https://api.github.com/users/abdulfatir/followers",
"following_url": "https://api.github.com/users/abdulfatir/following{/other_user}",
"gists_url":... | [] | open | false | null | [] | [
"@lhoestq any thoughts on this? ",
"Thanks for the report! This behavior is expected due to how `from_list()` and `from_generator()` differ internally.\n\n- `from_list()` builds the entire dataset in memory at once, which can easily exceed limits (especially with variable-length arrays or millions of rows). The A... | 2025-06-17T10:58:55 | 2025-06-29T16:34:44 | null | NONE | null | null | null | null | ### Describe the bug
I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`.
### Steps to reproduce the bug
#### Snip... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7619/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7619/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7617 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7617/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7617/comments | https://api.github.com/repos/huggingface/datasets/issues/7617/events | https://github.com/huggingface/datasets/issues/7617 | 3,148,102,085 | I_kwDODunzps67pDnF | 7,617 | Unwanted column padding in nested lists of dicts | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"Answer from @lhoestq:\n\n> No\n> This is because Arrow and Parquet a columnar format: they require a fixed type for each column. So if you have nested dicts, each item should have the same subfields\n\nThe way around I found is the handle it after sampling with this function:\n\n```python\ndef remove_padding(examp... | 2025-06-15T22:06:17 | 2025-06-16T13:43:31 | 2025-06-16T13:43:31 | MEMBER | null | null | null | null | ```python
from datasets import Dataset
dataset = Dataset.from_dict({
"messages": [
[
{"a": "...",},
{"b": "...",},
],
]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '... | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7617/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7617/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 15:37:14 |
https://api.github.com/repos/huggingface/datasets/issues/7612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7612/comments | https://api.github.com/repos/huggingface/datasets/issues/7612/events | https://github.com/huggingface/datasets/issues/7612 | 3,141,905,049 | I_kwDODunzps67RaqZ | 7,612 | Provide an option of robust dataset iterator with error handling | {
"avatar_url": "https://avatars.githubusercontent.com/u/40016222?v=4",
"events_url": "https://api.github.com/users/wwwjn/events{/privacy}",
"followers_url": "https://api.github.com/users/wwwjn/followers",
"following_url": "https://api.github.com/users/wwwjn/following{/other_user}",
"gists_url": "https://api.... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Hi ! Maybe we can add a parameter to the Image() type to make it to return `None` instead of raising an error in case of corruption ? Would that help ?",
"Hi! 👋🏼 I just opened PR [#7638](https://github.com/huggingface/datasets/pull/7638) to address this issue.\n\n### 🔧 What it does:\nIt adds an `ignore_decode... | 2025-06-13T00:40:48 | 2025-06-24T16:52:30 | null | NONE | null | null | null | null | ### Feature request
Adding an option to skip corrupted data samples. Currently the datasets behavior is throwing errors if the data sample if corrupted and let user aware and handle the data corruption. When I tried to try-catch the error at user level, the iterator will raise StopIteration when I called next() again.... | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7612/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7611/comments | https://api.github.com/repos/huggingface/datasets/issues/7611/events | https://github.com/huggingface/datasets/issues/7611 | 3,141,383,940 | I_kwDODunzps67PbcE | 7,611 | Code example for dataset.add_column() does not reflect correct way to use function | {
"avatar_url": "https://avatars.githubusercontent.com/u/31388649?v=4",
"events_url": "https://api.github.com/users/shaily99/events{/privacy}",
"followers_url": "https://api.github.com/users/shaily99/followers",
"following_url": "https://api.github.com/users/shaily99/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | [
"Hi @shaily99 \n\nThanks for pointing this out — you're absolutely right!\n\nThe current example in the docstring for add_column() implies in-place modification, which is misleading since add_column() actually returns a new dataset.",
"#self-assign\n"
] | 2025-06-12T19:42:29 | 2025-07-17T13:14:18 | 2025-07-17T13:14:18 | NONE | null | null | null | null | https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10
The example seems to suggest that dataset.add_column() can add column inplace, however, this is wrong -- it cannot. It returns a new dataset with the column added to it. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7611/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 34 days, 17:31:49 |
https://api.github.com/repos/huggingface/datasets/issues/7610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7610/comments | https://api.github.com/repos/huggingface/datasets/issues/7610/events | https://github.com/huggingface/datasets/issues/7610 | 3,141,281,560 | I_kwDODunzps67PCcY | 7,610 | i cant confirm email | {
"avatar_url": "https://avatars.githubusercontent.com/u/187984415?v=4",
"events_url": "https://api.github.com/users/lykamspam/events{/privacy}",
"followers_url": "https://api.github.com/users/lykamspam/followers",
"following_url": "https://api.github.com/users/lykamspam/following{/other_user}",
"gists_url": ... | [] | open | false | null | [] | [
"Will you please clarify the issue by some screenshots or more in-depth explanation?",
"\nThis is clarify answer. I have not received a letter.\n\n**The graphic at the top shows how I don't get any letter. Can you show in a c... | 2025-06-12T18:58:49 | 2025-06-27T14:36:47 | null | NONE | null | null | null | null | ### Describe the bug
This is dificult, I cant confirm email because I'm not get any email!
I cant post forum because I cant confirm email!
I can send help desk because... no exist on web page.
paragraph 44
### Steps to reproduce the bug
rthjrtrt
### Expected behavior
ewtgfwetgf
### Environment info
sdgfswdegfwe | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7610/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7607 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7607/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7607/comments | https://api.github.com/repos/huggingface/datasets/issues/7607/events | https://github.com/huggingface/datasets/issues/7607 | 3,135,722,560 | I_kwDODunzps6651RA | 7,607 | Video and audio decoding with torchcodec | {
"avatar_url": "https://avatars.githubusercontent.com/u/49127578?v=4",
"events_url": "https://api.github.com/users/TyTodd/events{/privacy}",
"followers_url": "https://api.github.com/users/TyTodd/followers",
"following_url": "https://api.github.com/users/TyTodd/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | [
"Good idea ! let me know if you have any question or if I can help",
"@lhoestq Almost finished, but I'm having trouble understanding this test case.\nThis is how it looks originally. The `map` function is called, and then `with_format` is called. According to the test case example[\"video\"] is supposed to be a V... | 2025-06-11T07:02:30 | 2025-06-19T18:25:49 | 2025-06-19T18:25:49 | CONTRIBUTOR | null | null | null | null | ### Feature request
Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video.
### Motivation
My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extr... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7607/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7607/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 8 days, 11:23:19 |
https://api.github.com/repos/huggingface/datasets/issues/7600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7600/comments | https://api.github.com/repos/huggingface/datasets/issues/7600/events | https://github.com/huggingface/datasets/issues/7600 | 3,127,296,182 | I_kwDODunzps66ZsC2 | 7,600 | `push_to_hub` is not concurrency safe (dataset schema corruption) | {
"avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4",
"events_url": "https://api.github.com/users/sharvil/events{/privacy}",
"followers_url": "https://api.github.com/users/sharvil/followers",
"following_url": "https://api.github.com/users/sharvil/following{/other_user}",
"gists_url": "https://... | [] | closed | false | null | [] | [
"@lhoestq can you please take a look? I've submitted a PR that fixes this issue. Thanks.",
"Thanks for the ping ! As I said in https://github.com/huggingface/datasets/pull/7605 there is maybe a more general approach using retries :)",
"Dropping this due to inactivity; we've implemented push_to_hub outside of HF... | 2025-06-07T17:28:56 | 2025-07-31T10:00:50 | 2025-07-31T10:00:50 | NONE | null | null | null | null | ### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7600/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 53 days, 16:31:54 |
https://api.github.com/repos/huggingface/datasets/issues/7599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7599/comments | https://api.github.com/repos/huggingface/datasets/issues/7599/events | https://github.com/huggingface/datasets/issues/7599 | 3,125,620,119 | I_kwDODunzps66TS2X | 7,599 | My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl | {
"avatar_url": "https://avatars.githubusercontent.com/u/97530443?v=4",
"events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/events{/privacy}",
"followers_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/followers",
"following_url": "https://api.github.com/users/JuanCarlosMartinezS... | [] | closed | false | null | [] | [
"Maybe its been a recent update, but i can manage to load the metadata.jsonl separately from the images with:\n\n```\nmetadata = load_dataset(\"PRAIG/SMB\", split=\"train\", data_files=[\"*.jsonl\"])\nimages = load_dataset(\"PRAIG/SMB\", split=\"train\")\n```\nDo you know it this is an expected behaviour? This make... | 2025-06-06T18:59:00 | 2025-06-16T15:18:00 | 2025-06-16T15:18:00 | NONE | null | null | null | null | ### Describe the bug
Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without modifying anything in the dataset repository now the Dataset viewer is not rendering the metadata.jsonl annotations, neither it is being d... | {
"avatar_url": "https://avatars.githubusercontent.com/u/97530443?v=4",
"events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/events{/privacy}",
"followers_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/followers",
"following_url": "https://api.github.com/users/JuanCarlosMartinezS... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7599/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 9 days, 20:19:00 |
https://api.github.com/repos/huggingface/datasets/issues/7597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7597/comments | https://api.github.com/repos/huggingface/datasets/issues/7597/events | https://github.com/huggingface/datasets/issues/7597 | 3,123,962,709 | I_kwDODunzps66M-NV | 7,597 | Download datasets from a private hub in 2025 | {
"avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4",
"events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielSchuhmacher/followers",
"following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | [
"Hi ! First, and in the general case, Hugging Face does offer to host private datasets, and with a subscription you can even choose the region in which the repositories are hosted (US, EU)\n\nThen if you happen to have a private deployment, you can set the HF_ENDPOINT environment variable (same as in https://github... | 2025-06-06T07:55:19 | 2025-06-13T13:46:00 | 2025-06-13T13:46:00 | NONE | null | null | null | null | ### Feature request
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.
The obvious workaround is to clone the repo first and then l... | {
"avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4",
"events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielSchuhmacher/followers",
"following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7597/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 7 days, 5:50:41 |
https://api.github.com/repos/huggingface/datasets/issues/7594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7594/comments | https://api.github.com/repos/huggingface/datasets/issues/7594/events | https://github.com/huggingface/datasets/issues/7594 | 3,120,799,626 | I_kwDODunzps66A5-K | 7,594 | Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format) | {
"avatar_url": "https://avatars.githubusercontent.com/u/36810152?v=4",
"events_url": "https://api.github.com/users/avishaiElmakies/events{/privacy}",
"followers_url": "https://api.github.com/users/avishaiElmakies/followers",
"following_url": "https://api.github.com/users/avishaiElmakies/following{/other_user}"... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Good point, I'd be in favor of having the `columns` argument in `JsonConfig` (and the others) to align with `ParquetConfig` to let users choose which columns to load and ignore the rest",
"Is it possible to ignore columns when using parquet? ",
"Yes, you can pass `columns=...` to load_dataset to select which c... | 2025-06-05T11:12:45 | 2025-10-23T14:54:47 | null | NONE | null | null | null | null | ### Feature request
Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl).
### Motivation
I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7594/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7591/comments | https://api.github.com/repos/huggingface/datasets/issues/7591/events | https://github.com/huggingface/datasets/issues/7591 | 3,117,816,388 | I_kwDODunzps651hpE | 7,591 | Add num_proc parameter to push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/46050679?v=4",
"events_url": "https://api.github.com/users/SwayStar123/events{/privacy}",
"followers_url": "https://api.github.com/users/SwayStar123/followers",
"following_url": "https://api.github.com/users/SwayStar123/following{/other_user}",
"gists_u... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | [
"Hi @SwayStar123 \n\nI'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that s... | 2025-06-04T13:19:15 | 2025-09-04T10:43:33 | 2025-09-04T10:43:33 | NONE | null | null | null | null | ### Feature request
A number of processes parameter to the dataset.push_to_hub method
### Motivation
Shards are currently uploaded serially which makes it slow for many shards, uploading can be done in parallel and much faster
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7591/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 91 days, 21:24:18 |
https://api.github.com/repos/huggingface/datasets/issues/7590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7590/comments | https://api.github.com/repos/huggingface/datasets/issues/7590/events | https://github.com/huggingface/datasets/issues/7590 | 3,101,654,892 | I_kwDODunzps64339s | 7,590 | `Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema. | {
"avatar_url": "https://avatars.githubusercontent.com/u/183279820?v=4",
"events_url": "https://api.github.com/users/AHS-uni/events{/privacy}",
"followers_url": "https://api.github.com/users/AHS-uni/followers",
"following_url": "https://api.github.com/users/AHS-uni/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | [
"Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the descripti... | 2025-05-29T22:53:36 | 2025-07-19T22:45:08 | 2025-07-19T22:45:08 | NONE | null | null | null | null | ### Description
When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error:
```
ArrowNotImplementedError: Unsupported cast from list<item: st... | {
"avatar_url": "https://avatars.githubusercontent.com/u/183279820?v=4",
"events_url": "https://api.github.com/users/AHS-uni/events{/privacy}",
"followers_url": "https://api.github.com/users/AHS-uni/followers",
"following_url": "https://api.github.com/users/AHS-uni/following{/other_user}",
"gists_url": "https... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7590/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 50 days, 23:51:32 |
https://api.github.com/repos/huggingface/datasets/issues/7588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7588/comments | https://api.github.com/repos/huggingface/datasets/issues/7588/events | https://github.com/huggingface/datasets/issues/7588 | 3,094,012,025 | I_kwDODunzps64auB5 | 7,588 | ValueError: Invalid pattern: '**' can only be an entire path component [Colab] | {
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | [
"Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub ver... | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 | NONE | null | null | null | null | ### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
now i changed a few hyperparameters to increase number of tokens for the model,... | {
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7588/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 11:40:25 |
https://api.github.com/repos/huggingface/datasets/issues/7586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7586/comments | https://api.github.com/repos/huggingface/datasets/issues/7586/events | https://github.com/huggingface/datasets/issues/7586 | 3,091,320,431 | I_kwDODunzps64Qc5v | 7,586 | help is appreciated | {
"avatar_url": "https://avatars.githubusercontent.com/u/54931785?v=4",
"events_url": "https://api.github.com/users/rajasekarnp1/events{/privacy}",
"followers_url": "https://api.github.com/users/rajasekarnp1/followers",
"following_url": "https://api.github.com/users/rajasekarnp1/following{/other_user}",
"gist... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"how is this related to this repository ?"
] | 2025-05-26T14:00:42 | 2025-05-26T18:21:57 | null | NONE | null | null | null | null | ### Feature request
https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main
### Motivation
ai model develpment and audio
### Your contribution
ai model develpment and audio | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7586/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7584/comments | https://api.github.com/repos/huggingface/datasets/issues/7584/events | https://github.com/huggingface/datasets/issues/7584 | 3,090,255,023 | I_kwDODunzps64MYyv | 7,584 | Add LMDB format support | {
"avatar_url": "https://avatars.githubusercontent.com/u/30512160?v=4",
"events_url": "https://api.github.com/users/trotsky1997/events{/privacy}",
"followers_url": "https://api.github.com/users/trotsky1997/followers",
"following_url": "https://api.github.com/users/trotsky1997/following{/other_user}",
"gists_u... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?"
] | 2025-05-26T07:10:13 | 2025-05-26T18:23:37 | null | NONE | null | null | null | null | ### Feature request
Add LMDB format support for large memory-mapping files
### Motivation
Add LMDB format support for large memory-mapping files
### Your contribution
I'm trying to add it | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7584/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7583/comments | https://api.github.com/repos/huggingface/datasets/issues/7583/events | https://github.com/huggingface/datasets/issues/7583 | 3,088,987,757 | I_kwDODunzps64HjZt | 7,583 | load_dataset type stubs reject List[str] for split parameter, but runtime supports it | {
"avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4",
"events_url": "https://api.github.com/users/hierr/events{/privacy}",
"followers_url": "https://api.github.com/users/hierr/followers",
"following_url": "https://api.github.com/users/hierr/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | [] | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 | NONE | null | null | null | null | ### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type che... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7583/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 1 day, 15:56:40 |
https://api.github.com/repos/huggingface/datasets/issues/7580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7580/comments | https://api.github.com/repos/huggingface/datasets/issues/7580/events | https://github.com/huggingface/datasets/issues/7580 | 3,082,993,027 | I_kwDODunzps63wr2D | 7,580 | Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False. | {
"avatar_url": "https://avatars.githubusercontent.com/u/48768216?v=4",
"events_url": "https://api.github.com/users/s3pi/events{/privacy}",
"followers_url": "https://api.github.com/users/s3pi/followers",
"following_url": "https://api.github.com/users/s3pi/following{/other_user}",
"gists_url": "https://api.git... | [] | open | false | null | [] | [
"Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !",
"Been having this problem with datasets and dataloader for a while."
] | 2025-05-22T11:08:16 | 2025-11-05T16:25:53 | null | NONE | null | null | null | null | ### Describe the bug
When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call.
This behavior leads to unnecessary band... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7580/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7577/comments | https://api.github.com/repos/huggingface/datasets/issues/7577/events | https://github.com/huggingface/datasets/issues/7577 | 3,080,833,740 | I_kwDODunzps63ocrM | 7,577 | arrow_schema is not compatible with list | {
"avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4",
"events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanshen-upwork/followers",
"following_url": "https://api.github.com/users/jonathanshen-upwork/following{... | [] | closed | false | null | [] | [
"Thanks for reporting, I'll look into it",
"Actually it looks like you just forgot parentheses:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = dataset...
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7577/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 5 days, 1:55:54 |
https://api.github.com/repos/huggingface/datasets/issues/7574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7574/comments | https://api.github.com/repos/huggingface/datasets/issues/7574/events | https://github.com/huggingface/datasets/issues/7574 | 3,079,641,072 | I_kwDODunzps63j5fw | 7,574 | Missing multilingual directions in IWSLT2017 dataset's processing script | {
"avatar_url": "https://avatars.githubusercontent.com/u/79297451?v=4",
"events_url": "https://api.github.com/users/andy-joy-25/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-joy-25/followers",
"following_url": "https://api.github.com/users/andy-joy-25/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [
"I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue",
"cool ! I pinged the owners of the dataset on HF to merge your PRs :)"
] | 2025-05-21T09:53:17 | 2025-05-26T18:36:38 | null | NONE | null | null | null | null | ### Describe the bug
Hi,
Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7574/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7573/comments | https://api.github.com/repos/huggingface/datasets/issues/7573/events | https://github.com/huggingface/datasets/issues/7573 | 3,076,415,382 | I_kwDODunzps63Xl-W | 7,573 | No Samsum dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4",
"events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}",
"followers_url": "https://api.github.com/users/IgorKasianenko/followers",
"following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}",
... | [] | closed | false | null | [] | [
"According to the following https://huggingface.co/posts/seawolf2357/424129432408590, as of now the dataset seems to be inaccessible.\n\n@IgorKasianenko, would https://huggingface.co/datasets/knkarthick/samsum suffice for your purpose?\n",
"Thanks @SP1029 for the update!\nThat will work for now, using it as repla... | 2025-05-20T09:54:35 | 2025-07-21T18:34:34 | 2025-06-18T12:52:23 | NONE | null | null | null | null | ### Describe the bug
https://huggingface.co/datasets/Samsung/samsum dataset not found error 404
Originated from https://github.com/meta-llama/llama-cookbook/issues/948
### Steps to reproduce the bug
go to website https://huggingface.co/datasets/Samsung/samsum
see the error
also downloading it with python throws
`... | {
"avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4",
"events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}",
"followers_url": "https://api.github.com/users/IgorKasianenko/followers",
"following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}",
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7573/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 29 days, 2:57:48 |
https://api.github.com/repos/huggingface/datasets/issues/7570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7570/comments | https://api.github.com/repos/huggingface/datasets/issues/7570/events | https://github.com/huggingface/datasets/issues/7570 | 3,065,966,529 | I_kwDODunzps62vu_B | 7,570 | Dataset lib seems to broke after fssec lib update | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gist... | [] | closed | false | null | [] | [
"Hi, can you try updating `datasets` ? Colab still installs `datasets` 2.x by default, instead of 3.x\n\nIt would be cool to also report this to google colab, they have a GitHub repo for this IIRC",
"@lhoestq I have updated it to `datasets==3.6.0` and now there's an entirely different issue on colab while locally... | 2025-05-15T11:45:06 | 2025-06-13T00:44:27 | 2025-06-13T00:44:27 | NONE | null | null | null | null | ### Describe the bug
I am facing an issue since today where HF's dataset is acting weird and in some instances failure to recognise a valid dataset entirely, I think it is happening due to recent change in `fsspec` lib as using this command fixed it for me in one-time: `!pip install -U datasets huggingface_hub fsspec`... | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gist... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7570/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 28 days, 12:59:21 |
https://api.github.com/repos/huggingface/datasets/issues/7569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7569/comments | https://api.github.com/repos/huggingface/datasets/issues/7569/events | https://github.com/huggingface/datasets/issues/7569 | 3,061,234,054 | I_kwDODunzps62drmG | 7,569 | Dataset creation is broken if nesting a dict inside a dict inside a list | {
"avatar_url": "https://avatars.githubusercontent.com/u/25732590?v=4",
"events_url": "https://api.github.com/users/TimSchneider42/events{/privacy}",
"followers_url": "https://api.github.com/users/TimSchneider42/followers",
"following_url": "https://api.github.com/users/TimSchneider42/following{/other_user}",
... | [] | open | false | null | [] | [
"Hi ! That's because Sequence is a type that comes from tensorflow datasets and inverts lists and dicts when doing Sequence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```",
"Hi,\n\nThanks for the swift reply! Could... | 2025-05-13T21:06:45 | 2025-05-20T19:25:15 | null | NONE | null | null | null | null | ### Describe the bug
Hey,
I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.
Best,
Tim
### Steps to reproduce the bug
Runing this code:
```python
from datasets import Dataset, Features,... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7569/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7568/comments | https://api.github.com/repos/huggingface/datasets/issues/7568/events | https://github.com/huggingface/datasets/issues/7568 | 3,060,515,257 | I_kwDODunzps62a8G5 | 7,568 | `IterableDatasetDict.map()` call removes `column_names` (in fact info.features) | {
"avatar_url": "https://avatars.githubusercontent.com/u/7893763?v=4",
"events_url": "https://api.github.com/users/mombip/events{/privacy}",
"followers_url": "https://api.github.com/users/mombip/followers",
"following_url": "https://api.github.com/users/mombip/following{/other_user}",
"gists_url": "https://ap... | [] | open | false | null | [] | [
"Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the firs... | 2025-05-13T15:45:42 | 2025-06-30T09:33:47 | null | NONE | null | null | null | null | When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relie... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7568/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7567/comments | https://api.github.com/repos/huggingface/datasets/issues/7567/events | https://github.com/huggingface/datasets/issues/7567 | 3,058,308,538 | I_kwDODunzps62ShW6 | 7,567 | interleave_datasets seed with multiple workers | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_... | [] | closed | false | null | [] | [
"Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?",
"here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_i... | 2025-05-12T22:38:27 | 2025-10-24T14:04:37 | 2025-10-24T14:04:37 | NONE | null | null | null | null | ### Describe the bug
Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers.
Should the seed be modulated with the worker id?
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
- `datasets` ve... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7567/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 164 days, 15:26:10 |
https://api.github.com/repos/huggingface/datasets/issues/7566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7566/comments | https://api.github.com/repos/huggingface/datasets/issues/7566/events | https://github.com/huggingface/datasets/issues/7566 | 3,055,279,344 | I_kwDODunzps62G9zw | 7,566 | terminate called without an active exception; Aborted (core dumped) | {
"avatar_url": "https://avatars.githubusercontent.com/u/18581488?v=4",
"events_url": "https://api.github.com/users/alexey-milovidov/events{/privacy}",
"followers_url": "https://api.github.com/users/alexey-milovidov/followers",
"following_url": "https://api.github.com/users/alexey-milovidov/following{/other_use... | [] | open | false | null | [] | [
"@alexey-milovidov I followed the code snippet, but am able to successfully execute without any error. Could you please verify if the error persists or there is any additional details.",
"@alexey-milovidov else if the problem does not exist please feel free to close this issue.",
"```\nmilovidov@milovidov-pc:~/... | 2025-05-11T23:05:54 | 2025-06-23T17:56:02 | null | NONE | null | null | null | null | ### Describe the bug
I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up aborting.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```
$ cat main.py
#!/usr/bin/env python3
from datasets import load_dataset
dataset = load_dataset('HuggingFaceFW/fineweb', spl... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7566/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7561/comments | https://api.github.com/repos/huggingface/datasets/issues/7561/events | https://github.com/huggingface/datasets/issues/7561 | 3,046,302,653 | I_kwDODunzps61kuO9 | 7,561 | NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet | {
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"g... | [] | closed | false | null | [] | [] | 2025-05-07T15:05:42 | 2025-06-05T12:41:30 | 2025-06-05T12:41:30 | NONE | null | null | null | null | ### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7561/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 28 days, 21:35:48 |
https://api.github.com/repos/huggingface/datasets/issues/7554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7554/comments | https://api.github.com/repos/huggingface/datasets/issues/7554/events | https://github.com/huggingface/datasets/issues/7554 | 3,043,089,844 | I_kwDODunzps61Yd20 | 7,554 | datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script) | {
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"g... | [] | closed | false | null | [] | [
"Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets ... | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 | NONE | null | null | null | null | ### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actual... | {
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"g... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7554/timeline | null | duplicate | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 1 day, 0:10:06 |
https://api.github.com/repos/huggingface/datasets/issues/7551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7551/comments | https://api.github.com/repos/huggingface/datasets/issues/7551/events | https://github.com/huggingface/datasets/issues/7551 | 3,038,114,928 | I_kwDODunzps61FfRw | 7,551 | Issue with offline mode and partial dataset cached | {
"avatar_url": "https://avatars.githubusercontent.com/u/353245?v=4",
"events_url": "https://api.github.com/users/nrv/events{/privacy}",
"followers_url": "https://api.github.com/users/nrv/followers",
"following_url": "https://api.github.com/users/nrv/following{/other_user}",
"gists_url": "https://api.github.c... | [] | open | false | null | [] | [
"It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8... | 2025-05-04T16:49:37 | 2025-05-13T03:18:43 | null | NONE | null | null | null | null | ### Describe the bug
Hi,
an issue related to #4760: when loading a single file from a dataset, it cannot be accessed in offline mode afterwards
### Steps to reproduce the bug
```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"
import datasets
dataset_name = "uonlp/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7551/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7551/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7549/comments | https://api.github.com/repos/huggingface/datasets/issues/7549/events | https://github.com/huggingface/datasets/issues/7549 | 3,036,272,015 | I_kwDODunzps60-dWP | 7,549 | TypeError: Couldn't cast array of type string to null on webdataset format dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/117186571?v=4",
"events_url": "https://api.github.com/users/narugo1992/events{/privacy}",
"followers_url": "https://api.github.com/users/narugo1992/followers",
"following_url": "https://api.github.com/users/narugo1992/following{/other_user}",
"gists_url... | [] | open | false | null | [] | [
"seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n ... | 2025-05-02T15:18:07 | 2025-05-02T15:37:05 | null | NONE | null | null | null | null | ### Describe the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
got
```
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarro... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7549/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7548/comments | https://api.github.com/repos/huggingface/datasets/issues/7548/events | https://github.com/huggingface/datasets/issues/7548 | 3,035,568,851 | I_kwDODunzps607xrT | 7,548 | Python 3.13t (free threads) Compat | {
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https... | [] | open | false | null | [] | [
"Update: `datasets` use `aiohttp` for data streaming and from what I understand data streaming is useful for large datasets that do not fit in memory and/or multi-modal datasets like image/audio where you only what the actual binary bits to fed in as needed. \n\nHowever, there are also many cases where aiohttp will... | 2025-05-02T09:20:09 | 2025-05-12T15:11:32 | null | NONE | null | null | null | null | ### Describe the bug
Cannot install `datasets` under `python 3.13t` due to its dependency on `aiohttp`, which cannot be built for free-threading python.
The `free threading` support issue in `aiohttp` has been active since August 2024! Ouch.
https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784
`pip install... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7548/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7548/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7546/comments | https://api.github.com/repos/huggingface/datasets/issues/7546/events | https://github.com/huggingface/datasets/issues/7546 | 3,034,018,298 | I_kwDODunzps6013H6 | 7,546 | Large memory use when loading large datasets to a ZFS pool | {
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | [
"Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?",
"Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](h... | 2025-05-01T14:43:47 | 2025-05-13T13:30:09 | 2025-05-13T13:29:53 | NONE | null | null | null | null | ### Describe the bug
When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train... | {
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https:/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7546/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 11 days, 22:46:06 |
https://api.github.com/repos/huggingface/datasets/issues/7545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7545/comments | https://api.github.com/repos/huggingface/datasets/issues/7545/events | https://github.com/huggingface/datasets/issues/7545 | 3,031,617,547 | I_kwDODunzps60stAL | 7,545 | Networked Pull Through Cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/8764173?v=4",
"events_url": "https://api.github.com/users/wrmedford/events{/privacy}",
"followers_url": "https://api.github.com/users/wrmedford/followers",
"following_url": "https://api.github.com/users/wrmedford/following{/other_user}",
"gists_url": "h... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [] | 2025-04-30T15:16:33 | 2025-04-30T15:16:33 | null | NONE | null | null | null | null | ### Feature request
Introduce an HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
### Motivation
- Dis... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7545/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7543/comments | https://api.github.com/repos/huggingface/datasets/issues/7543/events | https://github.com/huggingface/datasets/issues/7543 | 3,026,867,706 | I_kwDODunzps60alX6 | 7,543 | The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.) | {
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | [] | 2025-04-29T03:04:59 | 2025-04-30T02:22:17 | 2025-04-30T02:22:17 | NONE | null | null | null | null | ### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_... | {
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7543/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 23:17:18 |
https://api.github.com/repos/huggingface/datasets/issues/7538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7538/comments | https://api.github.com/repos/huggingface/datasets/issues/7538/events | https://github.com/huggingface/datasets/issues/7538 | 3,023,280,056 | I_kwDODunzps60M5e4 | 7,538 | `IterableDataset` drops samples when resuming from a checkpoint | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | [
"Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable"
] | 2025-04-27T19:34:49 | 2025-05-06T14:04:05 | 2025-05-06T14:03:42 | COLLABORATOR | null | null | null | null | When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7538/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 8 days, 18:28:53 |
https://api.github.com/repos/huggingface/datasets/issues/7537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7537/comments | https://api.github.com/repos/huggingface/datasets/issues/7537/events | https://github.com/huggingface/datasets/issues/7537 | 3,018,792,966 | I_kwDODunzps6z7yAG | 7,537 | `datasets.map(..., num_proc=4)` multi-processing fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | [
"related: https://github.com/huggingface/datasets/issues/7510\n\nwe need to do more tests to see if latest `dill` is deterministic"
] | 2025-04-25T01:53:47 | 2025-05-06T13:12:08 | null | NONE | null | null | null | null | The following code fails in python 3.11+
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.ru... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7537/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7536/comments | https://api.github.com/repos/huggingface/datasets/issues/7536/events | https://github.com/huggingface/datasets/issues/7536 | 3,018,425,549 | I_kwDODunzps6z6YTN | 7,536 | [Errno 13] Permission denied: on `.incomplete` file | {
"avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4",
"events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-clancy/followers",
"following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | [
"It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)",
"> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (usin... | 2025-04-24T20:52:45 | 2025-05-06T13:05:01 | 2025-05-06T13:05:01 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
When downloading a dataset, we frequently hit the Permission Denied error below. This appears to happen (at least) across datasets stored in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7536/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7536/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 11 days, 16:12:16 |
https://api.github.com/repos/huggingface/datasets/issues/7534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7534/comments | https://api.github.com/repos/huggingface/datasets/issues/7534/events | https://github.com/huggingface/datasets/issues/7534 | 3,017,259,407 | I_kwDODunzps6z17mP | 7,534 | TensorFlow RaggedTensor Support (batch-level) | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490199?v=4",
"events_url": "https://api.github.com/users/Lundez/events{/privacy}",
"followers_url": "https://api.github.com/users/Lundez/followers",
"following_url": "https://api.github.com/users/Lundez/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Keras doesn't support other inputs other than tf.data.Dataset objects ? it's a bit painful to have to support and maintain this kind of integration\n\nIs there a way to use a `datasets.Dataset` with outputs formatted as tensors / ragged tensors instead ? like in https://huggingface.co/docs/datasets/use_with_tensor... | 2025-04-24T13:14:52 | 2025-06-30T17:03:39 | null | NONE | null | null | null | null | ### Feature request
Hi,
Currently datasets does not support RaggedTensor output at the batch level.
When building an Object Detection dataset (with TensorFlow), I need to enable RaggedTensors, as that's how BBoxes & classes are expected from the Keras Model POV.
Currently there's an error thrown saying that "Nested Data is ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7534/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7534/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7531/comments | https://api.github.com/repos/huggingface/datasets/issues/7531/events | https://github.com/huggingface/datasets/issues/7531 | 3,008,914,887 | I_kwDODunzps6zWGXH | 7,531 | Deepspeed reward training hangs at end of training with Dataset.from_list | {
"avatar_url": "https://avatars.githubusercontent.com/u/60710414?v=4",
"events_url": "https://api.github.com/users/Matt00n/events{/privacy}",
"followers_url": "https://api.github.com/users/Matt00n/followers",
"following_url": "https://api.github.com/users/Matt00n/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | [
"Hi ! How big is the dataset ? if you load it using `from_list`, the dataset lives in memory and has to be copied to every gpu process, which can be slow.\n\nIt's fasted if you load it from JSON files from disk, because in that case the dataset in converted to Arrow and loaded from disk using memory mapping. Memory... | 2025-04-21T17:29:20 | 2025-06-29T06:20:45 | null | NONE | null | null | null | null | There seems to be a weird interaction between Deepspeed, the Dataset.from_list method and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training and running the same script with Deepspeed on a s... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7531/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7531/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7530/comments | https://api.github.com/repos/huggingface/datasets/issues/7530/events | https://github.com/huggingface/datasets/issues/7530 | 3,007,452,499 | I_kwDODunzps6zQhVT | 7,530 | How to solve "Spaces stuck in Building" problems | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | [] | closed | false | null | [] | [
"I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n",
"> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019",
"I'm facing the same issu... | 2025-04-21T03:08:38 | 2025-11-11T00:57:14 | 2025-04-22T07:49:52 | NONE | null | null | null | null | ### Describe the bug
Public Spaces may get stuck in Building after restarting, with an error log as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7530/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 1 day, 4:41:14 |
https://api.github.com/repos/huggingface/datasets/issues/7529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7529/comments | https://api.github.com/repos/huggingface/datasets/issues/7529/events | https://github.com/huggingface/datasets/issues/7529 | 3,007,118,969 | I_kwDODunzps6zPP55 | 7,529 | audio folder builder cannot detect custom split name | {
"avatar_url": "https://avatars.githubusercontent.com/u/37548991?v=4",
"events_url": "https://api.github.com/users/phineas-pta/events{/privacy}",
"followers_url": "https://api.github.com/users/phineas-pta/followers",
"following_url": "https://api.github.com/users/phineas-pta/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [] | 2025-04-20T16:53:21 | 2025-04-20T16:53:21 | null | NONE | null | null | null | null | ### Describe the bug
when using the audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split names other than train/validation/test
### Steps to reproduce the bug
i have the following folder structure
```
my_dataset/
├── train/
│ ├── lorem.wav
│ ├── …
│ └── met... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7529/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7529/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7528/comments | https://api.github.com/repos/huggingface/datasets/issues/7528/events | https://github.com/huggingface/datasets/issues/7528 | 3,006,433,485 | I_kwDODunzps6zMojN | 7,528 | Data Studio Error: Convert JSONL incorrectly | {
"avatar_url": "https://avatars.githubusercontent.com/u/144962041?v=4",
"events_url": "https://api.github.com/users/zxccade/events{/privacy}",
"followers_url": "https://api.github.com/users/zxccade/followers",
"following_url": "https://api.github.com/users/zxccade/following{/other_user}",
"gists_url": "https... | [] | open | false | null | [] | [
"Hi ! Your JSONL file is incompatible with Arrow / Parquet. Indeed in Arrow / Parquet every dict should have the same keys, while in your dataset the bboxes have varying keys.\n\nThis causes the Data Studio to treat the bboxes as if each row was missing the keys from other rows.\n\nFeel free to take a look at the d... | 2025-04-19T13:21:44 | 2025-05-06T13:18:38 | null | NONE | null | null | null | null | ### Describe the bug
Hi there,
I uploaded a dataset here https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" value for the whole dataset. Therefore, anyone who downloads the dataset via the API would get the wrong "bboxes" value in the data file.
Could ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7528/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7527/comments | https://api.github.com/repos/huggingface/datasets/issues/7527/events | https://github.com/huggingface/datasets/issues/7527 | 3,005,242,422 | I_kwDODunzps6zIFw2 | 7,527 | Auto-merge option for `convert-to-parquet` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists... | [
"Alternatively, there could be an option to switch from submitting PRs to just committing changes directly to `main`.",
"Why not, I'd be in favor of `--merge-pull-request` to call `HfApi().merge_pull_request()` at the end of the conversion :) feel free to open a PR if you'd like",
"#self-assign",
"Closing sin... | 2025-04-18T16:03:22 | 2025-07-18T19:09:03 | 2025-07-18T19:09:03 | CONTRIBUTOR | null | null | null | null | ### Feature request
Add a command-line option (e.g. `--auto-merge-pull-request`) that enables automatic merging of the pull requests created by the `convert-to-parquet` tool.
### Motivation
Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website.
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7527/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7527/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 91 days, 3:05:41 |
https://api.github.com/repos/huggingface/datasets/issues/7526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7526/comments | https://api.github.com/repos/huggingface/datasets/issues/7526/events | https://github.com/huggingface/datasets/issues/7526 | 3,005,107,536 | I_kwDODunzps6zHk1Q | 7,526 | Faster downloads/uploads with Xet storage | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | [] | 2025-04-18T14:46:42 | 2025-05-12T12:09:09 | null | MEMBER | null | null | null | null | 
## Xet is out !
Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 7,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7526/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7526/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7520/comments | https://api.github.com/repos/huggingface/datasets/issues/7520/events | https://github.com/huggingface/datasets/issues/7520 | 2,997,422,044 | I_kwDODunzps6yqQfc | 7,520 | Update items in the dataset without `map` | {
"avatar_url": "https://avatars.githubusercontent.com/u/122402293?v=4",
"events_url": "https://api.github.com/users/mashdragon/events{/privacy}",
"followers_url": "https://api.github.com/users/mashdragon/followers",
"following_url": "https://api.github.com/users/mashdragon/following{/other_user}",
"gists_url... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Hello!\n\nHave you looked at `Dataset.shard`? [Docs](https://huggingface.co/docs/datasets/en/process#shard)\n\nUsing this method you could break your dataset in N shards. Apply `map` on each shard and concatenate them back."
] | 2025-04-15T19:39:01 | 2025-04-19T18:47:46 | null | NONE | null | null | null | null | ### Feature request
I would like to be able to update items in my dataset without affecting all rows. At least if there was a range option, I would be able to process those items, save the dataset, and then continue.
If I am supposed to split the dataset first, that is not clear, since the docs suggest that any of th... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7520/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7520/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7518/comments | https://api.github.com/repos/huggingface/datasets/issues/7518/events | https://github.com/huggingface/datasets/issues/7518 | 2,996,141,825 | I_kwDODunzps6ylX8B | 7,518 | num_proc parallelization works only for first ~10s. | {
"avatar_url": "https://avatars.githubusercontent.com/u/33901783?v=4",
"events_url": "https://api.github.com/users/pshishodiaa/events{/privacy}",
"followers_url": "https://api.github.com/users/pshishodiaa/followers",
"following_url": "https://api.github.com/users/pshishodiaa/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [
"Hi, can you check if the processes are still alive ? It's a bit weird because `datasets` does check if processes crash and return an error in that case",
"Thank you for reverting quickly. I digged a bit, and realized my disk's IOPS is also limited - which is causing this. will check further and report if it's an... | 2025-04-15T11:44:03 | 2025-04-15T13:12:13 | null | NONE | null | null | null | null | ### Describe the bug
When I try to load an already downloaded dataset with num_proc=64, the speed is very high for the first 10-20 seconds, achieving 30-40K samples/s and 100% utilization across all cores, but it soon drops to <= 1000 with almost 0% utilization for most cores.
### Steps to reproduce the bug
```
// do... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7518/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7518/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7517/comments | https://api.github.com/repos/huggingface/datasets/issues/7517/events | https://github.com/huggingface/datasets/issues/7517 | 2,996,106,077 | I_kwDODunzps6ylPNd | 7,517 | Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames | {
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_u... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_u... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}"... | [
"Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?",
"Hi @lhoestq, \nconverting to bytes is certainly possible and would work... | 2025-04-15T11:29:17 | 2025-05-07T14:17:30 | 2025-05-07T14:17:30 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a col... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7517/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7517/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 22 days, 2:48:13 |
https://api.github.com/repos/huggingface/datasets/issues/7516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7516/comments | https://api.github.com/repos/huggingface/datasets/issues/7516/events | https://github.com/huggingface/datasets/issues/7516 | 2,995,780,283 | I_kwDODunzps6yj_q7 | 7,516 | unsloth/DeepSeek-R1-Distill-Qwen-32B server error | {
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"events_url": "https://api.github.com/users/Editor-1/events{/privacy}",
"followers_url": "https://api.github.com/users/Editor-1/followers",
"following_url": "https://api.github.com/users/Editor-1/following{/other_user}",
"gists_url": "ht... | [] | closed | false | null | [] | [] | 2025-04-15T09:26:53 | 2025-04-15T09:57:26 | 2025-04-15T09:57:26 | NONE | null | null | null | null | ### Describe the bug
hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"events_url": "https://api.github.com/users/Editor-1/events{/privacy}",
"followers_url": "https://api.github.com/users/Editor-1/followers",
"following_url": "https://api.github.com/users/Editor-1/following{/other_user}",
"gists_url": "ht... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7516/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7516/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 0:30:33 |
https://api.github.com/repos/huggingface/datasets/issues/7515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7515/comments | https://api.github.com/repos/huggingface/datasets/issues/7515/events | https://github.com/huggingface/datasets/issues/7515 | 2,995,082,418 | I_kwDODunzps6yhVSy | 7,515 | `concatenate_datasets` does not preserve Pytorch format for IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"events_url": "https://api.github.com/users/francescorubbo/events{/privacy}",
"followers_url": "https://api.github.com/users/francescorubbo/followers",
"following_url": "https://api.github.com/users/francescorubbo/following{/other_user}",
... | [] | closed | false | null | [] | [
"Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380",
"Thank you for the poin... | 2025-04-15T04:36:34 | 2025-05-19T15:07:38 | 2025-05-19T15:07:38 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `con... | {
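The maintainer's reply suggests the concatenation helper could return the inputs' format when they agree. A pure-Python sketch of that merge rule (the function name is illustrative, not the library's internals):

```python
def merged_format(formats):
    # If every input dataset shares the same format (e.g. "torch"),
    # a fixed concatenate_datasets could propagate it; on any mismatch
    # it would fall back to the default (None), as happens today.
    first = formats[0]
    return first if all(f == first for f in formats) else None
```

Until such a fix lands, re-applying `.with_format("torch")` on the concatenated `IterableDataset` restores the expected behavior.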
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7515/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7515/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 34 days, 10:31:04 |
https://api.github.com/repos/huggingface/datasets/issues/7513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7513/comments | https://api.github.com/repos/huggingface/datasets/issues/7513/events | https://github.com/huggingface/datasets/issues/7513 | 2,994,678,437 | I_kwDODunzps6yfyql | 7,513 | MemoryError while creating dataset from generator | {
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"events_url": "https://api.github.com/users/simonreise/events{/privacy}",
"followers_url": "https://api.github.com/users/simonreise/followers",
"following_url": "https://api.github.com/users/simonreise/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | [
"Upd: created a PR that can probably solve the problem: #7514",
"Hi ! We need to take the generator into account for the cache. The generator is hashed to make the dataset fingerprint used by the cache. This way you can reload the Dataset from the cache without regenerating in subsequent `from_generator` calls.\n... | 2025-04-15T01:02:02 | 2025-10-23T22:55:10 | 2025-10-23T22:55:10 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
# TL:DR
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including `generator` function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset pr... | {
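Since `create_config_id` hashes the generator (including whatever it closes over) to build the cache fingerprint, a generator capturing large in-memory objects makes that hash expensive or, as reported here, fatal. A sketch of the usual mitigation: close over a cheap reference (a file path) and read lazily; the `"text"` field name is illustrative:

```python
def make_generator(path):
    # The closure captures only a short string, so hashing it for the
    # dataset fingerprint is cheap; the heavy data stays on disk and is
    # only read when the generator actually runs.
    def gen():
        with open(path, encoding="utf-8") as f:
            for line in f:
                yield {"text": line.rstrip("\n")}
    return gen

# usage sketch: Dataset.from_generator(make_generator("big_file.txt"))
```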
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"events_url": "https://api.github.com/users/simonreise/events{/privacy}",
"followers_url": "https://api.github.com/users/simonreise/followers",
"following_url": "https://api.github.com/users/simonreise/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7513/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7513/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 191 days, 21:53:08 |
https://api.github.com/repos/huggingface/datasets/issues/7512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7512/comments | https://api.github.com/repos/huggingface/datasets/issues/7512/events | https://github.com/huggingface/datasets/issues/7512 | 2,994,043,544 | I_kwDODunzps6ydXqY | 7,512 | .map() fails if function uses pyvista | {
"avatar_url": "https://avatars.githubusercontent.com/u/11832922?v=4",
"events_url": "https://api.github.com/users/el-hult/events{/privacy}",
"followers_url": "https://api.github.com/users/el-hult/followers",
"following_url": "https://api.github.com/users/el-hult/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | [
"I found a similar (?) issue in https://github.com/huggingface/datasets/issues/6435, where someone had issues with forks and CUDA. According to https://huggingface.co/docs/datasets/main/en/process#multiprocessing we should do \n\n```\nfrom multiprocess import set_start_method\nset_start_method(\"spawn\")\n```\n\nto... | 2025-04-14T19:43:02 | 2025-04-14T20:01:53 | null | NONE | null | null | null | null | ### Describe the bug
Using PyVista inside a .map() produces a crash with `objc[78796]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to ... | null | {
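The crash message is macOS's Objective-C fork-safety guard: `.map(num_proc=...)` forks workers into a process where GUI frameworks (used by PyVista's rendering stack) are already initialized. The comment above quotes the docs' fix of switching the start method to "spawn"; a stdlib sketch follows. Note that `datasets` itself relies on the third-party `multiprocess` package, which is why the quoted snippet imports `set_start_method` from `multiprocess` rather than `multiprocessing`:

```python
import multiprocessing

# "spawn" starts a fresh interpreter per worker instead of fork(),
# avoiding the "+[NSResponder initialize] ... fork()" crash on macOS.
multiprocessing.set_start_method("spawn", force=True)
```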
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7512/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7512/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7510/comments | https://api.github.com/repos/huggingface/datasets/issues/7510/events | https://github.com/huggingface/datasets/issues/7510 | 2,992,131,117 | I_kwDODunzps6yWEwt | 7,510 | Incompatible dill version (0.3.9) in datasets 2.18.0 - 3.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/98061329?v=4",
"events_url": "https://api.github.com/users/JGrel/events{/privacy}",
"followers_url": "https://api.github.com/users/JGrel/followers",
"following_url": "https://api.github.com/users/JGrel/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | [
"Hi ! We can bump `dill` to 0.3.9 if we make sure it's deterministic and doesn't break the caching mechanism in `datasets`.\n\nWould you be interested in opening a PR ? Then we can run the CI to see if it works",
"Hi!. Yeah I can do it. Should I make any changes besides dill versions?",
"There are probably some... | 2025-04-14T07:22:44 | 2025-09-15T08:37:49 | 2025-09-15T08:37:49 | NONE | null | null | null | null | ### Describe the bug
Datasets 2.18.0 - 3.5.0 has a dependency on dill < 0.3.9. This causes errors with dill >= 0.3.9.
Could you please take a look into it and make it compatible?
### Steps to reproduce the bug
1. Install datasets >= 2.18.0
2. Install dill >=0.3.9
3. Run pip check
4. Output:
ERROR: pip's dependenc... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7510/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7510/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 154 days, 1:15:05 |
https://api.github.com/repos/huggingface/datasets/issues/7509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7509/comments | https://api.github.com/repos/huggingface/datasets/issues/7509/events | https://github.com/huggingface/datasets/issues/7509 | 2,991,484,542 | I_kwDODunzps6yTm5- | 7,509 | Dataset uses excessive memory when loading files | {
"avatar_url": "https://avatars.githubusercontent.com/u/36810152?v=4",
"events_url": "https://api.github.com/users/avishaiElmakies/events{/privacy}",
"followers_url": "https://api.github.com/users/avishaiElmakies/followers",
"following_url": "https://api.github.com/users/avishaiElmakies/following{/other_user}"... | [] | open | false | null | [] | [
"small update: I converted the jsons to parquet and it now works well with 32 proc and the same node. \nI still think this needs to be understood, since json is a very popular and easy-to-use format. ",
"Hi ! The JSON loader loads full files in memory, unless they are JSON Lines. In this case it iterates on the J... | 2025-04-13T21:09:49 | 2025-04-28T15:18:55 | null | NONE | null | null | null | null | ### Describe the bug
Hi
I am having an issue when loading a dataset.
I have about 200 JSON files, each about 1GB (about 215GB in total). Each row has a few features, each of which is a list of ints.
I am trying to load the dataset using `load_dataset`.
The dataset is about 1.5M samples
I use `num_proc=32` and a node with 378GB of... | null | {
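The maintainer's reply notes the JSON loader must parse a whole top-level JSON array in memory, while JSON Lines (like the Parquet conversion the author tried) can be read incrementally. A small converter sketch; converting file by file keeps peak memory near the largest single file rather than the whole corpus:

```python
import json

def json_array_to_jsonl(src_path, dst_path):
    # A file holding one big JSON array has to be parsed in full;
    # one JSON object per line lets loaders stream record by record.
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for row in json.load(src):
            dst.write(json.dumps(row) + "\n")
```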
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7509/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7509/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7508/comments | https://api.github.com/repos/huggingface/datasets/issues/7508/events | https://github.com/huggingface/datasets/issues/7508 | 2,986,612,934 | I_kwDODunzps6yBBjG | 7,508 | Iterating over Image feature columns is extremely slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4",
"events_url": "https://api.github.com/users/sohamparikh/events{/privacy}",
"followers_url": "https://api.github.com/users/sohamparikh/followers",
"following_url": "https://api.github.com/users/sohamparikh/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [
"Hi ! Could it be because the `Image()` type in dataset does `image = Image.open(image_path)` and also `image.load()` which actually loads the image data in memory ? This is needed to avoid too many open files issues, see https://github.com/huggingface/datasets/issues/3985",
"Yes, that seems to be it. For my pur... | 2025-04-10T19:00:54 | 2025-04-15T17:57:08 | null | NONE | null | null | null | null | We are trying to load datasets where the image column stores `PIL.PngImagePlugin.PngImageFile` images. However, iterating over these datasets is extremely slow.
What I have found:
1. It is the presence of the image column that causes the slowdown. Removing the column from the dataset results in blazingly fast (as expe... | null | {
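The follow-up comment explains that the `Image()` type eagerly runs `Image.open(image_path)` plus `image.load()` on every row; when only paths or raw bytes are needed, `Image(decode=False)` avoids that cost by yielding the undecoded payload. A pure-Python sketch of the deferred-decode idea, with a placeholder standing in for the PIL call:

```python
class LazyImage:
    """Sketch of an Image(decode=False)-style record: keep the path and
    decode pixels only on first access, not on every row read."""

    def __init__(self, path):
        self.path = path
        self._pixels = None

    def pixels(self):
        if self._pixels is None:
            # placeholder for PIL.Image.open(self.path).load()
            self._pixels = f"decoded:{self.path}"
        return self._pixels
```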
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7508/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7508/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7507/comments | https://api.github.com/repos/huggingface/datasets/issues/7507/events | https://github.com/huggingface/datasets/issues/7507 | 2,984,309,806 | I_kwDODunzps6x4PQu | 7,507 | Front-end statistical data quantity deviation | {
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"Hi ! the format of this dataset is not supported by the Dataset Viewer. It looks like this dataset was saved using `save_to_disk()` which is meant for local storage / easy reload without compression, not for sharing online."
] | 2025-04-10T02:51:38 | 2025-04-15T12:54:51 | null | NONE | null | null | null | null | ### Describe the bug
While browsing the dataset at https://huggingface.co/datasets/NeuML/wikipedia-20250123, I noticed that a dataset with nearly 7M entries was estimated to be only 4M in size—almost half the actual amount. According to the post-download loading and the dataset_info (https://huggingface.co/datasets/Ne... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7507/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7506/comments | https://api.github.com/repos/huggingface/datasets/issues/7506/events | https://github.com/huggingface/datasets/issues/7506 | 2,981,687,450 | I_kwDODunzps6xuPCa | 7,506 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM | {
"avatar_url": "https://avatars.githubusercontent.com/u/66202555?v=4",
"events_url": "https://api.github.com/users/calvintanama/events{/privacy}",
"followers_url": "https://api.github.com/users/calvintanama/followers",
"following_url": "https://api.github.com/users/calvintanama/following{/other_user}",
"gist... | [] | open | false | null | [] | [
"Hi ! make sure to be logged in with your HF account (e.g. using `huggingface-cli login` or passing `token=` to `load_dataset()`), otherwise you'll get rate limited at one point",
"Hey @calvintanama! Just building on what @lhoestq mentioned above — I ran into similar issues in multi-GPU SLURM setups and here’s wh... | 2025-04-09T06:32:04 | 2025-06-29T06:04:59 | null | NONE | null | null | null | null | ### Describe the bug
I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er... | null | {
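Besides authenticating (passing `token=` to `load_dataset()`, as the maintainer suggests), a common mitigation in multi-process SLURM jobs is to let one rank download while the others retry with exponential backoff and reuse the local cache. A generic, hedged sketch of the backoff helper; `RuntimeError` is a stand-in for `HfHubHTTPError` (429):

```python
import random
import time

def with_retries(fn, max_attempts=5, base=1.0):
    # Retry fn() on rate-limit style failures, sleeping roughly
    # base * 2**attempt (plus jitter) between attempts.
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for HfHubHTTPError 429
            if attempt == max_attempts - 1:
                raise
            time.sleep(base * (2 ** attempt) * (1 + random.random()))
```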
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7506/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7505/comments | https://api.github.com/repos/huggingface/datasets/issues/7505/events | https://github.com/huggingface/datasets/issues/7505 | 2,979,926,156 | I_kwDODunzps6xnhCM | 7,505 | HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy | {
"avatar_url": "https://avatars.githubusercontent.com/u/1412262?v=4",
"events_url": "https://api.github.com/users/hissain/events{/privacy}",
"followers_url": "https://api.github.com/users/hissain/followers",
"following_url": "https://api.github.com/users/hissain/following{/other_user}",
"gists_url": "https:/... | [] | open | false | null | [] | [] | 2025-04-08T14:08:40 | 2025-04-08T14:08:40 | null | NONE | null | null | null | null | I have already logged in Huggingface using CLI with my valid token. Now trying to download the datasets using following code:
from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq
from datasets import load_dataset, Data... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7505/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |