Datasets:
Schema: url (string, 61 chars) | repository_url (string, 1 class) | labels_url (string, 75 chars) | comments_url (string, 70 chars) | events_url (string, 68 chars) | html_url (string, 49–51 chars) | id (int64, 1.24B–2.76B) | node_id (string, 18–19 chars) | number (int64, 4.35k–7.35k) | title (string, 1–290 chars) | user (dict) | labels (list, 0–4 items) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list, 0–3 items) | milestone (dict) | comments (int64, 0–49) | created_at (timestamp[ms]) | updated_at (timestamp[ms]) | closed_at (timestamp[ms]) | author_association (string, 4 classes) | active_lock_reason (null) | body (string, 1–47.9k chars, nullable) | closed_by (dict) | reactions (dict) | timeline_url (string, 70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes)
https://api.github.com/repos/huggingface/datasets/issues/7347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7347/comments | https://api.github.com/repos/huggingface/datasets/issues/7347/events | https://github.com/huggingface/datasets/issues/7347 | 2,760,282,339 | I_kwDODunzps6khpDj | 7,347 | Converting Arrow to WebDataset TAR Format for Offline Use | {
"login": "katie312",
"id": 91370128,
"node_id": "MDQ6VXNlcjkxMzcwMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katie312",
"html_url": "https://github.com/katie312",
"followers_url": "https://api.github.com/users/kat... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-12-27T01:40:44 | 2024-12-31T17:38:00 | 2024-12-28T15:38:03 | NONE | null | ### Feature request
Hi,
I've downloaded an Arrow-formatted dataset for offline use with Hugging Face's `datasets` library by:
```python
import json
from datasets import load_dataset
dataset = load_dataset("pixparse/cc3m-wds")
dataset.save_to_disk("./cc3m_1")
```
now I need to convert it to WebDataset's TAR form... | {
"login": "katie312",
"id": 91370128,
"node_id": "MDQ6VXNlcjkxMzcwMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katie312",
"html_url": "https://github.com/katie312",
"followers_url": "https://api.github.com/users/kat... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7347/timeline | null | completed | null | null | false |
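The thread's resolution isn't included in the truncated body above; for illustration, here is a minimal conversion sketch that writes WebDataset-style .tar shards from the saved dataset using only the standard library. The split name ("train") and the column names ("jpg", "txt") are assumptions based on the cc3m-wds layout.

```python
import io
import tarfile

from datasets import Image, load_from_disk

ds = load_from_disk("./cc3m_1")["train"]  # assumed split name
# Keep images as raw bytes instead of decoded PIL objects
# (assumes the image column is called "jpg"; adjust to the real schema).
ds = ds.cast_column("jpg", Image(decode=False))

def add_member(tar: tarfile.TarFile, name: str, payload: bytes) -> None:
    info = tarfile.TarInfo(name=name)
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

shard_size = 10_000
for start in range(0, len(ds), shard_size):
    shard = ds.select(range(start, min(start + shard_size, len(ds))))
    with tarfile.open(f"cc3m-{start // shard_size:06d}.tar", "w") as tar:
        for i, sample in enumerate(shard):
            key = f"{start + i:09d}"  # files sharing a key form one sample
            add_member(tar, f"{key}.jpg", sample["jpg"]["bytes"])
            add_member(tar, f"{key}.txt", sample["txt"].encode("utf-8"))
```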
https://api.github.com/repos/huggingface/datasets/issues/7346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7346/comments | https://api.github.com/repos/huggingface/datasets/issues/7346/events | https://github.com/huggingface/datasets/issues/7346 | 2,758,752,118 | I_kwDODunzps6kbzd2 | 7,346 | OSError: Invalid flatbuffers message. | {
"login": "antecede",
"id": 46232487,
"node_id": "MDQ6VXNlcjQ2MjMyNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/46232487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antecede",
"html_url": "https://github.com/antecede",
"followers_url": "https://api.github.com/users/ant... | [] | open | false | null | [] | null | 0 | 2024-12-25T11:38:52 | 2024-12-25T12:03:13 | null | NONE | null | ### Describe the bug
When loading a large number (2,000 in this case) of large 2D arrays (1000 × 1152) with `load_dataset`, the error `OSError: Invalid flatbuffers message` is raised.
When only 300 arrays of this size (1000 × 1152) are stored, they can be loaded correctly.
When 2,00... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7346/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7345/comments | https://api.github.com/repos/huggingface/datasets/issues/7345/events | https://github.com/huggingface/datasets/issues/7345 | 2,758,585,709 | I_kwDODunzps6kbK1t | 7,345 | Different behaviour of IterableDataset.map vs Dataset.map with remove_columns | {
"login": "vttrifonov",
"id": 12157034,
"node_id": "MDQ6VXNlcjEyMTU3MDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/12157034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vttrifonov",
"html_url": "https://github.com/vttrifonov",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 0 | 2024-12-25T07:36:48 | 2024-12-25T07:36:48 | null | NONE | null | ### Describe the bug
The following code
```python
import datasets as hf
ds1 = hf.Dataset.from_list([{'i': i} for i in [0,1]])
#ds1 = ds1.to_iterable_dataset()
ds2 = ds1.map(
lambda i: {'i': i+1},
input_columns = ['i'],
remove_columns = ['i']
)
list(ds2)
```
produces
```python
[{'i': ... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7345/timeline | null | null | null | null | false |
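The truncated output above omits the actual results; as a workaround sketch (not the library's resolution of the issue), mapping to a fresh column name sidesteps the name collision so that `Dataset` and `IterableDataset` should agree:

```python
import datasets as hf

ds1 = hf.Dataset.from_list([{"i": i} for i in [0, 1]])

# Write to a new column name ("j" is arbitrary) and drop the old one;
# with no collision, both code paths should yield [{'j': 1}, {'j': 2}].
mapped = ds1.map(lambda i: {"j": i + 1}, input_columns=["i"], remove_columns=["i"])
print(list(mapped))

mapped_iter = ds1.to_iterable_dataset().map(
    lambda i: {"j": i + 1}, input_columns=["i"], remove_columns=["i"]
)
print(list(mapped_iter))
```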
https://api.github.com/repos/huggingface/datasets/issues/7344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7344/comments | https://api.github.com/repos/huggingface/datasets/issues/7344/events | https://github.com/huggingface/datasets/issues/7344 | 2,754,735,951 | I_kwDODunzps6kMe9P | 7,344 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs | {
"login": "clankur",
"id": 9397233,
"node_id": "MDQ6VXNlcjkzOTcyMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9397233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clankur",
"html_url": "https://github.com/clankur",
"followers_url": "https://api.github.com/users/clankur/... | [] | open | false | null | [] | null | 0 | 2024-12-22T16:30:07 | 2024-12-22T16:30:07 | null | NONE | null | ### Describe the bug
I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when ... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7344/timeline | null | null | null | null | false |
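The thread's resolution is not shown; one mitigation sketch, assuming the 429s come from dataset file resolution and downloads, is to raise the retry count (the value 10 is arbitrary):

```python
from datasets import DownloadConfig, load_dataset

# Retry throttled Hub requests instead of failing fast.
ds = load_dataset(
    "cerebras/SlimPajama-627B",
    split="train",
    streaming=True,
    download_config=DownloadConfig(max_retries=10),
)
```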
https://api.github.com/repos/huggingface/datasets/issues/7343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7343/comments | https://api.github.com/repos/huggingface/datasets/issues/7343/events | https://github.com/huggingface/datasets/issues/7343 | 2,750,525,823 | I_kwDODunzps6j8bF_ | 7,343 | [Bug] Inconsistent behavior of data_files and data_dir in load_dataset method. | {
"login": "JasonCZH4",
"id": 74161960,
"node_id": "MDQ6VXNlcjc0MTYxOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/74161960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JasonCZH4",
"html_url": "https://github.com/JasonCZH4",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 0 | 2024-12-19T14:31:27 | 2024-12-19T14:31:27 | null | NONE | null | ### Describe the bug
Inconsistent behavior of `data_files` and `data_dir` in the `load_dataset` method.
### Steps to reproduce the bug
# First
I have three files, named 'train.json', 'val.json', 'test.json'.
Each one has a simple dict `{text:'aaa'}`.
Their paths are `/data/train.json`, `/data/val.json`, `/data/test.jso... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7343/timeline | null | null | null | null | false |
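For reference, the unambiguous form is sketched below: name each split explicitly via `data_files` (paths taken from the report) instead of relying on `data_dir` split inference.

```python
from datasets import load_dataset

# Map each split to its file explicitly.
ds = load_dataset(
    "json",
    data_files={
        "train": "/data/train.json",
        "validation": "/data/val.json",
        "test": "/data/test.json",
    },
)
```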
https://api.github.com/repos/huggingface/datasets/issues/7342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7342/comments | https://api.github.com/repos/huggingface/datasets/issues/7342/events | https://github.com/huggingface/datasets/pull/7342 | 2,749,572,310 | PR_kwDODunzps6FvgcK | 7,342 | Update LICENSE | {
"login": "eliebak",
"id": 97572401,
"node_id": "U_kgDOBdDWMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97572401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliebak",
"html_url": "https://github.com/eliebak",
"followers_url": "https://api.github.com/users/eliebak/follow... | [] | closed | false | null | [] | null | 1 | 2024-12-19T08:17:50 | 2024-12-19T08:44:08 | 2024-12-19T08:44:08 | NONE | null | null | {
"login": "eliebak",
"id": 97572401,
"node_id": "U_kgDOBdDWMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97572401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliebak",
"html_url": "https://github.com/eliebak",
"followers_url": "https://api.github.com/users/eliebak/follow... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7342/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7342",
"html_url": "https://github.com/huggingface/datasets/pull/7342",
"diff_url": "https://github.com/huggingface/datasets/pull/7342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7342.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7341/comments | https://api.github.com/repos/huggingface/datasets/issues/7341/events | https://github.com/huggingface/datasets/pull/7341 | 2,745,658,561 | PR_kwDODunzps6FiGlt | 7,341 | minor video docs on how to install | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2024-12-17T18:06:17 | 2024-12-17T18:11:17 | 2024-12-17T18:11:15 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7341/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7341",
"html_url": "https://github.com/huggingface/datasets/pull/7341",
"diff_url": "https://github.com/huggingface/datasets/pull/7341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7341.patch",
"merged_at": "2024-12-17T18:11... | true |
https://api.github.com/repos/huggingface/datasets/issues/7340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7340/comments | https://api.github.com/repos/huggingface/datasets/issues/7340/events | https://github.com/huggingface/datasets/pull/7340 | 2,745,473,274 | PR_kwDODunzps6FhdR2 | 7,340 | don't import soundfile in tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2024-12-17T16:49:55 | 2024-12-17T16:54:04 | 2024-12-17T16:50:24 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7340/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7340",
"html_url": "https://github.com/huggingface/datasets/pull/7340",
"diff_url": "https://github.com/huggingface/datasets/pull/7340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7340.patch",
"merged_at": "2024-12-17T16:50... | true |
https://api.github.com/repos/huggingface/datasets/issues/7339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7339/comments | https://api.github.com/repos/huggingface/datasets/issues/7339/events | https://github.com/huggingface/datasets/pull/7339 | 2,745,460,060 | PR_kwDODunzps6FhaTl | 7,339 | Update CONTRIBUTING.md | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2024-12-17T16:45:25 | 2024-12-17T16:51:36 | 2024-12-17T16:46:30 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7339/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7339",
"html_url": "https://github.com/huggingface/datasets/pull/7339",
"diff_url": "https://github.com/huggingface/datasets/pull/7339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7339.patch",
"merged_at": "2024-12-17T16:46... | true |
https://api.github.com/repos/huggingface/datasets/issues/7337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7337/comments | https://api.github.com/repos/huggingface/datasets/issues/7337/events | https://github.com/huggingface/datasets/issues/7337 | 2,744,877,569 | I_kwDODunzps6jm4IB | 7,337 | One or several metadata.jsonl were found, but not in the same directory or in a parent directory of | {
"login": "mst272",
"id": 67250532,
"node_id": "MDQ6VXNlcjY3MjUwNTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/67250532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mst272",
"html_url": "https://github.com/mst272",
"followers_url": "https://api.github.com/users/mst272/fo... | [] | open | false | null | [] | null | 0 | 2024-12-17T12:58:43 | 2024-12-17T12:58:43 | null | NONE | null | ### Describe the bug
ImageFolder raises an error with metadata.jsonl. I downloaded liuhaotian/LLaVA-CC3M-Pretrain-595K locally from Hugging Face. According to the tutorial at https://huggingface.co/docs/datasets/image_dataset#image-captioning, one only needs to put images.zip and a metadata.jsonl containing the image information in the same folder. How... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7337/timeline | null | null | null | null | false |
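A layout sketch of what the imagefolder loader expects, assuming the images have been extracted from images.zip: `metadata.jsonl` must live in the same directory as (or a parent directory of) the images it describes, with `file_name` paths relative to it.

```python
from datasets import load_dataset

# Expected layout (sketch):
#   data/
#     metadata.jsonl     # {"file_name": "images/0001.jpg", "caption": "..."}
#     images/
#       0001.jpg
ds = load_dataset("imagefolder", data_dir="data")
```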
https://api.github.com/repos/huggingface/datasets/issues/7336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7336/comments | https://api.github.com/repos/huggingface/datasets/issues/7336/events | https://github.com/huggingface/datasets/issues/7336 | 2,744,746,456 | I_kwDODunzps6jmYHY | 7,336 | Clarify documentation or Create DatasetCard | {
"login": "August-murr",
"id": 145011209,
"node_id": "U_kgDOCKSyCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145011209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/August-murr",
"html_url": "https://github.com/August-murr",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-12-17T12:01:00 | 2024-12-17T12:01:00 | null | NONE | null | ### Feature request
I noticed that you can use a Model Card instead of a Dataset Card when pushing a dataset to the Hub, but this isn't clearly mentioned in [the docs](https://huggingface.co/docs/datasets/dataset_card).
- Update the docs to clarify that a Model Card can work for datasets too.
- It might be worth c... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7336/timeline | null | null | null | null | false |
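For context, a minimal sketch of pushing an explicit dataset card with `huggingface_hub` (the repo id and card content are placeholders):

```python
from huggingface_hub import DatasetCard

# Build a card from YAML front matter plus markdown, then push it.
card = DatasetCard("---\nlicense: mit\n---\n\n# My dataset\n\nShort description.")
card.push_to_hub("username/my-dataset")  # DatasetCard targets dataset repos
```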
https://api.github.com/repos/huggingface/datasets/issues/7335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7335/comments | https://api.github.com/repos/huggingface/datasets/issues/7335/events | https://github.com/huggingface/datasets/issues/7335 | 2,743,437,260 | I_kwDODunzps6jhYfM | 7,335 | Too many open files: '/root/.cache/huggingface/token' | {
"login": "kopyl",
"id": 17604849,
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kopyl",
"html_url": "https://github.com/kopyl",
"followers_url": "https://api.github.com/users/kopyl/follow... | [] | open | false | null | [] | null | 0 | 2024-12-16T21:30:24 | 2024-12-16T21:30:24 | null | NONE | null | ### Describe the bug
I ran this code:
```python
from datasets import load_dataset
dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000)
```
And got this error.
Before it was some other file though (like something...incomplete)
running
```
ulimit -n 8192
... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7335/timeline | null | null | null | null | false |
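A workaround sketch for the report above: raise the soft file-descriptor limit from inside the process (the in-process equivalent of `ulimit -n`) and use a far smaller `num_proc`, since each worker keeps its own set of files open; the value 16 is arbitrary.

```python
import resource

from datasets import load_dataset

# Raise the soft limit up to 8192 without exceeding the hard limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 8192 if hard == resource.RLIM_INFINITY else min(8192, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, target), hard))

dataset = load_dataset(
    "common-canvas/commoncatalog-cc-by",
    cache_dir="/datadrive/datasets/cc",
    num_proc=16,
)
```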
https://api.github.com/repos/huggingface/datasets/issues/7334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7334/comments | https://api.github.com/repos/huggingface/datasets/issues/7334/events | https://github.com/huggingface/datasets/issues/7334 | 2,740,266,503 | I_kwDODunzps6jVSYH | 7,334 | TypeError: Value.__init__() missing 1 required positional argument: 'dtype' | {
"login": "kakamond",
"id": 185799756,
"node_id": "U_kgDOCxMUTA",
"avatar_url": "https://avatars.githubusercontent.com/u/185799756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kakamond",
"html_url": "https://github.com/kakamond",
"followers_url": "https://api.github.com/users/kakamond/... | [] | open | false | null | [] | null | 0 | 2024-12-15T04:08:46 | 2024-12-15T04:08:46 | null | NONE | null | ### Describe the bug
ds = load_dataset(
"./xxx.py",
name="default",
split="train",
)
The `datasets` library does not support debugging a local loading script anymore...
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"./repo.py",
name="default",
split="train",
)
... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7334/timeline | null | null | null | null | false |
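For illustration, `Value` now requires an explicit dtype, so a loading script's features block has to spell it out (the column names below are made up):

```python
from datasets import Features, Value

# Value() with no dtype raises; pass the dtype explicitly.
features = Features(
    {
        "text": Value(dtype="string"),
        "score": Value(dtype="float32"),
    }
)
```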
https://api.github.com/repos/huggingface/datasets/issues/7328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7328/comments | https://api.github.com/repos/huggingface/datasets/issues/7328/events | https://github.com/huggingface/datasets/pull/7328 | 2,738,626,593 | PR_kwDODunzps6FKK13 | 7,328 | Fix typo in arrow_dataset | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.... | [] | closed | false | null | [] | null | 1 | 2024-12-13T15:17:09 | 2024-12-19T17:10:27 | 2024-12-19T17:10:25 | CONTRIBUTOR | null | null | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7328/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7328",
"html_url": "https://github.com/huggingface/datasets/pull/7328",
"diff_url": "https://github.com/huggingface/datasets/pull/7328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7328.patch",
"merged_at": "2024-12-19T17:10... | true |
https://api.github.com/repos/huggingface/datasets/issues/7327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7327/comments | https://api.github.com/repos/huggingface/datasets/issues/7327/events | https://github.com/huggingface/datasets/issues/7327 | 2,738,514,909 | I_kwDODunzps6jOmvd | 7,327 | .map() is not caching and ram goes OOM | {
"login": "simeneide",
"id": 7136076,
"node_id": "MDQ6VXNlcjcxMzYwNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7136076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simeneide",
"html_url": "https://github.com/simeneide",
"followers_url": "https://api.github.com/users/si... | [] | open | false | null | [] | null | 0 | 2024-12-13T14:22:56 | 2024-12-13T14:22:56 | null | NONE | null | ### Describe the bug
I'm trying to run a fairly simple map that converts a dataset into numpy arrays. However, it just piles up in memory and doesn't write to disk. I've tried multiple cache techniques such as specifying the cache dir, setting max memory, etc., but none seem to work. What am I missing here?
### Steps to... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7327/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7327/timeline | null | null | null | null | false |
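The thread's answer is not included above; below is a sketch of `map()` settings that keep results on disk rather than in RAM (the toy dataset and conversion function are stand-ins for the reporter's):

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"values": [[float(i)] * 4 for i in range(1_000)]})

def to_arrays(batch):
    # Convert each row to a numpy array (stand-in for the real map).
    return {"array": [np.asarray(v) for v in batch["values"]]}

ds = ds.map(
    to_arrays,
    batched=True,
    batch_size=100,
    writer_batch_size=100,   # flush rows to the Arrow cache file often
    keep_in_memory=False,    # keep results on disk, not in RAM
    cache_file_name="mapped.arrow",
)
```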
https://api.github.com/repos/huggingface/datasets/issues/7326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7326/comments | https://api.github.com/repos/huggingface/datasets/issues/7326/events | https://github.com/huggingface/datasets/issues/7326 | 2,738,188,902 | I_kwDODunzps6jNXJm | 7,326 | Remove upper bound for fsspec | {
"login": "fellhorn",
"id": 26092524,
"node_id": "MDQ6VXNlcjI2MDkyNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/26092524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fellhorn",
"html_url": "https://github.com/fellhorn",
"followers_url": "https://api.github.com/users/fel... | [] | open | false | null | [] | null | 0 | 2024-12-13T11:35:12 | 2024-12-16T11:08:10 | null | NONE | null | ### Describe the bug
As also raised by @cyyever in https://github.com/huggingface/datasets/pull/7296 and @NeilGirdhar in https://github.com/huggingface/datasets/commit/d5468836fe94e8be1ae093397dd43d4a2503b926#commitcomment-140952162, `datasets` has a problematic version constraint on `fsspec`.
In our case this c... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7326/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7326/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7325/comments | https://api.github.com/repos/huggingface/datasets/issues/7325/events | https://github.com/huggingface/datasets/pull/7325 | 2,736,618,054 | PR_kwDODunzps6FDpMp | 7,325 | Introduce pdf support (#7318) | {
"login": "yabramuvdi",
"id": 4812761,
"node_id": "MDQ6VXNlcjQ4MTI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yabramuvdi",
"html_url": "https://github.com/yabramuvdi",
"followers_url": "https://api.github.com/users... | [] | open | false | null | [] | null | 2 | 2024-12-12T18:31:18 | 2024-12-19T17:22:51 | null | NONE | null | First implementation of the Pdf feature to support pdfs (#7318) . Using [pdfplumber](https://github.com/jsvine/pdfplumber?tab=readme-ov-file#python-library) as the default library to work with pdfs.
@lhoestq and @AndreaFrancis | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7325/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7325",
"html_url": "https://github.com/huggingface/datasets/pull/7325",
"diff_url": "https://github.com/huggingface/datasets/pull/7325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7325.patch",
"merged_at": null
} | true |
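For context, a small sketch of the pdfplumber calls such a Pdf feature would wrap (the file path is a placeholder):

```python
import pdfplumber

# Open a document lazily and pull text page by page.
with pdfplumber.open("example.pdf") as pdf:
    for page in pdf.pages:
        print(page.extract_text())
```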
https://api.github.com/repos/huggingface/datasets/issues/7323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7323/comments | https://api.github.com/repos/huggingface/datasets/issues/7323/events | https://github.com/huggingface/datasets/issues/7323 | 2,736,008,698 | I_kwDODunzps6jFC36 | 7,323 | Unexpected cache behaviour using load_dataset | {
"login": "Moritz-Wirth",
"id": 74349080,
"node_id": "MDQ6VXNlcjc0MzQ5MDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/74349080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moritz-Wirth",
"html_url": "https://github.com/Moritz-Wirth",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 0 | 2024-12-12T14:03:00 | 2024-12-12T14:18:17 | null | NONE | null | ### Describe the bug
Following the [Cache management](https://huggingface.co/docs/datasets/en/cache) docs and the previous behaviour of datasets version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc. files were found in this folder. As I have recently updated to the latest v...
"url": "https://api.github.com/repos/huggingface/datasets/issues/7323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7323/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7322/comments | https://api.github.com/repos/huggingface/datasets/issues/7322/events | https://github.com/huggingface/datasets/issues/7322 | 2,732,254,868 | I_kwDODunzps6i2uaU | 7,322 | ArrowInvalid: JSON parse error: Column() changed from object to array in row 0 | {
"login": "CLL112",
"id": 41767521,
"node_id": "MDQ6VXNlcjQxNzY3NTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CLL112",
"html_url": "https://github.com/CLL112",
"followers_url": "https://api.github.com/users/CLL112/fo... | [] | open | false | null | [] | null | 0 | 2024-12-11T08:41:39 | 2024-12-11T08:42:54 | null | NONE | null | ### Describe the bug
Encountering an error while loading the `liuhaotian/LLaVA-Instruct-150K` dataset.
### Steps to reproduce the bug
```python
from datasets import load_dataset
fw =load_dataset("liuhaotian/LLaVA-Instruct-150K")
```
Error:
```
ArrowInvalid Traceback (most recen... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7322/timeline | null | null | null | null | false |
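A workaround sketch, assuming the Arrow JSON reader is tripping over the file's nested structure: fetch the raw JSON and build the dataset in Python (the file name inside the repo is an assumption):

```python
import json

from datasets import Dataset
from huggingface_hub import hf_hub_download

# Bypass the strict Arrow JSON parser by parsing with the stdlib first.
path = hf_hub_download(
    "liuhaotian/LLaVA-Instruct-150K",
    "llava_instruct_150k.json",  # assumed file name
    repo_type="dataset",
)
with open(path) as f:
    records = json.load(f)
ds = Dataset.from_list(records)
```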
https://api.github.com/repos/huggingface/datasets/issues/7321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7321/comments | https://api.github.com/repos/huggingface/datasets/issues/7321/events | https://github.com/huggingface/datasets/issues/7321 | 2,731,626,760 | I_kwDODunzps6i0VEI | 7,321 | ImportError: cannot import name 'set_caching_enabled' from 'datasets' | {
"login": "sankexin",
"id": 33318353,
"node_id": "MDQ6VXNlcjMzMzE4MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/33318353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sankexin",
"html_url": "https://github.com/sankexin",
"followers_url": "https://api.github.com/users/san... | [] | open | false | null | [] | null | 2 | 2024-12-11T01:58:46 | 2024-12-11T13:32:15 | null | NONE | null | ### Describe the bug
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7321/timeline | null | null | null | null | false |
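`set_caching_enabled` appears to have been removed in recent releases; the replacement calls are sketched below:

```python
from datasets import disable_caching, enable_caching

disable_caching()  # instead of set_caching_enabled(False)
enable_caching()   # instead of set_caching_enabled(True)
```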
https://api.github.com/repos/huggingface/datasets/issues/7320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7320/comments | https://api.github.com/repos/huggingface/datasets/issues/7320/events | https://github.com/huggingface/datasets/issues/7320 | 2,731,112,100 | I_kwDODunzps6iyXak | 7,320 | ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] | {
"login": "atrompeterog",
"id": 38381084,
"node_id": "MDQ6VXNlcjM4MzgxMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atrompeterog",
"html_url": "https://github.com/atrompeterog",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2024-12-10T20:23:11 | 2024-12-10T23:22:23 | 2024-12-10T23:22:23 | NONE | null | ### Describe the bug
I am trying to create a PEFT model from a DistilBERT model and run a training loop. However, `trainer.train()` gives me this error: ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
Here is my code:
### St... | {
"login": "atrompeterog",
"id": 38381084,
"node_id": "MDQ6VXNlcjM4MzgxMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atrompeterog",
"html_url": "https://github.com/atrompeterog",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7320/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7319/comments | https://api.github.com/repos/huggingface/datasets/issues/7319/events | https://github.com/huggingface/datasets/pull/7319 | 2,730,679,980 | PR_kwDODunzps6EvHBp | 7,319 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2024-12-10T17:01:34 | 2024-12-10T17:04:04 | 2024-12-10T17:01:45 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7319/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7319",
"html_url": "https://github.com/huggingface/datasets/pull/7319",
"diff_url": "https://github.com/huggingface/datasets/pull/7319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7319.patch",
"merged_at": "2024-12-10T17:01... | true |
https://api.github.com/repos/huggingface/datasets/issues/7318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7318/comments | https://api.github.com/repos/huggingface/datasets/issues/7318/events | https://github.com/huggingface/datasets/issues/7318 | 2,730,676,278 | I_kwDODunzps6iwtA2 | 7,318 | Introduce support for PDFs | {
"login": "yabramuvdi",
"id": 4812761,
"node_id": "MDQ6VXNlcjQ4MTI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yabramuvdi",
"html_url": "https://github.com/yabramuvdi",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 6 | 2024-12-10T16:59:48 | 2024-12-12T18:38:13 | null | NONE | null | ### Feature request
The idea (discussed in the Discord server with @lhoestq) is to have a Pdf type like Image/Audio/Video. For example, [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and shows how to decode a video file encoded in a dictionary like {"pat...
"url": "https://api.github.com/repos/huggingface/datasets/issues/7318/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7318/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7317/comments | https://api.github.com/repos/huggingface/datasets/issues/7317/events | https://github.com/huggingface/datasets/pull/7317 | 2,730,661,237 | PR_kwDODunzps6EvC5Q | 7,317 | Release: 3.2.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2024-12-10T16:53:20 | 2024-12-10T16:56:58 | 2024-12-10T16:56:56 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7317/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7317",
"html_url": "https://github.com/huggingface/datasets/pull/7317",
"diff_url": "https://github.com/huggingface/datasets/pull/7317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7317.patch",
"merged_at": "2024-12-10T16:56... | true |
https://api.github.com/repos/huggingface/datasets/issues/7316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7316/comments | https://api.github.com/repos/huggingface/datasets/issues/7316/events | https://github.com/huggingface/datasets/pull/7316 | 2,730,196,085 | PR_kwDODunzps6Etc0U | 7,316 | More docs to from_dict to mention that the result lives in RAM | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2024-12-10T13:56:01 | 2024-12-10T13:58:32 | 2024-12-10T13:57:02 | MEMBER | null | following discussions at https://discuss.huggingface.co/t/how-to-load-this-simple-audio-data-set-and-use-dataset-map-without-memory-issues/17722/14 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7316/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7316/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7316",
"html_url": "https://github.com/huggingface/datasets/pull/7316",
"diff_url": "https://github.com/huggingface/datasets/pull/7316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7316.patch",
"merged_at": "2024-12-10T13:57... | true |
https://api.github.com/repos/huggingface/datasets/issues/7314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7314/comments | https://api.github.com/repos/huggingface/datasets/issues/7314/events | https://github.com/huggingface/datasets/pull/7314 | 2,727,502,630 | PR_kwDODunzps6EkCi5 | 7,314 | Resolved for empty datafiles | {
"login": "sahillihas",
"id": 20582290,
"node_id": "MDQ6VXNlcjIwNTgyMjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20582290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahillihas",
"html_url": "https://github.com/sahillihas",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 2 | 2024-12-09T15:47:22 | 2024-12-27T18:20:21 | null | NONE | null | Resolved for Issue#6152 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7314/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7314",
"html_url": "https://github.com/huggingface/datasets/pull/7314",
"diff_url": "https://github.com/huggingface/datasets/pull/7314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7314.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7313/comments | https://api.github.com/repos/huggingface/datasets/issues/7313/events | https://github.com/huggingface/datasets/issues/7313 | 2,726,240,634 | I_kwDODunzps6ifyF6 | 7,313 | Cannot create a dataset with relative audio path | {
"login": "sedol1339",
"id": 5188731,
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sedol1339",
"html_url": "https://github.com/sedol1339",
"followers_url": "https://api.github.com/users/se... | [] | open | false | null | [] | null | 3 | 2024-12-09T07:34:20 | 2024-12-12T13:46:38 | null | NONE | null | ### Describe the bug
Hello! I want to create a dataset of parquet files, with audio stored as separate .mp3 files. However, loading fails with "No such file or directory" (see the reproducing code).
### Steps to reproduce the bug
Creating a dataset
```python
from pathlib import Path
from datasets import Dataset, load_datas... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7313/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7312/comments | https://api.github.com/repos/huggingface/datasets/issues/7312/events | https://github.com/huggingface/datasets/pull/7312 | 2,725,103,094 | PR_kwDODunzps6EbwNN | 7,312 | [Audio Features - DO NOT MERGE] PoC for adding an offset+sliced reading to audio file. | {
"login": "TParcollet",
"id": 11910731,
"node_id": "MDQ6VXNlcjExOTEwNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TParcollet",
"html_url": "https://github.com/TParcollet",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 0 | 2024-12-08T10:27:31 | 2024-12-08T10:27:31 | null | NONE | null | This is a proof of concept for #7310 . The idea is to enable the access to others column of the dataset row when loading an audio file into a table. This is to allow sliced reading. As stated in the issue, many people have very long audio files and use start and stop slicing in this audio file.
Right now, this code ... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7312/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7312",
"html_url": "https://github.com/huggingface/datasets/pull/7312",
"diff_url": "https://github.com/huggingface/datasets/pull/7312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7312.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7311/comments | https://api.github.com/repos/huggingface/datasets/issues/7311/events | https://github.com/huggingface/datasets/issues/7311 | 2,725,002,630 | I_kwDODunzps6ibD2G | 7,311 | How to get the original dataset name with username? | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-12-08T07:18:14 | 2024-12-08T07:19:41 | null | CONTRIBUTOR | null | ### Feature request
The issue is related to Ray Data (https://github.com/ray-project/ray/issues/49008), which needs to check whether the dataset is the original one right after `load_dataset` and whether Parquet files are already available on the HF Hub.
The solution used now is to get the dataset name, config and split, then `... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7311/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7310/comments | https://api.github.com/repos/huggingface/datasets/issues/7310/events | https://github.com/huggingface/datasets/issues/7310 | 2,724,830,603 | I_kwDODunzps6iaZ2L | 7,310 | Enable the Audio Feature to decode / read with an offset + duration | {
"login": "TParcollet",
"id": 11910731,
"node_id": "MDQ6VXNlcjExOTEwNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TParcollet",
"html_url": "https://github.com/TParcollet",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-12-07T22:01:44 | 2024-12-09T21:09:46 | null | NONE | null | ### Feature request
For most large speech datasets, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with frame offsets (soundfile start and stop arguments). We should be able to pass these arguments to Audio() (column ID corresponding in t...
"url": "https://api.github.com/repos/huggingface/datasets/issues/7310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7310/timeline | null | null | null | null | false |
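A sketch of what the requested Audio() arguments would do internally, using soundfile directly (the file path and offsets are placeholders):

```python
import soundfile as sf

# Read only a slice of a long file by frame offsets instead of decoding
# the whole recording.
info = sf.info("long_recording.wav")
start_s, stop_s = 12.5, 17.5
clip, sr = sf.read(
    "long_recording.wav",
    start=int(start_s * info.samplerate),
    stop=int(stop_s * info.samplerate),
)
```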
https://api.github.com/repos/huggingface/datasets/issues/7315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7315/comments | https://api.github.com/repos/huggingface/datasets/issues/7315/events | https://github.com/huggingface/datasets/issues/7315 | 2,729,738,963 | I_kwDODunzps6itILT | 7,315 | Allow manual configuration of Dataset Viewer for datasets not created with the `datasets` library | {
"login": "diarray-hub",
"id": 114512099,
"node_id": "U_kgDOBtNQ4w",
"avatar_url": "https://avatars.githubusercontent.com/u/114512099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diarray-hub",
"html_url": "https://github.com/diarray-hub",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 13 | 2024-12-07T16:37:12 | 2024-12-11T11:05:22 | null | NONE | null | #### **Problem Description**
[Dataset viewer preview omitted: sample rows of the dataset, each a GitHub issue or pull request from the huggingface/datasets repository, with fields such as url, html_url, id, number, title, user, labels, state, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, body, closed_by, reactions, timeline_url, state_reason, draft, pull_request, and is_pull_request.]
Dataset Card for GitHub Issues
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
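As a minimal loading sketch (the repository id lewtun/github-issues is an assumption, inferred from the contributor credit at the end of this card):
```python
from datasets import load_dataset

# Load the issues corpus from the Hugging Face Hub; the repo id is assumed.
issues = load_dataset("lewtun/github-issues", split="train")

# Each row is one GitHub issue or pull request from huggingface/datasets.
print(issues[0]["title"])
```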
Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the task-category-tag with an appropriate other:other-task-name).
task-category-tag: The dataset can be used to train a model for [TASK NAME], which consists of [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name. The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
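For multilabel text classification, a minimal sketch (repository id assumed as above; scikit-learn's MultiLabelBinarizer is one possible choice, not the prescribed one) could turn each issue's labels field into a multi-hot target vector:
```python
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

issues = load_dataset("lewtun/github-issues", split="train")  # repo id assumed

# Each "labels" entry is a list of GitHub label objects; keep only the names.
issues = issues.map(
    lambda ex: {"label_names": [label["name"] for label in ex["labels"]]}
)

# Binarize the label names into multi-hot vectors for multilabel training.
mlb = MultiLabelBinarizer()
targets = mlb.fit_transform(issues["label_names"])
print(mlb.classes_, targets.shape)
```
For semantic search, a similarly hedged sketch could embed issue titles and index them with FAISS; the embedding model name here is an assumption, and faiss-cpu plus sentence-transformers would need to be installed:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

issues = load_dataset("lewtun/github-issues", split="train")  # repo id assumed
model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption

# Embed each issue title and build a FAISS index over the embeddings.
issues = issues.map(lambda ex: {"embeddings": model.encode(ex["title"])})
issues.add_faiss_index(column="embeddings")

# Retrieve the five issues whose titles are closest to the query.
query = model.encode("streaming a dataset from S3 fails")
scores, samples = issues.get_nearest_examples("embeddings", query, k=5)
print(samples["title"])
```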
Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant specifics of the language, such as whether it is social media text, African American English,...
When relevant, please provide BCP-47 codes, which consist of a primary language subtag, with a script subtag and/or region subtag if available.
Dataset Structure
Data Instances
Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
{
'example_field': ...,
...
}
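For illustration, a truncated instance reconstructed from the viewer preview above (nested objects and long text fields elided) might look like:
```python
{
    "html_url": "https://github.com/huggingface/datasets/pull/7304",
    "number": 7304,
    "title": "Update iterable_dataset.py",
    "state": "closed",
    "comments": 1,
    "created_at": "2024-12-03T14:25:42",
    "closed_at": "2024-12-03T14:27:02",
    "author_association": "MEMBER",
    "is_pull_request": True,
    # nested fields such as "user", "labels", "reactions", and
    # "pull_request", plus long text fields such as "body", are omitted
}
```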
Provide any additional information that is not covered in the other sections about the data here. In particular, describe any relationships between data points and whether these relationships are made explicit.
Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
example_field: description of example_field
Note that the descriptions can be initialized with the Show Markdown Data Fields output of the tagging app; you will then only need to refine the generated descriptions.
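The concrete schema can also be read off programmatically; a sketch (repository id assumed, as above):
```python
from datasets import load_dataset

issues = load_dataset("lewtun/github-issues", split="train")  # repo id assumed

# Column names and feature types mirror the GitHub Issues API payload
# (url, title, user, labels, state, comments, timestamps, body,
# reactions, pull_request, is_pull_request, ...).
print(issues.column_names)
print(issues.features)
```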
Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
|                         | Train | Valid | Test |
|-------------------------|-------|-------|------|
| Input Sentences         |       |       |      |
| Average Sentence Length |       |       |      |
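As a sketch for verifying the split layout (whether this dataset ships with anything beyond a train split is an assumption to check against the repository):
```python
from datasets import load_dataset

ds = load_dataset("lewtun/github-issues")  # repo id assumed

# A DatasetDict prints its splits and row counts; index a split by name.
print(ds)
print({split: ds[split].num_rows for split in ds})
```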
Dataset Creation
Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
Considerations for Using the Data
Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
Additional Information
Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
Licensing Information
Provide the license and link to the license webpage if available.
Citation Information
Provide the BibTeX-formatted reference for the dataset. For example:
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
If the dataset has a DOI, please provide it here.
Contributions
Thanks to @lewtun for adding this dataset.