Datasets: Dataset Viewer

Columns: url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | type | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | sub_issues_summary | issue_dependencies_summary | is_pull_request
https://api.github.com/repos/huggingface/datasets/issues/7780
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7780/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7780/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7780/events
|
https://github.com/huggingface/datasets/issues/7780
| 3,429,267,259
|
I_kwDODunzps7MZnc7
| 7,780
|
BIGPATENT dataset inaccessible (deprecated script loader)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/137755081?v=4",
"events_url": "https://api.github.com/users/ishmaifan/events{/privacy}",
"followers_url": "https://api.github.com/users/ishmaifan/followers",
"following_url": "https://api.github.com/users/ishmaifan/following{/other_user}",
"gists_url": "https://api.github.com/users/ishmaifan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ishmaifan",
"id": 137755081,
"login": "ishmaifan",
"node_id": "U_kgDOCDX5yQ",
"organizations_url": "https://api.github.com/users/ishmaifan/orgs",
"received_events_url": "https://api.github.com/users/ishmaifan/received_events",
"repos_url": "https://api.github.com/users/ishmaifan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ishmaifan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishmaifan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ishmaifan",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! I opened https://huggingface.co/datasets/NortheasternUniversity/big_patent/discussions/7 to update the dataset, hopefully it's merged soon !"
] |
2025-09-18T08:25:34Z
|
2025-09-19T14:35:54Z
| null |
NONE
| null | null | null | null |
dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the `datasets` library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be accessed with datasets>=4.x?
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7780/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7780/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7777
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7777/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7777/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7777/events
|
https://github.com/huggingface/datasets/issues/7777
| 3,424,462,082
|
I_kwDODunzps7MHSUC
| 7,777
|
push_to_hub not overwriting but stuck in a loop when there are existing commits
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"HTTP 412 means a commit happened in the meantime, so `get_deletions_and_dataset_card` has to retry to get the latest version of the dataset card and what files to delete based on the latest version of the dataset repository\n\nAre you running other operations in the dataset repo for your push_to_hub ?",
"There was only a map() followed by a push_to_hub(). The repo had one prior commit also by using push_to_hub(). The error disappeared when I downgraded datasets to 4.0.0.",
"It is reproducible if you use finegrained token with Read+Write (Open pull request) access to only that repo.",
"Ah it was due to the use of requests_cache with POST methods, closing this. "
] |
2025-09-17T03:15:35Z
|
2025-09-17T19:31:14Z
|
2025-09-17T19:31:14Z
|
NONE
| null | null | null | null |
### Describe the bug
`get_deletions_and_dataset_card` gets stuck on an "a commit has happened" error (HTTP 412) during `push_to_hub` with tag 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Write code that uses `push_to_hub`, and run it twice, each time with different content for the `datasets.Dataset`.
The code gets stuck in the `time.sleep` loop inside `get_deletions_and_dataset_card`. If the error is explicitly printed, it is HTTP 412.
### Expected behavior
The new dataset overwrites the existing one on the repo.
### Environment info
datasets 4.1.0
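The hang described above could be avoided client-side with a bounded retry instead of an endless `time.sleep` loop. A minimal sketch, assuming a hypothetical `PreconditionFailed` exception and a `push` callable standing in for the HTTP-412 path in `get_deletions_and_dataset_card`:

```python
import time

class PreconditionFailed(Exception):
    """Stand-in for an HTTP 412 raised when the repo changed mid-push."""

def push_with_retry(push, max_retries=5, base_delay=1.0):
    # Retry on 412 with exponential backoff; after max_retries the error
    # surfaces instead of the process hanging forever.
    for attempt in range(max_retries):
        try:
            return push()
        except PreconditionFailed:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

A bounded retry would also have surfaced the real cause reported later in this thread (a `requests_cache` layer replaying stale POST responses) as a hard error rather than an infinite loop.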
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7777/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7777/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7772
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7772/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7772/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7772/events
|
https://github.com/huggingface/datasets/issues/7772
| 3,417,353,751
|
I_kwDODunzps7LsK4X
| 7,772
|
Error processing scalar columns using tensorflow.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3871483?v=4",
"events_url": "https://api.github.com/users/khteh/events{/privacy}",
"followers_url": "https://api.github.com/users/khteh/followers",
"following_url": "https://api.github.com/users/khteh/following{/other_user}",
"gists_url": "https://api.github.com/users/khteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/khteh",
"id": 3871483,
"login": "khteh",
"node_id": "MDQ6VXNlcjM4NzE0ODM=",
"organizations_url": "https://api.github.com/users/khteh/orgs",
"received_events_url": "https://api.github.com/users/khteh/received_events",
"repos_url": "https://api.github.com/users/khteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/khteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/khteh",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-09-15T10:36:31Z
|
2025-09-15T10:49:17Z
| null |
NONE
| null | null | null | null |
`datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'end_idx', 'input_ids', 'attention_mask', 'start_positions', 'end_positions']
features:{'question': Value('string'), 'sentences': Value('string'), 'answer': Value('string'), 'str_idx': Value('int64'), 'end_idx': Value('int64'), 'input_ids': List(Value('int32')), 'attention_mask': List(Value('int8')), 'start_positions': Value('int64'), 'end_positions': Value('int64')}
```
`train_ds_tensor = train_ds['start_positions'].to_tensor(shape=(-1,1))` hits the following error:
```
AttributeError: 'Column' object has no attribute 'to_tensor'
```
`tf.reshape(train_ds['start_positions'], shape=[-1,1])` hits the following error:
```
TypeError: Scalar tensor has no `len()`
```
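Since `Column` objects don't expose `to_tensor`, one workaround is to materialize the column as a nested Python list first, adding the trailing axis there. A minimal sketch, where `column` stands in for `train_ds['start_positions']`:

```python
def column_to_2d(column):
    # Materialize the (possibly lazy) column and give each scalar its own row,
    # producing the same (-1, 1) shape that to_tensor / tf.reshape would.
    return [[value] for value in column]
```

The nested list can then be handed to `tf.constant(...)` (or `np.asarray(...)`) without hitting the scalar-tensor `len()` error.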
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7772/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7772/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7767/events
|
https://github.com/huggingface/datasets/issues/7767
| 3,411,654,444
|
I_kwDODunzps7LWbcs
| 7,767
|
Custom `dl_manager` in `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] |
2025-09-12T19:06:23Z
|
2025-09-12T19:07:52Z
| null |
NONE
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
...
dl_manager: Optional[DownloadManager] = None, # add this new argument
**config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
...
# Create a dataset builder
builder_instance = load_dataset_builder(
path=path,
name=name,
data_dir=data_dir,
data_files=data_files,
cache_dir=cache_dir,
features=features,
download_config=download_config,
download_mode=download_mode,
revision=revision,
token=token,
storage_options=storage_options,
**config_kwargs,
)
# Return iterable dataset in case of streaming
if streaming:
return builder_instance.as_streaming_dataset(split=split)
# Note: This is the revised part
if dl_manager is None:
if download_config is None:
download_config = DownloadConfig(
cache_dir=builder_instance._cache_downloaded_dir,
force_download=download_mode == DownloadMode.FORCE_REDOWNLOAD,
force_extract=download_mode == DownloadMode.FORCE_REDOWNLOAD,
use_etag=False,
num_proc=num_proc,
token=builder_instance.token,
storage_options=builder_instance.storage_options,
) # We don't use etag for data files to speed up the process
dl_manager = DownloadManager(
dataset_name=builder_instance.dataset_name,
download_config=download_config,
data_dir=builder_instance.config.data_dir,
record_checksums=(
builder_instance._record_infos or verification_mode == VerificationMode.ALL_CHECKS
),
)
# Download and prepare data
builder_instance.download_and_prepare(
download_config=download_config,
download_mode=download_mode,
verification_mode=verification_mode,
dl_manager=dl_manager, # pass the new argument
num_proc=num_proc,
storage_options=storage_options,
)
...
```
### Motivation
In my case, I'd like to handle downloading the cache files manually (e.g., not using hash filenames and saving them to another location, or reusing potentially existing local files).
### Your contribution
It's already implemented above. If maintainers think this should be considered, I'll open a PR.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7767/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7767/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7766
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7766/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7766/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7766/events
|
https://github.com/huggingface/datasets/issues/7766
| 3,411,611,165
|
I_kwDODunzps7LWQ4d
| 7,766
|
cast columns to Image/Audio/Video with `storage_options`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] |
2025-09-12T18:51:01Z
|
2025-09-12T18:51:01Z
| null |
NONE
| null | null | null | null |
### Feature request
Allow `storage_options` to be passed in
1. `cast`-related operations (e.g., `cast_column`, `cast`)
2. `info`-related reading (e.g., `from_dict`, `from_pandas`, `from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
# dataset = dataset.cast_column("image_path", datasets.Image()) # now works without `storage_options`
# expected behavior
dataset = dataset.cast_column("image_path", datasets.Image(), storage_options={"anon": True})
```
### Motivation
I'm using my own registered fsspec filesystem (s3 with customized local cache support). I need to pass cache folder paths `cache_dirs: list[str]` to the filesystem when I read the remote images (cast from file_paths).
### Your contribution
I could help with a PR on weekends.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7766/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7766/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7765
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7765/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7765/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7765/events
|
https://github.com/huggingface/datasets/issues/7765
| 3,411,556,378
|
I_kwDODunzps7LWDga
| 7,765
|
polars dataset cannot cast column to Image/Audio/Video
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I fixed this with a combination of `to_dict` and `from_dict`:\n\n```py\ndatasets.Dataset.from_dict(df.to_dict(as_series=False))\n```",
"@samuelstevens Yeah, I'm using similar workaround as well. But it would be ideal if we can avoid the copy."
] |
2025-09-12T18:32:49Z
|
2025-09-16T01:33:31Z
| null |
NONE
| null | null | null | null |
### Describe the bug
A dataset created with `from_polars` cannot cast a column to Image/Audio/Video, while the same cast works with `from_pandas` and `from_dict`.
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # raises Error
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
# pandas
df = pd.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_pandas(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
{'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
# dict
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
{'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
```
### Expected behavior
The `from_polars` case shouldn't raise an error, and should produce the same output as `from_pandas` and `from_dict`.
### Environment info
```
# Name Version Build Channel
datasets 4.0.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
polars 1.32.3 pypi_0 pypi
```
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7765/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7765/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7760/events
|
https://github.com/huggingface/datasets/issues/7760
| 3,401,799,485
|
I_kwDODunzps7Kw1c9
| 7,760
|
Hugging Face Hub Dataset Upload CAS Error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142820182?v=4",
"events_url": "https://api.github.com/users/n-bkoe/events{/privacy}",
"followers_url": "https://api.github.com/users/n-bkoe/followers",
"following_url": "https://api.github.com/users/n-bkoe/following{/other_user}",
"gists_url": "https://api.github.com/users/n-bkoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/n-bkoe",
"id": 142820182,
"login": "n-bkoe",
"node_id": "U_kgDOCINDVg",
"organizations_url": "https://api.github.com/users/n-bkoe/orgs",
"received_events_url": "https://api.github.com/users/n-bkoe/received_events",
"repos_url": "https://api.github.com/users/n-bkoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/n-bkoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n-bkoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/n-bkoe",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"cc @jsulz maybe ?",
"Curious! I took a look at this and was unable to see why this would be occurring on our side. Tagging in @jgodlew and @bpronan since they might have insights. \n\n@n-bkoe just a few questions if you wouldn't mind: \n1. What kind of data are you uploading and what is the difference in file size (in bytes) between 100 and 10,000 samples?\n2. Could you provide a specific repository where you encountered this so we could look at to attempt to trace this in our systems?\n3. I cannot currently reproduce this, but I'm just trying locally; have you tried to attempt this outside of SageMaker? I'm wondering if there is something unique about that environment causing this. \n4. How/where did you set `HF_HUB_DISABLE_XET`?",
"Hi, and thank you for your quick answer 🙏 \n\n1. Its fairly simple string data, four cols, all string, some long. The script works for data up to 8000 samples long, which is two parquet files totalling 260 kb. It breaks at 10k. \n2. Unfortunately, both data and code is private for now !\n3. I will try \n4. I did it both at CLI level when call my script, and tried inside the python script with os.environ[\"HF_HUB_DISABLE_XET\"] = \"1\"\n\nThe load is also partial, it starts for one file, but does not complete and no data file is pushed. \n\n```\n5. Pushing to Hugging Face Hub...\nPushing dataset to YourOrg/dataset-10000-test_set...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1235.07ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.018887Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFGSQV1FH8846S0DNS91C): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 291kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 291kB, 0.00B/s \n❌ Failed to push test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? 
shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1289.10ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.721996Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFHFPJ2DC5D6JC93172H9): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 277kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 277kB, 0.00B/s \n❌ Failed to push indic_test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? 
shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set_combined...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 1310.04ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:38.685575Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFJDTVAYM9MFTRDSWKTD6): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 184kB, 0.00B/s \n❌ Failed to push indic_test_set_combined: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? shards/s]\n\nSummary:\n Succeeded: None\n Failed: [('test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set_combined', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')]\n❌ Some datasets failed to upload\n```\n\n",
"Thanks for following up with more details, @n-bkoe \n\nCould you tell me more about your Sagemaker environment and how you are running this script? In testing with your steps to reproduce in a Sagemaker Jupyter notebook instance (and uploading Parquet datasets with splits of anywhere from a few KBs to a few hundred MBs), I've yet to reproduce this error. This makes me believe that it's either something about the Sagemaker environment or the reproduction steps that I'm not yet emulating. \n\nConcerning the `HF_HUB_DISABLE_XET` flag, you should ensure it is set before any package imports and in the same process where you are running the script itself. If either aren't true, then this environment variable will not work. You could also explicitly uninstall `hf-xet` from the environment, although that should be unnecessary with the `HF_HUB_DISABLE_XET` flag."
] |
2025-09-10T10:01:19Z
|
2025-09-16T20:01:36Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Experiencing persistent 401 Unauthorized errors when attempting to upload datasets to the Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. Tried using HF_HUB_DISABLE_XET=1; uploads seem to work for smaller files.
Exact error message:
```
Processing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-10T09:44:35.657565Z ERROR Fatal Error: "cas::upload_xorb" api call failed (request id 01b[...]XXX): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX)
at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113
Processing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s
New Data Upload : 0%| | 0.00B / 184kB, 0.00B/s
❌ Failed to push some_dataset: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX
```
Workaround Attempts
1. **Disabled XET**: Set `HF_HUB_DISABLE_XET=1` environment variable
2. **Updated hf-xet**: Use `hf-xet==1.1.9` rather than latest
3. **Verified Authentication**: Confirmed HF token is valid and has write permissions
4. **Tested with Smaller Datasets**:
- 100 samples: ✅ **SUCCESS** (uploaded successfully)
- 10,000 samples: ❌ **FAILS** (401 Unauthorized)
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
# Create dataset (example with 10,000 samples)
dataset = Dataset.from_dict({
"question": questions,
"answer": answers,
# ... other fields
})
# Split into train/test
dataset_dict = dataset.train_test_split(test_size=0.1)
# Upload to Hub
dataset_dict.push_to_hub("Org/some-dataset")
```
### Expected behavior
## Expected Behavior
- Dataset should upload successfully to Hugging Face Hub
- Progress bars should complete without authentication errors
- Dataset should be accessible at the specified repository URL
## Actual Behavior
- Upload fails consistently with 401 Unauthorized error
- Error occurs specifically during CAS service interaction
- No progress is made on the upload (0% completion)
- Dataset is created on Hugging Face Hub with no data folder
### Environment info
- **Platform**: SageMaker (AWS)
- **Python Version**: 3.12
- **Libraries**:
- `datasets` library (latest version)
- `hf-xet==1.1.9` (attempted fix)
- **Authentication**: Hugging Face token configured
- **Dataset Size**: ~10,000 samples, works for smaller sizes (e.g. 100)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7760/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7760/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7759
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7759/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7759/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7759/events
|
https://github.com/huggingface/datasets/issues/7759
| 3,398,099,513
|
I_kwDODunzps7KiuI5
| 7,759
|
Comment/feature request: Huggingface 502s from GHA
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Scott-Simmons",
"id": 52365471,
"login": "Scott-Simmons",
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Scott-Simmons",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-09-09T11:59:20Z
|
2025-09-09T13:02:28Z
| null |
NONE
| null | null | null | null |
This is no longer a pressing issue, but for completeness I am reporting that on August 26th, GET requests to `https://datasets-server.huggingface.co/info?dataset=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/48921123754) (that link will expire eventually, [here are the logs](https://github.com/user-attachments/files/22233578/logs_44225296943.zip)).
When invoked from actions, it appeared to be consistently failing for ~6 hours. However, these 502s never occurred when the request was invoked from my local machine in that same time period.
I suspect this is related to how the requests are routed from GitHub Actions versus locally.
It's not clear to me whether the request even reached Hugging Face servers or whether the GitHub proxy stopped it from going through, but I wanted to report it nonetheless in case the information is helpful. I'm curious whether Hugging Face can do anything on their end to confirm the cause.
And a feature request in case this happens again (assuming Hugging Face has visibility into it): a "datasets status" page highlighting when 502s occur for specific individual datasets would be useful for people debugging the other end of this!
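In the meantime, for anyone hitting the same thing from CI, retrying with exponential backoff absorbs transient 502s. This is only a sketch using the standard library (`get_with_retries` is a made-up helper, not part of any Hugging Face API):

```python
import time
import urllib.error
import urllib.request

def get_with_retries(url, attempts=5, base_delay=1.0, fetch=urllib.request.urlopen):
    """Retry transient 5xx responses with exponential backoff.

    `fetch` is injectable purely so the helper is easy to test offline.
    """
    for i in range(attempts):
        try:
            return fetch(url)
        except urllib.error.HTTPError as err:
            # Re-raise client errors immediately; retry only 5xx responses.
            if err.code < 500 or i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)
```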
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7759/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7759/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7758/events
|
https://github.com/huggingface/datasets/issues/7758
| 3,395,590,783
|
I_kwDODunzps7KZJp_
| 7,758
|
Option for Anonymous Dataset link
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egrace479",
"id": 38985481,
"login": "egrace479",
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"repos_url": "https://api.github.com/users/egrace479/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egrace479",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] |
2025-09-08T20:20:10Z
|
2025-09-08T20:20:10Z
| null |
NONE
| null | null | null | null |
### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!). However, we have an increasing challenge when it comes to sharing our datasets for paper (both conference and journal) submissions. Due to the need to share data anonymously, we can't use the Hugging Face URLs, but datasets tend to be too large for inclusion as a zip. Being able to have an anonymous link would be great since we can't be double-publishing the data.
### Your contribution
Sorry, I don't have a contribution to make to the implementation of this. Perhaps it would be possible to work off the [Anonymous GitHub](https://github.com/tdurieux/anonymous_github) code to generate something analogous with pointers to the data still on Hugging Face's servers (instead of the duplication of data required for the GitHub version)?
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7758/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7758/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7757
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7757/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7757/events
|
https://github.com/huggingface/datasets/issues/7757
| 3,389,535,011
|
I_kwDODunzps7KCDMj
| 7,757
|
Add support for `.conll` file format in datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/88763593?v=4",
"events_url": "https://api.github.com/users/namesarnav/events{/privacy}",
"followers_url": "https://api.github.com/users/namesarnav/followers",
"following_url": "https://api.github.com/users/namesarnav/following{/other_user}",
"gists_url": "https://api.github.com/users/namesarnav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/namesarnav",
"id": 88763593,
"login": "namesarnav",
"node_id": "MDQ6VXNlcjg4NzYzNTkz",
"organizations_url": "https://api.github.com/users/namesarnav/orgs",
"received_events_url": "https://api.github.com/users/namesarnav/received_events",
"repos_url": "https://api.github.com/users/namesarnav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/namesarnav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namesarnav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/namesarnav",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"That would be cool ! feel free to ping me if I can help reviewing a PR"
] |
2025-09-06T07:25:39Z
|
2025-09-10T14:22:48Z
| null |
NONE
| null | null | null | null |
### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now `.conll` datasets need to be manually parsed or preprocessed before being loaded into datasets. Having built in support would save time and make workflows smoother for researchers and practitioners.
I propose adding a `conll` dataset builder or file parser to `datasets` that can:
- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
- tokens: list of strings
- tags (or similar): list of labels aligned with tokens
Given a .conll snippet like:
```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```
The dataset should load as:
```
{
"tokens": ["EU", "rejects", "German", "call", "."],
"tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```
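A minimal parsing sketch along the lines of the proposal (the function name and column handling are illustrative, not a final API):

```python
def read_conll(lines, delimiter=None, token_col=0, tag_col=-1):
    """Yield {'tokens': [...], 'tags': [...]} per sentence.

    Sentences are separated by blank lines; `delimiter=None` splits on
    any whitespace, covering both space- and tab-separated variants.
    """
    tokens, tags = [], []
    for line in lines:
        line = line.strip()
        if not line:  # blank line marks a sentence boundary
            if tokens:
                yield {"tokens": tokens, "tags": tags}
                tokens, tags = [], []
            continue
        fields = line.split(delimiter)
        tokens.append(fields[token_col])
        tags.append(fields[tag_col])
    if tokens:  # flush the final sentence if the file has no trailing blank line
        yield {"tokens": tokens, "tags": tags}

sample = """\
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
"""
examples = list(read_conll(sample.splitlines()))
```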
### Motivation
- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or other sequence labeling models (like BERT for token classification) directly on `.conll` files
- Right now you have to write your own parsing scripts. Built-in support would unify this process and be much more convenient
### Your contribution
I’d be happy to contribute by implementing this feature. My plan is to:
- Add a new dataset script (conll.py) to handle .conll files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so I’ll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7757/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7757/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7756
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7756/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7756/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7756/events
|
https://github.com/huggingface/datasets/issues/7756
| 3,387,076,693
|
I_kwDODunzps7J4rBV
| 7,756
|
datasets.map(f, num_proc=N) hangs with N>1 when run on import
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4",
"events_url": "https://api.github.com/users/arjunguha/events{/privacy}",
"followers_url": "https://api.github.com/users/arjunguha/followers",
"following_url": "https://api.github.com/users/arjunguha/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunguha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arjunguha",
"id": 20065,
"login": "arjunguha",
"node_id": "MDQ6VXNlcjIwMDY1",
"organizations_url": "https://api.github.com/users/arjunguha/orgs",
"received_events_url": "https://api.github.com/users/arjunguha/received_events",
"repos_url": "https://api.github.com/users/arjunguha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arjunguha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunguha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arjunguha",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-09-05T10:32:01Z
|
2025-09-05T10:32:01Z
| null |
NONE
| null | null | null | null |
### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humaneval")
the_dataset = the_dataset.map(lambda item: item, num_proc=2)
EOF
```
2. Start Python REPL:
```bash
uv run --python 3.12.3 --with "datasets==4.0.0" python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
3. Import the file:
```python
import import_me
```
Observe hang.
### Expected behavior
Ideally this would not hang, or would fall back to `num_proc=1` with a warning.
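For context, the usual workaround is the standard multiprocessing pattern: keep any code that spawns worker processes behind a `__main__` guard so child processes can re-import the module without re-triggering the work. A generic illustration (not `datasets`-specific):

```python
import multiprocessing as mp

def identity(x):
    return x

def main():
    # Process-spawning work lives behind the guard below, so importing
    # this module (e.g. `import import_me`) does not re-run it in children.
    with mp.Pool(processes=2) as pool:
        return pool.map(identity, [1, 2, 3])

if __name__ == "__main__":
    print(main())
```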
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7756/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7756/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7753
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7753/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7753/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7753/events
|
https://github.com/huggingface/datasets/issues/7753
| 3,381,831,487
|
I_kwDODunzps7Jkqc_
| 7,753
|
datasets massively slows data reads, even in memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1191040?v=4",
"events_url": "https://api.github.com/users/lrast/events{/privacy}",
"followers_url": "https://api.github.com/users/lrast/followers",
"following_url": "https://api.github.com/users/lrast/following{/other_user}",
"gists_url": "https://api.github.com/users/lrast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lrast",
"id": 1191040,
"login": "lrast",
"node_id": "MDQ6VXNlcjExOTEwNDA=",
"organizations_url": "https://api.github.com/users/lrast/orgs",
"received_events_url": "https://api.github.com/users/lrast/received_events",
"repos_url": "https://api.github.com/users/lrast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lrast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lrast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lrast",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! you should try\n\n```python\nfrom datasets import Array3D, Dataset, Features, Value\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\n```\n\notherwise the type of the \"image\" column is List(List(List(Value(\"uint8\")))) and is less efficient.",
"Thanks! This leads to a 10x speedup:\n```python\nimport torch\nimport time\nfrom datasets import Array3D, Dataset, Features, Value\n\nimages = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)\nlabels = torch.randint(0, 200, (1000,), dtype=torch.uint8)\n\npt_dataset = torch.utils.data.TensorDataset(images, labels)\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\nhf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)\n\nhf_dataset.set_format('torch', dtype=torch.uint8)\nhf_in_memory.set_format('torch', dtype=torch.uint8)\n\n# measure access speeds\ndef time_access(dataset, img_col):\n start_time = time.time()\n for i in range(1000):\n _ = dataset[i][img_col].shape\n end_time = time.time()\n return end_time - start_time\n\n\nprint(f\"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds\")\nprint(f\"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds\")\nprint(f\"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds\")\n```\nProduces\n```\nIn-memory Tensor access: 0.0026 seconds\nHF Dataset access: 0.2070 seconds\nIn-memory HF Dataset access: 0.2112 seconds\n```\n\nCurious if there is a reason why this is not the default behavior for huggingface image processors?\n```python\nfrom transformers import ViTImageProcessor\nfrom transformers import AutoImageProcessor\n\nfrom datasets import load_dataset\n# Load the dataset\nds = load_dataset('ylecun/mnist', split='train[0:100]')\n\n# Instantiate the processor, explicitly requesting NumPy arrays\nprocessor1 = ViTImageProcessor.from_pretrained('facebook/vit-mae-base', do_convert_rgb=True)\nprocessor2 = AutoImageProcessor.from_pretrained(\"facebook/detr-resnet-50\", use_fast=True)\n\nprocessed1 = ds.map(lambda row: processor1(row['image']))\nprocessed2 = ds.map(lambda row: processor2(row['image']))\n\nprint( 
type(processed1['pixel_values'][0]), type(processed1['pixel_values'][0]))\n```\nproduces\n```\n<class 'list'> <class 'list'>\n```\n\nI can, of course, manually manipulate the dataset to the use the correct format, but this is fairly standard for images, and the performance implications seem large."
] |
2025-09-04T01:45:24Z
|
2025-09-18T22:08:51Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Loading image data in a huggingface dataset results in very slow read speeds, approximately 1000 times longer than reading the same data from a pytorch dataset. This applies even when the dataset is loaded into RAM using a `keep_in_memory=True` flag.
The following script reproduces the result with random data, but it applies equally to datasets that are loaded from the hub.
### Steps to reproduce the bug
The following script should reproduce the behavior
```python
import torch
import time
from datasets import Dataset
images = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 200, (1000,), dtype=torch.uint8)
pt_dataset = torch.utils.data.TensorDataset(images, labels)
hf_dataset = Dataset.from_dict({'image': images, 'label':labels})
hf_dataset.set_format('torch', dtype=torch.uint8)
hf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)
# measure access speeds
def time_access(dataset, img_col):
start_time = time.time()
for i in range(1000):
_ = dataset[i][img_col].shape
end_time = time.time()
return end_time - start_time
print(f"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds")
print(f"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds")
print(f"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds")
```
### Expected behavior
For me, the above script produces
```
In-memory Tensor access: 0.0025 seconds
HF Dataset access: 2.9317 seconds
In-memory HF Dataset access: 2.8082 seconds
```
I think that this difference is larger than expected.
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-14.7.7-arm64-arm-64bit
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7753/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7753/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7751
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7751/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7751/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7751/events
|
https://github.com/huggingface/datasets/issues/7751
| 3,358,369,976
|
I_kwDODunzps7ILKi4
| 7,751
|
Dill version update
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4",
"events_url": "https://api.github.com/users/Navanit-git/events{/privacy}",
"followers_url": "https://api.github.com/users/Navanit-git/followers",
"following_url": "https://api.github.com/users/Navanit-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Navanit-git",
"id": 98005188,
"login": "Navanit-git",
"node_id": "U_kgDOBddwxA",
"organizations_url": "https://api.github.com/users/Navanit-git/orgs",
"received_events_url": "https://api.github.com/users/Navanit-git/received_events",
"repos_url": "https://api.github.com/users/Navanit-git/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Navanit-git",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"#7752 ",
"related: #7510 "
] |
2025-08-27T07:38:30Z
|
2025-09-10T14:24:02Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Why is `datasets` not updating its `dill` pin?
I just want to know what the repercussions would be if the pinned `dill` version were raised.
Right now I have to override the pin in multiple places because other packages require `dill` 0.4.0, so why not update it in `datasets`?
I'm adding a PR as well.
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7751/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7751/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7746
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7746/events
|
https://github.com/huggingface/datasets/issues/7746
| 3,345,391,211
|
I_kwDODunzps7HZp5r
| 7,746
|
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Awesome075",
"id": 187888489,
"login": "Awesome075",
"node_id": "U_kgDOCzLzaQ",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Awesome075",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊"
] |
2025-08-22T12:52:03Z
|
2025-08-27T20:23:35Z
| null |
NONE
| null | null | null | null |
Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
Could the maintainers either update the official `multi_news` dataset to use this working Parquet version, or guide me through doing so? This would involve updating the canonical pointer for `multi_news` to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you!
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7745
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7745/events
|
https://github.com/huggingface/datasets/issues/7745
| 3,345,286,773
|
I_kwDODunzps7HZQZ1
| 7,745
|
Audio mono argument no longer supported, despite class documentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jheitz",
"id": 5666041,
"login": "jheitz",
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"repos_url": "https://api.github.com/users/jheitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jheitz",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?"
] |
2025-08-22T12:15:41Z
|
2025-08-24T18:22:41Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Either update the documentation, or re-introduce the `mono` flag (and the corresponding logic to convert the audio to mono).
### Steps to reproduce the bug
Audio(sampling_rate=16000, mono=True) raises the error
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
However, the class documentation says:
Args:
sampling_rate (`int`, *optional*):
Target sampling rate. If `None`, the native sampling rate is used.
mono (`bool`, defaults to `True`):
Whether to convert the audio signal to mono by averaging samples across
channels.
[...]
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
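As a stopgap, downmixing can be done manually after decoding. A sketch assuming the common `(channels, num_samples)` array layout (`to_mono` is an illustrative helper, not part of the `datasets` API):

```python
import numpy as np

def to_mono(samples: np.ndarray) -> np.ndarray:
    """Average a (channels, num_samples) signal down to a single channel."""
    if samples.ndim == 2:
        return samples.mean(axis=0)
    return samples  # already mono

stereo = np.stack([np.zeros(4), np.ones(4)])  # fake 2-channel signal
mono = to_mono(stereo)
```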
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7744/events
|
https://github.com/huggingface/datasets/issues/7744
| 3,343,510,686
|
I_kwDODunzps7HSeye
| 7,744
|
dtype: ClassLabel is not parsed correctly in `features.py`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I think it's \"class_label\"",
"> I think it's \"class_label\"\n\nI see -- thank you. This works\n\n```yaml\nlicense: mit\nlanguage:\n- en\ntags:\n- genomics\n- yeast\n- transcription\n- perturbation\n- response\n- overexpression\npretty_name: Hackett, 2020 Overexpression\nsize_categories:\n- 1M<n<10M\ndataset_info:\n features:\n ...\n - name: mechanism\n dtype:\n class_label:\n names: [\"GEV\", \"ZEV\"]\n description: induction system (GEV or ZEV)\n - name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n description: nutrient limitation (M, N or P)\n```\n\nI see the documentation for [datasets.ClassLabel](https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.ClassLabel). And the documentation for the [dataset cards](https://huggingface.co/docs/hub/en/datasets-cards). I don't see anything in either of those places, though, that specifies the pattern above.\n\nI suppose rather than writing the yaml by hand, the expected workflow is to use `datasets` to construct these features?",
"I generally copy/paste and adapt a YAML from another dataset.\n\nBut it's also possible to generate it from `datasets` like that\n\n```python\n>>> import yaml\n>>> print(yaml.dump(features._to_yaml_list(), sort_keys=False))\n- name: start\n dtype: int32\n- name: end\n dtype: int32\n- name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n```"
] |
2025-08-21T23:28:50Z
|
2025-09-10T15:23:41Z
|
2025-09-10T15:23:41Z
|
NONE
| null | null | null | null |
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This YAML in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I have since changed `ClassLabel` to `string` there to use a different dtype and avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
```
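A hypothetical fix (my sketch, not the library's actual code) would be to skip the snake-to-camel conversion for names that are already registered feature types; `KNOWN_TYPES` below is a made-up stand-in for `_FEATURE_TYPES`:

```python
# Sketch of a possible fix: leave already-valid feature-type names untouched
# and only camel-case genuine snake_case names.
KNOWN_TYPES = {"Value", "ClassLabel", "List", "Image"}  # stand-in for _FEATURE_TYPES

def normalize_type(name: str) -> str:
    if name in KNOWN_TYPES:  # already a registered type: return as-is
        return name
    # naive snake_case -> CamelCase, mirroring snakecase_to_camelcase above
    return "".join(part.capitalize() for part in name.split("_") if part)
```

With this guard, `normalize_type("ClassLabel")` stays `ClassLabel`, while `class_label` still maps to `ClassLabel`.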
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7742
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7742/events
|
https://github.com/huggingface/datasets/issues/7742
| 3,336,704,928
|
I_kwDODunzps7G4hOg
| 7,742
|
module 'pyarrow' has no attribute 'PyExtensionType'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mnedelko",
"id": 6106392,
"login": "mnedelko",
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mnedelko",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Just checked out the files and this had already been addressed",
"For others who find this issue: \n\n`pip install --upgrade \"datasets>=2.20.0\"` \n\nfrom https://github.com/explodinggradients/ragas/issues/2170#issuecomment-3204393672 can fix it."
] |
2025-08-20T06:14:33Z
|
2025-09-09T02:51:46Z
| null |
NONE
| null | null | null | null |
### Describe the bug
When importing certain libraries, users encounter the following error, which traces back to the datasets library:
`module 'pyarrow' has no attribute 'PyExtensionType'`
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will proceed to submit a PR with the below fix:
**Issue Reason**
The issue is that PyArrow 21.0.0 no longer provides `PyExtensionType`: the class was deprecated in PyArrow 13.0.0 and removed in later releases, leaving `ExtensionType` as the supported base class.
**Issue Solution**
Making the following changes to the installed library files should temporarily resolve the issue.
I will submit a PR to the datasets library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
> 521 self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default="Array5D", init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
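The two patches above boil down to picking a base class by PyArrow version. A sketch of that pattern (the simple namespaces below only simulate old and new PyArrow modules; this is not the actual `datasets` code):

```python
import types

def extension_base(pa_module):
    """Return PyExtensionType if this PyArrow still ships it, else ExtensionType."""
    return getattr(pa_module, "PyExtensionType", pa_module.ExtensionType)

# Simulated modules: older PyArrow exposes both names, newer only ExtensionType.
old_pa = types.SimpleNamespace(
    PyExtensionType=type("PyExtensionType", (), {}),
    ExtensionType=type("ExtensionType", (), {}),
)
new_pa = types.SimpleNamespace(ExtensionType=type("ExtensionType", (), {}))
```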
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
```python
!pip install ragas
from ragas import evaluate
```
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7741/events
|
https://github.com/huggingface/datasets/issues/7741
| 3,334,848,656
|
I_kwDODunzps7GxcCQ
| 7,741
|
Preserve tree structure when loading HDF5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[] |
2025-08-19T15:42:05Z
|
2025-08-26T15:28:06Z
|
2025-08-26T15:28:06Z
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
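To illustrate what preserving the tree would mean, flattened HDF5-style keys can be rebuilt into nested features; this is a toy sketch under assumed `/`-separated keys, not the implementation in #7743:

```python
def unflatten(flat: dict, sep: str = "/") -> dict:
    """Rebuild a nested dict from flattened group/dataset keys."""
    tree: dict = {}
    for key, value in flat.items():
        node = tree
        *parents, leaf = key.split(sep)
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return tree
```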
### Your contribution
I'll open a PR (#7743)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7739
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7739/events
|
https://github.com/huggingface/datasets/issues/7739
| 3,331,537,762
|
I_kwDODunzps7Gkzti
| 7,739
|
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/evmaki",
"id": 15764776,
"login": "evmaki",
"node_id": "MDQ6VXNlcjE1NzY0Nzc2",
"organizations_url": "https://api.github.com/users/evmaki/orgs",
"received_events_url": "https://api.github.com/users/evmaki/received_events",
"repos_url": "https://api.github.com/users/evmaki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evmaki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/evmaki",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Backward compatibility here means 4.0.0 can load datasets saved with older versions.\n\nYou will need 4.0.0 to load datasets saved with 4.0.0"
] |
2025-08-18T17:28:38Z
|
2025-09-10T14:17:50Z
| null |
NONE
| null | null | null | null |
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7738
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7738/events
|
https://github.com/huggingface/datasets/issues/7738
| 3,328,948,690
|
I_kwDODunzps7Ga7nS
| 7,738
|
Allow saving multi-dimensional ndarray with dynamic shapes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-minato",
"id": 82735346,
"login": "ryan-minato",
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-minato",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalExtensions.html#variable-shape-tensor) this so it's a good time to re-evaluate!",
"Happy to help with this, maybe we can think of adding a new type `Tensor` (instead of Array2D, 3D etc. which imply a fixed number of dims - we can keep them for backward compat anyways) that uses VariableShapeTensor (or FixedShapeTensor if the shape is provided maybe ? happy to discuss this)"
] |
2025-08-18T02:23:51Z
|
2025-08-26T15:25:02Z
| null |
NONE
| null | null | null | null |
### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data where the dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example,
```python
{
"shape": (5, 224, 224),
"dtype": "uint8",
"data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
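A minimal round-trip sketch of that structured format, assuming NumPy arrays as input (`pack`/`unpack` are illustrative names, not an existing `datasets` API):

```python
import numpy as np

def pack(arr: np.ndarray) -> dict:
    """Store a variable-shape array as shape + dtype + flat data."""
    return {"shape": list(arr.shape), "dtype": str(arr.dtype), "data": arr.ravel().tolist()}

def unpack(rec: dict) -> np.ndarray:
    """Rebuild the original array from the stored record."""
    return np.asarray(rec["data"], dtype=rec["dtype"]).reshape(rec["shape"])
```

Rows in one column could then carry different shapes, e.g. `(5, 224, 224)` for one sample and `(5, 128, 128)` for the next.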
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based `Image` feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7733
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7733/events
|
https://github.com/huggingface/datasets/issues/7733
| 3,304,979,299
|
I_kwDODunzps7E_ftj
| 7,733
|
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dennys246",
"id": 27898715,
"login": "dennys246",
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"repos_url": "https://api.github.com/users/dennys246/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dennys246",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"These are the download issues I run into; about every other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />"
] |
2025-08-08T19:10:58Z
|
2025-08-12T00:54:58Z
| null |
NONE
| null | null | null | null |
### Describe the bug
I'm not sure whether this is a bug or intended behavior, and I may not fully understand how dataset loading is supposed to work, but there appears to be a problem with how locally stored `Image()` columns are resolved. I've uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack) but have run into a lot of trouble getting the images handled properly (at least in the way I'd expect).
I find that I cannot use relative paths to load images, either from the Hugging Face repo or from a local copy: the loader simply prepends my current working directory to each relative path. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset folder or abandon the dataset object structure, which I can't imagine is intended. So I fall back to URLs, since an absolute path on my system obviously wouldn't work for others. The URLs mostly work, but even though I have the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and I often hit HTTPS errors for over-requesting the data).
Or maybe relative image paths aren't intended to be loaded directly through the datasets library as images, and should be kept as strings for the user to handle? If so, I feel like you're missing out on some pretty seamless functionality.
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
  - name: image
    dtype: Image
  - name: file_path
    dtype: Image
---
```
3. Initialize the dataset locally, make sure your working directory is not the dataset directory root
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Call up one of the samples and you'll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that it simply looks in the current working directory + relative path:
```
>>> dataset['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
image = PIL.Image.open(path)
^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect `datasets` and `Image()` to load the locally hosted data using path/to/local/rocky_mountain_snowpack/ (the root I pass to `datasets.load_dataset()`, or that you resolve on the backend) + the relative path.
Instead it appears to load from my current working directory + relative path.
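A possible workaround sketch (my assumption, not a documented contract): resolve each stored relative path against the dataset root before casting the column back to `Image()`. The helper below covers just the path logic; `DATASET_ROOT` is a hypothetical clone location:

```python
import os

DATASET_ROOT = "/path/to/local/rocky_mountain_snowpack"  # hypothetical clone location

def resolve(root: str, rel_path: str) -> str:
    """Anchor a stored relative image path at the dataset root instead of the CWD."""
    return os.path.normpath(os.path.join(root, rel_path))
```

Mapping `resolve` over the string column and then calling `cast_column("file_path", Image())` would sidestep the CWD lookup, at the cost of handling paths manually.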
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7732
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7732/events
|
https://github.com/huggingface/datasets/issues/7732
| 3,304,673,383
|
I_kwDODunzps7E-VBn
| 7,732
|
webdataset: key errors when `field_name` has upper case characters
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YassineYousfi",
"id": 29985433,
"login": "YassineYousfi",
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YassineYousfi",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-08-08T16:56:42Z
|
2025-08-08T16:56:42Z
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When using a webdataset, each sample can be a collection of different "fields", like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
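For reference, under webdataset conventions the part of the basename after the first dot is the field name; the split can be sketched like this (my reading of the convention, not the loader's actual code). A lowercasing step applied to field names in one place but not the other would explain the `KeyError` below:

```python
def split_key(path: str) -> tuple:
    """Split a tar member path into (sample key, field name), webdataset-style."""
    base = path.rsplit("/", 1)[-1]
    key, field = base.split(".", 1)
    return key, field
```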
if the field_name contains upper case characters, the HF webdataset integration throws a key error when trying to load the dataset:
e.g. from a dataset (since updated so that it no longer throws this error):
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
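The mismatch can be reproduced in isolation. A minimal pure-Python sketch (not the actual implementation in `datasets`): fields are stored under lowercased keys, but the decoder step looks them up with the original mixed-case field name, so any uppercase character triggers the KeyError:

```python
# Minimal sketch of the case-sensitivity mismatch (hypothetical, mirroring the
# grouping/decoding steps in the webdataset integration, not its real code).
DECODERS = {"npy": lambda raw: raw}  # stand-in decoder


def collect_example(tar_members):
    example = {}
    field_names = []
    for name, payload in tar_members:
        field_name = name.split("/")[-1].split(".", 1)[-1]
        field_names.append(field_name)
        example[field_name.lower()] = payload  # stored lowercased
    for field_name in field_names:
        ext = field_name.split(".")[-1]
        if ext in DECODERS:
            # looked up with the original case -> KeyError for mixed-case names
            example[field_name] = DECODERS[ext](example[field_name])
    return example


members = [("images17/image194.IMU_value.npy", b"\x00")]
try:
    collect_example(members)
except KeyError as e:
    print("KeyError:", e)
```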
### Steps to reproduce the bug
A unit test was added in https://github.com/huggingface/datasets/pull/7726;
it fails without the fix proposed in the same PR.
### Expected behavior
Loading should succeed without raising a KeyError.
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7731
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7731/events
|
https://github.com/huggingface/datasets/issues/7731
| 3,303,637,075
|
I_kwDODunzps7E6YBT
| 7,731
|
Add the possibility of a backend for audio decoding
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/intexcor",
"id": 142020129,
"login": "intexcor",
"node_id": "U_kgDOCHcOIQ",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"repos_url": "https://api.github.com/users/intexcor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/intexcor",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Is there a workaround? I'm stuck.",
"Never mind, I just downgraded."
] |
2025-08-08T11:08:56Z
|
2025-08-20T16:29:33Z
| null |
NONE
| null | null | null | null |
### Feature request
Add the possibility of selecting a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but torchcodec requires ffmpeg, which is problematic to install in environments such as Google Colab. Therefore, I suggest adding a decoder selection option when loading the dataset.
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed.
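A pluggable backend could look like the following registry sketch. This is a hypothetical API, not part of `datasets` today: decoders register under a name, and the caller picks one at load time (e.g. `"soundfile"` where ffmpeg is unavailable):

```python
# Hypothetical sketch of a pluggable audio-decoding backend registry.
_AUDIO_BACKENDS = {}


def register_audio_backend(name):
    def decorator(fn):
        _AUDIO_BACKENDS[name] = fn
        return fn
    return decorator


@register_audio_backend("soundfile")
def _decode_with_soundfile(path):
    # a real implementation would call soundfile.read(path)
    return ("soundfile", path)


@register_audio_backend("torchcodec")
def _decode_with_torchcodec(path):
    # a real implementation would call torchcodec's audio decoder
    return ("torchcodec", path)


def decode_audio(path, backend="torchcodec"):
    if backend not in _AUDIO_BACKENDS:
        raise ValueError(f"Unknown audio backend: {backend}")
    return _AUDIO_BACKENDS[backend](path)
```

A `load_dataset(..., audio_backend="soundfile")` parameter (also hypothetical) could route decoding through such a registry.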
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7729/events
|
https://github.com/huggingface/datasets/issues/7729
| 3,300,672,954
|
I_kwDODunzps7EvEW6
| 7,729
|
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SaleemMalikAI",
"id": 115183904,
"login": "SaleemMalikAI",
"node_id": "U_kgDOBt2RIA",
"organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs",
"received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events",
"repos_url": "https://api.github.com/users/SaleemMalikAI/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SaleemMalikAI",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-08-07T14:07:23Z
|
2025-08-07T14:07:23Z
| null |
NONE
| null | null | null | null |
> Hi, is there any solution for that error? I tried installing this one:
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but how do I install a PyTorch version that supports GPU?
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7728/events
|
https://github.com/huggingface/datasets/issues/7728
| 3,298,854,904
|
I_kwDODunzps7EoIf4
| 7,728
|
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/efsotr",
"id": 104755879,
"login": "efsotr",
"node_id": "U_kgDOBj5ypw",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"repos_url": "https://api.github.com/users/efsotr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/efsotr",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-08-07T04:04:50Z
|
2025-08-07T07:31:47Z
| null |
NONE
| null | null | null | null |
### Describe the bug
When loading a dataset, the info specified by `data_files` does not override the original split info.
### Steps to reproduce the bug
```python
from datasets import load_dataset

traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={
        "train": "en/c4-train.00000-of-01024.json.gz",
        "validation": "en/c4-validation.00000-of-00008.json.gz",
    },
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset

traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
### Expected behavior
No error
### Environment info
datasets 4.0.0
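The failing check can be sketched in pure Python: the expected counts come from the full `en` config's metadata, while the recorded counts come from the subset selected via `data_files`, so they can never match (example counts below are taken from the error message above):

```python
# Sketch of the size comparison behind NonMatchingSplitsSizesError.
expected = {"train": 364868892, "validation": 364608}  # full "en" config metadata
recorded = {"train": 356317, "validation": 45576}      # one shard of each split
mismatched = [split for split in expected if expected[split] != recorded[split]]
print(mismatched)
```

As a workaround, passing `verification_mode="no_checks"` to `load_dataset` skips these verifications, though ideally the expected split info would be rescoped whenever `data_files` is passed.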
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7727/events
|
https://github.com/huggingface/datasets/issues/7727
| 3,295,718,578
|
I_kwDODunzps7EcKyy
| 7,727
|
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/doctorpangloss",
"id": 2229300,
"login": "doctorpangloss",
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/doctorpangloss",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-08-06T08:21:37Z
|
2025-08-06T08:21:37Z
| null |
NONE
| null | null | null | null |
### Describe the bug
```yaml
- config_name: some_config
  data_files:
  - split: train
    path:
    - images/xyz/*.jpg
```
will correctly download but
```yaml
- config_name: some_config
  data_files:
  - split: train
    path:
    - ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```yaml
- config_name: some_config
  data_files:
  - split: train
    path:
    - ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
### Environment info
Reproduced on datasets 4.0.0 and 3.4.0.
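The root cause can be illustrated with stdlib path functions alone (a sketch of the join-vs-normalization difference; the exact joining code in `datasets` may differ): a naive join keeps the `./` inside the remote URL, while local loading normalizes it away before touching the filesystem:

```python
import posixpath

# Remote case: the "./" survives inside the hf:// URL after a naive join.
remote = posixpath.join("hf://datasets/repoid/filesystem_path", "./images/xyz/*.jpg")

# Local case: normpath removes the "./" before the glob hits the filesystem.
local = posixpath.normpath(posixpath.join("filesystem_path", "./images/xyz/*.jpg"))

print(remote)
print(local)
```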
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7724/events
|
https://github.com/huggingface/datasets/issues/7724
| 3,292,315,241
|
I_kwDODunzps7EPL5p
| 7,724
|
Cannot step into load_dataset.py?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/micklexqg",
"id": 13776012,
"login": "micklexqg",
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/micklexqg",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-08-05T09:28:51Z
|
2025-08-05T09:28:51Z
| null |
NONE
| null | null | null | null |
I set a breakpoint in `load_dataset.py` and tried to debug my data-loading code, but execution never stops at any breakpoint. Can `load_dataset.py` not be stepped into?
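One common cause is setting the breakpoint in a source checkout while Python imports an installed copy of the package. A stdlib-only check (a sketch, assuming nothing about your setup) prints the file Python would actually import, which is where breakpoints must go:

```python
import importlib.util


def module_origin(name):
    # Return the path of the file Python would actually import, or None.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None


# Set breakpoints in files under this directory, not in a separate checkout.
print(module_origin("datasets"))
```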
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7723/events
|
https://github.com/huggingface/datasets/issues/7723
| 3,289,943,261
|
I_kwDODunzps7EGIzd
| 7,723
|
Don't remove `trust_remote_code` arg!!!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/autosquid",
"id": 758925,
"login": "autosquid",
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"repos_url": "https://api.github.com/users/autosquid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/autosquid",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] |
2025-08-04T15:42:07Z
|
2025-08-04T15:42:07Z
| null |
NONE
| null | null | null | null |
### Feature request
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios.
Add the `trust_remote_code` arg back, please!
### Motivation
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios.
### Your contribution
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7722
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7722/events
|
https://github.com/huggingface/datasets/issues/7722
| 3,289,741,064
|
I_kwDODunzps7EFXcI
| 7,722
|
Out of memory even though using load_dataset(..., streaming=True)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-08-04T14:41:55Z
|
2025-08-04T14:41:55Z
| null |
NONE
| null | null | null | null |
### Describe the bug
I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```python
import os

import soundfile as sf
from tqdm import tqdm
from datasets import load_dataset

ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
    try:
        sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
    except Exception as e:
        print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect a small memory footprint, with memory being freed after each iteration of the for loop. Instead, memory usage keeps increasing. I tried removing the logic that writes the sound file and just printing the sample, but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0
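To narrow down where the growth comes from, a stdlib-only diagnostic can be dropped into the loop (a sketch; the sampling interval and the stand-in workload are arbitrary):

```python
import tracemalloc

tracemalloc.start()


def report_memory(step, every=1000):
    # Print current/peak traced Python allocations every `every` iterations.
    if step % every == 0:
        current, peak = tracemalloc.get_traced_memory()
        print(f"step {step}: current={current / 1e6:.1f}MB peak={peak / 1e6:.1f}MB")


for i in range(3000):
    _ = [0] * 100  # stand-in for processing one streamed sample
    report_memory(i)
```

If `current` keeps climbing across reports while the loop body holds no references, the leak is inside the iteration machinery rather than in user code.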
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7721/events
|
https://github.com/huggingface/datasets/issues/7721
| 3,289,426,104
|
I_kwDODunzps7EEKi4
| 7,721
|
Bad split error message when using percentages
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I'd like to work on this: add clearer validation/messages for percent-based splits + tests",
"The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\nValueError: Unknown split \"train\". Should be one of ['test.clean', 'test.other', 'train.clean.100', 'train.clean.360', 'train.other.500', 'validation.clean', 'validation.other'].\n```\n"
] |
2025-08-04T13:20:25Z
|
2025-08-14T14:42:24Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
Edit: The same happens with a split like _train[:90000]_.
### Steps to reproduce the bug
```python
from datasets import load_dataset

for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
python 3.12.11
ubuntu 24
dataset 4.0.0
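Until percent slices work with streaming, the window can be emulated with skip/take arithmetic over a stream of known length (a stdlib sketch of the same bookkeeping; on a real `IterableDataset`, `.skip()` and `.take()` would play the role of `islice`):

```python
from itertools import islice


def percent_window(stream, total, lo_pct, hi_pct):
    # Emulate split="train[lo%:hi%]" over a streaming iterator of known length.
    start = total * lo_pct // 100
    stop = total * hi_pct // 100
    return islice(stream, start, stop)


# Ten non-overlapping 10% windows over a 100-example stream.
windows = [
    list(percent_window(iter(range(100)), 100, k * 10, (k + 1) * 10))
    for k in range(10)
]
```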
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7720
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7720/events
|
https://github.com/huggingface/datasets/issues/7720
| 3,287,150,513
|
I_kwDODunzps7D7e-x
| 7,720
|
Datasets 4.0 map function causing column not found
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The maintainers might want to consider closing this issue.",
"Hi, have you tried on a large dataset (200GB+) perhaps? I will try my best to do a rerun with main branch when I have the time.",
"I ran it on a small dataset, maybe that’s why I didn’t hit the issue. If it still shows up on your side with the latest main, let me know. I can try it on a bigger set too."
] |
2025-08-03T12:52:34Z
|
2025-08-07T19:23:34Z
| null |
NONE
| null | null | null | null |
### Describe the bug
A column returned after mapping is not found in the new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction. After running `get_total_audio_length`, it errors out because `data` does not have a `duration` column.
```python
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
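For reference, the contract `map` is expected to satisfy can be emulated with plain dicts, which makes the expected behavior unambiguous (the returned dict is merged into each row, adding the new column):

```python
# Pure-Python emulation of the map contract on a one-second dummy sample.
rows = [{"audio": {"array": [0.0] * 16000, "sampling_rate": 16000}}]


def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}


# Each mapped row keeps its old fields and gains the returned "duration" field.
mapped = [{**row, **compute_duration(row)} for row in rows]
total_seconds = sum(row["duration"] for row in mapped)
```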
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7719
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7719/events
|
https://github.com/huggingface/datasets/issues/7719
| 3,285,928,491
|
I_kwDODunzps7D20or
| 7,719
|
Specify dataset columns types in typehint
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Samoed",
"id": 36135455,
"login": "Samoed",
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"repos_url": "https://api.github.com/users/Samoed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Samoed",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] |
2025-08-02T13:22:31Z
|
2025-08-02T13:22:31Z
| null |
NONE
| null | null | null | null |
### Feature request
Make `Dataset` optionally generic so dataset usage can carry type annotations, like it was done for `torch.utils.data.DataLoader` https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of dataset objects, but they are rather poor in type hints. E.g. we can specify this for a dataloader:
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
title: list[str]
body: list[str]
class QueryInput(TypedDict):
query: list[str]
instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
...
def corpus_loader() -> DataLoader[CorpusInput]:
...
```
But for datasets we can only specify the expected column types in comments:
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
### Your contribution
I can create draft implementation
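A minimal pure-Python sketch of what such a generic annotation could look like. The `TypedDataset` wrapper below is hypothetical — `datasets.Dataset` is not generic today; a real implementation would subscript `Dataset` itself, as `DataLoader` does:

```python
from typing import Generic, TypedDict, TypeVar

class QueryInput(TypedDict):
    query: str
    instruction: str

T = TypeVar("T")

class TypedDataset(Generic[T]):
    # Hypothetical wrapper, only to illustrate the typing ergonomics;
    # not part of the datasets library.
    def __init__(self, rows: list[T]) -> None:
        self._rows = rows

    def __getitem__(self, index: int) -> T:
        return self._rows[index]

def queries_loader() -> "TypedDataset[QueryInput]":
    return TypedDataset([{"query": "what is mteb?", "instruction": "answer briefly"}])

ds = queries_loader()
print(ds[0]["query"])  # type checkers now know the row shape
```

With this shape, a type checker can flag `ds[0]["body"]` as an unknown key instead of silently accepting it.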
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7719/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7717
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7717/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7717/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7717/events
|
https://github.com/huggingface/datasets/issues/7717
| 3,282,855,127
|
I_kwDODunzps7DrGTX
| 7,717
|
Cached dataset is not used when explicitly passing the cache_dir parameter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` triggers a full re-download and re-processing of the dataset, ignoring the existing cache.\n\n**2. Investigation:**\nI traced the `cache_dir` parameter from `load_dataset` down to the `DatasetBuilder` class in `src/datasets/builder.py`. The root cause seems to be a mismatch between the cache path structure created by `snapshot_download` and the path structure expected by the `DatasetBuilder`.\n\nSpecifically, the `_relative_data_dir` method in `DatasetBuilder` constructs a path using `namespace___dataset_name` (with three underscores), while the cache from `snapshot_download` appears to use a `repo_id` based format like `datasets--namespace--dataset_name` (with double hyphens).\n\n**3. Attempted Fix & Result:**\nI attempted a fix by modifying the `_relative_data_dir` method to replace the path separator \"/\" in `self.repo_id` with \"--\", to align it with the `snapshot_download` structure.\n\nThis partially worked: `load_dataset` no longer re-downloads the files. However, it still re-processes them every time (triggering \"Generating train split...\", etc.) instead of loading the already processed Arrow files from the cache.\n\nThis suggests the issue is deeper than just the directory name and might be related to how the builder verifies the integrity or presence of the processed cache files.\n\nI hope these findings are helpful for whoever picks up this issue."
] |
2025-08-01T07:12:41Z
|
2025-08-05T19:19:36Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset(), the cached snapshot is not used, even though I provide the same cache_dir parameter in both calls.
### Steps to reproduce the bug
```
from datasets import load_dataset, concatenate_datasets
from huggingface_hub import snapshot_download
def download_ds(name: str):
snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache")
def prepare_ds():
audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache")
print(audio_ds.features)
if __name__ == '__main__':
download_ds("openslr/librispeech_asr")
prepare_ds()
```
### Expected behavior
I'd expect that the cached version of the dataset is used. Instead, the same dataset is downloaded again to the default cache directory.
### Environment info
Windows 11
datasets==4.0.0
Python 3.12.11
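The investigation comment above attributes the bug to two different cache-directory naming schemes. A dependency-free sketch of that mismatch (both helpers are illustrative, not datasets APIs; the layouts are taken from the comment's findings):

```python
def hub_cache_dirname(repo_id: str) -> str:
    # huggingface_hub / snapshot_download layout: datasets--<namespace>--<name>
    return "datasets--" + repo_id.replace("/", "--")

def builder_relative_dir(repo_id: str) -> str:
    # DatasetBuilder._relative_data_dir layout: <namespace>___<name>
    return repo_id.replace("/", "___")

repo = "openslr/librispeech_asr"
print(hub_cache_dirname(repo))     # datasets--openslr--librispeech_asr
print(builder_relative_dir(repo))  # openslr___librispeech_asr
```

Because the two functions never produce the same path, the builder cannot find the snapshot that `snapshot_download` wrote.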
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7717/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7709
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7709/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7709/events
|
https://github.com/huggingface/datasets/issues/7709
| 3,276,677,990
|
I_kwDODunzps7DTiNm
| 7,709
|
Release 4.0.0 breaks usage patterns of with_format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wittenator",
"id": 9154515,
"login": "wittenator",
"node_id": "MDQ6VXNlcjkxNTQ1MTU=",
"organizations_url": "https://api.github.com/users/wittenator/orgs",
"received_events_url": "https://api.github.com/users/wittenator/received_events",
"repos_url": "https://api.github.com/users/wittenator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wittenator",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"numpy\")\nprint(dataset[\"star\"][:].ndim)\n```",
"Ah perfect, thanks for clearing this up. I would close this ticket then."
] |
2025-07-30T11:34:53Z
|
2025-08-07T08:27:18Z
|
2025-08-07T08:27:18Z
|
NONE
| null | null | null | null |
### Describe the bug
Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I can see, this makes working on a whole column (in-memory) more complex, e.g. when normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new Column class yet.
### Steps to reproduce the bug
Steps to reproduce:
```
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```
### Expected behavior
Working on whole columns should be possible.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
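As the maintainer's reply above describes, in `datasets` 4.x a formatted column is a lazy `Column` object and must be indexed or sliced to materialize values. A dependency-free sketch of that access pattern (this `Column` class is a stand-in for illustration, not the real implementation):

```python
class Column:
    # Stand-in mimicking the 4.x access pattern: the object is lazy,
    # and indexing/slicing is what materializes values.
    def __init__(self, values):
        self._values = list(values)

    def __getitem__(self, key):
        return self._values[key]

col = Column([4.5, 3.0, 5.0])
first = col[0]     # single value
window = col[0:2]  # partial slice
full = col[:]      # whole column materialized, as in dataset["star"][:]
print(full)
```

So code that previously called `dataset["star"].ndim` now needs `dataset["star"][:]` first to obtain the full array.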
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wittenator",
"id": 9154515,
"login": "wittenator",
"node_id": "MDQ6VXNlcjkxNTQ1MTU=",
"organizations_url": "https://api.github.com/users/wittenator/orgs",
"received_events_url": "https://api.github.com/users/wittenator/received_events",
"repos_url": "https://api.github.com/users/wittenator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wittenator",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7709/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7707
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7707/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7707/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7707/events
|
https://github.com/huggingface/datasets/issues/7707
| 3,271,867,998
|
I_kwDODunzps7DBL5e
| 7,707
|
load_dataset() in 4.0.0 failed when decoding audio
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.",
"Use !pip install -U datasets[audio] rather than !pip install datasets\n\nI got the solution from this link [https://github.com/huggingface/datasets/issues/7678](https://github.com/huggingface/datasets/issues/7678), and it processes the data; however, it led to certain transformer importnerrors",
"> https://github.com/huggingface/datasets/issues/7678\n\nHi @asantewaa-bremang . Thanks for your reply, but sadly it does not work for me.",
"It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n\notherwise feel free to open a new issue there",
"@jiqing-feng, are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio]. ",
"> [@jiqing-feng](https://github.com/jiqing-feng), are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio].\n\nNo, I ran the script on the A100 instance locally.",
"> It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n> \n> otherwise feel free to open a new issue there\n\nThanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?",
"> Thanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?\n\nFor now I'd recommend using `datasets==3.6.0` if this issue is blocking for you",
"Resolved by installing the pre-release torchcodec. Thanks!",
"Same. torchcodec==0.6.0 failed, torchcodec==0.5.0 solved",
"So what combination of 'datasets' and 'torchcodec' worked out?",
"> So what combination of 'datasets' and 'torchcodec' worked out?\n\nnice mate! \njust about to write this massage!!!!!\n\n\n\nwhen this will solve????\n",
"torchcodec 0.7 fails\n0.5 not guaranty to work with torch 2.8\n\n",
"> Resolved by installing the pre-release torchcodec. Thanks!\n\nhow to install the pre-release torchcodec, when I use pip install --pre torchcodec, it do not download new version"
] |
2025-07-29T03:25:03Z
|
2025-09-15T16:17:06Z
|
2025-08-01T05:15:45Z
|
NONE
| null | null | null | null |
### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
1st round run, got
```
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example
raise ImportError("To support decoding audio data, please install 'torchcodec'.")
ImportError: To support decoding audio data, please install 'torchcodec'.
```
After `pip install torchcodec` and run, got
```
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module>
from torchcodec._core.ops import (
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module>
load_torchcodec_shared_libraries()
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries
raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
1. FFmpeg is not properly installed in your environment. We support
versions 4, 5, 6 and 7.
2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with
this version of TorchCodec. Refer to the version compatibility
table:
https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
3. Another runtime dependency; see exceptions below.
The following exceptions were raised as we tried to load libtorchcodec:
[start of libtorchcodec loading traceback]
FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory
FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory
FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory
FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory
[end of libtorchcodec loading traceback].
```
After `apt update && apt install ffmpeg -y`, got
```
Traceback (most recent call last):
File "/workspace/jiqing/test_datasets.py", line 4, in <module>
print(dataset[0]["audio"]["array"])
~~~~~~~^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
self._decoder = create_decoder(source=source, seek_mode="approximate")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
return core.create_from_bytes(source, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
return create_from_tensor(buffer, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```
### Expected behavior
The result is
```
[0.00238037 0.0020752 0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```
on `datasets==3.6.0`
### Environment info
[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`
```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
```
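Since the failure only surfaces lazily at decode time, one defensive pattern is to probe the optional backend up front so the error is explicit before any rows are accessed. This is a sketch, not a fix for the torchcodec/FFmpeg incompatibility itself:

```python
import importlib.util

def require_audio_backend() -> None:
    # datasets>=4.0 decodes audio through torchcodec; check for it before
    # touching dataset[i]["audio"] so the failure mode is a clear message.
    if importlib.util.find_spec("torchcodec") is None:
        raise ImportError(
            "torchcodec is not importable; install a build matching your "
            "torch/FFmpeg versions, or pin datasets==3.6.0 as a fallback"
        )

try:
    require_audio_backend()
except ImportError as err:
    print(err)
</antml;>```

Note this only catches a missing install; a present-but-broken build (as in the traceback above) still fails when the shared libraries load.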
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7707/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7705
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7705/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7705/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7705/events
|
https://github.com/huggingface/datasets/issues/7705
| 3,269,070,499
|
I_kwDODunzps7C2g6j
| 7,705
|
Can Not read installed dataset in dataset.load(.)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
"events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangChiEn/followers",
"following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangChiEn",
"id": 52521165,
"login": "HuangChiEn",
"node_id": "MDQ6VXNlcjUyNTIxMTY1",
"organizations_url": "https://api.github.com/users/HuangChiEn/orgs",
"received_events_url": "https://api.github.com/users/HuangChiEn/received_events",
"repos_url": "https://api.github.com/users/HuangChiEn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangChiEn",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```",
"> You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n> \n> dataset = load_dataset(local_directory_path)\n\nIt's good suggestion, but my server env is network restriction. It can not directly fetch data from huggingface. I spent lot of time to download and transfer it to the server.\nSo, I attempt to make load_dataset connect to my local dataset. ",
"Just Solved it few day before. Will post solution later...\nalso thanks folks quick reply.."
] |
2025-07-28T09:43:54Z
|
2025-08-05T01:24:32Z
| null |
NONE
| null | null | null | null |
Hi folks, I'm a newbie to the Hugging Face datasets API.
As the title says, I'm facing an issue where the `load_dataset` API cannot find the already-installed dataset.
code snippet :
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
data path :
"/xxx/joseph/llava_ds/vlm_ds"
It contains all the video clips I want!
<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />
i run the py script by
<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />
But something bad happened: even though I provide the dataset path via "HF_HUB_CACHE", it still attempts to download data from the remote side:
<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />
Any suggestion will be appreciated!!
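For a fully air-gapped server, a sketch of the offline pattern: `HF_HUB_OFFLINE` and `HF_DATASETS_CACHE` are real environment variables, but they must be set before importing `datasets`; the path below is the reporter's:

```python
import os

# Point the libraries at the pre-transferred local copy and forbid
# network lookups; this must run before `import datasets`.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_DATASETS_CACHE"] = "/xxx/joseph/llava_ds/vlm_ds"

# from datasets import load_dataset
# ds = load_dataset("/xxx/joseph/llava_ds/vlm_ds")  # or pass the local path directly
```

Passing the local directory path to `load_dataset` directly (as suggested in the comments) avoids any hub resolution at all.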
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7705/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7703
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7703/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7703/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7703/events
|
https://github.com/huggingface/datasets/issues/7703
| 3,265,648,942
|
I_kwDODunzps7Cpdku
| 7,703
|
[Docs] map() example uses undefined `tokenizer` — causes NameError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sanjaykumar030",
"id": 183703408,
"login": "Sanjaykumar030",
"node_id": "U_kgDOCvMXcA",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sanjaykumar030",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`."
] |
2025-07-26T13:35:11Z
|
2025-07-27T09:44:35Z
| null |
CONTRIBUTOR
| null | null | null | null |
## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples.
## Problem
Users who copy and run the example as-is will encounter:
```python
NameError: name 'tokenizer' is not defined
```
This breaks the flow for users and violates HuggingFace's documentation principle that examples should "work as expected" when copied directly.
## Proposal
Update the example to include the required tokenizer setup using the Transformers library, like so:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This will help new users understand the workflow and apply the method correctly.
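For a fully dependency-free variant, a hypothetical toy tokenizer can stand in for the real one. This is not the `transformers` API, just an illustration of the input/output shape a batched `map` function works with:

```python
# Hypothetical stand-in tokenizer for illustration only: it maps each
# whitespace-separated word to its length instead of a real vocabulary id.
def toy_tokenizer(texts):
    return {"input_ids": [[len(word) for word in text.split()] for text in texts]}

# A batched map function receives a dict of column name -> list of values
batch = {"text": ["hello world", "hi"]}
print(toy_tokenizer(batch["text"]))  # {'input_ids': [[5, 5], [2]]}
```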
## Note
This PR complements ongoing improvements like #7700, which clarifies multiprocessing in `.map()`. My change focuses on the undefined `tokenizer`, which causes a `NameError`.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7703/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7700
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7700/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7700/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7700/events
|
https://github.com/huggingface/datasets/issues/7700
| 3,263,922,255
|
I_kwDODunzps7Ci4BP
| 7,700
|
[doc] map.num_proc needs clarification
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4",
"events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}",
"followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers",
"following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}",
"gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sfc-gh-sbekman",
"id": 196988264,
"login": "sfc-gh-sbekman",
"node_id": "U_kgDOC73NaA",
"organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs",
"received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events",
"repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sfc-gh-sbekman",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2025-07-25T17:35:09Z
|
2025-07-25T17:39:36Z
| null |
NONE
| null | null | null | null |
https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no
multiprocessing is used. This can significantly speed up batching for large datasets.
```
So what is the behavior of `map.num_proc`? Is it the same as `batch.num_proc`, i.e. multiprocessing is used unless `num_proc=None`?
Let's update the doc to be unambiguous.
**bonus**: we could make all of these behave like `DataLoader.num_workers`, where `num_workers==0` implies no multiprocessing. I think that's the most intuitive: 0 workers means the main process has to do all the work. `None` could behave the same as `0`.
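The proposed semantics could be sketched as a small normalization helper (hypothetical name `resolve_num_proc`; not part of the library):

```python
def resolve_num_proc(num_proc):
    """Hypothetical normalization implementing DataLoader-style semantics:
    None or 0 -> no multiprocessing (the main process does all the work),
    a positive int -> that many worker processes."""
    if num_proc is None or num_proc == 0:
        return None  # sequential path, no worker processes spawned
    if num_proc < 0:
        raise ValueError("num_proc must be None or >= 0")
    return num_proc

print(resolve_num_proc(0), resolve_num_proc(None), resolve_num_proc(4))
# None None 4
```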
context: debugging a failing `map`
Thank you!
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7700/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7699
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7699/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7699/events
|
https://github.com/huggingface/datasets/issues/7699
| 3,261,053,171
|
I_kwDODunzps7CX7jz
| 7,699
|
Broken link in documentation for "Create a video dataset"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to switch the link to the mirror shared in that issue"
] |
2025-07-24T19:46:28Z
|
2025-07-25T15:27:47Z
| null |
NONE
| null | null | null | null |
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7699/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|