
TeleEgo: Benchmarking Egocentric AI Assistants in the Wild

arXiv Page

📢 Note: This project is still under active development, and the benchmark will be continuously updated.

📌 Introduction

TeleEgo is a comprehensive omni benchmark designed for multi-person, multi-scene, multi-task, and multimodal long-term memory reasoning in egocentric video streams. It reflects realistic personal-assistant scenarios in which continuous egocentric video is collected over hours or even days, requiring models to maintain memory over long horizons and to perform understanding and cross-memory reasoning. Omni here means that TeleEgo covers the full spectrum of roles, scenes, tasks, modalities, and memory horizons, offering all-round evaluation for egocentric AI assistants.

TeleEgo provides:

  • 🧠 Omni-scale, diverse egocentric data from 5 roles across 4 daily scenarios.
  • 🎤 Multi-modal annotations: video, narration, and speech transcripts.
  • ❓ Fine-grained QA benchmark: 3 cognitive dimensions, 12 subcategories.

📊 Dataset Overview

  • Participants: 5 (balanced gender)
  • Scenarios:
    • Work & Study
    • Lifestyle & Routines
    • Social Activities
    • Outings & Culture
  • Recording: 3 days/participant (~14.4 hours each)
  • Modalities:
    • Egocentric video streams
    • Speech & conversations
    • Narration and event descriptions

🧪 Benchmark Tasks

TeleEgo-QA evaluates models along three main dimensions:

  1. Memory

    • Short-term / Long-term / Ultra-long Memory
    • Entity Tracking
    • Temporal Comparison & Interval
  2. Understanding

    • Causal Understanding
    • Intent Inference
    • Multi-step Reasoning
    • Cross-modal Understanding
  3. Cross-Memory Reasoning

    • Cross-temporal Causality
    • Cross-entity Relation
    • Temporal Chain Understanding

Each QA instance includes:

  • Question type: Single-choice, Multi-choice, Binary, Open-ended
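As an illustration only (field names here are hypothetical, not the official release schema), a QA instance combining the dimensions, subcategories, and question types above could be represented and sanity-checked like this:

```python
# Hypothetical TeleEgo-QA instance layout; all field names are illustrative
# and may differ from the actual dataset release.
qa_instance = {
    "question": "What did the wearer do after leaving the office?",
    "type": "single-choice",   # single-choice | multi-choice | binary | open-ended
    "options": ["Went to a cafe", "Took the subway", "Met a friend", "Went home"],
    "answer": "Took the subway",
    "dimension": "Memory",     # Memory | Understanding | Cross-Memory Reasoning
    "subcategory": "Short-term Memory",
}

VALID_TYPES = {"single-choice", "multi-choice", "binary", "open-ended"}

def validate(instance: dict) -> bool:
    """Basic structural check: known question type, and choice-style
    questions must carry a non-empty options list."""
    if instance.get("type") not in VALID_TYPES:
        return False
    if instance["type"] in {"single-choice", "multi-choice"} and not instance.get("options"):
        return False
    return True

print(validate(qa_instance))  # True
```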

📜 Citation

If you find TeleEgo useful in your research, please cite:

@misc{yan2025teleegobenchmarkingegocentricai,
      title={TeleEgo: Benchmarking Egocentric AI Assistants in the Wild}, 
      author={Jiaqi Yan and Ruilong Ren and Jingren Liu and Shuning Xu and Ling Wang and Yiheng Wang and Yun Wang and Long Zhang and Xiangyu Chen and Changzhi Sun and Jixiang Luo and Dell Zhang and Hao Sun and Chi Zhang and Xuelong Li},
      year={2025},
      eprint={2510.23981},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.23981}, 
}

🪪 License

This project is licensed under the MIT License. Dataset usage is restricted under a research-only license.


📬 Contact

If you have any questions, please feel free to reach out: [email protected].


✨ TeleEgo is an omni benchmark, a step toward building personalized AI assistants with true long-term memory, reasoning, and decision-making in real-world wearable scenarios. ✨
