---
language:
  - en
license: apache-2.0
task_categories:
  - image-text-to-text
  - object-detection
task_ids:
  - visual-question-answering
  - instance-segmentation
tags:
  - agent
  - ui-automation
  - screen-understanding
configs:
  - config_name: macos
    data_files:
      - path: macos/data-00000-of-00001.arrow
        split: train
  - config_name: windows
    data_files:
      - path: windows/data-00000-of-00001.arrow
        split: train
  - config_name: linux-ubuntu
    data_files:
      - path: linux-ubuntu/data-00000-of-00001.arrow
        split: train
  - config_name: linux-mint
    data_files:
      - path: linux-mint/data-00000-of-00001.arrow
        split: train
---

# Screen2Coord_denorm_extend Dataset

Screen2Coord is a dataset for training models that take a screenshot, the screen dimensions, and a textual action description as input and output the coordinates of the target bounding box on the screen. It is intended for image-text-to-text LLMs applied to user-interface interaction.

## Dataset Structure

New feature: Windows, macOS, and Linux-Ubuntu subsets!

### Data Instances

A typical data instance in Screen2Coord consists of:

- `image`: a screenshot image in PNG format
- `mapped_denorm_bboxes`: a list of bounding box objects, each containing:
  - `bbox`: a list of integers `[x, y, width, height]` specifying the bounding box coordinates in the denormalized 0–1000 system
  - `texts`: a list of textual descriptions associated with the bounding box (e.g., "click on my profile")

### Data Fields

- `image`: image file in PNG format
- `mapped_denorm_bboxes`: sequence of dictionaries with bounding box information (coordinates in the denormalized 0–1000 system); see the loading example below
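
A minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository id `cybertruck32489/Screen2Coord` is an assumption based on where this card is hosted; adjust it if the dataset lives under a different name.

```python
from datasets import load_dataset

# Load one OS-specific config: "macos", "windows", "linux-ubuntu", or "linux-mint".
# The repository id below is an assumption; replace it with the actual dataset id.
ds = load_dataset("cybertruck32489/Screen2Coord", "macos", split="train")

example = ds[0]
screenshot = example["image"]             # PIL image decoded from the PNG screenshot
bboxes = example["mapped_denorm_bboxes"]  # bounding boxes (0-1000 coords) plus their texts
print(screenshot.size, bboxes)
```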

### Data Splits

The dataset contains the following splits:

- macos (train): 174 examples
- windows (train): 166 examples
- linux-ubuntu (train): 1 example
- linux-mint (train): 1 example

## Purpose / How to Use

The main idea of this dataset is to train image-text-to-text LLMs that can interpret a screenshot together with the screen dimensions and a textual instruction, e.g., "open the browser".

The model receives:

- a screenshot of the screen
- the screen size `[width, height]`
- a textual instruction (prompt)

and outputs:

- the bounding box coordinates where the action should be performed; clicking in the middle of the predicted bounding box, for example, executes the instructed action.

This enables models to perform UI actions based on visual context and natural language instructions.
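
As a sketch of the coordinate handling (assuming the 0–1000 values scale linearly with the screen size, and that `bbox` is `[x, y, width, height]` as described above), a predicted box can be turned into a pixel click point like this:

```python
def denorm_bbox_to_click(bbox, screen_w, screen_h):
    """Turn an [x, y, width, height] box in the 0-1000 system into a pixel click point.

    Assumes the 0-1000 coordinates map linearly onto the screen size
    (pixel_x = x / 1000 * screen_w); this scaling is an assumption, not part of the card.
    """
    x, y, w, h = bbox
    center_x = (x + w / 2) / 1000 * screen_w
    center_y = (y + h / 2) / 1000 * screen_h
    return round(center_x), round(center_y)

# Example: a box at [250, 500, 100, 40] on a 1920x1080 screen -> click at (576, 562)
print(denorm_bbox_to_click([250, 500, 100, 40], 1920, 1080))
```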

For example, during training you can give the model the full prompt from an agent system, expose a click tool, and supply the labeled bounding boxes from this dataset as the arguments of the tool call.
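
How such a training sample might be laid out in a tool-calling setup is sketched below; the message structure and the `click` tool are illustrative assumptions, not a format shipped with the dataset.

```python
# Hypothetical training record for an agent with a click tool; all field names
# and the tool schema are illustrative, not defined by the dataset.
sample = {
    "messages": [
        {"role": "system", "content": "You control a desktop. Screen size: 1920x1080."},
        {"role": "user", "content": ["<screenshot from the `image` field>", "open the browser"]},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "name": "click",
                    # Target taken from `mapped_denorm_bboxes` (denormalized 0-1000 system)
                    "arguments": {"bbox": [250, 500, 100, 40]},
                }
            ],
        },
    ]
}
```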

## Contributions

If you can help with annotations or support the dataset financially, please send a direct message. The dataset is updated in my spare time.