cybertruck32489 committed
Commit 30bc1c7 · verified · 1 Parent(s): 2ab1cd4

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +339 -339
README.md CHANGED
@@ -30,177 +30,177 @@ configs:
  - path: linux-mint/data-00000-of-00001.arrow
  split: train
  ---
- # Screen2Coord Dataset
+ # Screen2Coord_denorm Dataset

**Screen2Coord** is a dataset for training models that take a **screenshot, screen dimensions, and a textual action description** as input and output the **coordinates of the target bounding box** on the screen. This dataset is intended for image-text-to-text LLMs applied to user interface interactions.

## Dataset Structure

### New feature! Windows, macOS, Linux-Ubuntu subsets!

### Data Instances

A typical data instance in Screen2Coord consists of:

- `image`: A screenshot image in PNG format
- `image_size`: List of two integers `[width, height]` representing the screen size in pixels (e.g., `[1200, 674]`)
- `mapped_bboxes`: List of bounding box objects containing:
  - `bbox`: List of integers `[x_min, y_min, x_max, y_max]` specifying the bounding box coordinates
  - `texts`: List of textual descriptions associated with the bounding box (e.g., `"click on my profile"`)

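Concretely, one record could look like the following Python dict (a hypothetical sketch: the field layout follows the description above, while the concrete `bbox` values are made up):

```python
example = {
    "image": "<PIL.Image.Image (PNG screenshot)>",  # decoded image object
    "image_size": [1200, 674],                      # [width, height] in pixels
    "mapped_bboxes": [
        {
            "bbox": [1050, 20, 1180, 60],           # [x_min, y_min, x_max, y_max]; illustrative values
            "texts": ["click on my profile"],       # action descriptions that resolve to this box
        },
    ],
}
```
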
### Data Fields

- `image`: Image file in PNG format
- `image_size`: Sequence of integers representing image dimensions
- `mapped_bboxes`: Sequence of dictionaries with bounding box information

### Data Splits

The dataset contains the following splits:

- `windows` (train): 4 examples
- `linux-ubuntu` (train): 1 example
- `linux-mint` (train): 1 example

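Each OS subset can be loaded with the `datasets` library. A minimal sketch (it assumes the dataset is published under a repo id such as `cybertruck32489/Screen2Coord_denorm` and that the OS subsets are exposed as named configurations, each with a `train` split; adjust both to the actual repository layout):

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with the actual repository path.
ds = load_dataset("cybertruck32489/Screen2Coord_denorm", "linux-mint", split="train")

record = ds[0]
print(record["image_size"])        # e.g. [1200, 674]
print(record["mapped_bboxes"][0])  # {"bbox": [...], "texts": [...]}
```
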
## Purpose / How to Use

The main idea of this dataset is to train **image-text-to-text LLMs** that can interpret a screenshot **together with the screen dimensions and a textual instruction**, e.g., `"open the browser"`.

The model receives:

- **Screenshot of the screen**
- **Screen size** `[width, height]`
- **Textual instruction** (prompt)

And outputs:

- **Bounding box coordinates** corresponding to where the action should be performed.

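As a rough sketch of how these pieces could be combined into a single training example (the prompt template and target format below are illustrative assumptions, not a schema prescribed by the dataset):

```python
def build_example(record, instruction, target_bbox):
    """Turn one dataset record into an (image, prompt, target) triple.

    `instruction` is one of the strings in `mapped_bboxes[i]["texts"]` and
    `target_bbox` is the matching `mapped_bboxes[i]["bbox"]`. The template
    itself is just an illustrative choice.
    """
    width, height = record["image_size"]
    prompt = (
        f"Screen size: {width}x{height}\n"
        f"Instruction: {instruction}\n"
        "Return the bounding box of the target as [x_min, y_min, x_max, y_max]."
    )
    target = str(target_bbox)  # e.g. "[1050, 20, 1180, 60]"
    return record["image"], prompt, target
```
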
For example, clicking in the middle of the predicted bounding box executes the instructed action.

This enables models to perform **UI actions** based on visual context and natural language instructions.

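At inference time, acting on a prediction can be as simple as clicking the center of the returned box. A minimal sketch (it assumes `pyautogui` is installed and that the predicted box is already in on-screen pixel coordinates):

```python
import pyautogui  # assumption: pyautogui is available and may control the mouse

def click_bbox_center(bbox):
    """Click the middle of a predicted [x_min, y_min, x_max, y_max] box."""
    x_min, y_min, x_max, y_max = bbox
    pyautogui.click((x_min + x_max) // 2, (y_min + y_max) // 2)

# Example with illustrative coordinates:
# click_bbox_center([1050, 20, 1180, 60])
```
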
For example, during training you can give the model the full prompt from an agent system, expose a click tool, and supply the labeled bounding boxes from this dataset as the arguments of the tool call.

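A hedged sketch of what such a tool-call sample could look like, using an OpenAI-style chat layout purely for illustration (the agent prompt, tool name, and argument schema are assumptions to adapt to your own setup):

```python
# One illustrative supervised sample: the assistant's tool call carries the
# labeled bounding box from `mapped_bboxes` as its target arguments.
sample = {
    "messages": [
        {"role": "system", "content": "You are a GUI agent. Use the `click` tool to act on the screen."},
        {"role": "user", "content": [
            {"type": "image"},  # the screenshot from `image`
            {"type": "text", "text": "Screen size: 1200x674. Task: click on my profile"},
        ]},
        {"role": "assistant", "tool_calls": [
            {"name": "click", "arguments": {"bbox": [1050, 20, 1180, 60]}},  # illustrative box
        ]},
    ]
}
```
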
## Contributions

If you can help with **annotations** or support the dataset **financially**, please send a direct message. The dataset is updated in my spare time.
 