cybertruck32489 committed · Commit 3934100 · verified · 1 Parent(s): c8aa25c

Upload README.md with huggingface_hub

Files changed (1): README.md (+590, −590)

  - path: linux-mint/data-00000-of-00001.arrow
    split: train
---

# Screen2Coord_denorm_extend Dataset

**Screen2Coord** is a dataset for training models that take a **screenshot, screen dimensions, and a textual action description** as input and output the **coordinates of the target bounding box** on the screen. This dataset is intended for image-text-to-text LLMs applied to user interface interactions.

## Dataset Structure

### New feature! Windows, macOS, Linux-Ubuntu subsets!

### Data Instances

A typical data instance in Screen2Coord consists of:

- `image`: A screenshot image in PNG format
- `mapped_denorm_bboxes`: List of bounding box objects containing:
  - `bbox`: List of integers `[x, y, width, height]` specifying the bounding box coordinates in the denormalized 0-1000 system
  - `texts`: List of textual descriptions associated with the bounding box (e.g., `"click on my profile"`)
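
Below is a minimal sketch of loading one subset with the 🤗 `datasets` library and inspecting an instance. The repository id is assumed from the uploader's namespace, and each OS subset is assumed to be a config with a single `train` split; adjust both to the actual layout.

```python
from datasets import load_dataset

# Assumed repo id and config name -- replace with the actual values.
ds = load_dataset("cybertruck32489/Screen2Coord_denorm_extend", "windows", split="train")

example = ds[0]
print(example["image"].size)          # (width, height) of the PIL screenshot
for box in example["mapped_denorm_bboxes"]:
    # [x, y, width, height] in the denormalized 0-1000 system, plus action texts
    print(box["bbox"], box["texts"])
```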

### Data Fields

- `image`: Image file in PNG format
- `mapped_denorm_bboxes`: Sequence of dictionaries with bounding box information (coordinates in the denormalized 0-1000 system)
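
"Denormalized 0-1000" is read here as coordinates scaled so that 1000 spans the full screen dimension; the exact rounding is an assumption. A small helper to map a box back to pixels:

```python
def denorm_to_pixels(bbox, screen_w, screen_h):
    """Convert an [x, y, width, height] box in 0-1000 space to pixel units."""
    x, y, w, h = bbox
    scale_x, scale_y = screen_w / 1000, screen_h / 1000
    return [round(x * scale_x), round(y * scale_y),
            round(w * scale_x), round(h * scale_y)]

# On a 1920x1080 screen:
print(denorm_to_pixels([500, 500, 100, 50], 1920, 1080))  # [960, 540, 192, 54]
```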

### Data Splits

The dataset contains the following splits:

- `macos` (train): 174 examples
- `windows` (train): 166 examples
- `linux-ubuntu` (train): 1 example
- `linux-mint` (train): 1 example

## Purpose / How to Use

The main idea of this dataset is to train **image-text-to-text LLMs** that can interpret a screenshot together with the **screen dimensions and a textual instruction**, e.g., `"open the browser"`.

The model receives:

- **Screenshot of the screen**
- **Screen size** `[width, height]`
- **Textual instruction** (prompt)

And outputs:

- **Bounding box coordinates** corresponding to where the action should be performed (a possible serialization is sketched below).
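
The dataset does not prescribe a prompt template, so the pairing below is only a hypothetical serialization of the three inputs and the bbox target; names and formatting are assumptions.

```python
def build_pair(example, screen_w, screen_h, bbox_index=0, text_index=0):
    """Turn one labeled box into a (prompt, target) text pair.

    The screenshot itself is passed to the model as the image input,
    alongside this prompt.
    """
    box = example["mapped_denorm_bboxes"][bbox_index]
    prompt = (
        f"Screen size: [{screen_w}, {screen_h}]\n"
        f"Instruction: {box['texts'][text_index]}\n"
        "Answer with the target box as [x, y, width, height] in 0-1000 coordinates."
    )
    target = str(box["bbox"])   # e.g. "[500, 500, 100, 50]"
    return prompt, target
```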

For example, clicking in the middle of the predicted bounding box executes the instructed action.
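
A minimal sketch of that step, combining the pixel conversion above with `pyautogui` (the automation library here is an arbitrary choice):

```python
import pyautogui

def click_bbox_center(bbox, screen_w, screen_h):
    """Click the center of a predicted [x, y, width, height] box in 0-1000 space."""
    x, y, w, h = bbox
    cx = (x + w / 2) * screen_w / 1000
    cy = (y + h / 2) * screen_h / 1000
    pyautogui.click(cx, cy)

click_bbox_center([500, 500, 100, 50], 1920, 1080)  # clicks at (1056.0, 567.0)
```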

This enables models to perform **UI actions** based on visual context and natural language instructions.

For example, during training you can give the model the full prompt from an agent system, add a click tool, and supply the labeled bounding boxes from this dataset as the tool-call arguments.
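
An illustrative chat-format sample with such a click tool; the message schema (OpenAI-style tool calls) is an assumption, so adapt it to whatever agent framework and chat template you train with:

```python
import json

def to_tool_call_sample(instruction, bbox):
    """Build one supervised sample where the assistant answers via a click tool.

    The screenshot is attached separately as the image input.
    """
    return [
        {"role": "system",
         "content": "You are a GUI agent. Use the click tool to act on the screen."},
        {"role": "user", "content": instruction},
        {"role": "assistant",
         "tool_calls": [{
             "type": "function",
             "function": {
                 "name": "click",
                 # labeled box from mapped_denorm_bboxes, in 0-1000 coordinates
                 "arguments": json.dumps({"bbox": bbox}),
             },
         }]},
    ]
```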

## Contributions

If you can help with **annotations** or support the dataset **financially**, please send a direct message. The dataset is updated in my spare time.