Add model card for GUI-AIMA-3B

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +100 -0
README.md ADDED
@@ -0,0 +1,100 @@
+ ---
+ license: cc-by-nc-4.0
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ ---
+
+ # GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding
+
+ This repository hosts **GUI-AIMA-3B**, a model trained with GUI-AIMA, an attention-based and coordinate-free supervised fine-tuning framework for efficient GUI grounding. The model is presented in the paper [GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding](https://huggingface.co/papers/2511.00810).
+
+ GUI-AIMA addresses the challenge of mapping natural-language instructions to actionable screen regions in graphical user interfaces (GUIs). It aligns the intrinsic multimodal attention of Multimodal Large Language Models (MLLMs) with patch-wise grounding signals. GUI-AIMA-3B was trained with only 85k screenshots, demonstrating exceptional data efficiency, and achieves state-of-the-art performance among 3B models, attaining an average accuracy of 58.6% on ScreenSpot-Pro and 62.2% on OSWorld-G. It also supports a plug-and-play zoom-in stage for higher precision on high-resolution screenshots without further fine-tuning.
+
+ * **Paper:** [GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding](https://huggingface.co/papers/2511.00810)
+ * **Project Page:** [https://github.com/sjz5202/GUI-AIMA](https://github.com/sjz5202/GUI-AIMA)
+ * **Code Repository:** [https://github.com/sjz5202/GUI-AIMA](https://github.com/sjz5202/GUI-AIMA)
+
+ <div align="center">
+ <img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/comparison.png" width="85%">
+ </div>
+ Figure 1. **GUI-AIMA** utilizes the inherent attention of MLLMs for patch-wise GUI grounding. It simplifies vanilla attention-based grounding, which requires proper aggregation over all query tokens' grounding vectors, by adding a learnable ANCHOR token that serves as the context anchor of the query. Multi-head aggregation of the attention vectors between ANCHOR and the visual tokens is sufficient for grounding.
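+
+ To make Figure 1 concrete, here is a minimal, hypothetical sketch (not the released implementation) of turning the attention between an ANCHOR token and the visual patch tokens into a patch-wise grounding map. The tensor shapes, token positions, patch size, and the plain head-averaging are illustrative assumptions; the head weighting used by GUI-AIMA is sketched after Figure 2.
+
+ ```python
+ import torch
+
+ # Hypothetical layout: `attn` is an attention map of shape [num_heads, seq_len, seq_len],
+ # the visual patch tokens occupy positions vis_start:vis_end, and the learnable ANCHOR
+ # token sits at position anchor_idx.
+ def anchor_grounding_map(attn, anchor_idx, vis_start, vis_end, grid_h, grid_w):
+     # Attention from ANCHOR to every visual patch token, per head: [num_heads, num_patches].
+     anchor_to_vis = attn[:, anchor_idx, vis_start:vis_end]
+     # Simplest possible aggregation: a uniform average over heads.
+     grounding = anchor_to_vis.mean(dim=0)
+     grounding = grounding / grounding.sum()          # normalize to a distribution over patches
+     return grounding.view(grid_h, grid_w)            # patch-wise grounding map
+
+ def grounding_map_to_point(grounding_map, patch_size=28):
+     # The predicted click point is the center of the highest-scoring patch.
+     idx = int(torch.argmax(grounding_map))
+     gy, gx = divmod(idx, grounding_map.shape[1])
+     return (gx + 0.5) * patch_size, (gy + 0.5) * patch_size
+ ```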
+
+ <div align="center">
+ <img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/main_fig.png" width="85%">
+ </div>
+ Figure 2. **GUI-AIMA** proposes an effective multi-head weighting approach that measures the uniformity between the global query-visual pattern and each head-wise query-visual pattern.
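+
+ The head weighting of Figure 2 can be illustrated as follows: each head's anchor-to-visual attention distribution is compared with the head-averaged (global) distribution, and heads that agree more with the global pattern receive larger weights. The cosine-similarity score and softmax normalization below are assumptions for illustration only; the released GUI-AIMA-3B and GUI-AIMA-3B (soft) checkpoints differ in exactly how this weighting is computed.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def weighted_head_aggregation(anchor_to_vis):
+     """anchor_to_vis: [num_heads, num_patches] attention from the ANCHOR token to visual tokens."""
+     per_head = F.normalize(anchor_to_vis, p=1, dim=-1)        # head-wise query-visual patterns
+     global_pattern = per_head.mean(dim=0, keepdim=True)       # global query-visual pattern
+     # Illustrative agreement score between each head-wise pattern and the global pattern
+     # (a KL-divergence-based score would be another natural choice).
+     agreement = F.cosine_similarity(per_head, global_pattern, dim=-1)   # [num_heads]
+     weights = torch.softmax(agreement, dim=0)                           # per-head weights
+     return (weights.unsqueeze(-1) * per_head).sum(dim=0)                # aggregated [num_patches]
+ ```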
+
+ ## Main Results
+
+ There are two variants of GUI-AIMA: [GUI-AIMA-3B](https://huggingface.co/smz8599/GUI-AIMA-3B) and [GUI-AIMA-3B (soft)](https://huggingface.co/smz8599/GUI-AIMA-3B-kl), which differ slightly in their multi-head weighting.
+
+ One-step inference with GUI-AIMA achieves **47.1%** on ScreenSpot-Pro and **56.9%** on OSWorld-G. With two-step zoom-in inference, it reaches **58.6%** and **62.2%**, respectively.
+
+ We trained GUI-AIMA for one-step center-point prediction. However, **GUI-AIMA can run two-step inference without further fine-tuning**: (step 1) a first pass determines a rough grounding area; (step 2) that area is cropped and zoomed in for a second, more precise grounding pass. Two-step inference is especially helpful for GUI grounding on high-resolution screenshots, such as those in ScreenSpot-Pro and OSWorld-G.
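+
+ As a rough illustration of the two-step procedure, the sketch below crops a window around the first-pass point, re-grounds on the zoomed crop, and maps the refined point back to the original image coordinates. The `ground_point` callable and the crop ratio are placeholders, not the repository's API; see the GitHub evaluation scripts for the actual two-step inference.
+
+ ```python
+ def two_step_ground(image, instruction, ground_point, crop_ratio=0.25):
+     """`image` is a PIL.Image; `ground_point(image, instruction) -> (x, y)` stands in for one GUI-AIMA pass."""
+     W, H = image.size
+     # Step 1: rough grounding on the full (possibly very high-resolution) screenshot.
+     x1, y1 = ground_point(image, instruction)
+     # Step 2: crop a window around the rough point and zoom in for a finer prediction.
+     cw, ch = int(W * crop_ratio), int(H * crop_ratio)
+     left = min(max(int(x1 - cw / 2), 0), W - cw)
+     top = min(max(int(y1 - ch / 2), 0), H - ch)
+     crop = image.crop((left, top, left + cw, top + ch))
+     x2, y2 = ground_point(crop, instruction)
+     # Map the refined point back to the original screenshot's coordinate frame.
+     return left + x2, top + y2
+ ```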
+
+ <div align="left">
+ <img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/ss_pro.png" width="100%">
+ </div>
+
+ <div align="left">
+ <img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/osworld-g.png" width="80%">
+ </div>
+
+ <div align="left">
+ <img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/ss_v2.png" width="85%">
+ </div>
+
+ ## Sample Usage
+
+ You can use the model with the `transformers` library. For detailed installation instructions and full examples, refer to the [GitHub repository](https://github.com/sjz5202/GUI-AIMA). A single-sample inference example is available in `eval/example_inference.py`.
+
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import AutoModel, AutoTokenizer
+
+ # If you want to load a model using multiple GPUs, please refer to the `Multiple GPUs` section.
+ path = 'smz8599/GUI-AIMA-3B'  # or 'smz8599/GUI-AIMA-3B-kl'
+ model = AutoModel.from_pretrained(
+     path,
+     torch_dtype=torch.bfloat16,
+     low_cpu_mem_usage=True,
+     trust_remote_code=True).eval().cuda()
+ tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
+
+ # Note: the original example uses `load_image` (with the max number of tiles set via `max_num`),
+ # which is not included here. Please refer to the GitHub repository for the `load_image`
+ # implementation and proper image preprocessing.
+ image_path = "./examples/images/screenshot.png"  # Replace with your image path
+ image = Image.open(image_path).convert('RGB')
+ # Placeholder for `pixel_values`; preprocess the image as in the original repository instead.
+ pixel_values = torch.randn(1, 3, 448, 448).to(torch.bfloat16).cuda()  # Placeholder
+
+ generation_config = dict(max_new_tokens=1024, do_sample=True)
+
+ # Grounding query: ask for the click coordinates of the element described by the instruction.
+ question = "In the screenshot of this web page, please give me the coordinates of the element I want to click on according to my instructions(with point).\n\"'Champions League' link\""
+ response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
+ print(f'User: {question}\nAssistant: {response}')
+ ```
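+
+ The model answers in text, so the predicted click point has to be parsed out of `response`. The exact output format is defined by the model's training data (see `eval/example_inference.py` in the repository for the reference parsing); the snippet below simply takes the first two numbers in the response as an assumed `(x, y)` pair.
+
+ ```python
+ import re
+
+ def parse_point(response: str):
+     # Assumption: the response contains the click coordinates as its first two numbers,
+     # e.g. "(512, 384)". Check the repository's eval scripts for the exact format.
+     nums = re.findall(r"-?\d+(?:\.\d+)?", response)
+     if len(nums) < 2:
+         raise ValueError(f"No point found in response: {response!r}")
+     return float(nums[0]), float(nums[1])
+
+ x, y = parse_point(response)
+ print(f"Predicted click point: ({x:.1f}, {y:.1f})")
+ ```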
+
+ ## Citation
+
+ If you find this work helpful, please cite the paper:
+
+ ```bibtex
+ @misc{zhou2025guiaimaaligningintrinsicmultimodal,
+       title={GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding},
+       author={Shijie Zhou and Viet Dac Lai and Hao Tan and Jihyung Kil and Wanrong Zhu and Changyou Chen and Ruiyi Zhang},
+       year={2025},
+       eprint={2511.00810},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2511.00810},
+ }
+ ```