---
license: apache-2.0
task_categories:
- image-to-text
language:
- en
- zh
tags:
- agent
- code
size_categories:
- 100K<n<1M
---
<p align="center">
<img src="./docs/assets/logo.svg" alt="Logo" width="120" />
<p align="center">
<a href="https://github.com/PKU-DAIR">
<img alt="Static Badge" src="https://img.shields.io/badge/%C2%A9-PKU--DAIR-%230e529d?labelColor=%23003985">
</a>
</p>
</p>
## **WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning**
[Paper](https://arxiv.org/pdf/2510.04097) | [中文](./docs/Chinese.md)
## **🔍 Overview**
**WebRenderBench** is a large-scale benchmark designed to advance **WebUI-to-Code** research for multimodal large language models (MLLMs) through evaluation on real-world webpages. It provides:
* **45,100** real webpages collected from public portal websites
* **High diversity and complexity**, covering a wide range of industries and design styles
* **Novel evaluation metrics** that quantify **layout and style consistency** based on rendered pages
* The **ALISA reinforcement learning framework**, which uses the new metrics as reward signals to optimize generation quality
---
## **🚀 Key Features**
### **Beyond the Limitations of Traditional Benchmarks**
WebRenderBench addresses the core issues of existing WebUI-to-Code benchmarks in data quality and evaluation methodology:
| Aspect | Traditional Benchmarks | Advantages of WebRenderBench |
| :------------------------- | :---------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------- |
| **Data Quality** | Small-scale, simple-structured, or LLM-synthesized data with limited diversity | Large-scale, real-world, and structurally complex webpages that present higher challenges |
| **Evaluation Reliability** | Relies on visual APIs (high cost) or code-structure comparison (fails to handle code asymmetry) | Objectively and efficiently evaluates layout and style consistency based on rendered results |
| **Training Effectiveness** | Difficult to optimize on crawled data with asymmetric code structures | Proposed metrics can be directly used as RL reward signals to enhance model optimization |
---
### **Dataset Characteristics**
<p align="center">
<img src="./docs/assets/framework.svg" alt="WebRenderBench and ALISA Framework" width="80%" />
</p>
<p align="center"><i>Figure 1: Dataset construction pipeline and the ALISA framework</i></p>
Our dataset is constructed through a systematic process to ensure both **high quality** and **diversity**:
1. **Data Collection**: URLs are obtained from open enterprise portal datasets. A high-concurrency crawler captures 210K webpages along with static resources.
2. **Data Processing**: MHTML pages are converted into HTML files, and cross-domain resources are processed to ensure local renderability and full-page screenshots.
3. **Data Cleaning**: Pages with abnormal sizes, rendering errors, or missing styles are filtered out. Multimodal QA models further remove low-quality samples with large blank areas or overlapping elements, yielding 110K valid pages.
4. **Data Categorization**: Pages are categorized by industry and complexity (measured via *Group Count*) to ensure balanced distribution across difficulty levels and domains.
Finally, we construct a dataset of **45.1K** samples, evenly split into training and test sets.
---
## **🌟 Evaluation Framework**
We propose a novel evaluation protocol based on **rendered webpages**, quantifying model performance along two key dimensions: **layout** and **style consistency**.
---
### **RDA (Relative Layout Difference of Associated Elements)**
**Purpose:** Measures relative layout differences between matched elements.
* **Element Association:** Matches corresponding elements between generated and target pages using text similarity (LCS) and geometric distance.
* **Positional Deviation:** The page is divided into a 3×3 grid, and associated elements are compared cell by cell: if the two elements fall in different grid cells, the pair scores 0; otherwise, a deviation-based score is computed from their positional offset.
* **Uniqueness Weighting:** Each element is weighted by its uniqueness (inverse group size), giving higher importance to distinctive components.
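The RDA steps above can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not the benchmark's actual implementation: the field names, the greedy matching, the 0.5 association threshold, and the deviation formula are all assumptions for demonstration.

```python
# Hypothetical sketch of RDA-style scoring: associate elements by text
# similarity (LCS-like ratio) plus geometric distance, then score pairs
# cell-by-cell on a 3x3 grid. Names and thresholds are illustrative.
from difflib import SequenceMatcher

def associate(gen_elems, ref_elems, text_w=0.7):
    """Greedily match generated elements to reference elements."""
    pairs, used = [], set()
    for g in gen_elems:
        best, best_score = None, 0.0
        for i, r in enumerate(ref_elems):
            if i in used:
                continue
            text_sim = SequenceMatcher(None, g["text"], r["text"]).ratio()
            dist = ((g["x"] - r["x"]) ** 2 + (g["y"] - r["y"]) ** 2) ** 0.5
            score = text_w * text_sim + (1 - text_w) * (1 / (1 + dist))
            if score > best_score:
                best, best_score = i, score
        if best is not None and best_score > 0.5:  # assumed threshold
            pairs.append((g, ref_elems[best]))
            used.add(best)
    return pairs

def grid_cell(x, y, w, h):
    """Map a point to one of the 9 cells of a 3x3 grid over the page."""
    return (min(int(3 * x / w), 2), min(int(3 * y / h), 2))

def rda_score(pairs, page_w, page_h):
    """Pairs in different grid cells contribute 0; same-cell pairs get a
    score that decays with their normalized positional offset."""
    if not pairs:
        return 0.0
    total = 0.0
    for g, r in pairs:
        if grid_cell(g["x"], g["y"], page_w, page_h) != \
           grid_cell(r["x"], r["y"], page_w, page_h):
            continue  # different cells -> contributes 0
        dx = abs(g["x"] - r["x"]) / page_w
        dy = abs(g["y"] - r["y"]) / page_h
        total += max(0.0, 1 - (dx + dy))
    return total / len(pairs)
```

(The uniqueness weighting described above is omitted here for brevity; the real metric additionally scales each pair by its inverse group size.)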
---
### **GDA (Group-wise Difference in Element Counts)**
**Purpose:** Measures group-level alignment of axis-aligned elements.
* **Grouping:** Elements aligned on the same horizontal or vertical axis are treated as one group.
* **Count Comparison:** Compares whether corresponding groups in the generated and target pages contain the same number of elements.
* **Uniqueness Weighting:** Weighted by element uniqueness to emphasize key structural alignment.
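A simplified sketch of the GDA idea: group elements by shared horizontal or vertical axis, then compare group sizes between pages with inverse-group-size weights. The grouping key (rounded coordinate with a tolerance) and the count-ratio scoring are illustrative assumptions, not the benchmark's exact definition.

```python
# Hypothetical sketch of GDA-style scoring. Grouping tolerance and the
# count-ratio formula are simplifying assumptions for illustration.
from collections import defaultdict

def axis_groups(elems, tol=2):
    """Group elements sharing (approximately) the same horizontal axis
    (rows) or the same vertical axis (columns)."""
    rows, cols = defaultdict(list), defaultdict(list)
    for e in elems:
        rows[round(e["y"] / tol)].append(e)
        cols[round(e["x"] / tol)].append(e)
    return ({("row", k): v for k, v in rows.items()}
            | {("col", k): v for k, v in cols.items()})

def gda_score(gen_elems, ref_elems):
    """Compare element counts of corresponding axis groups, weighting
    each reference group by its uniqueness (inverse group size)."""
    g, r = axis_groups(gen_elems), axis_groups(ref_elems)
    num = den = 0.0
    for key, ref_group in r.items():
        w = 1.0 / len(ref_group)  # uniqueness weight
        gen_count = len(g.get(key, []))
        ratio = min(gen_count, len(ref_group)) / max(gen_count, len(ref_group), 1)
        num += w * ratio
        den += w
    return num / den if den else 0.0
```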
---
### **SDA (Style Difference of Associated Elements)**
**Purpose:** Evaluates fine-grained style differences between associated elements.
* **Multi-Dimensional Style Extraction:** Measures differences in foreground color, background color, font size, and border radius.
* **Weighted Averaging:** Computes a weighted mean of style similarity scores across all associated elements to obtain an overall style score.
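The SDA computation can be sketched over the four style channels named above. The property set follows the text; the per-channel distance formulas (Euclidean RGB distance, min/max ratio for scalars) and the equal channel weights are assumptions for illustration.

```python
# Hypothetical sketch of SDA-style scoring over four style channels:
# foreground color, background color, font size, border radius.
def color_sim(c1, c2):
    """Similarity of two RGB colors in [0, 1] via normalized distance."""
    d = sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
    return 1 - d / (3 ** 0.5 * 255)

def scalar_sim(a, b):
    """Similarity of two non-negative scalars (font size, radius)."""
    return min(a, b) / max(a, b) if max(a, b) else 1.0

def sda_score(pairs):
    """Mean per-pair style similarity, averaged over the four channels.
    (The real metric additionally weights pairs by element uniqueness.)"""
    if not pairs:
        return 0.0
    total = 0.0
    for g, r in pairs:
        total += (color_sim(g["color"], r["color"])
                  + color_sim(g["bg"], r["bg"])
                  + scalar_sim(g["font_size"], r["font_size"])
                  + scalar_sim(g["radius"], r["radius"])) / 4
    return total / len(pairs)
```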
---
## **⚙️ Installation Guide**
### **Core Dependencies**
*Coming soon.* The commands below reflect the draft dependency list and may change before release (version specifiers are quoted so the shell does not treat `>=` as a redirection):

```shell
# Recommended: use vLLM for faster inference
pip install vllm "transformers>=4.40.0" "torch>=2.0"

# Evaluation dependencies
pip install selenium pandas scikit-learn pillow

# Alternatively:
pip install -r requirements.txt
```
---
## **📊 Benchmark Workflow**
### **Directory Structure**
```
|- docs/ # Documentation
|- scripts/ # Evaluation scripts
|- web_render_test.jsonl # Test set metadata
|- web_render_train.jsonl # Training set metadata
|- test_webpages.zip # Test set webpages
|- train_webpages.zip # Training set webpages
|- test_screenshots.zip # Test set screenshots
|- train_screenshots.zip # Training set screenshots
```
---
### **Obtain Datasets**
- Webpages

| File Name | Download Link (ModelScope) |
|--------|---------------------|
| train_webpages.7z.001 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.001) |
| train_webpages.7z.002 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.002) |
| train_webpages.7z.003 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.003) |
| train_webpages.7z.004 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.004) |
| train_webpages.7z.005 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.005) |
| train_webpages.7z.006 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.006) |
| train_webpages.7z.007 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.007) |
| train_webpages.7z.008 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.008) |
| train_webpages.7z.009 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.009) |
| train_webpages.7z.010 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.010) |
| train_webpages.7z.011 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.011) |
| train_webpages.7z.012 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.012) |
| train_webpages.7z.013 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.013) |
| train_webpages.7z.014 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.014) |
| train_webpages.7z.015 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.015) |
| train_webpages.7z.016 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.016) |
| train_webpages.7z.017 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.017) |
| train_webpages.7z.018 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.018) |
| train_webpages.7z.019 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_webpages.7z.019) |
| test_webpages.7z.001 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.001) |
| test_webpages.7z.002 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.002) |
| test_webpages.7z.003 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.003) |
| test_webpages.7z.004 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.004) |
| test_webpages.7z.005 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.005) |
| test_webpages.7z.006 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.006) |
| test_webpages.7z.007 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.007) |
| test_webpages.7z.008 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.008) |
| test_webpages.7z.009 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.009) |
| test_webpages.7z.010 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.010) |
| test_webpages.7z.011 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.011) |
| test_webpages.7z.012 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.012) |
| test_webpages.7z.013 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.013) |
| test_webpages.7z.014 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.014) |
| test_webpages.7z.015 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.015) |
| test_webpages.7z.016 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.016) |
| test_webpages.7z.017 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.017) |
| test_webpages.7z.018 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_webpages.7z.018) |
- Screenshots

| File Name | Download Link (ModelScope) |
|--------|---------------------|
| train_screenshots.7z.001 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_screenshots.7z.001) |
| train_screenshots.7z.002 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/train_screenshots.7z.002) |
| test_screenshots.7z.001 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_screenshots.7z.001) |
| test_screenshots.7z.002 | [Download](https://www.modelscope.cn/datasets/lpc1290/WebRenderBench/resolve/master/test_screenshots.7z.002) |
### **Implementation Steps**
1. **Data Preparation**
* Download the WebRenderBench dataset and extract webpage and screenshot archives.
* Each pair consists of a real webpage (HTML + resources) and its rendered screenshot.
2. **Model Inference**
* Run inference with an engine such as **vLLM** or **LMDeploy**, and save the generated HTML to the designated output directory.
3. **Evaluation**
* Run `scripts/1_get_evaluation.py`.
* The script launches a web server to render both generated and target HTML.
* WebDriver extracts DOM information and computes **RDA**, **GDA**, and **SDA** scores.
* Results are saved under `save_results/`.
* Final scores are aggregated via `scripts/2_compute_alisa_scores.py`.
4. **ALISA Training (Optional)**
* Use `models/train_rl.py` for reinforcement learning fine-tuning. *(Coming Soon)*
* The computed evaluation scores serve as reward signals to optimize policy models via methods such as **GRPO**.
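How the evaluation scores might feed into GRPO-style training can be sketched as follows. The equal metric weighting and the group-normalized advantage are common GRPO conventions assumed here for illustration, not necessarily ALISA's exact formulation.

```python
# Hypothetical sketch of using RDA/GDA/SDA as an RL reward signal.
# Weights and normalization are assumptions, not the paper's exact setup.
import statistics

def reward(rda, gda, sda, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Scalar reward in [0, 1] combining the three consistency scores."""
    return weights[0] * rda + weights[1] * gda + weights[2] * sda

def group_advantages(rewards):
    """GRPO-style advantages: normalize each sampled completion's reward
    by the mean and std of its sampling group (std 0 -> all zeros kept
    finite by falling back to 1.0)."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sigma for r in rewards]
```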
---
## **📈 Model Performance Insights**
We evaluate **17 multimodal large language models** of varying scales and architectures (both open- and closed-source).
* **Combined Scores of RDA, GDA, and SDA (%)**

**Key Findings:**
* Overall, larger models achieve higher consistency. **GPT-4.1-mini** and **Qwen-VL-Plus** perform best among closed-source models.
* While most models perform reasonably on simple pages (*Group Count* < 50), **RDA scores drop sharply** as page complexity increases—precise layout alignment remains a major challenge.
* After reinforcement learning via the **ALISA framework**, **Qwen2.5-VL-7B** shows substantial improvements across all complexity levels, even surpassing **GPT-4.1-mini** on simpler cases.
---
## **📅 Future Work**
* [ ] Release pretrained models fine-tuned with the ALISA framework
* [ ] Expand dataset coverage to more industries and dynamic interaction patterns
* [ ] Open-source the complete toolchain for data collection, cleaning, and evaluation
---
## **📜 License**
The **WebRenderBench dataset** is released for **research purposes only**.
All accompanying code will be published under the **Apache License 2.0**.
All webpages in the dataset are collected from publicly accessible enterprise portals.
To protect privacy, all personal and sensitive information has been removed or modified.
---
## **📚 Citation**
If you use our dataset or framework in your research, please cite the following paper:
```bibtex
@article{webrenderbench2025,
  title   = {WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning},
  author  = {Anonymous Author(s)},
  journal = {arXiv preprint arXiv:2510.04097},
  year    = {2025},
}
```