---
license: apache-2.0
task_categories:
  - image-to-text
language:
  - en
  - zh
tags:
  - agent
  - code
size_categories:
  - 100K<n<1M
---

WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning

Paper | 中文 (Chinese version)

🔍 Overview

WebRenderBench is a large-scale benchmark designed to advance WebUI-to-Code research for multimodal large language models (MLLMs) through evaluation on real-world webpages. It provides:

  • 45,100 real webpages collected from public portal websites
  • High diversity and complexity, covering a wide range of industries and design styles
  • Novel evaluation metrics that quantify layout and style consistency based on rendered pages
  • The ALISA reinforcement learning framework, which uses the new metrics as reward signals to optimize generation quality

🚀 Key Features

Beyond the Limitations of Traditional Benchmarks

WebRenderBench addresses the core issues of existing WebUI-to-Code benchmarks in data quality and evaluation methodology:

| Aspect | Traditional Benchmarks | Advantages of WebRenderBench |
| --- | --- | --- |
| Data quality | Small-scale, simply structured, or LLM-synthesized data with limited diversity | Large-scale, real-world, structurally complex webpages that pose a greater challenge |
| Evaluation reliability | Relies on costly visual APIs or on code-structure comparison, which fails under code asymmetry | Objectively and efficiently evaluates layout and style consistency from rendered results |
| Training effectiveness | Hard to optimize on crawled data with asymmetric code structures | The proposed metrics serve directly as RL reward signals to strengthen model optimization |

Dataset Characteristics

Figure 1: Dataset construction pipeline and the ALISA framework

Our dataset is constructed through a systematic process to ensure both high quality and diversity:

  1. Data Collection: URLs are obtained from open enterprise portal datasets. A high-concurrency crawler captures 210K webpages along with static resources.
  2. Data Processing: MHTML captures are converted into HTML files, and cross-domain resources are localized so that every page renders offline and supports full-page screenshots (a minimal conversion sketch follows below).
  3. Data Cleaning: Pages with abnormal sizes, rendering errors, or missing styles are filtered out. Multimodal QA models further remove low-quality samples with large blank areas or overlapping elements, yielding 110K valid pages.
  4. Data Categorization: Pages are categorized by industry and complexity (measured via Group Count) to ensure balanced distribution across difficulty levels and domains.

Finally, we construct a dataset of 45.1K samples, evenly split into training and test sets.
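
Since MHTML captures are MIME documents, step 2's conversion can be illustrated with Python's standard email parser. This is only a minimal sketch under that assumption; the actual pipeline also rewrites cross-domain resource URLs, which is omitted here:

```python
import email
from email import policy

# Extract the main HTML part from an MHTML capture (a MIME multipart file).
with open("page.mhtml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

for part in msg.walk():
    if part.get_content_type() == "text/html":
        with open("page.html", "w", encoding="utf-8") as out:
            out.write(part.get_content())  # decoded to str by policy.default
        break  # the first text/html part is the page itself
```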


🌟 Evaluation Framework

We propose a novel evaluation protocol based on rendered webpages, quantifying model performance along two key dimensions: layout and style consistency.


RDA (Relative Layout Difference of Associated Elements)

Purpose: Measures relative layout differences between matched elements.

  • Element Association: Matches corresponding elements between generated and target pages using text similarity (LCS) and geometric distance.
  • Positional Deviation: The page is divided into a 3×3 grid and associated elements are compared cell by cell: if they fall in different grid cells, the score is 0; otherwise, a deviation-based score is computed (see the sketch after this list).
  • Uniqueness Weighting: Each element is weighted by its uniqueness (inverse group size), giving higher importance to distinctive components.
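
A minimal sketch of RDA's two building blocks as described above: LCS-based text similarity for element association, and the grid-cell positional score. The function names, the center-point grid mapping, and the 1 − normalized-deviation formula are illustrative assumptions, not the paper's exact definitions:

```python
def lcs_len(a: str, b: str) -> int:
    """Longest-common-subsequence length via 1-D dynamic programming."""
    dp = [0] * (len(b) + 1)
    for ch in a:
        prev = 0  # holds dp[j-1] from the previous row
        for j, bj in enumerate(b, 1):
            cur = dp[j]
            dp[j] = prev + 1 if ch == bj else max(dp[j], dp[j - 1])
            prev = cur
    return dp[-1]


def text_similarity(a: str, b: str) -> float:
    """LCS similarity in [0, 1], used to associate elements across pages."""
    return 2 * lcs_len(a, b) / (len(a) + len(b)) if (a or b) else 1.0


def grid_cell(x: float, y: float, w: float, h: float) -> tuple[int, int]:
    """Map an element's center point to its cell in a 3x3 page grid."""
    return min(int(3 * y / h), 2), min(int(3 * x / w), 2)


def positional_score(gen_xy, tgt_xy, page_size) -> float:
    """0 if matched elements land in different grid cells; otherwise
    1 minus the normalized Euclidean deviation (an assumed formula)."""
    w, h = page_size
    if grid_cell(*gen_xy, w, h) != grid_cell(*tgt_xy, w, h):
        return 0.0
    dx = (gen_xy[0] - tgt_xy[0]) / w
    dy = (gen_xy[1] - tgt_xy[1]) / h
    return 1.0 - min((dx ** 2 + dy ** 2) ** 0.5, 1.0)
```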

GDA (Group-wise Difference in Element Counts)

Purpose: Measures group-level alignment of axis-aligned elements.

  • Grouping: Elements aligned on the same horizontal or vertical axis are treated as one group.
  • Count Comparison: Compares whether corresponding groups in the generated and target pages contain the same number of elements (see the sketch after this list).
  • Uniqueness Weighting: Weighted by element uniqueness to emphasize key structural alignment.
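
Below is a simplified stand-in for the grouping and count comparison just described. The pixel tolerance and the omission of uniqueness weighting are deliberate simplifications for readability, not the paper's definition:

```python
from collections import Counter


def group_counts(elements, axis: str = "y", tol: int = 5) -> Counter:
    """Bucket elements whose bounding boxes align on one axis
    (within `tol` pixels); each bucket is one alignment group."""
    buckets = Counter()
    for el in elements:  # el: dict with 'x' and 'y' pixel coordinates
        buckets[round(el[axis] / tol)] += 1
    return buckets


def gda_score(gen_elements, tgt_elements) -> float:
    """Fraction of target groups whose element count the generated
    page reproduces exactly (unweighted, unlike the real metric)."""
    gen, tgt = group_counts(gen_elements), group_counts(tgt_elements)
    if not tgt:
        return 1.0
    matched = sum(1 for key, n in tgt.items() if gen.get(key) == n)
    return matched / len(tgt)
```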

SDA (Style Difference of Associated Elements)

Purpose: Evaluates fine-grained style differences between associated elements.

  • Multi-Dimensional Style Extraction: Measures differences in foreground color, background color, font size, and border radius.
  • Weighted Averaging: Computes a weighted mean of per-element style similarity scores across all associated elements to obtain the overall style score (see the sketch after this list).
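
As a rough illustration of the per-element comparison, the sketch below scores exact matches on colors and relative gaps on numeric properties; the property keys and comparison rules are assumptions, not the paper's exact definitions:

```python
def style_similarity(gen_style: dict, tgt_style: dict) -> float:
    """Similarity over the four style dimensions named above."""
    scores = []
    for prop in ("color", "background-color"):  # categorical: exact match
        scores.append(1.0 if gen_style[prop] == tgt_style[prop] else 0.0)
    for prop in ("font-size", "border-radius"):  # numeric: relative gap
        g, t = float(gen_style[prop]), float(tgt_style[prop])
        scores.append(1.0 - abs(g - t) / max(g, t, 1e-6))
    return sum(scores) / len(scores)


def sda_score(pairs, weights) -> float:
    """Weighted mean of per-element style similarities; `weights`
    would carry the uniqueness weighting described for RDA/GDA."""
    total = sum(weights)
    return sum(w * style_similarity(g, t)
               for (g, t), w in zip(pairs, weights)) / total
```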

⚙️ Installation Guide

Core Dependencies

Coming Soon


📊 Benchmark Workflow

Directory Structure

```
|- docs/                      # Documentation
|- scripts/                   # Evaluation scripts
|- web_render_test.jsonl      # Test set metadata
|- web_render_train.jsonl     # Training set metadata
|- test_webpages.zip          # Test set webpages
|- train_webpages.zip         # Training set webpages
|- test_screenshots.zip       # Test set screenshots
|- train_screenshots.zip      # Training set screenshots
```
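
Each `.jsonl` file holds one JSON record per sample. A minimal loader for a quick look at the metadata (the schema is not documented here, so the sketch prints the keys instead of assuming them):

```python
import json

# Peek at the test-set metadata: one JSON object per line.
with open("web_render_test.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f if line.strip()]

print(len(samples), "test samples")
print("fields:", sorted(samples[0].keys()))  # inspect the schema directly
```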

Obtain Datasets

  • Webpages
| File Name | Download Link (ModelScope) |
| --- | --- |
| train_webpages.7z.001 | Download |
| train_webpages.7z.002 | Download |
| train_webpages.7z.003 | Download |
| train_webpages.7z.004 | Download |
| train_webpages.7z.005 | Download |
| train_webpages.7z.006 | Download |
| train_webpages.7z.007 | Download |
| train_webpages.7z.008 | Download |
| train_webpages.7z.009 | Download |
| train_webpages.7z.010 | Download |
| train_webpages.7z.011 | Download |
| train_webpages.7z.012 | Download |
| train_webpages.7z.013 | Download |
| train_webpages.7z.014 | Download |
| train_webpages.7z.015 | Download |
| train_webpages.7z.016 | Download |
| train_webpages.7z.017 | Download |
| train_webpages.7z.018 | Download |
| train_webpages.7z.019 | Download |
| test_webpages.7z.001 | Download |
| test_webpages.7z.002 | Download |
| test_webpages.7z.003 | Download |
| test_webpages.7z.004 | Download |
| test_webpages.7z.005 | Download |
| test_webpages.7z.006 | Download |
| test_webpages.7z.007 | Download |
| test_webpages.7z.008 | Download |
| test_webpages.7z.009 | Download |
| test_webpages.7z.010 | Download |
| test_webpages.7z.011 | Download |
| test_webpages.7z.012 | Download |
| test_webpages.7z.013 | Download |
| test_webpages.7z.014 | Download |
| test_webpages.7z.015 | Download |
| test_webpages.7z.016 | Download |
| test_webpages.7z.017 | Download |
| test_webpages.7z.018 | Download |
  • Screenshots
| File Name | Download Link (ModelScope) |
| --- | --- |
| train_screenshots.7z.001 | Download |
| train_screenshots.7z.002 | Download |
| test_screenshots.7z.001 | Download |
| test_screenshots.7z.002 | Download |

Implementation Steps

  1. Data Preparation

    • Download the WebRenderBench dataset and extract webpage and screenshot archives.
    • Each pair consists of a real webpage (HTML + resources) and its rendered screenshot.
  2. Model Inference

    • Run inference with an engine such as vLLM or LMDeploy, and save the results to the designated directory.
  3. Evaluation

    • Run scripts/1_get_evaluation.py.
    • The script launches a web server to render both generated and target HTML.
    • WebDriver extracts DOM information and computes the RDA, GDA, and SDA scores (see the sketches after this list).
    • Results are saved under save_results/.
    • Final scores are aggregated via scripts/2_compute_alisa_scores.py.
  4. ALISA Training (Optional)

    • Use models/train_rl.py for reinforcement learning fine-tuning. (Coming Soon)
    • The computed evaluation scores serve as reward signals that optimize the policy model via methods such as GRPO (a reward sketch follows below).
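
For orientation, the sketches below show roughly what step 3's DOM extraction and step 4's reward shaping could look like. The headless-Chrome options, the localhost URL, and the equal reward weights are all assumptions, not the repository's actual code:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Step 3, roughly: render a page headlessly and collect element geometry,
# the raw input for the RDA/GDA/SDA computations.
opts = webdriver.ChromeOptions()
opts.add_argument("--headless=new")
driver = webdriver.Chrome(options=opts)
driver.get("http://localhost:8000/page.html")  # URL served by the eval script

boxes = []
for el in driver.find_elements(By.CSS_SELECTOR, "body *"):
    rect = el.rect  # {'x': ..., 'y': ..., 'width': ..., 'height': ...}
    if rect["width"] > 0 and rect["height"] > 0:
        boxes.append((el.tag_name, el.text[:40], rect))
driver.quit()


# Step 4, roughly: fold the three consistency scores into one scalar reward.
def alisa_reward(rda: float, gda: float, sda: float) -> float:
    return (rda + gda + sda) / 3.0  # equal weights are an assumption
```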

📈 Model Performance Insights

We evaluate 17 multimodal large language models of varying scales and architectures (both open- and closed-source).

  • Combined Scores of RDA, GDA, and SDA (%)


Key Findings:

  • Overall, larger models achieve higher consistency. GPT-4.1-mini and Qwen-VL-Plus perform best among closed-source models.
  • While most models perform reasonably on simple pages (Group Count < 50), RDA scores drop sharply as page complexity increases—precise layout alignment remains a major challenge.
  • After reinforcement learning via the ALISA framework, Qwen2.5-VL-7B shows substantial improvements across all complexity levels, even surpassing GPT-4.1-mini on simpler cases.

📅 Future Work

  • Release pretrained models fine-tuned with the ALISA framework
  • Expand dataset coverage to more industries and dynamic interaction patterns
  • Open-source the complete toolchain for data collection, cleaning, and evaluation

📜 License

The WebRenderBench dataset is released for research purposes only. All accompanying code will be published under the Apache License 2.0.

All webpages in the dataset are collected from publicly accessible enterprise portals. To protect privacy, all personal and sensitive information has been removed or modified.


📚 Citation

If you use our dataset or framework in your research, please cite the following paper:

```bibtex
@article{webrenderbench2025,
  title={WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning},
  author={Anonymous Author(s)},
  year={2025},
  journal={arXiv preprint}
}
```