Official implementation of A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs.
- Institute: Mohamed bin Zayed University of Artificial Intelligence
- Resources: [Paper] [Project Page] [Web2Code Dataset] [Croissant]
- [2024/7/25] The training code and checkpoints are released!
- [2024/6/27] The paper and project page are released!
To train the Web2Code model, refer to the Web2Code training instructions.
Explore our comprehensive benchmarks for evaluating webpage-related tasks.
Set up your environment, generate webpage screenshots, and run evaluations efficiently. Get started here: Webpage Code Generation Benchmark
Find clear instructions for setting up your environment, generating outputs, and running evaluations. Begin here: Webpage Understanding Benchmark
- LLaVA: the codebase we built upon. Thanks for their wonderful work.
- WebSRC, WebSight, Pix2Code: high-quality webpage and HTML-code-related datasets!
If you find our work helpful for your research, please consider giving us a star ⭐ and a citation 📝:
@article{web2code2024,
title={Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs},
author={Sukmin Yun and Haokun Lin and Rusiru Thushara and Mohammad Qazim Bhat and Yongxin Wang and Zutao Jiang and Mingkai Deng and Jinhong Wang and Tianhua Tao and Junbo Li and Haonan Li and Preslav Nakov and Timothy Baldwin and Zhengzhong Liu and Eric P. Xing and Xiaodan Liang and Zhiqiang Shen},
journal={arXiv preprint arXiv:2406.20098},
year={2024}
}
Usage and License Notices: The data is intended and licensed for research use only. The dataset is released under CC BY 4.0 (allowing only non-commercial use), and models trained on the dataset should not be used outside of research purposes.