CogAgent

📗 Chinese README (中文版)

💡 The official GitHub repository of CogAgent is located at CogVLM & CogAgent Official Repository. Please visit this repository for more information about CogAgent, including introductions, code, and model checkpoints.

CogVLM

📖 Paper: CogVLM: Visual Expert for Pretrained Language Models

CogVLM is a powerful open-source visual language model (VLM). CogVLM-17B has 10 billion visual parameters and 7 billion language parameters, supporting image understanding and multi-turn dialogue at a resolution of 490 × 490.

CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA, and TDIUC.

CogAgent

📖 Paper: CogAgent: A Visual Language Model for GUI Agents

CogAgent is an open-source visual language model built upon and improved from CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters, supporting image understanding at a resolution of 1120 × 1120. In addition to the capabilities of CogVLM, it further possesses agent capabilities for GUI images.

CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks, including VQAv2, OK-VQA, TextVQA, ST-VQA, ChartQA, InfoVQA, DocVQA, MM-Vet, and POPE. It significantly surpasses existing models on GUI operation datasets, including AITW and Mind2Web.
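For orientation, below is a minimal inference sketch using Hugging Face Transformers. The checkpoint id THUDM/cogagent-chat-hf, the lmsys/vicuna-7b-v1.5 tokenizer, and the build_conversation_input_ids helper are assumptions based on the public model cards, not a definitive quick start; please follow the official repository and the technical documentation linked below for the authoritative usage.

```python
# Minimal inference sketch (NOT the official quick start). The checkpoint id,
# tokenizer id, and build_conversation_input_ids helper below are assumptions
# taken from the public Hugging Face model cards; check the official repository
# for the exact API.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogagent-chat-hf",   # assumed checkpoint id
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,     # loads the custom CogAgent modeling code
).to("cuda").eval()

# CogAgent takes a high-resolution screenshot (up to 1120 x 1120) plus a text query.
image = Image.open("screenshot.png").convert("RGB")
inputs = model.build_conversation_input_ids(
    tokenizer,
    query="What steps are needed to search for CogAgent on this page?",
    history=[],
    images=[image],
)

# NOTE: the official checkpoint may require additional inputs (e.g. a separate
# high-resolution image tensor); follow the official model card if this differs.
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, dropping the prompt.
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

Quantized or CPU variants, agent-task prompt formats, and grounding outputs are covered in the technical documentation linked below.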

🌐 Web Demo for both CogVLM and CogAgent: this link

📔 For more detailed usage information, please refer to the CogAgent & CogVLM technical documentation (Chinese only).