Pinned Repositories
acl-style-files
Official style files for papers submitted to venues of the Association for Computational Linguistics
blink
The website for BLINK: Multimodal Large Language Models Can See but Not Perceive
BLINK_Benchmark
This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.org/abs/2404.12390 [ECCV 2024]
Commonsense-T2I
Code for Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? [COLM 2024]
CommonsenseT2I
Project page for the paper "CommonGen: Text-to-Image Generation that Requires Commonsense Reasoning: An Adversarial Challenge"
Data-Structure-CS-225
UIUC / CS 225 / Data Structures / Labs & MPs / 2017 Fall
EDL
Software-Design
UIUC / CS 126 / Software Design Studio / 2017 Fall
Statistical_Computing
UIUC / STAT 428 / Statistical Computing / HW / 2018 Fall
TARA
zeyofu's Repositories
zeyofu/BLINK_Benchmark
This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.org/abs/2404.12390 [ECCV 2024]
zeyofu/TARA
zeyofu/Commonsense-T2I
Code for Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense? [COLM 2024]
zeyofu/Data-Structure-CS-225
UIUC / CS 225 / Data Structures / Labs & MPs / 2017 Fall
zeyofu/EDL
zeyofu/Statistical_Computing
UIUC / STAT 428 / Statistical Computing / HW / 2018 Fall
zeyofu/Software-Design
UIUC / CS 126 / Software Design Studio / 2017 Fall
zeyofu/acl-style-files
Official style files for papers submitted to venues of the Association for Computational Linguistics
zeyofu/blink
The website for BLINK: Multimodal Large Language Models Can See but Not Perceive
zeyofu/CommonsenseT2I
Project page for the paper "CommonGen: Text-to-Image Generation that Requires Commonsense Reasoning: An Adversarial Challenge"
zeyofu/CS241-Lectures-SP20
zeyofu/illinois-cogcomp-nlp
CogComp's main NLP libraries
zeyofu/unsupervised_network_embedding_baselines
zeyofu/vqa-generate-then-select
zeyofu/zeyofu.github.io
Github Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
zeyofu/image_urls
zeyofu/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
zeyofu/test_website
zeyofu/VLMEvalKit
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting GPT-4V, Gemini, QwenVLPlus, 30+ HF models, and 15+ benchmarks