Welcome! 👋

My name is Yihong Chen, and I am a joint PhD student between Fundamental AI Research at Meta and UCL NLP. My research focuses on AI knowledge acquisition, specifically on how different AI systems can learn to represent and use concepts/symbols efficiently.

I am open to collaborations on topics related to embedding learning, link prediction, and language modeling. If you would like to get in touch, you can reach me by emailing yihong-chen AT outlook DOT com, poking me on Threads, or simply booking a Zoom meeting with me.

Looking for Some Inspiration?

💥 Sep 2023, our latest work Improving Language Plasticity via Pretraining with Active Forgetting was accepted at NeurIPS 2023!

💥 Sep 2023, I presented our latest work on forgetting at the IST-Unbabel seminar.

💥 Jul 2023, I presented our latest work on forgetting in language modelling at the ELLIS Unconference 2023. The slides are available here. Feel free to leave your comments.

💥 Jul 2023, discover the power of forgetting in language modelling! Our latest work, Improving Language Plasticity via Pretraining with Active Forgetting, shows how pretraining a language model with active forgetting helps it quickly learn new languages. You'll be amazed by the plasticity that pretraining with forgetting imbues in the model. Check it out :)
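If you're curious what the training loop roughly looks like, here is a toy sketch of the idea: the model body keeps training throughout, while the token embeddings are periodically re-initialised. All names, shapes, and the reset interval below are illustrative assumptions, not the paper's actual hyperparameters:

```python
import random

def pretrain_with_active_forgetting(num_steps, reset_every, vocab_size, dim, seed=0):
    """Toy sketch of active forgetting during pretraining.

    The transformer body accumulates updates for all `num_steps`, while the
    token-embedding table is re-initialised every `reset_every` steps, which
    encourages the body to learn language-agnostic machinery rather than
    over-relying on any one embedding layout.
    (Real training would apply gradient updates; here we only track the
    bookkeeping to show the schedule.)
    """
    rng = random.Random(seed)

    def fresh_embeddings():
        # Re-initialise the embedding table from scratch (illustrative init scale).
        return [[rng.gauss(0.0, 0.02) for _ in range(dim)] for _ in range(vocab_size)]

    embeddings = fresh_embeddings()
    body_steps_trained = 0  # the body is never reset
    num_resets = 0

    for step in range(1, num_steps + 1):
        body_steps_trained += 1
        # ... gradient update on body and embeddings would go here ...
        if step % reset_every == 0:
            embeddings = fresh_embeddings()  # forget: wipe the embeddings
            num_resets += 1

    return num_resets, body_steps_trained
```

After pretraining this way, adapting to a new language amounts to learning a fresh embedding table on top of the already-plastic body.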

💥 Nov 2022, our paper, ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective, will appear at NeurIPS 2022! If you're interested in understanding why factorisation-based models (FMs) can be viewed as a special class of GNNs, and in making them usable on new graphs, check it out!

💥 Jun 2022, if you're looking for a hands-on repo to start experimenting with link prediction, check out our repo ssl-relation-prediction. Simple code, easy to hack 🚀
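To give a flavour of what link prediction involves, here is a minimal sketch of scoring and ranking with DistMult, one common factorisation-based scorer (the repo itself centres on other models such as ComplEx; the function names and toy vectors below are my own illustrative choices, not the repo's API):

```python
def distmult_score(head, relation, tail):
    """DistMult scores a (head, relation, tail) triple as
    sum_i head_i * relation_i * tail_i; higher means more plausible."""
    return sum(h * r * t for h, r, t in zip(head, relation, tail))

def rank_true_tail(head, relation, entity_embs, true_tail_idx):
    """Rank the true tail among all candidate entities (1 = best).

    This mirrors the standard link-prediction evaluation: score every
    candidate tail, then count how many beat the true one.
    """
    scores = [distmult_score(head, relation, t) for t in entity_embs]
    true_score = scores[true_tail_idx]
    return 1 + sum(1 for s in scores if s > true_score)

# Toy example with hand-picked 2-d embeddings:
head, relation = [1.0, 1.0], [1.0, 1.0]
entities = [[2.0, 2.0], [1.0, 0.0], [0.0, 0.0]]
rank = rank_true_tail(head, relation, entities, true_tail_idx=0)  # entity 0 scores highest
```

In practice the embeddings come from training on observed triples, and ranks are aggregated into metrics like MRR and Hits@k.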