Pinned Repositories
ALBEF
Code for ALBEF: a new vision-language pre-training method
awesome-align
A neural word aligner based on multilingual BERT
awesome-mcp-servers
A collection of MCP servers.
HJQE
Human judgements for word-level quality estimation in machine translation
NMT
Attention-based NMT with Coverage and Context Gate
NMT_GAN
Generative adversarial nets for neural machine translation
openfst-tools
Automatically exported from code.google.com/p/openfst-tools
UNIT
unsupervised-NMT
Unsupervised neural machine translation; weight sharing; GAN
WeTS
A benchmark for the task of translation suggestion
ZhenYangIACAS's Repositories
ZhenYangIACAS/awesome-mcp-servers
A collection of MCP servers.
ZhenYangIACAS/cambrian
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
ZhenYangIACAS/Chinese-CLIP
Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
ZhenYangIACAS/CodeBERT
CodeBERT
ZhenYangIACAS/CognitiveKernel-Pro
Deep Research Agent CognitiveKernel-Pro from Tencent AI Lab. Paper: https://arxiv.org/pdf/2508.00414
ZhenYangIACAS/ControlNet
Let us control diffusion models!
ZhenYangIACAS/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
ZhenYangIACAS/FastChat
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
ZhenYangIACAS/gemma
Open weights LLM from Google DeepMind.
ZhenYangIACAS/generative-recommenders
Repository hosting code used to reproduce results in "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152, ICML'24).
ZhenYangIACAS/img2dataset
Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine.
ZhenYangIACAS/Llama-X
Open Academic Research on Improving LLaMA to SOTA LLM
ZhenYangIACAS/LLaVA
Large Language-and-Vision Assistant built towards multimodal GPT-4 level capabilities.
ZhenYangIACAS/LLM4POI
ZhenYangIACAS/MinerU
A high-quality, one-stop, open-source data extraction tool for converting PDF to Markdown and JSON.
ZhenYangIACAS/MiniCPM
MiniCPM-2B: An end-side LLM that outperforms Llama2-13B.
ZhenYangIACAS/NExT-GPT
Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model
ZhenYangIACAS/open-r1
Fully open reproduction of DeepSeek-R1
ZhenYangIACAS/Open-Sora-Plan
This project aims to reproduce Sora (OpenAI's T2V model), but we have only limited resources. We deeply hope the whole open-source community can contribute to this project.
ZhenYangIACAS/Resume-Matcher
Improve your resumes with Resume Matcher. Get insights and keyword suggestions, and tune your resumes to job descriptions.
ZhenYangIACAS/ROLL
An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models
ZhenYangIACAS/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
ZhenYangIACAS/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
ZhenYangIACAS/torchscale
Transformers at any scale
ZhenYangIACAS/trl
Train transformer language models with reinforcement learning.
ZhenYangIACAS/trpc
A multi-language, pluggable, high-performance RPC framework
ZhenYangIACAS/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
ZhenYangIACAS/VAR
[NeurIPS 2024 Oral][GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!
ZhenYangIACAS/Vary-toy
Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary)
ZhenYangIACAS/verl-agent
verl-agent is an extension of veRL designed for training LLM/VLM agents via RL. It is also the official code for the paper "Group-in-Group Policy Optimization for LLM Agent Training".