Pinned Repositories
better-sh
comments
Utterances comments for hero
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
IE-Datasets-Collections
A curated collection of Chinese and English information extraction datasets
IE-Papers-Reading
Information Extraction Papers Reading
Megatron-LM
Ongoing research training transformer models at scale
nudtbeamer
NUDT Beamer templates for thesis proposal and graduation defense presentations
SciCN
SciCN: A Scientific Dataset For Chinese Named Entity Recognition
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
yangjingla's Repositories
yangjingla/nudtbeamer
NUDT Beamer templates for thesis proposal and graduation defense presentations
yangjingla/IE-Datasets-Collections
A curated collection of Chinese and English information extraction datasets
yangjingla/IE-Papers-Reading
Information Extraction Papers Reading
yangjingla/SciCN
SciCN: A Scientific Dataset For Chinese Named Entity Recognition
yangjingla/better-sh
yangjingla/comments
Utterances comments for hero
yangjingla/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
yangjingla/LLaMA-Factory
Unify Efficient Fine-Tuning of 100+ LLMs
yangjingla/Math-Competition-Problem-Solving
Solutions to the exercises in Pu Heping's university mathematics competition textbook
yangjingla/Megatron-LM
Ongoing research training transformer models at scale
yangjingla/mybooklist
MOBI ebooks for Kindle
yangjingla/pytorch_lighting_template
yangjingla/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.
yangjingla/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
yangjingla/lm-evaluation-harness
A framework for few-shot evaluation of language models.
yangjingla/nextjs-notion-starter-kit
Deploy your own Notion-powered website in minutes with Next.js and Vercel.
yangjingla/unsloth
Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
yangjingla/yangjingla.github.io