zhpeng24's Stars
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
mli/paper-reading
Paragraph-by-paragraph close readings of classic and new deep learning papers
princeton-nlp/tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
julycoding/ChatGPT_principle_fine-tuning_code_paper
The initial version of this "ChatGPT resource library (principles / fine-tuning / code / papers)" comes from July's ChatGPT series on his CSDN blog, which has reached 500,000 views. Co-initiators: students of the July ChatGPT principles course. Officially released in early June.
OpenBMB/BMTrain
Efficient Training (including pre-training and fine-tuning) for Big Models
DayBreak-u/chineseocr_lite
Ultra-lightweight Chinese OCR with support for vertical text recognition and ncnn, mnn, and tnn inference (dbnet (1.8M) + crnn (2.5M) + anglenet (378KB)); total model size is only 4.7M
terrifyzhao/spo_extract
SPO triple extraction based on transformers
KaiyuanGao/NLP-RoadMap
📚 Learning roadmaps for machine learning, deep learning, and natural language processing, plus AI learning resources and tools