minghsuanwu
Writing code is like building with Lego: choose the bricks you need and put them together. Sometimes you do it by yourself; sometimes you work with other people.
Taipei
Pinned Repositories
ACL-duconv
Forked from Baidu's DuConv (ACL).
agibot_x1_hardware
The hardware design for AgiBot X1.
agibot_x1_infer
The inference module for AgiBot X1.
agibot_x1_train
The reinforcement learning training code for AgiBot X1.
albert
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
albert-chinese-ner
Chinese NER with the pretrained language model ALBERT.
albert_zh
A Lite BERT for Self-supervised Learning of Language Representations; large-scale Chinese pretrained ALBERT models.
DolphinGen
The name combines "dolphin" and "generative language model," and suggests a model that, like a dolphin, is smart, curious, and agile in creating new content and language.
NLPer-Interview
A collection of interview questions for NLP algorithm engineers.
minghsuanwu's Repositories
minghsuanwu/libphonenumber
Google's common Java, C++ and JavaScript library for parsing, formatting, and validating international phone numbers.
minghsuanwu/ckip-transformers
CKIP Transformers
minghsuanwu/KnowledgeGraphData
The largest open-source Chinese knowledge graph to date: 140 million entries, available for download.
minghsuanwu/LeetCode-Py
⛽️ "Algorithm Guidebook" (算法通关手册): a highly detailed tutorial on algorithm and data structure fundamentals, with detailed solutions to 700+ LeetCode problems. Combines algorithm theory with hands-on coding practice to take you from zero basics to full mastery of algorithms.
minghsuanwu/lightseq
LightSeq: A High Performance Library for Sequence Processing and Generation
minghsuanwu/ColossalAI
Making big AI models cheaper, easier, and scalable
minghsuanwu/petals
🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
minghsuanwu/nebullvm
Plug-and-play modules to optimize the performance of your AI systems 🚀
minghsuanwu/whisper.cpp
Port of OpenAI's Whisper model in C/C++
minghsuanwu/trlx
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF).
minghsuanwu/llama.cpp
Port of Facebook's LLaMA model in C/C++
minghsuanwu/OpenChatKit
minghsuanwu/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
minghsuanwu/trl
Train transformer language models with reinforcement learning.
minghsuanwu/chatglm_finetuning
Fine-tuning the ChatGLM-6B large language model.
minghsuanwu/onnxmltools
ONNXMLTools enables conversion of models to ONNX
minghsuanwu/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
minghsuanwu/BELLE
BELLE: Bloom-Enhanced Large Language model Engine (an open-source Chinese conversational large model with 7 billion parameters).
minghsuanwu/traditional-chinese-alpaca
A Traditional-Chinese instruction-following model with datasets based on Alpaca.
minghsuanwu/Chinese-alpaca-lora
Luotuo (骆驼): a Chinese instruction-finetuned LLaMA. Developed by 陈启源 @ Central China Normal University, 李鲁鲁 @ SenseTime, and 冷子昂 @ SenseTime.
minghsuanwu/modelzoo
minghsuanwu/ChatGLM-MNN
Pure C++, easy-to-deploy ChatGLM-6B.
minghsuanwu/onnx
Open standard for machine learning interoperability
minghsuanwu/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
minghsuanwu/the-algorithm
Source code for Twitter's Recommendation Algorithm
minghsuanwu/baize
Baize is an open-source chatbot trained on ChatGPT self-chat data, developed by researchers at UCSD and Sun Yat-sen University.
minghsuanwu/CamelBell-Chinese-LoRA
CamelBell (驼铃) is a Chinese language-tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-source Chinese LLM project created by 冷子昂 @ SenseTime, 陈启源 @ Central China Normal University, and 李鲁鲁 @ SenseTime.
minghsuanwu/awesome-chatgpt-prompts
A curated collection of ChatGPT prompts for getting better results from ChatGPT.
minghsuanwu/Chinese-Vicuna
Chinese-Vicuna: a Chinese instruction-following LLaMA-based model; a low-resource Chinese llama+lora approach with a structure modeled on alpaca.
minghsuanwu/Alpaca-CoT
We extend CoT data to Alpaca to boost its reasoning ability. We are constantly expanding our collection of instruction-tuning data and integrating more LLMs for easy use, with the goal of building a general-purpose LLM-IFT platform.