Pinned Repositories
60_Days_RL_Challenge
Learn Deep Reinforcement Learning in Depth in 60 days
AIRIS_Public
AIRIS Public Release
ALANN2018
Adaptive Logic and Neural Network (ALANN) version of NARS style General Machine Intelligence (GMI)
alpaca-lora
Instruct-tune LLaMA on consumer hardware
ANSNA
Adaptive Neuro-Symbolic Network Agent
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
awesome-artificial-general-intelligence
Resources about Artificial General Intelligence
ChatYuan
ChatYuan: Large Language Model for Dialogue in Chinese and English
FindTheChatGPTer
A roundup of open alternatives to ChatGPT
non-axiomatic-reasoner
A Non-Axiomatic Inference System built on Holochain.
ShawnHowell's Repositories
ShawnHowell/FindTheChatGPTer
A roundup of open alternatives to ChatGPT
ShawnHowell/alpaca-lora
Instruct-tune LLaMA on consumer hardware
ShawnHowell/Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
ShawnHowell/babyagi
ShawnHowell/BELLE
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue LLM)
ShawnHowell/ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
ShawnHowell/Chinese-alpaca-lora
Luotuo (骆驼): A Chinese instruction-finetuned LLaMA. Developed by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子昂 @ SenseTime
ShawnHowell/Chinese-ChatLLaMA
Chinese LLaMA base model; Chinese ChatLLaMA dialogue model; NLP pre-training and instruction-tuning datasets
ShawnHowell/Chinese-LangChain
A Chinese LangChain project | 小必应, Q.Talk, 强聊, QiangTalk
ShawnHowell/Chinese-Vicuna
Chinese-Vicuna: A Chinese instruction-following LLaMA-based model; a low-resource Chinese LLaMA + LoRA approach, with a structure modeled on Alpaca
ShawnHowell/ColossalAI
Making large AI models cheaper, faster and more accessible
ShawnHowell/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
ShawnHowell/dolly
Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
ShawnHowell/EasyLM
Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
ShawnHowell/FastChat
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
ShawnHowell/GLM-130B
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
ShawnHowell/GPT-4-LLM
ShawnHowell/GPTQ-for-LLaMa
4 bits quantization of LLaMA using GPTQ
ShawnHowell/langtorch
Building composable LLM applications with Java / JVM.
ShawnHowell/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Model for All.
ShawnHowell/Megatron-LM
Ongoing research training transformer models at scale
ShawnHowell/nebullvm
Plug-and-play modules to optimize the performance of your AI systems 🚀
ShawnHowell/Open-Llama
The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.
ShawnHowell/open_flamingo
An open-source framework for training large multimodal models.
ShawnHowell/Plan4MC
Reinforcement learning and planning for Minecraft.
ShawnHowell/StableLM
StableLM: Stability AI Language Models
ShawnHowell/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
ShawnHowell/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
ShawnHowell/trlx
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF)
ShawnHowell/unit-minions
"AI R&D Efficiency Research: Train Your Own LoRA", covering LoRA training for Llama (Alpaca LoRA) and ChatGLM (ChatGLM Tuning). Training tasks: user story generation, test code generation, code-assisted generation, text-to-SQL, text-to-code, and more.