sepilqi
We are Anonymous. We are legion. We do not forgive. We do not forget. Expect us.
sepilqi's Stars
microsoft/autogen
A programming framework for agentic AI 🤖. PyPI: autogen-agentchat. Discord: https://aka.ms/autogen-discord. Office Hour: https://aka.ms/autogen-officehour
huggingface/trl
Train transformer language models with reinforcement learning.
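As a quick taste of the library, here is a minimal supervised fine-tuning sketch; the model and dataset names are only examples, and keyword arguments have shifted between TRL releases, so check the installed version's docs rather than treating this as the canonical quickstart.

```python
# Minimal SFT sketch with TRL's SFTTrainer. Model/dataset names are examples;
# keyword arguments vary across TRL releases, so treat this as a sketch only.
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")  # any dataset with a "text" column

trainer = SFTTrainer(
    model="facebook/opt-350m",  # example base model
    train_dataset=dataset,
)
trainer.train()
```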
vitalets/github-trending-repos
Track GitHub trending repositories in your favorite programming language by native GitHub notifications!
mit-han-lab/streaming-llm
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks
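The core idea behind attention sinks can be sketched without the library: keep the first few tokens (the "sinks") plus a sliding window of recent tokens in the KV cache and evict everything in between. A minimal, framework-free sketch of that eviction policy (class and method names are made up for illustration, not the repo's API):

```python
# Conceptual sketch of the StreamingLLM cache policy: retain a few initial
# "attention sink" tokens plus a sliding window of recent tokens.
# Names are illustrative; this is not the repo's actual API.
class SinkCache:
    def __init__(self, num_sinks: int = 4, window: int = 1024):
        self.num_sinks = num_sinks
        self.window = window
        self.keys: list = []    # one entry per cached token (e.g. per-layer K tensors)
        self.values: list = []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)
        # Evict the middle: keep the sinks plus the most recent `window` tokens.
        if len(self.keys) > self.num_sinks + self.window:
            self.keys = self.keys[: self.num_sinks] + self.keys[-self.window :]
            self.values = self.values[: self.num_sinks] + self.values[-self.window :]

# Usage: cache = SinkCache(); call cache.append(k, v) for each generated token.
```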
bigcode-project/bigcode-evaluation-harness
A framework for the evaluation of autoregressive code generation language models.
noahshinn/reflexion
[NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning
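Reflexion's loop is simple to sketch: the agent attempts a task, an evaluator scores the attempt, and on failure the agent writes a short verbal reflection that is fed into the next attempt. A minimal sketch with placeholder callables (`act`, `evaluate`, `reflect` stand in for LLM and environment calls; none of this is the repo's actual API):

```python
# Minimal Reflexion-style loop. `act`, `evaluate`, and `reflect` are placeholder
# callables standing in for LLM / environment calls, not the repo's API.
from typing import Callable, List

def reflexion_loop(
    task: str,
    act: Callable[[str, List[str]], str],    # task + past reflections -> attempt
    evaluate: Callable[[str, str], bool],    # task + attempt -> success?
    reflect: Callable[[str, str], str],      # task + failed attempt -> reflection
    max_trials: int = 3,
) -> str:
    reflections: List[str] = []              # episodic "verbal" memory
    attempt = ""
    for _ in range(max_trials):
        attempt = act(task, reflections)
        if evaluate(task, attempt):
            return attempt
        reflections.append(reflect(task, attempt))
    return attempt                           # best effort after max_trials
```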
nickrosh/evol-teacher
Open Source WizardCoder Dataset
PeiQi0/PeiQi-WIKI-Book
A knowledge library for cybersecurity practitioners 🍃
joonspk-research/generative_agents
Generative Agents: Interactive Simulacra of Human Behavior
immersive-translate/immersive-translate
Immersive Dual Web Page Translation Extension - bilingual web page translation with support for input-box translation, mouse-hover translation, and translation of PDF, EPUB, subtitle, and TXT files
geekan/HowToLiveLonger
A programmer's guide to living longer
Hannibal046/Awesome-LLM
Awesome-LLM: a curated list of Large Language Model resources
TransformerOptimus/SuperAGI
<⚡️> SuperAGI - A dev-first, open-source autonomous AI agent framework that enables developers to build, manage & run useful autonomous agents quickly and reliably.
tunib-ai/oslo
OSLO: Open Source framework for Large-scale model Optimization
EleutherAI/gpt-neox
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
princeton-nlp/tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
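Tree of Thoughts generalizes chain-of-thought by branching: at each step the model proposes several candidate "thoughts", a value function scores the resulting partial solutions, and only the top-scoring states are expanded further. A minimal sketch of the BFS variant with placeholder `propose` and `score` callables (illustrative only, not the repo's API):

```python
# Conceptual BFS variant of Tree of Thoughts. `propose` and `score` are
# placeholders for LLM calls; names and structure are illustrative only.
from typing import Callable, List

def tot_bfs(
    problem: str,
    propose: Callable[[str, str], List[str]],  # problem + partial solution -> next thoughts
    score: Callable[[str, str], float],        # problem + partial solution -> value estimate
    depth: int = 3,
    breadth: int = 5,
) -> str:
    frontier = [""]                            # partial solutions kept at each level
    for _ in range(depth):
        candidates = [
            state + thought
            for state in frontier
            for thought in propose(problem, state)
        ]
        candidates.sort(key=lambda s: score(problem, s), reverse=True)
        frontier = candidates[:breadth]        # keep only the top-`breadth` states
    return frontier[0] if frontier else ""
```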
PlexPt/awesome-chatgpt-prompts-zh
A Chinese guide to prompting ChatGPT: usage guides for various scenarios, and how to get it to do what you want.
THUDM/ChatGLM2-6B
ChatGLM2-6B: an open-source bilingual chat LLM
Victorwz/LongMem
Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
salesforce/CodeTF
CodeTF: One-stop Transformer Library for State-of-the-art Code LLMs
SUSYUSTC/MathTranslate
Translate scientific papers written in LaTeX, especially arXiv papers
mosaicml/llm-foundry
LLM training code for Databricks foundation models
NVIDIA/FasterTransformer
Transformer-related optimization, including BERT and GPT
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
Witiko/scm-at-arqmath3
The Soft Cosine Measure system developed for the ARQMath-3 shared task evaluation of math information retrieval systems
nlpxucan/WizardLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, and WizardMath
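Evol-Instruct grows a dataset by repeatedly asking an LLM to rewrite existing instructions into harder, more constrained variants. A minimal sketch with a placeholder `llm` callable (the prompt wording is illustrative, not the project's actual prompt):

```python
# Minimal Evol-Instruct-style loop: evolve each seed instruction a few times
# by prompting an LLM to make it more complex. `llm` is a placeholder callable.
from typing import Callable, List

EVOLVE_PROMPT = (
    "Rewrite the following instruction so it is more complex, e.g. by adding "
    "constraints or requiring deeper reasoning, while keeping it answerable:\n\n"
    "{instruction}"
)

def evol_instruct(seeds: List[str], llm: Callable[[str], str], rounds: int = 3) -> List[str]:
    evolved = list(seeds)
    frontier = list(seeds)
    for _ in range(rounds):
        frontier = [llm(EVOLVE_PROMPT.format(instruction=ins)) for ins in frontier]
        evolved.extend(frontier)
    return evolved
```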
sangmichaelxie/doremi
PyTorch implementation of DoReMi, a method for optimizing data mixture weights in language-modeling datasets
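DoReMi's core update is a multiplicative-weights step over domains: domains where a small proxy model's loss exceeds a reference model's loss get their mixture weight boosted, then the weights are renormalized and smoothed toward uniform. A tiny NumPy sketch of that update (variable names are mine, not the repo's; see the paper for the exact recipe):

```python
# Sketch of DoReMi's exponentiated-gradient domain-weight update.
# Variable names are illustrative; consult the repo/paper for details.
import numpy as np

def update_domain_weights(weights, proxy_loss, ref_loss, lr=1.0, smoothing=1e-3):
    """weights, proxy_loss, ref_loss: arrays of shape (num_domains,)."""
    excess = np.maximum(proxy_loss - ref_loss, 0.0)   # clipped excess loss per domain
    weights = weights * np.exp(lr * excess)           # boost under-trained domains
    weights = weights / weights.sum()                 # renormalize to a distribution
    uniform = np.ones_like(weights) / len(weights)
    return (1 - smoothing) * weights + smoothing * uniform  # smooth toward uniform

# Example: three domains, the proxy model lags the reference on domain 2.
w = update_domain_weights(np.array([1/3, 1/3, 1/3]),
                          proxy_loss=np.array([2.0, 2.1, 2.6]),
                          ref_loss=np.array([2.0, 2.0, 2.2]))
```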
sahil280114/codealpaca
salesforce/CodeGen
CodeGen is a family of open-source models for program synthesis, trained on TPU-v4 and competitive with OpenAI Codex.
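The released checkpoints load directly from the Hugging Face Hub via transformers; a minimal completion sketch (the checkpoint choice is just an example):

```python
# Generate a code completion with a small CodeGen checkpoint via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```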
s0md3v/roop
One-click face swap