noanti's Stars
LAION-AI/Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
acheong08/ChatGPT
Reverse engineered ChatGPT API
AvaloniaUI/Avalonia
Develop Desktop, Embedded, Mobile and WebAssembly apps with C# and XAML. The most popular .NET UI client technology
borisdayma/dalle-mini
DALL·E Mini - Generate images from a text prompt
huggingface/trl
Train transformer language models with reinforcement learning.
NVIDIA/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
nebuly-ai/nebuly
The user analytics platform for LLMs
lucidrains/PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
yangjianxin1/Firefly
Firefly: a training tool for large language models, supporting training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
CompVis/taming-transformers
Taming Transformers for High-Resolution Image Synthesis
tencent-ailab/IP-Adapter
The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images conditioned on an image prompt.
euske/pdfminer
Python PDF Parser (Not actively maintained). Check out pdfminer.six.
CarperAI/trlx
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
attardi/wikiextractor
A tool for extracting plain text from Wikipedia dumps
google-research/t5x
qwj/python-proxy
HTTP/HTTP2/HTTP3/Socks4/Socks5/Shadowsocks/ShadowsocksR/SSH/Redirect/Pf TCP/UDP asynchronous tunnel proxy implemented in Python 3 asyncio.
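The core idea behind an asyncio tunnel proxy like python-proxy can be sketched in a few lines: accept a client connection, open a connection to the destination, and pump bytes in both directions concurrently. This is a minimal illustrative sketch, not python-proxy's actual code, and it omits the protocol handshakes (SOCKS, Shadowsocks, etc.) that the real project implements.

```python
import asyncio

async def pipe(reader, writer):
    """Copy bytes from reader to writer until EOF, then close the writer."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()  # respect backpressure from the peer
    finally:
        writer.close()

async def handle(client_reader, client_writer, dest_host, dest_port):
    """Tunnel one client connection to a fixed destination."""
    remote_reader, remote_writer = await asyncio.open_connection(dest_host, dest_port)
    # Run both directions concurrently; each pipe ends on its own EOF.
    await asyncio.gather(
        pipe(client_reader, remote_writer),
        pipe(remote_reader, client_writer),
    )

async def run_proxy(listen_port, dest_host, dest_port):
    server = await asyncio.start_server(
        lambda r, w: handle(r, w, dest_host, dest_port),
        "127.0.0.1", listen_port)
    async with server:
        await server.serve_forever()
```

A real proxy would parse the client's handshake to pick the destination per connection; here the destination is fixed for simplicity.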
microsoft/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
erikrose/parsimonious
The fastest pure-Python PEG parser I can muster
THUDM/CogView
Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
codeplea/tinyexpr
tiny recursive descent expression parser, compiler, and evaluation engine for math expressions
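The recursive-descent technique tinyexpr uses (in C) can be sketched in Python: each grammar rule becomes a function, and operator precedence falls out of the call structure, with `expr` delegating to the higher-precedence `term`, which delegates to `factor`. This is an illustrative sketch of the parsing technique, not tinyexpr's actual grammar or API.

```python
import re

# Matches either a number or any single non-space character (an operator/paren).
TOKEN = re.compile(r"\s*(?:(\d+\.?\d*)|(.))")

def tokenize(src):
    for num, op in TOKEN.findall(src):
        if num:
            yield float(num)
        elif op.strip():
            yield op

class Parser:
    def __init__(self, src):
        self.tokens = list(tokenize(src))
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):            # expr := term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ('+', '-'):
            if self.next() == '+':
                value += self.term()
            else:
                value -= self.term()
        return value

    def term(self):            # term := factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ('*', '/'):
            if self.next() == '*':
                value *= self.factor()
            else:
                value /= self.factor()
        return value

    def factor(self):          # factor := NUMBER | '(' expr ')'
        tok = self.next()
        if tok == '(':
            value = self.expr()
            self.next()        # consume the closing ')'
            return value
        return tok             # a number token

def evaluate(src):
    return Parser(src).expr()
```

Because `expr` calls `term` for its operands, `2+3*4` evaluates the multiplication first; parentheses recurse back into `expr`.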
CStanKonrad/long_llama
LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.
baidu/DuReader
Baseline Systems of DuReader Dataset
THUDM/CogView2
Official code repo for the paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"
PaddlePaddle/RocketQA
🚀 RocketQA, dense retrieval for information retrieval and question answering, including both Chinese and English state-of-the-art models.
kakaobrain/mindall-e
PyTorch implementation of a 1.3B text-to-image generation model trained on 14 million image-text pairs
huggingface/transformers-bloom-inference
Fast Inference Solutions for BLOOM
thuanz123/enhancing-transformers
An unofficial implementation of both ViT-VQGAN and RQ-VAE in PyTorch
robvanvolt/DALLE-models
A collection of checkpoints for DALLE-pytorch models, from which you can continue training or start generating images.
joanrod/ocr-vqgan
OCR-VQGAN, a discrete image encoder (tokenizer and detokenizer) for figure images in the Paper2Fig100k dataset. Implements an OCR perceptual loss for clear text-within-image generation. Forked from VQGAN in CompVis/taming-transformers.
DACUS1995/pytorch-mmap-dataset
A custom pytorch Dataset extension that provides a faster iteration and better RAM usage
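The memory-mapping idea behind pytorch-mmap-dataset can be shown with only the standard library: rather than loading a dataset into RAM, map the file into memory and slice fixed-size records on demand, letting the OS page data in lazily. The class name and record layout below are illustrative assumptions, not the repo's actual API (which subclasses `torch.utils.data.Dataset`).

```python
import mmap
import struct

class MmapDataset:
    """Reads fixed-size float64 vectors from a binary file via mmap."""

    def __init__(self, path, record_dim):
        self.record_dim = record_dim
        self.record_size = record_dim * 8          # float64 = 8 bytes each
        self._file = open(path, "rb")
        # Length 0 maps the whole file; ACCESS_READ keeps it read-only.
        self._mm = mmap.mmap(self._file.fileno(), 0, access=mmap.ACCESS_READ)

    def __len__(self):
        return len(self._mm) // self.record_size

    def __getitem__(self, idx):
        if not 0 <= idx < len(self):
            raise IndexError(idx)
        start = idx * self.record_size
        raw = self._mm[start:start + self.record_size]
        # Only this record's pages are touched; the rest stays on disk.
        return struct.unpack(f"<{self.record_dim}d", raw)

    def close(self):
        self._mm.close()
        self._file.close()
```

In a PyTorch setting, `__getitem__` would return a tensor instead of a tuple, but the constant-RAM access pattern is the same.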