yesdtrx's Stars
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
xtekky/gpt4free
The official gpt4free repository | a collection of powerful language models
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
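The repository's notebooks are the authoritative reference; as a hedged illustration only, a point-prompted inference call might look roughly like the sketch below (the checkpoint filename, image, and point coordinates are placeholders, not values taken from the repository).

```python
# Minimal sketch of point-prompted SAM inference, assuming a downloaded
# ViT-H checkpoint and an RGB image as a NumPy array (placeholders here).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real RGB image
predictor.set_image(image)

# One foreground point prompt (x, y); label 1 marks it as foreground.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks with quality scores
)
print(masks.shape, scores)
```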
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
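As a hedged illustration of the library's training entry point, the sketch below wraps a toy PyTorch model with DeepSpeed's engine; the model and config values are illustrative placeholders, not settings recommended by the project, and a real run is normally started with the `deepspeed` launcher.

```python
# Minimal sketch: wrap a model with DeepSpeed's engine (placeholder config).
import torch
import deepspeed

model = torch.nn.Linear(128, 10)
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": False},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}

# deepspeed.initialize returns an engine that manages data parallelism,
# optional ZeRO partitioning, and mixed precision according to the config.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 128).to(model_engine.device)
loss = model_engine(x).sum()
model_engine.backward(loss)  # engine-managed backward pass
model_engine.step()          # optimizer step and gradient zeroing
```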
Vision-CAIR/MiniGPT-4
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
ymcui/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
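As a hedged illustration, the sketch below attaches a LoRA adapter to a Hugging Face causal language model with PEFT; the model name and hyperparameters are placeholders chosen for the example, not recommendations from the library.

```python
# Minimal sketch of parameter-efficient fine-tuning with a LoRA adapter.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works here
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # low-rank dimension of the adapter matrices
    lora_alpha=16,   # scaling factor applied to the adapter output
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```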
Stability-AI/StableLM
StableLM: Stability AI Language Models
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
geekyutao/Inpaint-Anything
Inpaint anything using Segment Anything and inpainting models.
gaomingqi/Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
open-mmlab/mmcv
OpenMMLab Computer Vision Foundation
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
SysCV/sam-hq
Segment Anything in High Quality [NeurIPS 2023]
promptslab/Promptify
Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output. Join our Discord for prompt engineering, LLMs, and other recent research
facebookresearch/ijepa
Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture. First outlined in the CVPR paper, "Self-supervised learning from images with a joint-embedding predictive architecture."
DAMO-NLP-SG/Video-LLaMA
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
qianqianwang68/omnimotion
imaurer/awesome-llm-json
Resource list for generating JSON using LLMs via function calling, tools, or CFG (context-free grammar). Libraries, models, notebooks, etc.
Timothyxxx/Chain-of-ThoughtsPapers
A trend that started with "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models".
google-research/FLAN
PKU-Alignment/safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
ali-vilab/videocomposer
Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability
XanaduAI/quantum-neural-networks
This repository contains the source code used to produce the results presented in the paper "Continuous-variable quantum neural networks". Due to subsequent interface upgrades, these scripts will work only with Strawberry Fields version <= 0.10.0.
qigitphannover/DeepQuantumNeuralNetworks
booydar/LM-RMT
Recurrent Memory Transformer
yh08037/quantum-neural-network
Qiskit Hackathon Korea 2021 Community Choice Award winner: exploring hybrid quantum-classical neural networks with PyTorch and Qiskit
yesdtrx/AATAE