alfredcs's Stars
facebookresearch/llama
Inference code for LLaMA models
karpathy/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
microsoft/TaskMatrix
huggingface/pytorch-image-models
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
ultralytics/ultralytics
Ultralytics YOLO11 🚀
openai/chatgpt-retrieval-plugin
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
yoheinakajima/babyagi
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
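PEFT's best-known method is LoRA, which learns a low-rank update to a frozen weight matrix instead of updating it directly. A minimal plain-Python sketch of that idea (illustrative names only, not PEFT's actual API): the effective weight is W + (alpha / r) * (B @ A), with rank r much smaller than the matrix dimensions.

```python
# Sketch of the LoRA idea behind PEFT: keep W (d x k) frozen and learn a
# low-rank update B (d x r) @ A (r x k), so the merged weight is
# W + (alpha / r) * (B @ A). Plain lists of lists; names are illustrative.

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_merge(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the merged LoRA weight."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Tiny example: d = k = 2, rank r = 1, so B @ A has only d + k parameters.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # d x r
A = [[0.5, 0.5]]     # r x k
merged = lora_merge(W, A, B, alpha=1.0, r=1)  # [[1.5, 0.5], [1.0, 2.0]]
```

The appeal on consumer hardware is that only B and A receive gradients, so optimizer state shrinks from d*k parameters to r*(d + k).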
nebuly-ai/nebuly
The user analytics platform for LLMs
facebookresearch/ImageBind
ImageBind: One Embedding Space To Bind Them All
huggingface/accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
huggingface/chat-ui
Open source codebase powering the HuggingChat app
TimDettmers/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
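The core of the k-bit schemes bitsandbytes implements (in optimized CUDA kernels) can be sketched with absmax 8-bit quantization in plain Python; this is an illustration of the technique, not the library's actual API.

```python
# Absmax 8-bit quantization sketch: scale floats so the largest magnitude
# maps to 127, round to int8 codes, and keep the scale for dequantization.

def quantize_absmax(xs):
    """Map floats to int8 codes in [-127, 127] using the absolute maximum."""
    absmax = max(abs(x) for x in xs) or 1.0
    scale = 127.0 / absmax
    return [round(x * scale) for x in xs], absmax

def dequantize_absmax(qs, absmax):
    """Recover approximate floats from int8 codes and the stored absmax."""
    return [q * absmax / 127.0 for q in qs]

xs = [0.1, -0.5, 2.0]
qs, absmax = quantize_absmax(xs)
approx = dequantize_absmax(qs, absmax)  # close to xs, within one step of absmax/127
```

Storing one int8 code per weight plus one float scale per block is what cuts memory roughly 4x versus fp32; the rounding error is bounded by half a quantization step, absmax / 254.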
amazon-science/mm-cot
Official implementation of "Multimodal Chain-of-Thought Reasoning in Language Models" (more updates to come; stay tuned)
MCG-NJU/VideoMAE
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
fahadshamshad/awesome-transformers-in-medical-imaging
A collection of resources on applications of Transformers in Medical Imaging.
timojl/clipseg
This repository contains the code of the CVPR 2022 paper "Image Segmentation Using Text and Image Prompts".
YehLi/xmodaler
X-modaler is a versatile, high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
aws/deep-learning-containers
AWS Deep Learning Containers are pre-built Docker images that make it easier to run popular deep learning frameworks and tools on AWS.
qunash/stable-diffusion-2-gui
Lightweight Stable Diffusion v2.1 web UI: txt2img, img2img, depth2img, inpainting, and 4x upscaling.
kbressem/medAlpaca
An LLM finetuned for medical question answering
RizwanMunawar/yolov8-object-tracking
YOLOv8 Object Tracking Using PyTorch, OpenCV and Ultralytics
aws-samples/sagemaker-ssh-helper
A helper library for connecting to Amazon SageMaker via AWS Systems Manager and SSH (Secure Shell)
jackaduma/Alpaca-LoRA-RLHF-PyTorch
A full pipeline for finetuning the Alpaca LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Alpaca architecture. Essentially ChatGPT, but with Alpaca.
aws-samples/amazon-sagemaker-studio-vpc-networkfirewall
This solution demonstrates the setup and deployment of Amazon SageMaker Studio into a private VPC with multi-layer security controls, such as data encryption, network traffic monitoring and restriction, VPC endpoints, subnets, security groups, and IAM resource policies.
aws-samples/sagemaker-ground-truth-label-training-data
alfredcs/distributed-training
alfredcs/CVWorkshop17
For CV workshop #17