JH-GEECS's Stars
meta-llama/llama
Inference code for Llama models
ray-project/ray
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
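A minimal sketch of Ray's core task API (assuming the standard `ray` Python package; the function and values below are illustrative placeholders, not taken from this list):

```python
import ray

ray.init()  # start a local Ray runtime; connects to a cluster if one is configured

@ray.remote
def square(x):
    # each call runs as an independent task scheduled by Ray's distributed runtime
    return x * x

# launch tasks in parallel and gather the results
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```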
meta-llama/llama3
The official Meta Llama 3 GitHub site
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Dao-AILab/flash-attention
Fast and memory-efficient exact attention
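A hedged sketch of how the package's functional entry point is typically called (shapes and the `causal` flag are illustrative; requires a CUDA GPU and fp16/bf16 tensors):

```python
import torch
from flash_attn import flash_attn_func  # functional API of the flash-attn package

# FlashAttention expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16 on CUDA
batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# exact attention computed without materializing the full seqlen x seqlen score matrix
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```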
NVIDIA/DeepLearningExamples
State-of-the-art deep learning scripts organized by model - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
facebookresearch/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
facebookresearch/ImageBind
ImageBind: One Embedding Space to Bind Them All
LiheYoung/Depth-Anything
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
rom1504/img2dataset
Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20 hours on one machine.
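A minimal sketch of the `img2dataset` Python entry point (the input file name `urls.txt` and the parameter values are placeholders chosen for illustration):

```python
from img2dataset import download

# download, resize, and package the images listed in a URL file into webdataset shards
download(
    url_list="urls.txt",        # hypothetical input file: one image URL per line
    output_folder="my_dataset",
    output_format="webdataset",
    image_size=256,
    thread_count=64,
)
```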
cvxgrp/cvxpylayers
Differentiable convex optimization layers
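A short sketch of building a differentiable convex layer with the PyTorch backend (close in spirit to the repository's own example; the problem data here are random placeholders):

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# a small least-absolute-deviations problem whose solution is differentiable w.r.t. A and b
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.pnorm(A @ x - b, p=1)), [x >= 0])

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])
A_t = torch.randn(m, n, requires_grad=True)
b_t = torch.randn(m, requires_grad=True)

# forward pass solves the convex problem; backward differentiates through the solution map
solution, = layer(A_t, b_t)
solution.sum().backward()
```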
kellwinr/galaxybook_mask
This script makes your Windows PC identify itself as a Galaxy Book laptop; it is typically used to bypass the Samsung Notes device check.
facebookresearch/MetaCLIP
ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering
philschmid/deep-learning-pytorch-huggingface
OFA-Sys/ONE-PEACE
A general representation model across vision, audio, and language modalities. Paper: "ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities"
Alibaba-MIIL/ImageNet21K
Official PyTorch implementation of the paper "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)
ByungKwanLee/MoAI
[ECCV 2024] Official PyTorch implementation of Mixture of All Intelligence (MoAI), which improves performance on numerous zero-shot vision-language tasks.
IntelLabs/academic-budget-bert
Repository containing code for "How to Train BERT with an Academic Budget" paper
allenai/unified-io-inference
microsoft/BridgeTower
Open source code for AAAI 2023 Paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning"
rabeehk/compacter
InhwanBae/LMTrajectory
Official Code for "Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction (CVPR 2024)"
igorbrigadir/DownloadConceptualCaptions
Reliably download millions of images efficiently
princeton-nlp/DinkyTrain
Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃
TsinghuaC3I/SoRA
[EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models
JinhwiPark/DepthPrompting
[CVPR24] Depth Prompting for Sensor-Agnostic Depth Estimation
sangho-vision/avbert
activatedgeek/tight-pac-bayes
Code for "PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization" (NeurIPS 2022)
ssyze/EVE
EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE