Pinned Repositories
eksctl
The official CLI for Amazon EKS
awsome-distributed-training
Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
deep-learning-containers
AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet.
metaseq
Repo for external large-scale work
nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
serial-j
Validate and serialize JSON data into Python objects with minimal effort.
test-infra
This repository hosts code that supports the testing infrastructure for the main PyTorch repo. For example, it hosts the logic to track disabled tests and slow tests, as well as our continuous integration HUD/dashboard.
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
examples
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
llama_index
LlamaIndex is a data framework for your LLM applications
junpuf's Repositories
junpuf/serial-j
Validate and serialize JSON data into Python objects with minimal effort.
junpuf/awsome-distributed-training
Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
junpuf/deep-learning-containers
AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet.
junpuf/metaseq
Repo for external large-scale work
junpuf/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
junpuf/test-infra
This repository hosts code that supports the testing infrastructure for the main PyTorch repo. For example, it hosts the logic to track disabled tests and slow tests, as well as our continuous integration HUD/dashboard.
junpuf/vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch