Pinned Repositories
astroport-core
Astroport DEX core contracts
BentoML
Unified Model Serving Framework 🍱
data
A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries.
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
gpt_index
GPT Index (LlamaIndex) provides a set of data structures that make it easier to use large external knowledge bases with LLMs.
llama-hub
A community-built library of data loaders for LLMs, for use with GPT Index and/or LangChain
osmosis
The AMM Laboratory
sagemaker-pytorch-container
Docker container for running PyTorch scripts to train and host PyTorch models on SageMaker
torchx
TorchX is a universal job launcher for PyTorch applications. It is designed for fast iteration during training and research, with support for end-to-end production ML pipelines when you're ready.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Kbhat1's Repositories
Kbhat1/astroport-core
Astroport DEX core contracts
Kbhat1/BentoML
Unified Model Serving Framework 🍱
Kbhat1/data
A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries.
Kbhat1/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Kbhat1/gpt_index
GPT Index (LlamaIndex) provides a set of data structures that make it easier to use large external knowledge bases with LLMs.
Kbhat1/llama-hub
A community-built library of data loaders for LLMs, for use with GPT Index and/or LangChain
Kbhat1/osmosis
The AMM Laboratory
Kbhat1/sagemaker-pytorch-container
Docker container for running PyTorch scripts to train and host PyTorch models on SageMaker
Kbhat1/torchx
TorchX is a universal job launcher for PyTorch applications. It is designed for fast iteration during training and research, with support for end-to-end production ML pipelines when you're ready.
Kbhat1/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Kbhat1/weaviate
Weaviate is an open-source vector search engine that stores both objects and vectors, allowing vector search to be combined with structured filtering, with the fault tolerance and scalability of a cloud-native database. It is accessible through GraphQL, REST, and various language clients.