Pinned Repositories
angular2-sandbox
A collection of simple Angular 2.x projects and POCs.
deep_data_bench
Highly versatile Python SQL benchmarking tool that generates realistic queries to test the performance of any SQL database schema on any given hardware platform. I was a contributing member of this project, originally developed at Deep Information Sciences.
deepops
Tools for building GPU clusters
DeepTools
A collection of Python, PHP, and Bash utilities developed to ease loading, migration, and benchmarking of SQL databases. I modified and supported these scripts as a member of the Deep Information Sciences team.
hello-world-github
A hello-world Git project used to teach and mentor several students and colleagues on Git basics, GitHub use, and some Python/Java basics.
Kaggle-public
A collection of Python and MATLAB projects that apply various machine learning techniques to big data problems.
nim-kserve
Temporary location for documentation and examples showcasing how to deploy and manage NVIDIA NIM with KServe
notebooks-extended
RAPIDS Community Notebooks
public-scripts
A collection of useful scripts and utilities that I have written in various languages.
udacity-sdc-master
A collection of external dependencies, final projects, lesson scripts, and other resources used in the Udacity Self Driving Car Nanodegree.
supertetelman's Repositories
supertetelman/deepops
Tools for building GPU clusters
supertetelman/hello-world-github
A hello-world Git project used to teach and mentor several students and colleagues on Git basics, GitHub use, and some Python/Java basics.
supertetelman/nim-kserve
Temporary location for documentation and examples showcasing how to deploy and manage NVIDIA NIM with KServe
supertetelman/public-scripts
A collection of useful scripts and utilities that I have written in various languages.
supertetelman/ansible-role-chrony
supertetelman/ansible-role-nvidia-driver
supertetelman/Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
supertetelman/charts
Helm Charts
supertetelman/cloud-native-stack
Run cloud native workloads on NVIDIA GPUs
supertetelman/Crazyflie
AADL models for the Crazyflie UAV -- OMSCS Class CS7639
supertetelman/DeepLearningExamples
Deep Learning Examples
supertetelman/deepops-hot
Hands-on Training Materials for DeepOps
supertetelman/easybuild-framework
EasyBuild is a software installation framework in Python that allows you to install software in a structured and robust way.
supertetelman/gpu-operator
NVIDIA GPU Operator creates/configures/manages GPUs atop Kubernetes
supertetelman/helmchart
Helm Chart for Pachyderm
supertetelman/k8s-rapids-dask
A collection of examples for integrating k8s, Dask, and RAPIDS, and the supporting Docker infrastructure.
supertetelman/kserve
Standardized Serverless ML Inference Platform on Kubernetes
supertetelman/kubecon-2019-gitops-handson
Hands-On: GitOps with Kustomize and Argo CD
supertetelman/kubeflow
Machine Learning Toolkit for Kubernetes
supertetelman/langchain
⚡ Building applications with LLMs through composability ⚡
supertetelman/llama_index
LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
supertetelman/local-denoiser
A project to denoise and improve audio, forming a local AI-improved feedback loop between an input and an output audio device
supertetelman/manifests
A repository for Kustomize manifests
supertetelman/markdown-cheatsheet
Markdown Cheatsheet for Github Readme.md
supertetelman/mig-parted
MIG Partition Editor for NVIDIA GPUs
supertetelman/models
Models and examples built with TensorFlow
supertetelman/nim-deploy
A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deployment.
supertetelman/rapids
http://rapids.ai
supertetelman/rook
Storage Orchestration for Kubernetes
supertetelman/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.