Pinned Repositories
cluster-prep
A set of useful scripts for setting up an HPC cluster
darkflow
Translates Darknet models to TensorFlow: load trained weights, retrain/fine-tune them using TensorFlow, and export a constant graph def to C++.
elias-nlp
NLP code written in 2001 for a commercial NLP system, released as open source. Might still be useful.
invertnn
limit_visible_cpus
Limits the number of CPUs reported to programs on Linux - well suited for Python code. Uses LD_PRELOAD to hook the sysconf call.
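The technique this repository describes can be sketched as a small C shared library that overrides sysconf() and intercepts CPU-count queries. This is a minimal illustration of the LD_PRELOAD approach, not the repository's actual code; the VISIBLE_CPUS environment variable name is an assumption chosen for the example.

```c
// Sketch of an LD_PRELOAD shim that limits the CPU count reported by
// sysconf(). The VISIBLE_CPUS variable name is illustrative only.
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <unistd.h>

// Parse a CPU limit from an environment value; 0 means "no limit".
static long cpu_limit_from_env(const char *value) {
    if (value == NULL)
        return 0;
    long n = strtol(value, NULL, 10);
    return n > 0 ? n : 0;
}

// Replacement for sysconf(): intercepts CPU-count queries and forwards
// everything else to the real libc implementation via dlsym(RTLD_NEXT).
long sysconf(int name) {
    static long (*real_sysconf)(int) = NULL;
    if (real_sysconf == NULL)
        real_sysconf = (long (*)(int))dlsym(RTLD_NEXT, "sysconf");

    if (name == _SC_NPROCESSORS_ONLN || name == _SC_NPROCESSORS_CONF) {
        long limit = cpu_limit_from_env(getenv("VISIBLE_CPUS"));
        if (limit > 0)
            return limit;
    }
    return real_sysconf(name);
}
```

Compiled with something like `cc -shared -fPIC -o limitcpus.so limitcpus.c -ldl`, it would be used as `VISIBLE_CPUS=4 LD_PRELOAD=./limitcpus.so python -c 'import os; print(os.cpu_count())'`. Note that programs which read /proc/cpuinfo or use sched_getaffinity instead of sysconf would need additional hooks.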
netbatch
nimbix-utils
Utils to manage servers in the nimbix cloud
pypgmc
PyPGMC: Fast discrete inference submodels for PyMC and PyMC3
pysmile
Python wrapper for the SMILE Bayesian Network Library
kadeng's Repositories
kadeng/pysmile
Python wrapper for the SMILE Bayesian Network Library
kadeng/access_example
Some helpful code snippets
kadeng/Adversarial_Video_Generation
A TensorFlow Implementation of "Deep Multi-Scale Video Prediction Beyond Mean Square Error" by Mathieu, Couprie & LeCun.
kadeng/AITemplate
AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code, specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
kadeng/colab_tutorials
Repository of Jupyter notebook tutorials for teaching the Deep Learning Course at the University of Amsterdam (MSc AI), Fall 2021
kadeng/Conditional_Diffusion_MNIST
Conditional diffusion model to generate MNIST. Minimal script. Based on 'Classifier-Free Diffusion Guidance'.
kadeng/console-dev-setup
kadeng/cutlass
CUDA Templates for Linear Algebra Subroutines
kadeng/denoising-diffusion-pytorch
Implementation of Denoising Diffusion Probabilistic Model in Pytorch
kadeng/distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
kadeng/EternalTerminal
Re-connectable secure remote shell
kadeng/fp6_llm
Efficient GPU support for LLM inference with 6-bit quantization (FP6).
kadeng/guided-diffusion
kadeng/homebrew-et
Homebrew formula for Eternal Terminal
kadeng/linux-system-debugging-examples
kadeng/llm-course
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
kadeng/llvm-project
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
kadeng/lora
Using Low-rank adaptation to quickly fine-tune diffusion models.
kadeng/MicroLlama
Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget
kadeng/nsight-vscode-edition
A Visual Studio Code extension for building and debugging CUDA applications.
kadeng/pykurs
kadeng/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
kadeng/pytorch_mixed_mode_gdb
kadeng/rdkit-tutorials
Tutorials to learn how to work with the RDKit
kadeng/simpleGEMM
A simple but fast implementation of matrix multiplication in CUDA.
kadeng/streamlit-example
Example Streamlit app that you can fork to test out share.streamlit.io
kadeng/templight
Templight is a Clang-based tool to profile the time and memory consumption of template instantiations and to perform interactive debugging sessions to gain introspection into the template instantiation process.
kadeng/torchrl
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
kadeng/ubuntu-kernel-build-env
kadeng/x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers