Pinned Repositories
fuse-med-ml
A Python framework accelerating ML-based discovery in the medical field by encouraging code reuse. Batteries included :)
adv_deep_learning
adversarially-robust-neural-style-transfer
ColabFold
Making protein folding accessible to all!
hot_budr_snake
A snake on the lookout for some good ol' hot_budr
MoCoV2_CIFAR10
Training MoCoV2 on the CIFAR-10 dataset
PatentGPT
Training transformers on all of USPTO.
slow-transformers
🐌 slow transformers
Unofficial-Poster-Template-for-Technion-Computer-Science
Unofficial Poster Template for Technion Computer Science. Copied from the UChicago version here: https://www.overleaf.com/latex/templates/unofficial-poster-template-for-uchicago-computer-science/kbbmbdxwbypb
shatz01's Repositories
shatz01/MoCoV2_CIFAR10
Training MoCoV2 on the CIFAR-10 dataset
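The core of MoCo v2 is an InfoNCE objective: each query embedding should match its positive key against a queue of negatives produced by a momentum encoder. A minimal NumPy sketch of that loss is below; the repo itself presumably uses PyTorch, and these function names and the temperature value are illustrative, not taken from the repo.

```python
import numpy as np

def moco_infonce_logits(q, k_pos, queue, temperature=0.2):
    # q: (N, D) L2-normalized query embeddings
    # k_pos: (N, D) matching positive key embeddings
    # queue: (K, D) normalized negatives from the momentum encoder's FIFO queue
    l_pos = np.sum(q * k_pos, axis=1, keepdims=True)  # (N, 1) positive similarities
    l_neg = q @ queue.T                               # (N, K) negative similarities
    # Column 0 is the positive, so the target label for every row is 0.
    return np.concatenate([l_pos, l_neg], axis=1) / temperature

def cross_entropy(logits, labels):
    # Softmax cross-entropy, numerically stabilized by max subtraction.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

In the full algorithm the queue is updated FIFO-style with each batch's keys, and the key encoder's weights are an exponential moving average of the query encoder's.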
shatz01/slow-transformers
🐌 slow transformers
shatz01/hot_budr_snake
A snake on the lookout for some good ol' hot_budr
shatz01/Unofficial-Poster-Template-for-Technion-Computer-Science
Unofficial Poster Template for Technion Computer Science. Copied from the UChicago version here: https://www.overleaf.com/latex/templates/unofficial-poster-template-for-uchicago-computer-science/kbbmbdxwbypb
shatz01/adv_deep_learning
shatz01/ColabFold
Making protein folding accessible to all!
shatz01/fuse-drug
FuseMedML-based molecular biochemistry library for drug discovery/repurposing
shatz01/fuse-med-ml
A Python framework accelerating ML-based discovery in the medical field by encouraging code reuse. Batteries included :)
shatz01/PatentGPT
Training transformers on all of USPTO.
shatz01/smartdanny.github.io
shatz01/budr_blog
Me fastpages blog
shatz01/cs236605-tutorials-spring2021
shatz01/full_attention
Full attention for neural networks
shatz01/gaeble-diffusion
A latent text-to-image diffusion model
shatz01/hrdl
shatz01/HungaBungaV2
shatz01/imagenette_starter
Starter kit for Imagenette.
shatz01/improved-neural-algorithm-of-artistic-style
Improving style transfer of VGG using adversarial training
shatz01/kaggle_distillbert
shatz01/litgpt
Pretrain, finetune, and deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: FlashAttention, FSDP, 4-bit quantization, LoRA, and more.
shatz01/lltest
shatz01/MMseqs2-App
MMseqs2 app to run on your workstation or servers
shatz01/more_better
MORE CRITERION MORE BETTERRR
shatz01/Neural-Network-Architecture-Diagrams
Diagrams for visualizing neural network architecture (Created with diagrams.net)
shatz01/pdb-llama
The pdb guru
shatz01/pdbpp
pdb++, a drop-in replacement for pdb (the Python debugger)
shatz01/RWKV-LM
RWKV-2 is an RNN with transformer-level performance. It can be trained directly like a GPT transformer (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
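The "trained like a GPT, runs like an RNN" claim comes from RWKV's WKV aggregation: the same time-decayed weighted average can be computed either with explicit attention-style weights over all past tokens (parallel, for training) or with two running sums (recurrent, for inference). A per-channel NumPy sketch is below, written in the style of later RWKV releases; the exact parameterization in this repo's RWKV-2 code may differ, and `w` (decay) and `u` (current-token bonus) are illustrative scalars here rather than learned per-channel vectors.

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    # RNN mode: O(1) state per channel, one step per token.
    # k, v: (T,) per-channel keys/values; w: decay rate > 0; u: bonus for the current token.
    a, b = 0.0, 0.0  # running numerator and denominator of the weighted average
    out = []
    for t in range(len(k)):
        out.append((a + np.exp(u + k[t]) * v[t]) / (b + np.exp(u + k[t])))
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return np.array(out)

def wkv_parallel(k, v, w, u):
    # GPT mode: the same quantity via explicit attention-style weights.
    # Past token i (< t) gets weight exp(-(t-1-i)*w + k[i]); token t gets exp(u + k[t]).
    T = len(k)
    out = np.empty(T)
    for t in range(T):
        logits = np.array([-(t - 1 - i) * w + k[i] for i in range(t)] + [u + k[t]])
        weights = np.exp(logits)
        out[t] = weights @ np.append(v[:t], v[t]) / weights.sum()
    return out
```

The two functions agree exactly, which is what lets training batch over the whole sequence while inference carries only the `(a, b)` state forward.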
shatz01/shatz_rag
shatz01/slow_transformers
🐌 slow_transformers
shatz01/smartdanny