Phirefly9
Interested in RL, Deep Learning, HPC, and general programming
Toyon Research Corp. · Dayton, OH
Pinned Repositories
Advent-of-Code-2018
Learning Rust through Advent of Code (https://adventofcode.com)
CoreNLP
Stanford CoreNLP: A Java suite of core NLP tools.
Minimal-PKM-layer
Minimal working example of Product-Key Memory layers
nanoQRWK_experiments
RWKV in nanoGPT style
NeuralNetworkMemories
An experiment with adding sample memory to neural networks on MNIST
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
QTCN
Baseline Code for the Quaternion Temporal Convolution Network
ray
An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.
RL-basics
A repo for learning basic RL and coding different agents
ML_Project
Group Project for CSE 5523 - Machine Learning
Phirefly9's Repositories
Phirefly9/nanoQRWK_experiments
RWKV in nanoGPT style
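For orientation, the core of RWKV's time mixing is the WKV recurrence, which replaces attention with an exponentially decaying running average over past tokens. Below is a toy, numerically naive sketch of that recurrence; the names w, u, k, v follow the RWKV papers, and none of this is taken from this repo's code.

```python
import torch

def wkv(w, u, k, v):
    """w, u: (dim,) decay rate and current-token bonus; k, v: (T, dim)."""
    T, dim = k.shape
    decay = torch.exp(-w)        # per-channel decay factor, w > 0
    num = torch.zeros(dim)       # running weighted sum of values
    den = torch.zeros(dim)       # running sum of weights
    out = []
    for t in range(T):
        cur = torch.exp(u + k[t])                 # bonus weight for token t
        out.append((num + cur * v[t]) / (den + cur))
        num = decay * num + torch.exp(k[t]) * v[t]
        den = decay * den + torch.exp(k[t])
    return torch.stack(out)
```

The real implementations rewrite this loop with a running-maximum trick for numerical stability; the naive exponentials here are only meant to show the recurrence itself.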
Phirefly9/Advent-of-Code-2018
Learning Rust through Advent of Code (https://adventofcode.com)
Phirefly9/CoreNLP
Stanford CoreNLP: A Java suite of core NLP tools.
Phirefly9/Minimal-PKM-layer
Minimal working example of Product-Key Memory layers
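As background for what a product-key memory layer does: the query is split in half, each half is scored against a small table of sub-keys, and the Cartesian product of the two top-k lists indexes a huge value table that is never scored in full. The sketch below is a minimal illustration in the spirit of Lample et al. (2019); the class name and hyperparameters are illustrative, not the repo's.

```python
import torch
import torch.nn as nn

class ProductKeyMemory(nn.Module):
    def __init__(self, dim, n_keys=128, topk=8):
        super().__init__()
        self.topk = topk
        # Two sets of half-dimension sub-keys; their Cartesian product
        # spans n_keys**2 memory slots without materializing them all.
        self.keys1 = nn.Parameter(torch.randn(n_keys, dim // 2))
        self.keys2 = nn.Parameter(torch.randn(n_keys, dim // 2))
        self.values = nn.Embedding(n_keys * n_keys, dim)

    def forward(self, query):  # query: (batch, dim)
        q1, q2 = query.chunk(2, dim=-1)
        s1, i1 = (q1 @ self.keys1.t()).topk(self.topk, dim=-1)
        s2, i2 = (q2 @ self.keys2.t()).topk(self.topk, dim=-1)
        # Combine the two top-k lists: scores add, index pairs address
        # the full n_keys**2 table.
        scores = (s1.unsqueeze(-1) + s2.unsqueeze(-2)).flatten(1)
        index = (i1.unsqueeze(-1) * self.keys2.shape[0] + i2.unsqueeze(-2)).flatten(1)
        best, pos = scores.topk(self.topk, dim=-1)
        slots = index.gather(-1, pos)
        weights = best.softmax(dim=-1)
        return (self.values(slots) * weights.unsqueeze(-1)).sum(dim=1)
```

With the defaults, `ProductKeyMemory(64)(torch.randn(2, 64))` reads from 128² memory slots while only ever scoring 2 × 128 sub-keys per query.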
Phirefly9/NeuralNetworkMemories
An experiment with adding sample memory to neural networks on MNIST
Phirefly9/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
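The "dynamic" part of that description is worth a tiny illustration: the autograd graph is built by whatever Python code actually runs, so control flow can depend on the data.

```python
import torch

x = torch.randn(3, requires_grad=True)
# The graph is built at runtime, so it can follow data-dependent branches.
y = (x ** 2).sum() if x.sum() > 0 else (x ** 3).sum()
y.backward()
print(x.grad)  # 2*x or 3*x**2, depending on which branch ran
```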
Phirefly9/QTCN
Baseline Code for the Quaternion Temporal Convolution Network
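The building block such a network stacks is a quaternion convolution, where four weight components are shared across channel groups according to the Hamilton product. A generic sketch follows; the layer name and initialization are chosen for illustration rather than taken from QTCN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuaternionConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        assert in_ch % 4 == 0 and out_ch % 4 == 0
        shape = (out_ch // 4, in_ch // 4, kernel_size)
        # r, i, j, k components of the quaternion-valued kernel.
        self.r, self.i, self.j, self.k = (
            nn.Parameter(torch.randn(shape) * 0.02) for _ in range(4)
        )
        self.kw = kw  # stride/padding/dilation passed through to conv1d

    def forward(self, x):  # x: (batch, in_ch, time)
        r, i, j, k = self.r, self.i, self.j, self.k
        # Hamilton product written as one block-structured real conv weight.
        weight = torch.cat([
            torch.cat([r, -i, -j, -k], dim=1),
            torch.cat([i,  r, -k,  j], dim=1),
            torch.cat([j,  k,  r, -i], dim=1),
            torch.cat([k, -j,  i,  r], dim=1),
        ], dim=0)
        return F.conv1d(x, weight, **self.kw)
```

The weight sharing cuts the parameter count to a quarter of a real-valued convolution with the same channel widths, which is the usual motivation for quaternion layers.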
Phirefly9/ray
An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.
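The "simple, universal API" is mostly a couple of primitives. A minimal example of the task API, using only documented Ray calls:

```python
import ray

ray.init()  # starts a local cluster when no address is given

@ray.remote
def square(x):
    return x * x

# .remote() returns futures immediately; ray.get blocks for the results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```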
Phirefly9/RL-basics
A repo for learning basic RL and coding different agents
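A typical first agent for a repo like this is tabular Q-learning with an epsilon-greedy policy. The sketch below is a generic version, not code from RL-basics; states just need to be hashable.

```python
import random
from collections import defaultdict

class QLearningAgent:
    def __init__(self, n_actions, alpha=0.1, gamma=0.99, eps=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.n_actions, self.alpha, self.gamma, self.eps = n_actions, alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy: explore with probability eps, else exploit.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next, done):
        # One-step TD target: r + gamma * max_a' Q(s', a').
        target = r if done else r + self.gamma * max(
            self.q[(s_next, a2)] for a2 in range(self.n_actions)
        )
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])
```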
Phirefly9/tcn-pytorch
A TCN implementation in PyTorch
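The defining trick of a TCN is a causal, dilated 1-D convolution: pad only on the left so the output at time t never sees future inputs, and double the dilation each layer so the receptive field grows exponentially. A generic sketch in the style of Bai et al. (2018), not this repo's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad on the left only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))

# Dilations 1, 2, 4, 8 give a receptive field of 1 + 2*(1+2+4+8) = 31
# timesteps with kernel size 3, while preserving sequence length.
tcn = nn.Sequential(*[CausalConv1d(16, 16, 3, dilation=2 ** d) for d in range(4)])
```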
Phirefly9/RWKV-SeqMNIST
An implementation of sequential MNIST using PyTorch Lightning
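In the sequential-MNIST setup, each 28×28 image is read as a 784-step sequence of single pixels, which stresses a model's long-range memory. A generic PyTorch Lightning sketch of that framing; the GRU backbone here is a stand-in for whatever model the repo plugs in (presumably RWKV).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl

class SeqMNIST(pl.LightningModule):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 10)

    def forward(self, x):  # x: (batch, 1, 28, 28)
        seq = x.view(x.size(0), 784, 1)  # one pixel per timestep
        _, h = self.rnn(seq)
        return self.head(h[-1])

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```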
Phirefly9/unilm
UniLM - Unified Language Model Pre-training