Pinned Repositories
BBBRNN
PyTorch implementation of a Bayesian Backprop RNN (https://arxiv.org/abs/1704.02798)
lightning_dino
PyTorch Lightning version of DINO
lmnce
TensorFlow implementation of training a log-linear language model with NCE
Pytorch_EqProp
A PyTorch implementation of Equilibrium Propagation (https://arxiv.org/pdf/1602.05179.pdf)
research
Supervised and RL Models for No Press Diplomacy
SILTranslationGame
Code for "Countering Language Drift with Seeded Iterated Learning"
translation_game_drift
Reproduction of "Countering Language Drift via Grounding" (https://github.com/reproducibility-challenge/iclr_2019/pull/141)
VQVAE
A replication of https://arxiv.org/pdf/1711.00937.pdf
JACKHAHA363's Repositories
JACKHAHA363/lightning_dino
PyTorch Lightning version of DINO
JACKHAHA363/SILTranslationGame
Code for "Countering Language Drift with Seeded Iterated Learning"
JACKHAHA363/lmnce
TensorFlow implementation of training a log-linear language model with NCE
JACKHAHA363/langauge_drift_lewis_game
JACKHAHA363/airdialogue
JACKHAHA363/algo
Set up a personal VPN in the cloud
JACKHAHA363/alpaca-lora
Instruct-tune LLaMA on consumer hardware
JACKHAHA363/AmericaOpposeAmerica
"America Against America" (《美国反对美国》) was written by Wang Huning based on his observations during a visit to the United States in the late 1980s. Knowing how intense the admiration for the West, and the United States in particular, was in that era, it is deeply impressive to see a scholar with such a clear-eyed understanding as early as the 1980s. Since only poor-quality scanned PDFs were available online, this project uses OCR technology plus the naked eye ("human OCR") to convert the book into a modern text format. The conversion is now complete.
JACKHAHA363/buster
JACKHAHA363/byol-pytorch
Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from DeepMind, in PyTorch
JACKHAHA363/chatgpt-retrieval-plugin
The ChatGPT Retrieval Plugin lets you easily search and find personal or work documents by asking questions in everyday language.
JACKHAHA363/DL-without-Weight-Transport-PyTorch
JACKHAHA363/jackhaha363.github.io
My website
JACKHAHA363/lipsync
An aggregation of lip sync resources
JACKHAHA363/llama.cpp
Port of Facebook's LLaMA model in C/C++
JACKHAHA363/llama_index
LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
JACKHAHA363/MAE-pytorch
Unofficial PyTorch implementation of Masked Autoencoders Are Scalable Vision Learners
JACKHAHA363/multimodal
An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
JACKHAHA363/project_template
Personal project template for research
JACKHAHA363/ReferentialGym
This framework provides out-of-the-box implementations of Referential Games variants in order to study the emergence of artificial languages using deep learning, relying on PyTorch (https://www.pytorch.org).
JACKHAHA363/SiMPL
JACKHAHA363/slurm_gpustat
A simple command line tool to show GPU usage on a SLURM cluster
JACKHAHA363/spirl
Official implementation of "Accelerating Reinforcement Learning with Learned Skill Priors", Pertsch et al., CoRL 2020
JACKHAHA363/ssast
Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer".
JACKHAHA363/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
JACKHAHA363/swav
PyTorch implementation of SwAV (https://arxiv.org/abs/2006.09882)
JACKHAHA363/theses
JACKHAHA363/transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
JACKHAHA363/tre
Measuring compositionality in representation learning
JACKHAHA363/VAE
Exploring interesting ideas in VAEs