Pinned Repositories
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
ai-tech-interview
👩💻👨💻 AI engineer technical interview study group
Algorithms-Practice
Code for algorithm problems
baekjoon
Coding test preparation problem set (Baekjoon Online Judge)
bigcode-dataset
Boostcamp-AI-Tech-Product-Serving
Boostcamp AI Tech - Product Serving materials
DenseRetrieval
Implementation of DPR, GradCache, DSI, etc.
plant-growth-prediction
puctuation-restoration-bert
lassl
Easy Language Model Pretraining leveraging Huggingface's Transformers and Datasets
Doohae's Repositories
Doohae/DenseRetrieval
Implementation of DPR, GradCache, DSI, etc.
Doohae/accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
Doohae/bigcode-dataset
Doohae/DeepSpeedExamples
Example models using DeepSpeed
Doohae/GradCache
Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint
Doohae/Megatron-LM
Ongoing research training transformer models at scale
Doohae/RETRO-pytorch
Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch
Doohae/Building-Python-Web-APIs-with-FastAPI
Building Python Web APIs with FastAPI, published by Packt
Doohae/Chatbot_data
Chatbot_data_for_Korean
Doohae/Concurrent-Programming
Doohae/course
The Hugging Face course
Doohae/cv-update
Doohae/data-engineering
Doohae/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Doohae/FiD
Fusion-in-Decoder
Doohae/kolmev
Evaluation for Korean language models (e.g. BERT, RoBERTa, BART, T5, GPT-2...)
Doohae/lassl
Easy Language Model Pretraining leveraging Huggingface's Transformers and Datasets
Doohae/llmss
LLM simple serving (tensor model parallel, pubsub, grpc)
Doohae/lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
Doohae/marcopolo
Marco Emilio Polo (c. 1254 – January 8, 1324) was a Venetian merchant, explorer, and writer from the Republic of Venice who travelled through Asia along the Silk Road between 1271 and 1295.
Doohae/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
Doohae/Physics-Exp3
Doohae/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Doohae/spark-practice
Doohae/Toy-Streamlit
Doohae/tppys
Text processing by pyspark (just sample project)
Doohae/tpu-starter
Everything you want to know about Google Cloud TPU
Doohae/transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Doohae/trlx
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
Doohae/YaLM-100B
Pretrained language model with 100B parameters