Pinned Repositories
kan-gpt
The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling
AGI-Papers
Papers and books to look at when starting AGI
DistilKoBiLSTM
Distilling task-specific knowledge from a teacher model into a BiLSTM
KoGPT2-FineTuning
Korean GPT-2: KoGPT2 fine-tuning (cased). Trained on Korean song-lyrics data.
Korea-Startups
List of Korean startups with descriptions
MLLMArxivTalk
[Google Meet] MLLM arXiv Casual Talk
National-Petition
Analyzing Blue House national petitions to understand public opinion
PyTorch
PyTorch tutorials A to Z
Sequence-Models-coursera
Sequence Models by Andrew Ng on Coursera. Programming Assignments and Quiz Solutions.
Vision-Transformer-Papers
Papers to look at when starting with Vision Transformers
gyunggyung's Repositories
gyunggyung/AGI-Papers
Papers and books to look at when starting AGI
gyunggyung/KoGPT2-FineTuning
Korean GPT-2: KoGPT2 fine-tuning (cased). Trained on Korean song-lyrics data.
gyunggyung/Korea-Startups
List of Korean startups with descriptions
gyunggyung/DistilKoBiLSTM
Distilling task-specific knowledge from a teacher model into a BiLSTM
gyunggyung/KoAlpaca.cpp
Locally run KoAlpaca, an instruction-tuned, chat-style LLM
gyunggyung/OpenMLLM
Open Source + Multilingual MLLM + Fine-tuning + Distillation + More efficient models and learning + ?
gyunggyung/Korean-GPT2
A model that regenerates Korean text by fine-tuning the GPT-2 base model
gyunggyung/LiOnConnect
"Learning-based One-line intelligence Owner Network Connectivity Tool"
gyunggyung/GPT.asm
gpt-assembly-example
gyunggyung/gyunggyung
gyunggyung/AGI
gyunggyung/OldOpenMLLM
Open Source + Multilingual MLLM + Fine-tuning + Distillation + More efficient models and learning + ?
gyunggyung/codealpaca
gyunggyung/GPT-5
gyunggyung/gyunggyung.github.io
gyunggyung.github.io
gyunggyung/kan-gpt
The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling
gyunggyung/LLMAgentPapers
Must-read Papers on Multiagents of LLMs.
gyunggyung/petals
Run large language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading.
gyunggyung/aider
aider is GPT-powered coding in your terminal
gyunggyung/Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
gyunggyung/Auto-GPT-Plugins
Plugins for Auto-GPT
gyunggyung/chameleon-llm
Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
gyunggyung/DB-GPT
Interact with your data and environment using a local GPT; no data leaks, 100% private, 100% secure
gyunggyung/EVAL
[Corca / DEV] EVAL (Elastic Versatile Agent with LangChain) will execute all your requests, just like an eval method!
gyunggyung/LLM-Blender
[ACL 2023] We introduce LLM-Blender, an ensembling framework that attains consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs. LLM-Blender cuts weaknesses through ranking and integrates strengths by fusing generations to enhance the capabilities of LLMs.
gyunggyung/LLMZoo
LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.
gyunggyung/open_flamingo
An open-source framework for training large multimodal models
gyunggyung/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
gyunggyung/StableLM
StableLM: Stability AI Language Models
gyunggyung/sunsagi.github.io