Open Source + Multilingual MLLM + Fine-tuning + Distillation + More efficient models and learning + ?
We're looking for people to join us in building a top-performing MLLM.
- 🔥Someone to add GPU/TPU/NPU support to the project🔥
- Someone to fine-tune and modify LLaMA with us
- Someone to help with model serving
- Someone to create sequential-image content such as webtoons
- Someone to create videos such as movies
- Someone who can monetize services, as OpenAI, Stability AI, and Hugging Face do
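Since the project lists distillation among its themes (cf. DistilKoBiLSTM above), here is a minimal sketch of the standard temperature-scaled distillation loss. This is a generic illustration in NumPy, not code from any of the repositories listed; all function names and values are made up for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients keep the same magnitude as T changes.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (temperature ** 2) * kl.mean()

# Hypothetical logits: the loss is zero when student matches teacher,
# and positive otherwise.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.0, 0.0, 0.0]])
print(distillation_loss(teacher, teacher))  # ~0
print(distillation_loss(student, teacher))  # > 0
```

In practice this loss is usually mixed with the ordinary cross-entropy on the true labels, with the mixing weight tuned on validation data.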
Reach out to me at the email below with a little bit about yourself.
- gyunggyung/LiOn: https://github.com/gyunggyung/LiOn
- gyunggyung/KoAlpaca.cpp: https://github.com/gyunggyung/KoAlpaca.cpp
- antimatter15/alpaca.cpp: https://github.com/antimatter15/alpaca.cpp
- ggerganov/llama.cpp: https://github.com/ggerganov/llama.cpp
- Beomi/KoAlpaca: https://github.com/Beomi/KoAlpaca
- gyunggyung/DistilKoBiLSTM: https://github.com/gyunggyung/DistilKoBiLSTM
- microsoft/unilm: https://github.com/microsoft/unilm
- deepmind/code_contests: https://github.com/deepmind/code_contests
- HeegyuKim/language-model: https://github.com/HeegyuKim/language-model
- google-research/t5x: https://github.com/google-research/t5x
- kojima-takeshi188/zero_shot_cot: https://github.com/kojima-takeshi188/zero_shot_cot
- NVlabs/prismer: https://github.com/NVlabs/prismer
- microsoft/visual-chatgpt: https://github.com/microsoft/visual-chatgpt
- GPT-4: https://www.facebook.com/groups/6129390073749513/permalink/6131959123492608
- USM: https://arxiv.org/abs/2303.01037
- MuAViC: https://arxiv.org/abs/2303.00628
- GLOM: https://arxiv.org/pdf/2102.12627.pdf
- CACTI: https://cacti-framework.github.io/
- PaLM-E: https://palm-e.github.io
- Youtube: https://www.youtube.com/playlist?list=PLsmJteXozP3oHVB5TCrXEcrfQnInMxkoT