Pinned Repositories
StrategicDataOrdering
EasyNLP
EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit
FibVID
GaepaGOGOGO
instagram-crawler
crawler
Jujeop
LOAF
KoDPR
Korean Dense Passage Retrieval (KoDPR)
KoTAN
KoTAN: Korean Translation and Augmentation with fine-tuned NLLB
SelectionModel
merry555's Repositories
merry555/FibVID
FibVID
merry555/Jujeop
merry555/GaepaGOGOGO
merry555/EasyNLP
EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit
merry555/LOAF
merry555/open-korean-instructions
A collection of public Korean instruction datasets for training language models.
merry555/Oscar
merry555/YellowPeach
merry555/AI4Code
merry555/Articles
References to Medium articles
merry555/awesome-automl-papers
A curated list of automated machine learning papers, articles, tutorials, slides and projects
merry555/Awesome-LLM-KG
Awesome papers about unifying LLMs and KGs
merry555/gaepago
merry555/GPTs
Leaked prompts of GPTs
merry555/I-want-to-study-Data-Science
Articles for people who want to study data science
merry555/langchain
⚡ Building applications with LLMs through composability ⚡
merry555/large-scale-lm-tutorials
Large-scale language modeling tutorials with PyTorch
merry555/LLMscience
merry555/machine-learning-ds-interview-questions
🔴 1704 Machine Learning, Data Science & Python interview questions (answered) to prepare for your next ML & DS interview. All answers and PDFs are available on MLStack.Cafe.
merry555/merry555
merry555/mljar-supervised
Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation
merry555/nCube-Thyme
merry555/OnlineGrooming
merry555/open-llms
🤖 A list of open LLMs available for commercial use.
merry555/ProfNetwork
merry555/shap
A game theoretic approach to explain the output of any machine learning model.
merry555/text-generation-inference
Large Language Model Text Generation Inference
merry555/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
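A minimal sketch of the 🤗 Transformers pipeline API, shown only to illustrate what the library provides; the sentiment-analysis task and default model are assumptions, not something this fork specifies.

# Assumption: sentiment-analysis with the pipeline's default model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
# Returns a list of {"label": ..., "score": ...} dicts.
print(classifier("Transformers makes state-of-the-art NLP easy to use."))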
merry555/TweetUserStance
merry555/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
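A minimal sketch of vLLM's offline generation API, illustrating the high-throughput serving engine described above; the model name and sampling settings are placeholder assumptions.

from vllm import LLM, SamplingParams

# Assumption: "facebook/opt-125m" stands in for whichever model is actually served.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Batched generation; each result carries the generated text in .outputs[0].text.
outputs = llm.generate(["What is dense passage retrieval?"], params)
print(outputs[0].outputs[0].text)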