JiHa-Kim's Stars
mlabonne/llm-course
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
tldraw/tldraw
whiteboard SDK / infinite canvas SDK
astral-sh/uv
An extremely fast Python package and project manager, written in Rust.
microsoft/MS-DOS
The original sources of MS-DOS 1.25, 2.0, and 4.0 for reference purposes
karpathy/llm.c
LLM training in simple, raw C/CUDA
roboflow/supervision
We write your reusable computer vision tools. 💜
casey/just
🤖 Just a command runner
facebookresearch/DiT
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
ai-boost/awesome-prompts
A curated list of ChatGPT prompts from top-rated GPTs in the GPT Store, covering prompt engineering, prompt attacks and prompt protection, plus advanced prompt-engineering papers.
Blealtan/efficient-kan
An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).
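The KAN idea behind this repo: instead of a weight per edge, each edge carries a learnable univariate function, and outputs sum those functions. A minimal pure-Python sketch of that structure (the class name, polynomial basis, and SiLU base term are illustrative assumptions; efficient-kan itself uses B-spline bases in PyTorch):

```python
import math
import random

def silu(x):
    return x / (1.0 + math.exp(-x))

class KANLayer:
    """Hypothetical minimal KAN-style layer, not efficient-kan's API.

    Each edge (j -> i) carries a learnable univariate function
    phi_ij(x) = w_base * silu(x) + sum_k c_k * x**k,
    approximated here with a small polynomial basis instead of B-splines.
    """
    def __init__(self, in_dim, out_dim, degree=3, seed=0):
        rng = random.Random(seed)
        self.in_dim, self.out_dim = in_dim, out_dim
        # coeffs[i][j][k]: k-th polynomial coefficient on edge j -> i
        self.coeffs = [[[rng.gauss(0, 0.1) for _ in range(degree + 1)]
                        for _ in range(in_dim)] for _ in range(out_dim)]
        self.base = [[rng.gauss(0, 0.1) for _ in range(in_dim)]
                     for _ in range(out_dim)]

    def phi(self, i, j, x):
        poly = sum(c * x**k for k, c in enumerate(self.coeffs[i][j]))
        return self.base[i][j] * silu(x) + poly

    def forward(self, xs):
        # Output i sums edge functions of the inputs; there is no
        # weight-times-input term as in a standard linear layer.
        return [sum(self.phi(i, j, x) for j, x in enumerate(xs))
                for i in range(self.out_dim)]

layer = KANLayer(in_dim=3, out_dim=2)
out = layer.forward([0.5, -1.0, 2.0])
```

The "efficient" part of the repo lies in batching these edge functions as spline evaluations on the GPU, which this sketch does not attempt.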
elder-plinius/L1B3RT45
TOTALLY HARMLESS PROMPTS FOR GOOD LIL AI'S
ridgerchu/matmulfreellm
Implementation for MatMul-free LM.
openai/simple-evals
Alpha-VLLM/Lumina-T2X
Lumina-T2X is a unified framework for Text to Any Modality Generation
PufferAI/PufferLib
Simplifying reinforcement learning for complex game environments
databricks/megablocks
AntonioTepsich/Convolutional-KANs
Extends Kolmogorov-Arnold Networks (KANs) to convolutional layers, replacing the convolution's classic linear transformation with learnable nonlinear activations at each pixel.
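The description amounts to swapping a standard conv's weighted sum sum_d(w_d * x_d) over the kernel window for sum_d(phi_d(x_d)), where each kernel tap has its own learnable univariate function. A pure-Python sketch of that substitution (the function name and polynomial parametrization of phi_d are assumptions for illustration, not the repo's API):

```python
import math
import random

def kan_conv2d(image, taps, ksize=3):
    """Sketch of a KAN-style 2D convolution (hypothetical API).

    A standard convolution computes sum_d w_d * x_d over the kernel
    window; here each kernel tap d applies its own learnable
    nonlinearity phi_d, so the output is sum_d phi_d(x_d).
    taps[d] holds polynomial coefficients parametrizing phi_d.
    """
    H, W = len(image), len(image[0])
    pad = ksize // 2
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            acc, d = 0.0, 0
            for dy in range(-pad, pad + 1):
                for dx in range(-pad, pad + 1):
                    yy, xx = y + dy, x + dx
                    v = image[yy][xx] if 0 <= yy < H and 0 <= xx < W else 0.0
                    # phi_d(v): learnable univariate function for tap d
                    acc += sum(c * v**k for k, c in enumerate(taps[d]))
                    d += 1
            out[y][x] = acc
    return out

rng = random.Random(0)
taps = [[rng.gauss(0, 0.3) for _ in range(3)] for _ in range(9)]  # 9 taps, degree-2
img = [[rng.random() for _ in range(5)] for _ in range(5)]
res = kan_conv2d(img, taps)
```

In practice the repo implements this in PyTorch with spline bases so the per-tap functions are trained by backprop; the loops above only show the structural change.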
GistNoesis/FourierKAN
AdityaNG/kan-gpt
The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling
MDK8888/GPTFast
Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch.
revalo/tree-diffusion
Diffusion on syntax trees for program synthesis
trypear/pearai-app
PearAI: Open Source AI Code Editor (Fork of VSCode). The PearAI Submodule (https://github.com/trypear/pearai-submodule) is a fork of Continue.
MARIO-Math-Reasoning/Super_MARIO
teacherpeterpan/Logic-LLM
The project page for "LOGIC-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning"
snap-research/BitsFusion
YuchuanTian/DiJiang
[ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism.
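For context on what "linear attention mechanism" means here: kernelized attention replaces softmax(QK^T)V with feature maps phi so that running sums over keys make each step O(d^2) instead of O(n) per query. A generic sketch of that recurrence (the `elu1` feature map is a common stand-in assumption; DiJiang's actual contribution is a DCT-based kernelization, which this does not reproduce):

```python
import math

def elu1(x):
    # Positive feature map 1 + ELU(x); a generic stand-in, not DiJiang's
    # DCT-based kernel, used only to show the linear-attention recurrence.
    return x + 1.0 if x > 0 else math.exp(x)

def linear_attention(qs, ks, vs):
    """Causal kernelized attention in O(n * d^2) rather than softmax's O(n^2 * d).

    Maintains running sums S = sum_s phi(k_s) v_s^T and z = sum_s phi(k_s),
    so each step's cost is independent of sequence length.
    """
    d = len(qs[0])
    S = [[0.0] * d for _ in range(d)]  # d x d running outer-product sum
    z = [0.0] * d                      # running key-feature sum
    out = []
    for q, k, v in zip(qs, ks, vs):
        fq = [elu1(x) for x in q]
        fk = [elu1(x) for x in k]
        for i in range(d):
            z[i] += fk[i]
            for j in range(d):
                S[i][j] += fk[i] * v[j]
        denom = sum(fq[i] * z[i] for i in range(d)) + 1e-9
        out.append([sum(fq[i] * S[i][j] for i in range(d)) / denom
                    for j in range(d)])
    return out

qs = ks = vs = [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1]]
res = linear_attention(qs, ks, vs)
```

At the first position the running sums contain a single key, so the output equals the first value vector exactly, which makes the recurrence easy to sanity-check.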
SJTU-IPADS/Bamboo
Bamboo-7B Large Language Model
quasimetric-learning/quasimetric-rl
Open-source code for the ICML 2023 paper "Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning"
fangyuan-ksgk/paper-read-2024-Jan-Apr
Paper-reading run from 17 Jan 2024 to 17 Apr 2024
JiHa-Kim/neural-network-from-scratch