hoelzl's Stars
meilisearch/meilisearch
A lightning-fast search API that fits effortlessly into your apps, websites, and workflow
All-Hands-AI/OpenHands
🙌 OpenHands: Code Less, Make More
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
QwenLM/Qwen2.5
Qwen2.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
mshumer/gpt-prompt-engineer
catboost/catboost
A fast, scalable, high-performance library for gradient boosting on decision trees, used for ranking, classification, regression, and other machine learning tasks in Python, R, Java, and C++. Supports computation on CPU and GPU.
Stability-AI/StableCascade
Official Code for Stable Cascade
yl4579/StyleTTS2
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
SciPhi-AI/R2R
The most advanced AI retrieval system: containerized Retrieval-Augmented Generation (RAG) with a RESTful API.
philz1337x/clarity-upscaler
Clarity AI | AI Image Upscaler & Enhancer - free and open-source Magnific Alternative
microsoft/aici
AICI: Prompts as (Wasm) Programs
OpenCodeInterpreter/OpenCodeInterpreter
OpenCodeInterpreter is a suite of open-source code generation systems aimed at bridging the gap between large language models and sophisticated proprietary systems like the GPT-4 Code Interpreter. It significantly enhances code generation capabilities by integrating execution and iterative refinement functionalities.
hrishioa/lumentis
AI-powered one-click comprehensive docs from transcripts and text.
jiaweizzhao/GaLore
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
uclaml/SPIN
The official implementation of Self-Play Fine-Tuning (SPIN)
yunabe/tslab
Interactive JavaScript and TypeScript programming with Jupyter
google-deepmind/concordia
A library for generative social simulation
NVIDIA/NeMo-Curator
Scalable data preprocessing and curation toolkit for LLMs
kyegomez/LongNet
Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
JuliaWeb/HTTP.jl
HTTP for Julia
thmsmlr/instructor_ex
Structured outputs for LLMs in Elixir
FranxYao/Long-Context-Data-Engineering
Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context"
JuliaGizmos/Blink.jl
Web-based GUIs for Julia
ML-KULeuven/problog
ProbLog is a Probabilistic Logic Programming Language for logic programs with probabilities.
center-for-humans-and-machines/transformer-heads
Toolkit for attaching, training, saving, and loading new heads for transformer models
JuliaWeb/WebSockets.jl
A WebSockets library for Julia
Muhtasham/pod-helper
🎧 Pod-Helper: Real-time audio transcription and repair on consumer hardware
dananau/GTPyhop
A task-planning system based on Pyhop, but generalized to plan for both goals and tasks.
GoodAI/goodai-ltm-benchmark
A library for benchmarking the long-term memory and continual-learning capabilities of LLM-based agents, with all the tests and code you need to evaluate your own agents. See more in the blog post.
GoodAI/goodai-ltm
A Python library for long-term memory in language models. Improve conversational scenarios and create autonomous learning agents with enhanced context.