Pinned Repositories
automindx
Professor Codephreak, a local language model in pursuit of agency; codephreak aims to create automind from aGLM
easyAGI
Autonomous General Learning Model framework; a non-integrated version of the modules
funAGI
Fundamental AGI with logic and SocraticReasoning, a point of departure for the funAGI workflow
imaginarium
imaginarium UI/UX: Natural Language Programming (NLP) with openUI and imaginator
ollama-webui-lite
Ollama WebUI Stripped 🦙
open-ui
Maintain an open standard for UI and promote its adherence and adoption.
pgvectorscale
A complement to pgvector for high-performance, cost-efficient vector search on large workloads.
RAGE
Retrieval Augmented Generative Engine
README-md
Autonomous General Learning Model
vectara-ingest
An open-source framework to crawl data sources and ingest them into Vectara
aGLM's Repositories
autoGLM/easyAGI
Autonomous General Learning Model framework; a non-integrated version of the modules
autoGLM/funAGI
Fundamental AGI with logic and SocraticReasoning, a point of departure for the funAGI workflow
autoGLM/ollama-webui-lite
Ollama WebUI Stripped 🦙
autoGLM/pgvectorscale
A complement to pgvector for high-performance, cost-efficient vector search on large workloads.
autoGLM/README-md
Autonomous General Learning Model
autoGLM/vectara-ingest
An open-source framework to crawl data sources and ingest them into Vectara
autoGLM/agnai
AI Agnostic (Multi-user and Multi-bot) Chat with Fictional Characters. Designed with scale in mind.
autoGLM/imaginarium
imaginarium UI/UX: Natural Language Programming (NLP) with openUI and imaginator
autoGLM/open-ui
Maintain an open standard for UI and promote its adherence and adoption.
autoGLM/RAGE
Retrieval Augmented Generative Engine
autoGLM/.github
autoGLM/aGLM-uiux
Simple HTML UI for extrapolation and connection with Ollama, using HTML as a point of departure
autoGLM/anthropic-sdk-python
Anthropic Claude Python SDK
autoGLM/anything-llm
The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
autoGLM/bash-completion
Programmable completion functions for bash
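As a minimal sketch of what bash-completion's programmable completion functions look like (the `aglm` command and its subcommands here are hypothetical, purely for illustration):

```shell
# Completion function for a hypothetical `aglm` command.
# compgen -W filters the word list against what the user has typed so far.
_aglm_complete() {
  local cur="${COMP_WORDS[COMP_CWORD]}"
  COMPREPLY=( $(compgen -W "train infer ingest" -- "$cur") )
}

# Register the function so bash calls it when completing `aglm` arguments.
complete -F _aglm_complete aglm
```

After sourcing this, typing `aglm in<Tab>` would offer `infer` and `ingest`.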
autoGLM/groq-ai-toolkit
A versatile CLI and Python wrapper for Groq AI's breakthrough LPU Inference Engine. Streamline the creation of chatbots and generate dynamic text at speeds of up to 800 tokens/sec.
autoGLM/jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
autoGLM/levanter
Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax
autoGLM/litellm
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
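litellm's premise is that every provider is called through the OpenAI chat-completions format. A minimal sketch of that request shape (the provider/model string below is an illustrative example, not a tested endpoint):

```python
# OpenAI-style chat request that litellm normalizes across providers.
# The "model" value is illustrative; litellm routes on the provider prefix.
request = {
    "model": "ollama/llama3",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize RAGE in one sentence."},
    ],
}
```

Swapping providers then means changing only the `model` string, not the message structure.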
autoGLM/llama3
The official Meta Llama 3 GitHub site
autoGLM/llama_index
LlamaIndex is a data framework for your LLM applications
autoGLM/llamafile
Distribute and run LLMs with a single file.
autoGLM/llm_slerp_generation
Repo hosting codes and materials related to speeding LLMs' generative abilities while preserving quality using token merging.
autoGLM/megalodon
Megalodon 7B model with unlimited learning context, thanks to Meta
autoGLM/nicegui
Web UI/UX builder for diverse applications, including robotics, IoT solutions, smart-home automation, and machine learning
autoGLM/ollama
Get up and running with Llama 2, Llama 3, Mistral, Gemma, and other large language models
autoGLM/open-interpreter
A natural language interface for computers
autoGLM/pinecone-python-client
The Pinecone Python client
autoGLM/RAGLM
Microsoft Node Engine as a Python service that executes computational flows, designed for rapid prototyping of machine learning services and applications
autoGLM/unsloth
2-5x faster LLM fine-tuning with 80% less memory