Pinned Repositories
llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It allows users to chat with LLMs, execute structured function calls, and get structured output. It also works with models not fine-tuned for JSON output and function calls.
apollo-server-integration-svelte
Apollo Server integration with Svelte
covid19ecuador
Status of COVID-19 in Ecuador
dolphin-bot
Cognitive Computations: Dolphin LLM Discord bot
fcom
fcom (short for file_combiner) is a versatile Rust CLI tool designed to process folders and files, combining their contents into a single output file. It offers a range of features for file manipulation and directory analysis.
hf-exllama
Hugging Face Space with ExLlamaV2
mio-gitea
Gitea Dockerized
mio-startpage
Startpage and bookmarks
nix-terraform-cloud-vm
Deploying a NixOS cloud VM with Terraform
Objective-C
Repository with all the projects I made while learning iOS programming.
pabl-o-ce's Repositories
pabl-o-ce/mio-gitea
Gitea Dockerized
pabl-o-ce/dolphin-bot
Cognitive Computations: Dolphin LLM Discord bot
pabl-o-ce/apollo-server-integration-svelte
Apollo Server integration with Svelte
pabl-o-ce/llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It provides a simple yet robust interface using llama-cpp-python, allowing users to chat with LLMs, execute structured function calls, and get structured output.
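As a quick illustration, chatting through the framework looks roughly like this; the class and argument names follow the project's documented usage, but the model path is a placeholder and details may differ between versions:

```python
from llama_cpp import Llama
from llama_cpp_agent import LlamaCppAgent, MessagesFormatterType
from llama_cpp_agent.providers import LlamaCppPythonProvider

# Load a local GGUF model through llama-cpp-python (the path is a placeholder).
llama = Llama(model_path="models/model.gguf", n_ctx=4096)
provider = LlamaCppPythonProvider(llama)

agent = LlamaCppAgent(
    provider,
    system_prompt="You are a helpful assistant.",
    predefined_messages_formatter_type=MessagesFormatterType.CHATML,
)

print(agent.get_chat_response("Hello!"))
```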
pabl-o-ce/llama-cpp-python
Python bindings for llama.cpp
pabl-o-ce/mio-startpage
Startpage and bookmarks
pabl-o-ce/poscye-discord-ai-bot
Poscye ΔI BØT
pabl-o-ce/ToolAgents
ToolAgents is a lightweight and flexible framework for creating function-calling agents with various language models and APIs.
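ToolAgents' own API isn't shown here; the following is a generic Python sketch of the function-calling pattern such frameworks automate (the tool name and the JSON call format below are illustrative only, not ToolAgents' actual interface):

```python
import json

def get_weather(city: str) -> str:
    """Toy tool the model is allowed to call (illustrative only)."""
    return json.dumps({"city": city, "forecast": "sunny"})

TOOLS = {"get_weather": get_weather}

def handle_model_output(model_output: str) -> str:
    """Execute a JSON tool call emitted by the model, or pass plain text through."""
    try:
        call = json.loads(model_output)
        return TOOLS[call["name"]](**call["arguments"])
    except (json.JSONDecodeError, KeyError, TypeError):
        return model_output  # plain-text answer, no tool call

# e.g. the model emits: {"name": "get_weather", "arguments": {"city": "Quito"}}
print(handle_model_output('{"name": "get_weather", "arguments": {"city": "Quito"}}'))
```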
pabl-o-ce/fcom
fcom (short for file_combiner) is a versatile Rust CLI tool designed to process folders and files, combining their contents into a single output file. It offers a range of features for file manipulation and directory analysis.
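Conceptually, the combining step works like the Python sketch below; this mirrors the idea only and is not fcom's Rust implementation or its actual CLI flags:

```python
from pathlib import Path

def combine_files(root: str, output: str, pattern: str = "*") -> None:
    """Walk a directory tree and concatenate matching files into one output,
    prefixing each chunk with its source path (conceptual sketch only)."""
    with open(output, "w", encoding="utf-8") as out:
        for path in sorted(Path(root).rglob(pattern)):
            if path.is_file():
                out.write(f"---- {path} ----\n")
                out.write(path.read_text(encoding="utf-8", errors="replace"))
                out.write("\n")

combine_files("src", "combined.txt", "*.rs")
```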
pabl-o-ce/hf-exllama
Hugging Face Space with ExLlamaV2
pabl-o-ce/catppuccin-gitea
🍵 Soothing pastel theme for Gitea
pabl-o-ce/code-review-hf-space
pabl-o-ce/dolphin-space
🐬
pabl-o-ce/exllamav2
A fast inference library for running LLMs locally on modern consumer-class GPUs
pabl-o-ce/expo-3nstar-scanner
pabl-o-ce/hf-tess
Tess-Reasoning-1 (Tess-R1) series of models. Tess-R1 is designed with test-time compute in mind and can produce Chain-of-Thought (CoT) reasoning before the final output.
pabl-o-ce/llama-cpp-agent-documentation
Documentation for the llama-cpp-agent framework
pabl-o-ce/llama-cpp-agent-webui
pabl-o-ce/llama.cpp
LLM inference in C/C++
pabl-o-ce/mio-forgejo
pabl-o-ce/mistral.rs
Blazingly fast LLM inference.
pabl-o-ce/model-card-gen-hf
pabl-o-ce/nixos
NixOS configuration with support for amd64 and arm64
pabl-o-ce/nixos-pi
pabl-o-ce/omni-zero
A diffusers pipeline for zero shot stylised portrait creation
pabl-o-ce/opnsense-cpt
OPNsense captive portal template
pabl-o-ce/poscye-lmsys
pabl-o-ce/srt-inference-backends
Launchers and scripts for managing inference APIs
pabl-o-ce/StorySphere
Navigate the Universe of Your Imagination
pabl-o-ce/tabbyAPI
An OAI compatible exllamav2 API that's both lightweight and fast
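Because tabbyAPI is OpenAI-compatible, it can be queried with the standard openai Python client; the base URL, API key, and model name below are assumptions for illustration (check the server's actual configuration):

```python
from openai import OpenAI

# Point the standard OpenAI client at a local tabbyAPI instance.
# Base URL, API key, and model name are assumed values, not defaults to rely on.
client = OpenAI(base_url="http://localhost:5000/v1", api_key="your-tabby-api-key")

response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Hello from tabbyAPI!"}],
)
print(response.choices[0].message.content)
```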