Pinned Repositories
edgen
⚡ Edgen: a local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama 2, Mistral, Mixtral, ...), speech-to-text (Whisper), and many others.
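Since Edgen positions itself as a drop-in local alternative to OpenAI, a client would talk to it over an OpenAI-style HTTP API. The sketch below is a hypothetical illustration using only the Python standard library; the base URL, port, and model name are assumptions, not Edgen's documented defaults.

```python
# Hypothetical sketch: building an OpenAI-style chat-completions request
# for a local GenAI server such as Edgen. The URL/port and model name
# below are illustrative assumptions.
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Assumed local endpoint; sending it would be urllib.request.urlopen(req).
req = build_chat_request("http://localhost:33322", "mistral", "Hello!")
```

Because the wire format matches OpenAI's, existing OpenAI client libraries can usually be pointed at such a server just by overriding the base URL.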
atoma-node
Atoma's node infra
atoma-node-inference
Paged-attention CUDA kernels for the Atoma protocol
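Paged attention stores the KV cache in fixed-size physical blocks and uses a per-sequence block table to map logical token positions to those blocks, so memory is allocated on demand rather than reserved up front. The following is a minimal Python sketch of that bookkeeping (the real work lives in CUDA kernels; the block size and class names here are illustrative assumptions, not Atoma's actual code).

```python
# Minimal sketch of the paged KV-cache bookkeeping behind paged attention.
# Illustrative only: block size and names are assumptions.

BLOCK_SIZE = 16  # tokens per physical block (assumed value)


class PagedKVCache:
    def __init__(self):
        self.next_block = 0
        self.block_tables: dict[int, list[int]] = {}  # seq_id -> physical block ids

    def append_token(self, seq_id: int, position: int) -> tuple[int, int]:
        """Return (physical_block, offset) where this token's KV entry lives,
        allocating a fresh physical block when the sequence crosses a block boundary.
        Assumes tokens are appended in order (position 0, 1, 2, ...)."""
        table = self.block_tables.setdefault(seq_id, [])
        block_idx, offset = divmod(position, BLOCK_SIZE)
        if block_idx == len(table):  # sequence needs a new physical block
            table.append(self.next_block)
            self.next_block += 1
        return table[block_idx], offset


cache = PagedKVCache()
```

The point of the indirection is that sequences of very different lengths can share one physical pool without fragmentation, which is what makes high-throughput batched serving memory-efficient.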
Degiro2IRS-Autofiller
Automatically fill in the Anexo J-9.2-A table
docs
edgen-client-node
Edgen client library for Node.js / TypeScript
edgen-client-python
Python client for the Edgen API
esp32-mcp3564
llama.cpp
LLM inference in C/C++
mistral.rs
Blazingly fast LLM inference.
francis2tm's Repositories
francis2tm/Degiro2IRS-Autofiller
Automatically fill in the Anexo J-9.2-A table
francis2tm/atoma-node
Atoma's node infra
francis2tm/atoma-node-inference
Paged-attention CUDA kernels for the Atoma protocol
francis2tm/docs
francis2tm/edgen-client-node
Edgen client library for Node.js / TypeScript
francis2tm/edgen-client-python
Python client for the Edgen API
francis2tm/esp32-mcp3564
francis2tm/llama.cpp
LLM inference in C/C++
francis2tm/mistral.rs
Blazingly fast LLM inference.
francis2tm/onnx2torch
Convert ONNX models to PyTorch.
francis2tm/open-webui
francis2tm/OrangeCrab-test-sw
Software, firmware, and gateware for OrangeCrab ATE.
francis2tm/SaxonSoc
SoC based on VexRiscv and ICE40 UP5K
francis2tm/SWE-agent
SWE-agent takes a GitHub issue and tries to fix it automatically, using GPT-4 or your LM of choice. It solves 12.29% of bugs in the SWE-bench evaluation set and takes just 1.5 minutes to run.
francis2tm/python-rust-crypto
francis2tm/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs