Pinned Repositories
cortex.cpp
Local AI API Platform
cortex.llamacpp
cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime.
cortex.onnx
cortex.tensorrt-llm
cortex.tensorrt-llm is a C++ inference library that can be loaded by any server at runtime. It vendors NVIDIA's TensorRT-LLM as a git submodule for GPU-accelerated inference on NVIDIA GPUs.
jan
Jan is an open-source alternative to ChatGPT that runs 100% offline on your computer. It supports multiple engines (llama.cpp, TensorRT-LLM).
GR2_FreighExchange
Graduation research project (HEDSPI)
pyspark
Qt
Repo for our Qt source
ShipperProject-Shipper
Spark
vansangpfiev's Repositories
vansangpfiev/GR2_FreighExchange
Graduation research project (HEDSPI)
vansangpfiev/pyspark
vansangpfiev/Qt
Repo for our Qt source
vansangpfiev/ShipperProject-Shipper
vansangpfiev/Spark