Pinned Repositories
llama.cpp
LLM inference in C/C++
gpu-jupyter
GPU-Jupyter: Leverage the flexibility of JupyterLab and the power of your NVIDIA GPU to run your TensorFlow and PyTorch code in collaborative notebooks on the GPU.
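A quick way to confirm that such a notebook actually sees the GPU is to query both frameworks from a cell. This is only a generic sketch, assuming a container where TensorFlow and PyTorch are both installed (which is what GPU-Jupyter provides); it is not taken from the repository itself.

import tensorflow as tf
import torch

# List the GPUs TensorFlow can see; an empty list means no CUDA device is visible.
print(tf.config.list_physical_devices("GPU"))

# True if PyTorch can reach a CUDA device from inside the container.
print(torch.cuda.is_available())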
CasparCG-Client-Example
CasparCG Client Example
embeddings
This application is a FastAPI server that provides an API for working with embeddings. It's built with Python and packaged with Docker for easy deployment and scaling.
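The repository contents are not reproduced here, so the sketch below only illustrates what a FastAPI embeddings endpoint of this kind commonly looks like; the /embed route, the request and response schemas, and the sentence-transformers model are assumptions, not the actual API of itodorovic/embeddings.

from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer

app = FastAPI()
# Assumed embedding model; the real service may load something else.
model = SentenceTransformer("all-MiniLM-L6-v2")

class EmbedRequest(BaseModel):
    texts: list[str]

class EmbedResponse(BaseModel):
    embeddings: list[list[float]]

@app.post("/embed", response_model=EmbedResponse)
def embed(req: EmbedRequest) -> EmbedResponse:
    # Encode each input text into a fixed-size vector.
    vectors = model.encode(req.texts)
    return EmbedResponse(embeddings=[v.tolist() for v in vectors])

A server like this can be started locally with uvicorn main:app (assuming the file is named main.py); the Docker packaging mentioned in the description would typically wrap an entry point of this kind.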
itodorovic's Repositories
itodorovic/CasparCG-Client-Example
CasparCG Client Example
itodorovic/embeddings
This application is a FastAPI server that provides an API for working with embeddings. It's built with Python and packaged with Docker for easy deployment and scaling.