Pinned Repositories
doc-insights
examples
Xorbits Example Notebooks
inference
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need: run inference with any open-source language model, speech recognition model, or multimodal model, whether in the cloud, on-premises, or on your laptop.
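The "single line of code" claim refers to Xinference serving an OpenAI-compatible REST API, so an existing OpenAI client only needs its base URL changed to the local endpoint. A minimal stdlib sketch of that idea, assuming a Xinference server on its default port 9997 and a hypothetical model name (the request is built but not sent):

```python
import json
import urllib.request

# With the official OpenAI client, the one-line change would be:
#   client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")
# Here the same request is built with only the standard library, pointing at
# the local Xinference endpoint instead of https://api.openai.com/v1.
BASE_URL = "http://localhost:9997/v1"  # assumption: default Xinference port

payload = {
    "model": "my-local-model",  # hypothetical model name for illustration
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send it once a server is running.
print(req.full_url)
```

Because the wire format matches OpenAI's chat-completions API, the rest of the application code is unchanged; only the endpoint (and model name) differ.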
inference-client
mars
Mars is a tensor-based unified framework for large-scale data computation that scales numpy, pandas, scikit-learn, and Python functions.
MMKV
An efficient, small mobile key-value storage framework developed by WeChat. Works on Android, iOS, macOS, Windows, and POSIX.
my-helm-charts
TestTorchSocket
velox
A C++ vectorized database acceleration library aimed at optimizing query engines and data processing systems.
xorbits
Scalable Python data science, in an API-compatible & lightning-fast way.
ChengjieLi28's Repositories
ChengjieLi28/doc-insights
ChengjieLi28/examples
ChengjieLi28/inference
ChengjieLi28/inference-client
ChengjieLi28/mars
ChengjieLi28/MMKV
ChengjieLi28/my-helm-charts
ChengjieLi28/TestTorchSocket
ChengjieLi28/velox
ChengjieLi28/xorbits
ChengjieLi28/xoscar
Python actor framework for heterogeneous computing.