codefuse-ai/ModelCache
An LLM semantic caching system that aims to improve user experience by reducing response time through cached query-result pairs.
Language: Python | License: NOASSERTION
Issues
- [Coding Challenge Season] Develop an adapter for integrating large language models (#51, opened by peng3307165, 1 comment)
- [Coding Challenge Season] More embedding generators (#54, opened by peng3307165, 1 comment)
- [Coding Challenge Season] Reranker capability for ModelCache retrieval (#55, opened by peng3307165, 2 comments)
- [Coding Challenge Season] FastAPI interface support (#53, opened by peng3307165, 1 comment)
- [Coding Challenge Season] Dockerfile for ModelCache (#52, opened by peng3307165, 1 comment)
- [Coding Challenge Season] Chinese and English README for MultiCache (#49, opened by peng3307165, 0 comments)
- [Coding Challenge Season] ModelCache documentation (#50, opened by peng3307165, 1 comment)
- What is the `model` field in `scope` used for? (#48, opened by cgq0816, 1 comment)
- [Feature: Ranking ability] Add a ranking model to refine the order of data after embedding recall (#42, opened by isHuangXin, 1 comment)
- Is the project still being maintained, or are there any new plans for updates? (#45, opened by wongyan-data, 1 comment)
- Can ModelChat be used in FastChat? (#43, opened by 3togo, 1 comment)
- Params not used in code (#29, opened by liwenshipro, 1 comment)
- Is the cache based on prompts? (#20, opened by ArachisTong, 1 comment)
- Many thanks to Ant for open-sourcing the code model (#13, opened by wengyuan722, 3 comments)