marqo-ai/marqo

Marqo Integration into GPTCache

mattma1970 opened this issue · 1 comments

Is your feature request related to a problem? Please describe.
I'm building a customer-service voice bot, and time to first utterance (i.e. the time to yield the first sentence) is critical for creating a synchronous, conversational LLM CX. However, current commercial APIs deliver roughly 50 tokens/second on a good day. If the first utterance runs longer than about 20 words, the entire latency budget is blown. Caching is the obvious solution, and in particular semantic caching, as exemplified by GPTCache (6.1k stars), which uses natural-language understanding to hit the cache.
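To make the latency claim concrete, here is a back-of-envelope sketch (the tokens-per-second figure is from the description above; the words-per-token ratio of ~0.75 is a common rule-of-thumb assumption, not a measured value):

```python
# Rough generation-latency estimate for the first utterance of a voice bot.
# At ~50 tokens/s, a 20-word sentence costs over half a second of pure
# generation time, before any network, queuing, or TTS overhead is added.

def generation_time_s(words: int,
                      tokens_per_second: float = 50.0,
                      words_per_token: float = 0.75) -> float:
    """Estimate seconds spent generating `words` words of output."""
    tokens = words / words_per_token
    return tokens / tokens_per_second

print(round(generation_time_s(20), 2))  # -> 0.53
```

A cache hit, by contrast, returns the whole answer in one round trip, which is why semantic caching matters here.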

Describe the solution you'd like

  1. Create an integration of Marqo with GPTCache: https://github.com/zilliztech/GPTCache?tab=readme-ov-file
  2. For bonus points, create an alternative semantic layer that uses Marqo DNA, which would allow us to bring our own embedding models for caching.
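For anyone unfamiliar with the pattern, the semantic layer described above boils down to: embed the incoming query, search previously cached queries by vector similarity, and return the stored answer on a close-enough match. A minimal, self-contained sketch (the `toy_embed` function and the 0.8 threshold are illustrative stand-ins for a real bring-your-own embedding model and a tuned threshold; this is not the GPTCache or Marqo API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Toy in-memory semantic cache: store (embedding, answer) pairs and
    return a cached answer when a new query embeds close enough to one."""

    def __init__(self, embed, threshold=0.8):
        self.embed = embed        # bring-your-own embedding function
        self.threshold = threshold
        self.entries = []         # list of (vector, answer) pairs

    def put(self, query, answer):
        self.entries.append((self.embed(query), answer))

    def get(self, query):
        qv = self.embed(query)
        best, best_sim = None, self.threshold
        for vec, answer in self.entries:
            sim = cosine(qv, vec)
            if sim >= best_sim:
                best, best_sim = answer, sim
        return best  # None on a cache miss

def toy_embed(text):
    # Trivial bag-of-words hash embedding, purely for demonstration;
    # a real deployment would call an embedding model instead.
    vec = [0.0] * 16
    for word in text.lower().split():
        vec[hash(word) % 16] += 1.0
    return vec

cache = SemanticCache(toy_embed, threshold=0.8)
cache.put("what are your opening hours", "We are open 9-5, Monday to Friday.")
print(cache.get("what are your opening hours today"))  # paraphrase still hits
```

In a Marqo-backed version, the `embed` call and the similarity search would both be served by the Marqo index rather than an in-memory list.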

Describe alternatives you've considered

  • GPTCache on its own, but this adds yet another backend that I need to maintain, and I'd like my stack to get smaller, not larger.
  • RedisVL, but like GPTCache it has limited embedding-model support (and is text only).


Hey @mattma1970, I have created this integration in GPTCache. Let me know if you find it useful, have any suggestions or improvements, or run into any bugs. Looking forward to feedback!