Generative AI systems are transforming industries, with methods like Retrieval Augmented Generation (RAG) and Compound AI systems at the forefront. These systems enhance tasks like information retrieval, decision-making, and content generation but can come with high computational costs.
Semantic caching is a widely used technique for reducing the computational demands of AI systems by storing previously processed queries and their responses. When a new query is semantically similar to one already answered, the cached response is returned instead of recomputing it, cutting latency and server load. This is especially valuable for scaling agentic applications, where many incoming queries are near-duplicates or slight variations of queries already answered.
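The core lookup logic can be illustrated with a minimal, self-contained sketch. The embedding function below is a deliberately simplistic bag-of-words placeholder (a real deployment would call a sentence-embedding model and a vector database such as Mosaic AI Vector Search); the class name, threshold value, and helper functions are illustrative assumptions, not part of this accelerator's API.

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Placeholder embedding: bag-of-words token counts.
    # A real system would use a sentence-embedding model endpoint.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    """Return a cached response when a new query is semantically
    close enough to a previously answered one."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def lookup(self, query: str):
        q = embed(query)
        best_score, best_response = 0.0, None
        for emb, response in self.entries:
            score = cosine(q, emb)
            if score > best_score:
                best_score, best_response = score, response
        if best_score >= self.threshold:
            return best_response  # cache hit: skip the LLM call
        return None  # cache miss: caller invokes the model and stores the result

    def store(self, query: str, response: str):
        self.entries.append((embed(query), response))


cache = SemanticCache(threshold=0.8)
cache.store("what is semantic caching",
            "Semantic caching reuses answers for similar queries.")
hit = cache.lookup("What is semantic caching?")    # near-duplicate: cache hit
miss = cache.lookup("how do I reset my password")  # unrelated: cache miss
```

In practice the threshold is the key tuning knob: too low and unrelated queries return stale answers, too high and near-duplicates miss the cache.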
Databricks offers an ideal platform for building AI agents with semantic caching through Mosaic AI, which provides integrated components such as a vector database, an agent framework, and agent evaluation, all governed centrally. This solution accelerator implements semantic caching on that platform, reducing response times and computational overhead for similar queries.
ryuta.yoshimatsu@databricks.com, nehme.tohme@databricks.com, ellen.hirt@databricks.com
Please note that the code in this project is provided for your exploration only and is not formally supported by Databricks with Service Level Agreements (SLAs). It is provided AS-IS, and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of this project. The source in this project is provided subject to the Databricks License. All included or referenced third-party libraries are subject to the licenses set forth below.
Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.
© 2024 Databricks, Inc. All rights reserved. The source in this notebook is provided subject to the Databricks License [https://databricks.com/db-license-source]. All included or referenced third party libraries are subject to the licenses set forth below.