Welcome to the Spring AI project!
The Spring AI project provides a Spring-friendly API and abstractions for developing AI applications.
Let's make your @Beans intelligent!
🥳 The Spring AI project has graduated out of the repository!
January 24, 2024 Update
- Moving the `prompt`, `messages`, and `metadata` packages to subpackages of `org.sf.ai.chat`.
- New functionality: text-to-image clients. The classes are `OpenAiImageClient` and `StabilityAiImageClient`. See the integration tests for usage; docs are coming soon.
- A new package, `model`, that contains interfaces and base classes to support creating AI Model Clients for any input/output data type combination. At the moment the chat and image model packages implement this. We will be updating the embedding package to this new model soon.
- A new "portable options" design pattern. We wanted to provide as much portability in the `ChatClient` as possible across different chat-based AI models. There is a common set of generation options, and then there are those that are specific to a model provider. A sort of "duck typing" approach is used. `ModelOptions` in the model package is a marker interface indicating that implementations of this class provide the options for a model. See `ImageOptions`, a subinterface that defines portable options across all text-to-image `ImageClient` implementations. Then `StabilityAiImageOptions` and `OpenAiImageOptions` provide the options specific to each model provider. All options classes are created via a fluent API builder, and all can be passed into the portable `ImageClient` API (see the sketch after this list). These option data types are used in the auto-configuration/configuration properties for the `ImageClient` implementations.
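Below is a hedged sketch of how the portable options pattern might be used. The builder method names, accessor chain, and exact package locations are assumptions based on the description above, not verified signatures, so treat it as illustrative only.

```java
import org.springframework.ai.image.ImageClient;
import org.springframework.ai.image.ImagePrompt;
import org.springframework.ai.image.ImageResponse;
import org.springframework.ai.openai.OpenAiImageOptions;

public class ImageGenerationExample {

    // Could be OpenAiImageClient or StabilityAiImageClient; the calling code only sees ImageClient.
    private final ImageClient imageClient;

    public ImageGenerationExample(ImageClient imageClient) {
        this.imageClient = imageClient;
    }

    public String generateUrl(String description) {
        // Provider-specific options built with a fluent builder (method names are assumptions).
        OpenAiImageOptions options = OpenAiImageOptions.builder()
                .withModel("dall-e-3")   // placeholder model name
                .withWidth(1024)
                .withHeight(1024)
                .build();

        // The provider-specific options are passed through the portable ImageClient API.
        ImageResponse response = imageClient.call(new ImagePrompt(description, options));
        return response.getResult().getOutput().getUrl();
    }
}
```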
January 13, 2024 Update
The following OpenAI autoconfiguration chat properties have changed:

- from `spring.ai.openai.model` to `spring.ai.openai.chat.model`
- from `spring.ai.openai.temperature` to `spring.ai.openai.chat.temperature`

Find updated documentation about the OpenAI properties at https://docs.spring.io/spring-ai/reference/api/clients/openai.html
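For illustration, an application.properties using the new keys might look like the following; the model name and temperature values are placeholders, not recommendations.

```properties
# before (no longer used)
# spring.ai.openai.model=gpt-3.5-turbo
# spring.ai.openai.temperature=0.7

# after
spring.ai.openai.chat.model=gpt-3.5-turbo
spring.ai.openai.chat.temperature=0.7
```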
December 27, 2023 Update
Merged `SimplePersistentVectorStore` and `InMemoryVectorStore` into `SimpleVectorStore`.

- Replace `InMemoryVectorStore` with `SimpleVectorStore`.
December 20, 2023 Update
Refactor the Ollama client and related classes and package names
- Replace `org.springframework.ai.ollama.client.OllamaClient` with `org.springframework.ai.ollama.OllamaChatClient`.
- The `OllamaChatClient` method signatures have changed.
- Rename `org.springframework.ai.autoconfigure.ollama.OllamaProperties` to `org.springframework.ai.autoconfigure.ollama.OllamaChatProperties` and change the property prefix to `spring.ai.ollama.chat`. Some of the properties have changed as well.
December 19, 2023 Update
Renaming of AiClient and related classes and package names
- Rename AiClient to ChatClient
- Rename AiResponse to ChatResponse
- Rename AiStreamClient to StreamingChatClient
- Rename package org.sf.ai.client to org.sf.ai.chat
Renamed the artifact ID of `transformers-embedding` to `spring-ai-transformers`.

Moved Maven modules from the top-level directory and the `embedding-clients` subdirectory to all be under a single `models` directory.
December 1, 2023
We are transitioning the project's Group ID:
- FROM: `org.springframework.experimental.ai`
- TO: `org.springframework.ai`
Artifacts will still be hosted in the snapshot repository as shown below.
The main branch will move to version 0.8.0-SNAPSHOT.
It will be unstable for a week or two.
Please use 0.7.1-SNAPSHOT if you don't want to be on the bleeding edge.
You can access 0.7.1-SNAPSHOT artifacts as before and still access the 0.7.1-SNAPSHOT documentation.
This repository contains large model files. To clone it you have to either:
- Ignore the large files (this won't affect the Spring AI behaviour): `GIT_LFS_SKIP_SMUDGE=1 git clone git@github.com:spring-projects/spring-ai.git`
- Or install Git Large File Storage before cloning the repo.
- Documentation
- Issues
- Discussions - Go here if you have a question, suggestion, or feedback!
- JavaDocs
- Follow the Workshop
- Overview of Spring AI @ Devoxx 2023
- Introducing Spring AI - Add Generative AI to your Spring Applications
The Spring AI project provides artifacts in the Spring Snapshot Repository. You will need to add a reference to the Spring Snapshot repository in your build file. For example, in Maven, add the following repository definition.
<repositories>
  <repository>
    <id>spring-snapshots</id>
    <name>Spring Snapshots</name>
    <url>https://repo.spring.io/snapshot</url>
    <releases>
      <enabled>false</enabled>
    </releases>
  </repository>
</repositories>
Add the Spring Boot Starter, depending on whether you are using Azure OpenAI or OpenAI.
The main branch has moved to version 0.8.0-SNAPSHOT. It will be unstable for a week or two. Please use 0.7.1-SNAPSHOT if you don't want to be on the bleeding edge.
For 0.8.0-SNAPSHOT:
- Azure OpenAI
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- OpenAI
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
For 0.7.1-SNAPSHOT, the starters use the previous Group ID:
- Azure OpenAI
<dependency>
<groupId>org.springframework.experimental.ai</groupId>
<artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
<version>0.7.1-SNAPSHOT</version>
</dependency>
- OpenAI
<dependency>
<groupId>org.springframework.experimental.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
<version>0.7.1-SNAPSHOT</version>
</dependency>
The following vector stores are supported:
- Azure Vector Store
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-azure-vector-store-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- Chroma
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-chroma-store-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- Milvus
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-milvus-store-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- PGVector
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-pgvector-store-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- Pinecone
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-pinecone-store-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- Weaviate
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-weaviate-store-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- Neo4j
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-neo4j-store-spring-boot-starter</artifactId>
<version>0.8.0-SNAPSHOT</version>
</dependency>
- You can try out the features of Spring AI by following the workshop material for Azure OpenAI
- To use the workshop material with OpenAI (not Azure's offering), you will need to replace the Azure OpenAI Boot Starter in the `pom.xml` with the OpenAI Boot Starter.
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
<version>0.7.1-SNAPSHOT</version>
</dependency>
Despite the extensive history of AI, Java's role in this domain has been relatively minor. This is mainly due to the historical reliance on efficient algorithms developed in languages such as C/C++, with Python serving as a bridge to access these libraries. The majority of ML/AI tools were built around the Python ecosystem. However, recent progress in Generative AI, spurred by innovations like OpenAI's ChatGPT, has popularized the interaction with pre-trained models via HTTP. This eliminates much of the dependency on C/C++/Python libraries and opens the door to the use of programming languages such as Java.
The Python libraries LangChain and LlamaIndex have become popular for implementing Generative AI solutions, and their core ideas can be implemented in other programming languages. These Python libraries share foundational themes with Spring projects, such as:
- Portable Service Abstractions
- Modularity
- Extensibility
- Reduction of boilerplate code
- Integration with diverse data sources
- Prebuilt solutions for common use cases
Taking inspiration from these libraries, the Spring AI project aims to provide a similar experience for Spring developers in the AI domain.
Note that the Spring AI API is not a direct port of either LangChain or LlamaIndex. You will see significant differences in the API if you are familiar with those two projects, though the concepts and ideas are fairly portable.
This is a high level feature overview. The features that are implemented lay the foundation, with subsequent more complex features building upon them.
You can find more details in the Reference Documentation.
AI Client: A foundational feature of Spring AI is a standardized client API for interfacing with generative AI models. With this common API, you can initially target OpenAI's Chat endpoint and easily swap out the implementation to use other platforms, such as HuggingFace's Inference Endpoints.
Dive deeper into Models in our concept guide. For usage details, consult the ChatClient API guide.
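As a rough illustration (not copied from the project's samples), a REST controller can inject the auto-configured ChatClient and delegate to it. The call method name reflects the renames described in the updates above and may differ in older snapshots.

```java
import org.springframework.ai.chat.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SimpleAiController {

    // Auto-configured by the OpenAI or Azure OpenAI starter; the controller only depends on the interface.
    private final ChatClient chatClient;

    public SimpleAiController(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    @GetMapping("/ai/simple")
    public String completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        // Swapping providers changes only the starter dependency and configuration, not this code.
        return chatClient.call(message);
    }
}
```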
Prompts: Central to AI model interaction is the Prompt, which provides specific instructions for the AI to act upon. Crafting an effective Prompt is both an art and science, giving rise to the discipline of "Prompt Engineering". These prompts often leverage a templating engine for easy data substitution within predefined text using placeholders.
Explore more on Prompts in our concept guide. To learn about the Prompt class, refer to the Prompt API guide.
Prompt Templates: Prompt Templates support the creation of prompts, particularly when a Template Engine is employed.
Delve into PromptTemplates in our concept guide. For a hands-on guide to PromptTemplate, see the PromptTemplate API guide.
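A minimal sketch, assuming the PromptTemplate/Prompt API with a create(Map) method as described in the reference docs; the template text and placeholder names are illustrative only.

```java
import java.util.Map;

import org.springframework.ai.chat.ChatClient;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.chat.prompt.PromptTemplate;

public class JokeService {

    private final ChatClient chatClient;

    public JokeService(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public String tellJoke(String adjective, String topic) {
        // Placeholders in the template text are filled from the map when the Prompt is created.
        PromptTemplate template = new PromptTemplate("Tell me a {adjective} joke about {topic}");
        Prompt prompt = template.create(Map.of("adjective", adjective, "topic", topic));
        return chatClient.call(prompt).getResult().getOutput().getContent();
    }
}
```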
Output Parsers: AI model outputs often come as raw `java.lang.String` values. Output Parsers restructure these raw strings into more programmer-friendly formats, such as CSV or JSON.
Get insights on Output Parsers in our concept guide. For implementation details, visit the OutputParser API guide.
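An illustrative sketch of the idea: a bean-mapping parser appends format instructions to the prompt and then converts the raw String into a typed object. The BeanOutputParser class name and the record used here are assumptions for illustration.

```java
import java.util.List;
import java.util.Map;

import org.springframework.ai.chat.ChatClient;
import org.springframework.ai.chat.prompt.PromptTemplate;
import org.springframework.ai.parser.BeanOutputParser;

public class FilmographyService {

    // Target type the raw model output is mapped into.
    record ActorsFilms(String actor, List<String> movies) {}

    private final ChatClient chatClient;

    public FilmographyService(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public ActorsFilms filmographyFor(String actor) {
        var parser = new BeanOutputParser<>(ActorsFilms.class);

        // The parser supplies format instructions that are appended to the prompt text.
        PromptTemplate template = new PromptTemplate("List 5 movies for the actor {actor}. {format}");
        String raw = chatClient.call(template.create(Map.of("actor", actor, "format", parser.getFormat())))
                .getResult().getOutput().getContent();

        // Convert the raw String response into the typed record.
        return parser.parse(raw);
    }
}
```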
Incorporating proprietary data into Generative AI without retraining the model has been a breakthrough. Retraining models, especially those with billions of parameters, is challenging due to the specialized hardware required. The 'In-context' learning technique provides a simpler method to infuse your pre-trained model with data, whether from text files, HTML, or database results. The right techniques are critical for developing successful solutions.
Retrieval Augmented Generation, or RAG for short, is a pattern that enables you to bring your data to pre-trained models. RAG excels in the 'query over your docs' use-case.
Learn more about Retrieval Augmented Generation.
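As a hedged sketch of the RAG pattern (not taken from the project's samples), retrieval and generation might be combined as follows; the similaritySearch and call method names follow the abstractions described in this README but may differ between snapshots.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.ai.chat.ChatClient;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.chat.prompt.PromptTemplate;
import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;

public class RagService {

    private final ChatClient chatClient;
    private final VectorStore vectorStore;

    public RagService(ChatClient chatClient, VectorStore vectorStore) {
        this.chatClient = chatClient;
        this.vectorStore = vectorStore;
    }

    public String answer(String question) {
        // 1. Retrieve document chunks relevant to the user's question.
        List<Document> related = vectorStore.similaritySearch(question);
        String context = related.stream()
                .map(Document::getContent)
                .collect(Collectors.joining("\n"));

        // 2. Stuff the retrieved context into the prompt alongside the question.
        PromptTemplate template = new PromptTemplate("""
                Answer the question using only the context below.

                Context:
                {context}

                Question: {question}
                """);
        Prompt prompt = template.create(Map.of("context", context, "question", question));

        // 3. Ask the chat model.
        return chatClient.call(prompt).getResult().getOutput().getContent();
    }
}
```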
Bringing your data to the model follows an Extract, Transform, and Load (ETL) pattern. The subsequent classes and interfaces support RAG's data preparation.
Documents:
The `Document` class encapsulates your data, including text and metadata, for the AI model.
While a Document can represent extensive content, such as an entire file, the RAG approach segments content into smaller pieces for inclusion in the prompt.
The ETL process uses the interfaces `DocumentReader`, `DocumentTransformer`, and `DocumentWriter`, ending with data storage in a Vector Database.
This database later discerns the pieces of data that are pertinent to a user's query.
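As a small illustration (the text and metadata values are invented sample data), a Document can be created directly and handed to any DocumentWriter; a VectorStore can play that role at the end of the ETL chain.

```java
import java.util.List;
import java.util.Map;

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;

public class DocumentLoader {

    // Acts as the DocumentWriter at the end of the ETL chain.
    private final VectorStore vectorStore;

    public DocumentLoader(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    public void load() {
        // Text and metadata travel together; the metadata can later support filtering or attribution.
        Document doc = new Document(
                "Spring AI provides Spring-friendly abstractions for building AI applications.",
                Map.of("source", "README.md", "section", "overview"));

        vectorStore.add(List.of(doc));
    }
}
```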
Document Readers:
Document Readers produce a `List<Document>` from diverse sources like PDFs, Markdown files, and Word documents.
Given that many sources are unstructured, Document Readers often segment based on content semantics, avoiding splits within tables or code sections.
After the initial creation of the `List<Document>`, the data flows through transformers for further refinement.
Document Transformers:
Transformers further modify the `List<Document>` by eliminating superfluous data, like PDF margins, or appending metadata (e.g., primary keywords or summaries).
Another critical transformation is subdividing documents to fit within the AI model's token constraints.
Each model has a context window indicating its input and output data limits. Typically, one token equates to about 0.75 words. For instance, in model names like gpt-4-32k, "32K" signifies the token count, which corresponds to roughly 24,000 words.
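A minimal sketch of the read-then-split step, assuming the TextReader and TokenTextSplitter implementations; the class names and packages are stated from memory and may differ between snapshots.

```java
import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.reader.TextReader;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.core.io.Resource;

public class SplitterSketch {

    public List<Document> readAndSplit(Resource source) {
        // Read the raw file into Documents...
        List<Document> documents = new TextReader(source).get();

        // ...then subdivide them so each chunk fits within the model's context window.
        return new TokenTextSplitter().apply(documents);
    }
}
```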
Document Writers:
The final ETL step within RAG involves committing the data segments to a Vector Database.
Though the `DocumentWriter` interface isn't exclusively for Vector Database writing, that is the main type of implementation.
Vector Stores: Vector Databases are instrumental in incorporating your data with AI models.
They ascertain which document sections the AI should use for generating responses.
Examples of Vector Databases include Chroma, Postgres, Pinecone, Weaviate, Mongo Atlas, and Redis. Spring AI's `VectorStore` abstraction permits effortless transitions between database implementations.
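For example, here is a sketch of wiring the in-memory SimpleVectorStore mentioned in the December 27 note above; because application code depends only on the VectorStore interface, the bean can later be swapped for a starter-provided store (PGVector, Chroma, Pinecone, and so on) without changing callers. The EmbeddingClient parameter is assumed to be auto-configured by one of the starters.

```java
import org.springframework.ai.embedding.EmbeddingClient;
import org.springframework.ai.vectorstore.SimpleVectorStore;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class VectorStoreConfig {

    // Callers depend only on the VectorStore interface, so this in-memory store can be
    // replaced by a database-backed implementation without touching the calling code.
    @Bean
    VectorStore vectorStore(EmbeddingClient embeddingClient) {
        return new SimpleVectorStore(embeddingClient);
    }
}
```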
To build while running unit tests:
./mvnw clean package
To build including integration tests, set the API key environment variables for OpenAI and Azure OpenAI, then run:
./mvnw clean verify -Pintegration-tests
To run a specific integration test, allowing up to two attempts to succeed (useful when a hosted service is unreliable or times out):
./mvnw -pl vector-stores/spring-ai-pgvector-store -Pintegration-tests -Dfailsafe.rerunFailingTestsCount=2 -Dit.test=PgVectorStoreIT verify
To build the docs:
./mvnw -pl spring-ai-docs antora
The docs are then available at spring-ai-docs/target/antora/site/index.html.