Haystack is an end-to-end NLP framework that enables you to build NLP applications powered by LLMs, Transformer models, vector search and more. Whether you want to perform question answering, answer generation, semantic document search, or build tools that are capable of complex decision making and query resolution, you can use state-of-the-art NLP models with Haystack to build end-to-end NLP applications that solve your use case.
Pipelines: This is the standard Haystack structure that connects to your data and performs the NLP tasks you define on it. The data in a Pipeline flows from one Node to the next. You define how Nodes interact with each other and how one Node pushes data to the next.
An example pipeline would consist of one Retriever Node and one Reader Node. When the pipeline runs with a query, the Retriever first retrieves the documents relevant to the query and then the Reader extracts the final answer.
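As a rough sketch, such a pipeline could be built like this (the in-memory store, sample document, and model name below are illustrative choices, not the only options):

```python
from haystack import Document
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Illustrative setup: a small in-memory store with a single sample document.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents(
    [Document(content="Haystack is an open-source NLP framework built by deepset.")]
)

retriever = BM25Retriever(document_store=document_store)                 # finds relevant documents
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")    # extracts the answer span

pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
result = pipeline.run(
    query="Who built Haystack?",
    params={"Retriever": {"top_k": 5}, "Reader": {"top_k": 1}},
)
print(result["answers"][0].answer)
```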
Nodes: Each Node achieves one thing, such as preprocessing documents, retrieving documents, using language models to answer questions, and so on.
Agent: (since 1.15) An Agent is a component that is powered by an LLM, such as GPT-3. It can decide on the next best course of action to arrive at the result of a query, using the Tools available to it. While a Pipeline has a clear start and end, an Agent can decide whether the query has been resolved or whether more steps are needed. It may also make use of a Pipeline as a Tool.
Tools: You can think of a Tool as an expert that does one thing really well, such as a calculator that is good at mathematics, or a WebRetriever that is good at retrieving pages from the internet. A Node or Pipeline in Haystack can also be used as a Tool. A Tool is a component used by an Agent to resolve complex queries.
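A rough sketch of an Agent that uses a pipeline as a Tool (Haystack 1.15+ API; the API key placeholder, tool name, and the `search_pipeline` variable are assumptions for illustration):

```python
from haystack.agents import Agent, Tool
from haystack.nodes import PromptNode

# `search_pipeline` stands in for any existing Haystack Pipeline,
# e.g. the extractive QA pipeline shown in the Pipelines example above.
prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="YOUR_OPENAI_API_KEY",   # placeholder: supply your own key
    stop_words=["Observation:"],
)
agent = Agent(prompt_node=prompt_node)

agent.add_tool(Tool(
    name="DocumentSearch",
    pipeline_or_node=search_pipeline,
    description="Useful for answering questions about the documents in the DocumentStore",
))

result = agent.run("A multi-step question that needs document lookups")
print(result["answers"][0].answer)
```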
DocumentStores: A DocumentStore is a database where you store your text data for Haystack to access. Haystack DocumentStores are available for Elasticsearch, OpenSearch, Weaviate, Pinecone, FAISS and more. For a full list of available DocumentStores, check out our documentation.
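As an illustration, writing documents to an Elasticsearch DocumentStore could look roughly like this (assumes a running Elasticsearch instance; host, port, index name, and the sample document are placeholders):

```python
from haystack.document_stores import ElasticsearchDocumentStore

# Assumes Elasticsearch is already running; connection details are illustrative.
document_store = ElasticsearchDocumentStore(host="localhost", port=9200, index="documents")

document_store.write_documents([
    {"content": "Berlin is the capital of Germany.", "meta": {"source": "example"}},
])
print(document_store.get_document_count())
```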
- Perform Question Answering in natural language to find granular answers in your documents.
- Generate answers or content with LLMs, such as articles, tweets, product descriptions and more; the sky is the limit (see the short example after this list).
- Perform semantic search and retrieve documents according to meaning.
- Build applications that can make complex decisions to answer complex queries: such as systems that can resolve complex customer queries, do knowledge search on many disconnected resources, and so on.
- Use off-the-shelf models or fine-tune them to your data.
- Use user feedback to evaluate, benchmark, and continuously improve your models.
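For the generation use case, a minimal sketch with a PromptNode might look like this (the OpenAI model name and the API key placeholder are assumptions; a local or Hugging Face model name works as well):

```python
from haystack.nodes import PromptNode

# Illustrative: uses an OpenAI model; supply your own API key or a local model name.
prompt_node = PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="YOUR_OPENAI_API_KEY")

result = prompt_node("Write a two-sentence product description for noise-cancelling headphones.")
print(result[0])
```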
- Latest models: Haystack allows you to use and compare models available from OpenAI, Cohere and Hugging Face, as well as your own local models. Use the latest LLMs or Transformer-based models (for example: BERT, RoBERTa, MiniLM).
- Modular: Multiple choices to fit your tech stack and use case: a wide choice of DocumentStores to store your data, file conversion tools, and more.
- Open: Integrated with Hugging Face's model hub, OpenAI, Cohere and various Azure services.
- Scalable: Scale to millions of docs using retrievers and production-scale components like Elasticsearch and a FastAPI REST API.
- End-to-End: All tooling in one place: file conversion, cleaning, splitting, training, eval, inference, labeling, and more.
- Customizable: Fine-tune models to your domain or implement your custom Nodes.
- Continuous Learning: Collect new training data from user feedback in production & improve your models continuously.
| | |
| --- | --- |
| Docs | Components, Pipeline Nodes, Guides, API Reference |
| Installation | How to install Haystack |
| Tutorials | See what Haystack can do with our Notebooks & Scripts |
| Haystack Extras | A repository that lists extra Haystack packages and components that can be installed separately |
| Demos | A repository containing Haystack demo applications with Docker Compose and a REST API |
| Community | Discord, Twitter, Stack Overflow, GitHub Discussions |
| Contributing | We welcome all contributions! |
| Benchmarks | Speed & Accuracy of Retriever, Readers and DocumentStores |
| Roadmap | Public roadmap of Haystack |
| Blog | Learn about the latest with Haystack and NLP |
| Jobs | We're hiring! Have a look at our open positions |
For a detailed installation guide see the official documentation. There you'll find instructions for custom installations, including Windows and Apple Silicon.
Basic Installation
Use pip to install a basic version of Haystack's latest release:
pip install farm-haystack
This command installs everything needed for basic Pipelines that use an Elasticsearch DocumentStore.
Full Installation
To use more advanced features, like certain DocumentStores, FileConverters, OCR, or Ray, install further dependencies. The following command installs the latest release of Haystack and all its dependencies:
pip install --upgrade pip
pip install 'farm-haystack[all]'  # or 'farm-haystack[all-gpu]' for the GPU-enabled dependencies
Installing the REST API
Haystack comes packaged with a REST API so that you can deploy it as a service. Run the following command from the root directory of the Haystack repo to install REST_API:
pip install rest_api/
You can find out more about our PyPI package on our PyPI page.
You can find some of our hosted demos, with instructions to run them locally, on our haystack-demos repository:
Should I follow? - Twitter demo
Explore The World demo
If you have a feature request or a bug report, feel free to open an issue on GitHub. We regularly check these and you can expect a quick response. If you'd like to discuss a topic, or get more general advice on how to make Haystack work for your project, you can start a thread in GitHub Discussions or on our Discord channel. We also check Twitter and Stack Overflow.
We are very open to the community's contributions - be it a quick fix of a typo, or a completely new feature! You don't need to be a Haystack expert to provide meaningful improvements. To learn how to get started, check out our Contributor Guidelines first.
You can also find instructions to run the tests locally there.
Thanks so much to all those who have contributed to our project!
Here's a list of organizations that we know about from our community. Don't hesitate to send a PR to let the world know that you use Haystack. Join our growing community!