Please fill out the Ruby AI Survey 2023.
Results will be anonymized and shared!
⚡ Building LLM-powered applications in Ruby ⚡
For deep Rails integration see: langchainrb_rails gem.
Available for paid consulting engagements! Email me.
- Retrieval Augmented Generation (RAG) and vector search
- Chat bots
- AI agents
- Installation
- Usage
- Large Language Models (LLMs)
- Prompt Management
- Output Parsers
- Building RAG
- Building chat bots
- Evaluations
- Examples
- Logging
- Development
- Discord
Install the gem and add to the application's Gemfile by executing:
bundle add langchainrb
If bundler is not being used to manage dependencies, install the gem by executing:
gem install langchainrb
require "langchain"
Langchain.rb wraps all supported LLMs in a unified interface, allowing you to easily swap out and test different models.
LLM providers | embed() | complete() | chat() | summarize() | Notes |
---|---|---|---|---|---|
OpenAI | ✅ | ✅ | ✅ | ❌ | Including Azure OpenAI |
AI21 | ❌ | ✅ | ❌ | ✅ | |
Anthropic | ❌ | ✅ | ❌ | ❌ | |
AWS Bedrock | ✅ | ✅ | ❌ | ❌ | Provides AWS, Cohere, AI21, Anthropic and Stability AI models |
Cohere | ✅ | ✅ | ✅ | ✅ | |
GooglePalm | ✅ | ✅ | ✅ | ✅ | |
Google Vertex AI | ✅ | ❌ | ❌ | ❌ | |
HuggingFace | ✅ | ❌ | ❌ | ❌ | |
Ollama | ✅ | ✅ | ❌ | ❌ | |
Replicate | ✅ | ✅ | ✅ | ✅ | |
Add `gem "ruby-openai", "~> 6.1.0"` to your Gemfile.
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
You can pass additional parameters to the constructor; they will be passed on to the OpenAI client:
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"], llm_options: { ... })
Generate vector embeddings:
llm.embed(text: "foo bar")
Generate a text completion:
llm.complete(prompt: "What is the meaning of life?").completion
Generate a chat completion:
llm.chat(prompt: "Hey! How are you?").completion
Summarize the text:
llm.summarize(text: "...").completion
You can use any other LLM by invoking the same interface:
llm = Langchain::LLM::GooglePalm.new(api_key: ENV["GOOGLE_PALM_API_KEY"], default_options: { ... })
Create a prompt with input variables:
prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"])
prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens."
Creating a PromptTemplate using just a prompt and no input_variables:
prompt = Langchain::Prompt::PromptTemplate.from_template("Tell me a funny joke about chickens.")
prompt.input_variables # []
prompt.format # "Tell me a funny joke about chickens."
Save prompt template to JSON file:
prompt.save(file_path: "spec/fixtures/prompt/prompt_template.json")
Loading a new prompt template using a JSON file:
prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json")
prompt.input_variables # ["adjective", "content"]
Create a prompt with few-shot examples:
prompt = Langchain::Prompt::FewShotPromptTemplate.new(
prefix: "Write antonyms for the following words.",
suffix: "Input: {adjective}\nOutput:",
example_prompt: Langchain::Prompt::PromptTemplate.new(
input_variables: ["input", "output"],
template: "Input: {input}\nOutput: {output}"
),
examples: [
{ "input": "happy", "output": "sad" },
{ "input": "tall", "output": "short" }
],
input_variables: ["adjective"]
)
prompt.format(adjective: "good")
# Write antonyms for the following words.
#
# Input: happy
# Output: sad
#
# Input: tall
# Output: short
#
# Input: good
# Output:
Save prompt template to JSON file:
prompt.save(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json")
Loading a new prompt template using a JSON file:
prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json")
prompt.prefix # "Write antonyms for the following words."
Loading a new prompt template using a YAML file:
prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml")
prompt.input_variables #=> ["adjective", "content"]
Parse LLM text responses into structured output, such as JSON.
You can use the StructuredOutputParser to generate a prompt that instructs the LLM to provide a JSON response adhering to a specific JSON schema:
json_schema = {
type: "object",
properties: {
name: {
type: "string",
description: "Persons name"
},
age: {
type: "number",
description: "Persons age"
},
interests: {
type: "array",
items: {
type: "object",
properties: {
interest: {
type: "string",
description: "A topic of interest"
},
levelOfInterest: {
type: "number",
description: "A value between 0 and 100 of how interested the person is in this interest"
}
},
required: ["interest", "levelOfInterest"],
additionalProperties: false
},
minItems: 1,
maxItems: 3,
description: "A list of the person's interests"
}
},
required: ["name", "age", "interests"],
additionalProperties: false
}
parser = Langchain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema)
prompt = Langchain::Prompt::PromptTemplate.new(template: "Generate details of a fictional character.\n{format_instructions}\nCharacter description: {description}", input_variables: ["description", "format_instructions"])
prompt_text = prompt.format(description: "Korean chemistry student", format_instructions: parser.get_format_instructions)
# Generate details of a fictional character.
# You must format your output as a JSON value that adheres to a given "JSON Schema" instance.
# ...
Then parse the LLM response:
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
llm_response = llm.chat(prompt: prompt_text).completion
parser.parse(llm_response)
# {
# "name" => "Kim Ji-hyun",
# "age" => 22,
# "interests" => [
# {
# "interest" => "Organic Chemistry",
# "levelOfInterest" => 85
# },
# ...
# ]
# }
If the parser fails to parse the LLM response, you can use the OutputFixingParser. It sends an error message, prior output, and the original prompt text to the LLM, asking for a "fixed" response:
begin
parser.parse(llm_response)
rescue Langchain::OutputParsers::OutputParserException => e
fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm(
llm: llm,
parser: parser
)
fix_parser.parse(llm_response)
end
Alternatively, if you don't need to handle the OutputParserException, you can simplify the code:
# we already have the `parser` defined above:
# parser = Langchain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema)
fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm(
llm: llm,
parser: parser
)
fix_parser.parse(llm_response)
See here for a concrete example.
RAG is a methodology that helps LLMs generate accurate and up-to-date information. A typical RAG workflow follows the three steps below:
- Relevant knowledge (or data) is retrieved from the knowledge base (typically a vector search database).
- A prompt containing the retrieved knowledge is constructed.
- The LLM receives the prompt and generates a text completion.

The most common use case for a RAG system is a Q&A system, where users pose natural-language questions and receive answers in natural language.
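As a rough sketch, the three steps above can also be wired together by hand using the vector search, prompt template, and LLM APIs described in the rest of this README. The question, prompt wording, and context handling below are illustrative only:

```ruby
# A minimal, hand-rolled RAG loop. `client` is a Langchain::Vectorsearch instance
# and `llm` a Langchain::LLM instance, both configured as shown later in this README.

# 1. Retrieve relevant knowledge from the vector search database.
question = "How long should the stuffed chicken breasts be seared?"
results = client.similarity_search(query: question, k: 3)

# The shape of the results depends on the vector database being used;
# here we naively stringify each result to build the context.
context = results.map(&:to_s).join("\n---\n")

# 2. Construct a prompt containing the retrieved knowledge.
prompt = Langchain::Prompt::PromptTemplate.new(
  template: "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}",
  input_variables: ["context", "question"]
)
prompt_text = prompt.format(context: context, question: question)

# 3. Have the LLM generate a text completion from the prompt.
answer = llm.chat(prompt: prompt_text).completion
```

In practice, the `ask` method covered in the vector search section below performs this retrieve-construct-generate flow for you.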
Langchain.rb provides a convenient unified interface on top of the supported vector search databases that makes it easy to configure your index, add data, and query and retrieve from it.
Database | Open-source | Cloud offering |
---|---|---|
Chroma | ✅ | ✅ |
Epsilla | ✅ | ✅ |
Hnswlib | ✅ | ❌ |
Milvus | ✅ | ✅ Zilliz Cloud |
Pinecone | ❌ | ✅ |
Pgvector | ✅ | ✅ |
Qdrant | ✅ | ✅ |
Weaviate | ✅ | ✅ |
Elasticsearch | ✅ | ✅ |
Pick the vector search database you'll be using, add the gem dependency and instantiate the client:
gem "weaviate-ruby", "~> 0.8.9"
Choose and instantiate the LLM provider you'll be using to generate embeddings:
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
client = Langchain::Vectorsearch::Weaviate.new(
url: ENV["WEAVIATE_URL"],
api_key: ENV["WEAVIATE_API_KEY"],
index_name: "Documents",
llm: llm
)
You can instantiate any other supported vector search database:
client = Langchain::Vectorsearch::Chroma.new(...) # `gem "chroma-db", "~> 0.6.0"`
client = Langchain::Vectorsearch::Epsilla.new(...) # `gem "epsilla-ruby", "~> 0.0.3"`
client = Langchain::Vectorsearch::Hnswlib.new(...) # `gem "hnswlib", "~> 0.8.1"`
client = Langchain::Vectorsearch::Milvus.new(...) # `gem "milvus", "~> 0.9.2"`
client = Langchain::Vectorsearch::Pinecone.new(...) # `gem "pinecone", "~> 0.1.6"`
client = Langchain::Vectorsearch::Pgvector.new(...) # `gem "pgvector", "~> 0.2"`
client = Langchain::Vectorsearch::Qdrant.new(...) # `gem "qdrant-ruby", "~> 0.9.3"`
client = Langchain::Vectorsearch::Elasticsearch.new(...) # `gem "elasticsearch", "~> 8.2.0"`
Create the default schema:
client.create_default_schema
Add plain text data to your vector search database:
client.add_texts(
texts: [
"Begin by preheating your oven to 375°F (190°C). Prepare four boneless, skinless chicken breasts by cutting a pocket into the side of each breast, being careful not to cut all the way through. Season the chicken with salt and pepper to taste. In a large skillet, melt 2 tablespoons of unsalted butter over medium heat. Add 1 small diced onion and 2 minced garlic cloves, and cook until softened, about 3-4 minutes. Add 8 ounces of fresh spinach and cook until wilted, about 3 minutes. Remove the skillet from heat and let the mixture cool slightly.",
"In a bowl, combine the spinach mixture with 4 ounces of softened cream cheese, 1/4 cup of grated Parmesan cheese, 1/4 cup of shredded mozzarella cheese, and 1/4 teaspoon of red pepper flakes. Mix until well combined. Stuff each chicken breast pocket with an equal amount of the spinach mixture. Seal the pocket with a toothpick if necessary. In the same skillet, heat 1 tablespoon of olive oil over medium-high heat. Add the stuffed chicken breasts and sear on each side for 3-4 minutes, or until golden brown."
]
)
Or use the file parsers to load, parse and index data into your database:
my_pdf = Langchain.root.join("path/to/my.pdf")
my_text = Langchain.root.join("path/to/my.txt")
my_docx = Langchain.root.join("path/to/my.docx")
client.add_data(paths: [my_pdf, my_text, my_docx])
Supported file formats: docx, html, pdf, text, json, jsonl, csv, xlsx.
Retrieve similar documents based on the query string passed in:
client.similarity_search(
query:,
k: # number of results to be retrieved
)
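For example, with the recipe texts indexed above (the query string is illustrative):

```ruby
client.similarity_search(
  query: "How long should the stuffed chicken breasts be seared?",
  k: 4
)
```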
Retrieve similar documents based on the query string passed in via the HyDE technique:
client.similarity_search_with_hyde()
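A usage sketch, assuming the method accepts the same `query:` and `k:` keyword arguments as `similarity_search`:

```ruby
client.similarity_search_with_hyde(
  query: "Which cheeses go into the spinach filling?",
  k: 4
)
```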
Retrieve similar documents based on the embedding passed in:
client.similarity_search_by_vector(
embedding:,
k: # number of results to be retrieved
)
RAG-based querying:
client.ask(
question:
)
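For example (the question is illustrative and assumes the recipe texts added earlier have been indexed):

```ruby
client.ask(question: "What temperature should the oven be preheated to?")
```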
Choose and instantiate the LLM provider you'll be using:
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
Instantiate the Conversation class:
chat = Langchain::Conversation.new(llm: llm)
(Optional) Set the conversation context:
chat.set_context("You are a chatbot from the future")
Exchange messages with the LLM:
chat.message("Tell me about future technologies")
To stream the chat response:
chat = Langchain::Conversation.new(llm: llm) do |chunk|
print(chunk)
end
OpenAI Functions support:
chat.set_functions(functions)
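`functions` is expected to be an array of function definitions in OpenAI's function-calling format, with parameters described as JSON Schema. The weather function below is purely illustrative:

```ruby
functions = [
  {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: :object,
      properties: {
        location: {
          type: :string,
          description: "The city and state, e.g. San Francisco, CA"
        },
        unit: {
          type: :string,
          enum: %w[celsius fahrenheit]
        }
      },
      required: ["location"]
    }
  }
]

chat.set_functions(functions)
```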
The Evaluations module is a collection of tools that can be used to evaluate and track the performance of the output produced by LLMs and your RAG (Retrieval Augmented Generation) pipelines.
Ragas helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. The implementation is based on this paper and the original Python repo. Ragas tracks the following 3 metrics and assigns scores from 0.0 to 1.0:
- Faithfulness - the answer is grounded in the given context.
- Context Relevance - the retrieved context is focused, containing little to no irrelevant information.
- Answer Relevance - the generated answer addresses the actual question that was provided.
# We recommend using Langchain::LLM::OpenAI as your llm for Ragas
ragas = Langchain::Evals::Ragas::Main.new(llm: llm)
# The answer that the LLM generated
# The question (or the original prompt) that was asked
# The context that was retrieved (usually from a vectorsearch database)
ragas.score(answer: "", question: "", context: "")
# =>
# {
# ragas_score: 0.6601257446503674,
# answer_relevance_score: 0.9573145866787608,
# context_relevance_score: 0.6666666666666666,
# faithfulness_score: 0.5
# }
Additional examples available: /examples
Langchain.rb uses standard logging mechanisms and defaults to the :warn level. Most messages are at the info level, but we will add debug or warn statements as needed.
To show all log messages:
Langchain.logger.level = :debug
- `git clone https://github.com/andreibondarev/langchainrb.git`
- `cp .env.example .env`, then fill out the environment variables in `.env`
- `bundle exec rake` to ensure that the tests pass and to run standardrb
- `bin/console` to load the gem in a REPL session. Feel free to add your own instances of LLMs, Tools, Agents, etc. and experiment with them.
- Optionally, install lefthook git hooks for pre-commit to auto lint: `gem install lefthook && lefthook install -f`
Join us in the Langchain.rb Discord server.
Bug reports and pull requests are welcome on GitHub at https://github.com/andreibondarev/langchainrb.
The gem is available as open source under the terms of the MIT License.