FlareChain.from_llm ignores supplied llm
Checked other resources
- This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Example Code
Included here is a copy of the from_llm method from langchain.chains.flare.base. From this excerpt it is clear that the llm the user passes in gets overridden.
# Source excerpt from langchain.chains.flare.base (current main)
@classmethod
def from_llm(
    cls,
    llm: BaseLanguageModel,
    max_generation_len: int = 32,
    **kwargs: Any,
) -> FlareChain:
    """Creates a FlareChain from a language model.

    Args:
        llm: Language model to use.
        max_generation_len: Maximum length of the generated response.
        kwargs: Additional arguments to pass to the constructor.

    Returns:
        FlareChain class with the given language model.
    """
    try:
        from langchain_openai import ChatOpenAI
    except ImportError as e:
        msg = (
            "OpenAI is required for FlareChain. "
            "Please install langchain-openai."
            "pip install langchain-openai"
        )
        raise ImportError(msg) from e
    llm = ChatOpenAI(  # <-- BUG: the supplied `llm` is unconditionally overwritten
        max_completion_tokens=max_generation_len,
        logprobs=True,
        temperature=0,
    )
Error Message and Stack Trace (if applicable)
No response
Description
Calling FlareChain.from_llm with a user-constructed BaseLanguageModel instance does not use that instance. The method unconditionally creates a new ChatOpenAI (temperature=0, its own max_completion_tokens) and overwrites the passed llm variable.
Expected: The provided llm instance (with its custom configuration, e.g. temperature=0.55, max_completion_tokens=17) is used to build response_chain and question_generator_chain.
Actual: A new ChatOpenAI(logprobs=True, temperature=0, max_completion_tokens=<max_generation_len>) is created inside from_llm, replacing the supplied object; the original instance is absent from the constructed chain graph.
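The override is easy to see by constructing the chain and walking its graph. Below is a minimal repro sketch (hedged: DummyRetriever and the .steps walk are my own scaffolding, it assumes OPENAI_API_KEY is set, and response_chain is assumed to be a RunnableSequence in this version; no request is sent, construction alone shows the bug):
Python
from langchain.chains import FlareChain
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_openai import ChatOpenAI

class DummyRetriever(BaseRetriever):
    """Minimal retriever so FlareChain can be constructed."""
    def _get_relevant_documents(self, query, *, run_manager):
        return [Document(page_content="placeholder")]

# A user-configured model that from_llm is expected to use.
my_llm = ChatOpenAI(temperature=0.55, max_completion_tokens=17, logprobs=True)
chain = FlareChain.from_llm(my_llm, retriever=DummyRetriever())

# Look for the supplied instance in the composed response chain.
found = any(step is my_llm for step in chain.response_chain.steps)
print(found)  # False: a fresh ChatOpenAI(temperature=0, ...) replaced my_llm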
Impact:
- Prevents passing a preconfigured or alternative compatible model via this API.
- The signature and docstring are misleading (the argument is ignored).
- Users cannot tune temperature or other parameters through their own instance here.
Proposed Fix: Stop overwriting the passed llm; validate that it is a ChatOpenAI with logprobs enabled, then compose the pipelines with that instance (optionally, construct a default only when llm is None). See the sketch below; the actual patch is proposed in #32847.
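A minimal sketch of that shape, in the same simplified pseudo-code register as the section further below (hedged: it mirrors the current signature but is not the actual code in #32847):
Python
@classmethod
def from_llm(
    cls,
    llm: Optional[BaseLanguageModel] = None,
    max_generation_len: int = 32,
    **kwargs: Any,
) -> FlareChain:
    try:
        from langchain_openai import ChatOpenAI
    except ImportError as e:
        raise ImportError(
            "OpenAI is required for FlareChain. "
            "Please install langchain-openai: pip install langchain-openai"
        ) from e
    if llm is None:
        # Construct a default only when the caller supplied nothing.
        llm = ChatOpenAI(
            max_completion_tokens=max_generation_len,
            logprobs=True,
            temperature=0,
        )
    elif isinstance(llm, ChatOpenAI) and not llm.logprobs:
        # FLARE inspects token logprobs to decide when to retrieve.
        raise ValueError("FlareChain requires an LLM configured with logprobs=True.")
    # ... compose response_chain and question_generator_chain from `llm`
    # exactly as the method does today, then return cls(...) ...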
System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
Python Version: 3.11.9 (main, May 4 2025, 14:33:11) [Clang 16.0.0 (clang-1600.0.26.3)]
Package Information
langchain_core: 0.3.75
langchain: 0.3.27
langsmith: 0.4.26
langchain_openai: 0.3.32
langchain_text_splitters: 0.3.11
Optional packages not installed
langserve
Other Dependencies
async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
httpx<1,>=0.23.0: Installed. No version info available.
jsonpatch<2.0,>=1.33: Installed. No version info available.
langchain-anthropic;: Installed. No version info available.
langchain-aws;: Installed. No version info available.
langchain-azure-ai;: Installed. No version info available.
langchain-cohere;: Installed. No version info available.
langchain-community;: Installed. No version info available.
langchain-core<1.0.0,>=0.3.72: Installed. No version info available.
langchain-core<1.0.0,>=0.3.74: Installed. No version info available.
langchain-core<2.0.0,>=0.3.75: Installed. No version info available.
langchain-deepseek;: Installed. No version info available.
langchain-fireworks;: Installed. No version info available.
langchain-google-genai;: Installed. No version info available.
langchain-google-vertexai;: Installed. No version info available.
langchain-groq;: Installed. No version info available.
langchain-huggingface;: Installed. No version info available.
langchain-mistralai;: Installed. No version info available.
langchain-ollama;: Installed. No version info available.
langchain-openai;: Installed. No version info available.
langchain-perplexity;: Installed. No version info available.
langchain-text-splitters<1.0.0,>=0.3.9: Installed. No version info available.
langchain-together;: Installed. No version info available.
langchain-xai;: Installed. No version info available.
langsmith-pyo3>=0.1.0rc2;: Installed. No version info available.
langsmith>=0.1.17: Installed. No version info available.
langsmith>=0.3.45: Installed. No version info available.
openai-agents>=0.0.3;: Installed. No version info available.
openai<2.0.0,>=1.99.9: Installed. No version info available.
opentelemetry-api>=1.30.0;: Installed. No version info available.
opentelemetry-exporter-otlp-proto-http>=1.30.0;: Installed. No version info available.
opentelemetry-sdk>=1.30.0;: Installed. No version info available.
orjson>=3.9.14;: Installed. No version info available.
packaging>=23.2: Installed. No version info available.
pydantic<3,>=1: Installed. No version info available.
pydantic<3.0.0,>=2.7.4: Installed. No version info available.
pydantic>=2.7.4: Installed. No version info available.
pytest>=7.0.0;: Installed. No version info available.
PyYAML>=5.3: Installed. No version info available.
requests-toolbelt>=1.0.0: Installed. No version info available.
requests<3,>=2: Installed. No version info available.
requests>=2.0.0: Installed. No version info available.
rich>=13.9.4;: Installed. No version info available.
SQLAlchemy<3,>=1.4: Installed. No version info available.
tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
tiktoken<1,>=0.7: Installed. No version info available.
typing-extensions>=4.7: Installed. No version info available.
vcrpy>=7.0.0;: Installed. No version info available.
zstandard>=0.23.0: Installed. No version info available.
The Proposed Fix
The core of the fix is to replace the hardcoded ChatOpenAI instance (shown as ChatOpenAI(model="gpt-3.5-turbo") in the simplified pseudo-code below) with the llm variable that is passed into the method.
Here is a simplified pseudo-code representation of the change:
Current (Buggy) Code Logic
Python
@classmethod
def from_llm(
    cls,
    llm: BaseLanguageModel,  # llm is received here
    # ... other parameters
) -> "FlareChain":
    # ... some setup code ...
    # The bug is here: a new, default LLM is created, ignoring the 'llm' parameter.
    question_generator_chain = LLMChain(
        llm=ChatOpenAI(model="gpt-3.5-turbo"), prompt=PROMPT
    )
    # ... other internal chains are also created with the default LLM ...
    return cls(
        question_generator_chain=question_generator_chain,
        # ... other components
    )
Corrected Code Logic
Python
@classmethod
def from_llm(
    cls,
    llm: BaseLanguageModel,  # llm is received here
    # ... other parameters
) -> "FlareChain":
    # ... some setup code ...
    # The fix: pass the 'llm' object to the internal chain.
    question_generator_chain = LLMChain(
        llm=llm, prompt=PROMPT
    )
    # ... ensure 'llm' is used for all other internal chains as well ...
    return cls(
        question_generator_chain=question_generator_chain,
        # ... other components
    )
Why This Fix Works
This change ensures that the language model object the user provides is propagated correctly to all sub-components of the FlareChain. The chain then respects the user's configuration, allowing any custom or specified LLM to be used, as the check below illustrates.
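With the fix in place, the check from the repro above should flip (same hedges as before; DummyRetriever as defined earlier):
Python
# After the fix, the user's instance appears in the composed pipeline.
my_llm = ChatOpenAI(temperature=0.55, logprobs=True)
chain = FlareChain.from_llm(my_llm, retriever=DummyRetriever())
assert any(step is my_llm for step in chain.response_chain.steps)
assert my_llm.temperature == 0.55  # the user's settings survive intact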