AI streaming works locally but responses are cut off in Vercel deployment
glaksmono commented
Checked other resources
- This is a bug, not a usage question.
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- This is not related to the langchain-community package.
- I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Example Code
I fixed it by updating the langchain entry in my package.json from the following:

"langchain": "^0.3.19",

to the following:

"langchain": "0.3.19",

Error Message and Stack Trace (if applicable)
No response
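For context on why removing the caret can change what gets deployed: under npm's semver rules, "^0.3.19" lets the install float to newer 0.3.x patch releases (so Vercel's fresh install may pull a different version than the one on localhost), while "0.3.19" pins the exact version. A minimal sketch of that matching logic (simplified, ignoring prerelease tags; `caretMatches` is an illustrative helper, not an npm API):

```javascript
// Simplified sketch of npm's caret-range semantics (no prerelease handling).
function parse(v) {
  return v.split(".").map(Number); // "0.3.19" -> [0, 3, 19]
}

function caretMatches(range, version) {
  const [maj, min, pat] = parse(range.replace(/^\^/, ""));
  const [vMaj, vMin, vPat] = parse(version);
  if (!range.startsWith("^")) {
    // Exact pin: only the identical version satisfies it.
    return vMaj === maj && vMin === min && vPat === pat;
  }
  if (maj > 0) {
    // ^1.2.3 := >=1.2.3 <2.0.0
    return vMaj === maj && (vMin > min || (vMin === min && vPat >= pat));
  }
  // ^0.3.19 := >=0.3.19 <0.4.0 (for 0.x, the caret only floats the patch)
  return vMaj === 0 && vMin === min && vPat >= pat;
}

console.log(caretMatches("^0.3.19", "0.3.25")); // true  -> a fresh install may pick this
console.log(caretMatches("^0.3.19", "0.4.0"));  // false
console.log(caretMatches("0.3.19", "0.3.25"));  // false -> exact pin holds
```

So the pin did not fix streaming itself; it most likely stopped Vercel from installing a later 0.3.x release than the one tested locally.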
Description
Here's the full issue: https://community.vercel.com/t/ai-streaming-works-locally-but-is-being-cut-off-in-vercel/22063/9
System Info
The issue only occurs on Vercel; everything works on localhost.