langgenius/dify

[openai_api_compatible]Error: API request failed

Closed · 3 comments

Self Checks

  • This is only for bug report, if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.13.2

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

I added glm-4-9b-chat as an openai-api-compatible model and enabled tool calling and streaming function call support. However, while debugging the agent I received a 400 error, and because the message box truncates long messages, I was unable to see the complete error information.
[two screenshots of the truncated error message]

What is the cause of the problem, and how should I go about resolving it?

✔️ Expected Behavior

I want to set the agent mode to function calling and use the agent normally.

❌ Actual Behavior

When I debug the agent, it reports a 400 error.

@lijiaqi0121 Hello there! I'm here to help you with bug reports, questions, and becoming a contributor. Let's tackle this issue together while we wait for a human maintainer.

The 400 error you're encountering when using the openai-api-compatible provider in Dify with the glm-4-9b-chat model seems to be related to the tool-choice and function-calling configuration of the backend serving the model. The error message indicates that --enable-auto-tool-choice and --tool-call-parser need to be set.

Here are some steps you can take to resolve this issue:

  1. Check Configuration Settings: Ensure that the --enable-auto-tool-choice and --tool-call-parser options are set on the server that actually serves glm-4-9b-chat. These look like vLLM launch flags (for example, vllm serve ... --enable-auto-tool-choice --tool-call-parser <parser>), and both are required before the server will accept automatic tool choice.

  2. Review Model Configuration: Verify that the glm-4-9b-chat model is configured correctly. This includes checking parameters like stream, max tokens, temperature, top_p, top_k, and others that might affect the model's operation [1][2].

  3. API Endpoint and Key: Double-check that the API endpoint URL and API key (if used) are correctly set in your configuration. These are essential for the API requests to function properly [3].

  4. Debugging: Since the error message is truncated in the UI, try to capture the full error log, or replay the request outside Dify (see the sketch after this list). This should show exactly what is causing the BadRequestError.

  5. Documentation and Discussions: Although I couldn't find specific documentation on enabling --enable-auto-tool-choice and --tool-call-parser, reviewing the Dify documentation or related discussions might provide additional context or solutions.
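To check whether the backend itself rejects tool calls, here is a minimal sketch that sends one function-calling request directly to the OpenAI-compatible endpoint, bypassing Dify. The base URL, API key, and the get_weather tool are placeholders for your deployment (not values from this issue), and it assumes the openai Python SDK (v1+):

```python
# Minimal sketch: send one function-calling request straight to the backend.
# base_url, api_key, and the get_weather tool below are assumptions to adapt.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your openai-api-compatible endpoint (assumption)
    api_key="EMPTY",  # placeholder; many self-hosted servers accept any key
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

try:
    resp = client.chat.completions.create(
        model="glm-4-9b-chat",
        messages=[{"role": "user", "content": "What is the weather in Beijing?"}],
        tools=tools,
        tool_choice="auto",  # rejected by servers started without --enable-auto-tool-choice
    )
    print(resp.choices[0].message)
except Exception as err:
    # The SDK's exception message includes the server's full error body.
    print(err)
```

If this request fails with the same 400, the problem is on the serving side rather than in Dify's agent configuration.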

If these steps do not resolve the issue, consider checking for any updates or patches that might address this problem in the latest version of Dify.

To continue talking to Dosu, mention @dosu.




@dosu How can I view the complete error message?

To view the complete error message, you can try the following steps:

  1. Check Logs: Access the logs of the service handling the failing request. For a self-hosted Docker deployment this is typically docker logs on the Dify API container (or docker compose logs -f api from the deployment directory).

  2. Increase Log Verbosity: If the logs are not detailed enough, consider increasing the verbosity level of your logging configuration. This might involve changing settings in your application's configuration files to capture more detailed error messages.

  3. Use Developer Tools: If you're testing the API through a web interface, use your browser's developer tools (usually accessible via F12) to inspect network requests. This can show you the full request and response, including any error messages.

  4. Capture Console Output: If you're running the application locally, ensure that your console or terminal is capturing all output. You might need to redirect output to a file for easier reading.

These steps should help you capture the full error message, providing more context for troubleshooting.
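As a concrete version of the network-inspection idea, a sketch like the following replays the chat request directly with the Python requests library and prints the complete, untruncated response body. The endpoint URL, key, and payload are assumptions to adapt to your setup:

```python
# Minimal sketch: replay the failing request outside Dify and dump the raw
# response so nothing is truncated. URL, key, and payload are placeholders.
import json

import requests

BASE_URL = "http://localhost:8000/v1"  # your openai-api-compatible endpoint (assumption)
API_KEY = "EMPTY"                      # placeholder; use your real key if one is set

payload = {
    "model": "glm-4-9b-chat",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)

print(resp.status_code)
# Pretty-print JSON error bodies; fall back to raw text for non-JSON responses.
try:
    print(json.dumps(resp.json(), indent=2, ensure_ascii=False))
except ValueError:
    print(resp.text)
```

If the body mentions --enable-auto-tool-choice or --tool-call-parser, the fix belongs on the model server rather than in Dify.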

To continue talking to Dosu, mention @dosu.

