n8n-io/n8n

HTTP Node Problem in node 'HTTP Request'

igortsev opened this issue · 15 comments

Bug Description

I have two n8n instances installed and ran the same simple GET request on both:

  1. Windows 11 - Localhost - docker -> HTTP Node request success
  2. VPS Ubuntu - docker -> HTTP Node request fails with:
The connection to the server wes closed unexpectedly, perhaps it is offline. You can retry request immidiately or wait and retry later.


{
  "errorMessage": "The connection to the server wes closed unexpectedly, perhaps it is offline. You can retry request immidiately or wait and retry later.",
  "errorDetails": {
    "rawErrorMessage": [
      "read ECONNRESET"
    ],
    "httpCode": "rejected"
  },
  "n8nDetails": {
    "nodeName": "HTTP Request",
    "nodeType": "n8n-nodes-base.httpRequest",
    "nodeVersion": 4.2,
    "itemIndex": 0,
    "time": "02.05.2024, 18:42:18",
    "n8nVersion": "1.39.1 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeApiError: The connection to the server wes closed unexpectedly, perhaps it is offline. You can retry request immidiately or wait and retry later.",
      "    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/HttpRequest/V3/HttpRequestV3.node.js:1571:35)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:728:19)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:660:53",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1062:20"
    ]
  }
}

The issue is the same with 1.39.1 and 1.38.2. At the same time, when I make the request with curl over SSH from the same server, I get an answer without any delay.

To Reproduce

  1. Create a Manual Trigger
  2. Make a simple GET request

Expected behavior

The request returns a response.

Operating System

Ubuntu 22.04

n8n Version

1.39.1

Node.js Version

4.2

Database

PostgreSQL

Execution mode

main (default)

Server debug mode log

2024-05-02T12:42:18.718Z | verbose  | Workflow execution finished with error "{\n  error: {\n    level: 'warning',\n    tags: {},\n    extra: undefined,\n    context: { itemIndex: 0, request: [Object] },\n    functionality: 'regular',\n    name: 'NodeApiError',\n    timestamp: 1714653738678,\n    errorResponse: { status: 'rejected', reason: [AxiosError] },\n    node: {\n      parameters: [Object],\n      id: '7e583969-4045-40bc-a462-c7490130a54a',\n      name: 'HTTP Request',\n      type: 'n8n-nodes-base.httpRequest',\n      typeVersion: 4.2,\n      position: [Array]\n    },\n    messages: [ 'read ECONNRESET' ],\n    httpCode: 'rejected',\n    description: undefined,\n    message: 'The connection to the server wes closed unexpectedly, perhaps it is offline. You can retry request immidiately or wait and retry later.',\n    stack: 'NodeApiError: The connection to the server wes closed unexpectedly, perhaps it is offline. You can retry request immidiately or wait and retry later.\\n' +\n      '    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/HttpRequest/V3/HttpRequestV3.node.js:1571:35)\\n' +\n      '    at processTicksAndRejections (node:internal/process/task_queues:95:5)\\n' +\n      '    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:728:19)\\n' +\n      '    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:660:53\\n' +\n      '    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1062:20'\n  },\n  workflowId: 's8VC81EjhovCYZjz',\n  file: 'LoggerProxy.js',\n  function: 'exports.verbose'\n}"

read ECONNRESET implies that the server n8n is trying to talk to is disconnecting in the middle of the request, which to me sounds like networking issues on the VPS.
Can you use curl on the VPS to check if the URL responds correctly?
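For example, something along these lines from the VPS shell would compare host and container behaviour (the container name n8n is an assumption, and the official image may not ship curl, so busybox wget is used inside the container):

# from the VPS host itself
curl -v 'https://DOMAIN/api/methods/product.getlist'

# from inside the n8n container, to rule out Docker networking
docker exec n8n wget -qO- 'https://DOMAIN/api/methods/product.getlist'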


curl via SSH gets a successful answer from the same server. DNS resolves correctly too.

curl --location 'https://DOMAIN/api/methods/product.getlist' 

RESPONSE -> JSON

I would say, based on it working from one server and not another, that the issue is not related to n8n and is environmental. Is the HTTP Request node able to reach other hosts, and what do you have the Docker networking set to on the VPS?
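If it helps, the Docker networking on the VPS can be checked with something like this (the container name n8n is an assumption):

docker network ls
docker network inspect bridge --format '{{json .Options}}'        # driver options, including any MTU override
docker inspect n8n --format '{{json .NetworkSettings.Networks}}'  # which network(s) the container is attached to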


HTTP GET https://google.com - success

At the same time,
curl --location 'https://DOMAIN/api/methods/product.getlist' - still working via curl over SSH

Configuration is with Traefik and port 5678 published - Docker ingress

Can you share the URL for the API you are trying?

It looks like, if Google is working, it may point towards the site itself or maybe the region/IP of the VPS. Out of interest, what happens if you add a header to the HTTP Request node and set it to User-Agent with a value of n8n/1.39.1?

I still wouldn't rule out an issue with the Docker configuration but there is still nothing here to suggest it is a bug with n8n.
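For reference, the equivalent check from the VPS shell would be roughly (same placeholder URL as before):

curl --location 'https://DOMAIN/api/methods/product.getlist' --header 'User-Agent: n8n/1.39.1'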


https://igortsev.ru/api-240e6da7-b484-400f-abbc-a9ab09fff822/methods/product.getlist


Tested with User-Agent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/124.0.0.0 and with n8n/1.39.1 - the same error with both.

Screen recording: n8n.-.My.workflow.9._.Microsoft.Edge.2024-05-02.19-28-06.mp4

Problem in node 'HTTP Request'
The connection to the server wes closed unexpectedly, perhaps it is offline. You can retry request immidiately or wait and retry later.

The URL works from n8n cloud and from my self-hosted instance of n8n. Sadly, this appears to be environmental, and I can't see anything we can do to help from the application side.

I would recommend checking the configuration of Docker and your VPS to see if there is anything that looks out of place.


Solved. The issue was a custom network MTU from the provider. The value was 1450 for me, but the Docker default network MTU was 1500.

docker swarm leave   # only if you use Docker Swarm

docker network rm docker_gwbridge

docker network create -d bridge \
   --subnet 172.18.0.0/16 \
   --opt com.docker.network.bridge.name=docker_gwbridge \
   --opt com.docker.network.bridge.enable_icc=false \
   --opt com.docker.network.bridge.enable_ip_masquerade=true \
   --opt com.docker.network.driver.mtu=1450 \
   docker_gwbridge
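As a side note, on a plain (non-swarm) setup the same idea can probably be applied to the default bridge by setting the daemon MTU in /etc/docker/daemon.json and restarting Docker; a minimal sketch, assuming the provider MTU of 1450:

# /etc/docker/daemon.json
{
  "mtu": 1450
}

# then restart the daemon
sudo systemctl restart docker

Networks created separately (for example by docker compose) may still need com.docker.network.driver.mtu set via driver_opts.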

Value was 1450

Was that a Hetzner machine?


No. You can do a simple check to be sure.

Run ip a and check the ens3 MTU and, in the same list, the MTU of the Docker networks.
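Roughly (interface names and payload sizes are examples; ping adds 28 bytes of ICMP/IP headers on top of the payload):

ip a | grep mtu                   # compare the ens3 MTU with the Docker bridge MTUs
ping -M do -s 1472 -c 3 DOMAIN    # 1472 + 28 = 1500; fails with "message too long" if the path MTU is only 1450
ping -M do -s 1422 -c 3 DOMAIN    # 1422 + 28 = 1450; should succeed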

That is an interesting one. For now, as this is solved, I am going to close it. Thanks for the update.