php-mqtt/client

Proxy error over Apache

tukusejssirs opened this issue · 5 comments

Sorry to bother you again, but it seems like I’m missing something.


I created an MQTT broker using aedes. It works as expected.

server.js
const aedes = require('aedes')()
const server = require('net').createServer(aedes.handle)
const port = 1883

server.listen(port, function () {
   console.log('Server started and listening on port ' + port + '.')
})

aedes.subscribe('#', function (packet, cb) {
   packet.payload = packet.payload.toString()
   console.log(packet)
   cb() // acknowledge delivery to aedes
})

For reasons unrelated to this issue, we need to use PHP to get some messages from the server through an Apache webserver on CentOS 8, so I created a client using php-mqtt/client. It also works as expected when run from the terminal, but not when accessed via the webserver.

client.php
<?php
require 'vendor/autoload.php';

$server   = 'localhost';
$port     = 1883;

$mqtt = new PhpMqtt\Client\MqttClient($server, $port);
$mqtt->connect();
$mqtt->subscribe('#', function ($topic, $message) {
    echo sprintf("<br><br>Received message on topic [%s]: %s\n", $topic, $message);
}, 0);
$mqtt->loop(true);
$mqtt->disconnect();
?>

When the PHP client is accessed via the webserver, the broker logs a new client and its subscription, but nothing is output on the screen. After a while (maybe a couple of minutes), the following error is shown in the web browser:

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /path/to/client.php.

Reason: Error reading from remote server

Do I need to configure some proxy settings of the Apache server? If so, what paths (URLs) do I need to proxy?

Or will I need to use websockets?

Thanks for your help! 😉

That's a bit of a weird issue to be honest... or a weird use case. 😄

The problem here is most likely that the proxy server is terminating the connection because no data is sent for too long. This should be fixable with an increased timeout. There might also be an issue with output buffering of echo, not sure about that.
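
If you want to rule out output buffering, a variant of your client.php along these lines might help (just a sketch; whether the flushed output actually reaches the browser also depends on Apache and the proxy in between):

<?php
require 'vendor/autoload.php';

// Sketch: same client as above, but with output buffering disabled so that
// each message is pushed towards Apache (and the proxy) immediately.
while (ob_get_level() > 0) {
    ob_end_flush();          // close any buffers PHP/Apache opened implicitly
}
ob_implicit_flush(true);     // flush automatically after every output call

$mqtt = new PhpMqtt\Client\MqttClient('localhost', 1883);
$mqtt->connect();
$mqtt->subscribe('#', function ($topic, $message) {
    echo sprintf("<br><br>Received message on topic [%s]: %s\n", $topic, $message);
    flush();                 // explicitly hand the output to the webserver
}, 0);
$mqtt->loop(true);
$mqtt->disconnect();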

But careful, there is another issue with this approach: when you close your browser window, the server process does not get terminated (immediately). It blocks resources and will fail at some point.

To be fair though, I'm not sure this is a good thing to do anyway. If you only need to send a command to your MQTT broker and wait for a (single) response, you can do that easily (it's called RPC). But dumping all received messages over time to the browser like your code indicates, is not something you should (or really can) do.
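
For illustration, such a request/response ("RPC") exchange with php-mqtt/client could look roughly like this (the topic names are made up; interrupt() stops the loop from within the callback):

<?php
require 'vendor/autoload.php';

// Sketch of a request/response ("RPC") style exchange.
// The topics 'devices/foo/command' and 'devices/foo/response' are placeholders.
$mqtt = new PhpMqtt\Client\MqttClient('localhost', 1883);
$mqtt->connect();

// Wait for exactly one response, then stop the loop.
$mqtt->subscribe('devices/foo/response', function ($topic, $message) use ($mqtt) {
    echo sprintf("Response on [%s]: %s\n", $topic, $message);
    $mqtt->interrupt();      // exit the loop after the single expected response
}, 0);

// Send the command and block until the callback interrupts the loop.
$mqtt->publish('devices/foo/command', 'status', 0);
$mqtt->loop(true);
$mqtt->disconnect();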

Let me describe our use case more specifically: we want to be able to send data from the server to the client dynamically. That data is a simple JSON string. Currently, we use Socket.io for that (Node.js server, JS client; PHP + Ajax + JS), but now we wanted to display that data on a Samsung Smart Signage QM49F (Tizen 4.0), and the web browser had some troubles with outputting the data (it had trouble with Socket.io, websockets, or both). Now, the data is output correctly using php-mqtt/client, but only when run on the local network. We have a requirement to make it work both on the LAN and over the Internet.

But careful, there is another issue with this approach: when you close your browser window, the server process does not get terminated (immediately). It blocks resources and will fail at some point.

Is there a way to auto-close a client on the server side when there is no response from the client after x seconds? Or something similar? We don’t need the client to stay online when the tab/window is closed; we only need to supply the frontend with the initial data (on connection) and the changed data whenever changes happen.

The problem here is most likely that the proxy server is terminating the connection because no data is sent for too long.

Well, what is too long? When I tested this, first I started the server, then opened the PHP client (in a web browser), and finally opened another client (in Node.js). It took at most one minute to start the JS client after opening the PHP client (I tested it on a wall-mounted smart monitor). I don’t think it should be a problem.

This should be fixable with an increased timeout.

I could change the values of max_execution_time (current value: 30) and max_input_time (current value: 60).

There might also be an issue with output buffering of echo, not sure about that.

I don’t think so, as it works when accessed over the LAN (it fails only over the Internet).

To be fair though, I'm not sure this is a good thing to do anyway. If you only need to send a command to your MQTT broker and wait for a (single) response, you can do that easily (it's called RPC). But dumping all received messages over time to the browser like your code indicates, is not something you should (or really can) do.

In this case, we don’t need to send (more or less) any data to the broker; we only need to receive the data from the broker. I wait[ed] for a (single) response only because of the test (development); in production, the client should stay connected to the broker until the tab/window is closed, and it would receive the initial data plus data any time there is a change (by a change I mean a change in the data, as we want to display realtime status data).

Is there a way to auto-close a client on the server side when there is no response from the client after x seconds? Or something similar? We don’t need the client to stay online when the tab/window is closed; we only need to supply the frontend with the initial data (on connection) and the changed data whenever changes happen.

Not that I'm aware of. In the case of php-fpm, I think the webserver will only terminate a script if it reaches the max_execution_time, or maybe also if the client closes the socket; which, especially behind a proxy, is a bit problematic. That is because your client only sends a request once, which means the proxy does not really have a way to determine whether the client closed the connection or not. If you use a low timeout, the proxy will close the connection quite soon. If you use a high timeout, the proxy will not clean up sockets that have actually been closed.

Well, what is too long? When I tested this, first I started the server, then opened the PHP client (in a web browser), and finally opened another client (in Node.js). It took at most one minute to start the JS client after opening the PHP client (I tested it on a wall-mounted smart monitor). I don’t think it should be a problem.
[...]
I could change the values of max_execution_time (current value: 30) and max_input_time (current value: 60).

With the default timeout being 30 seconds for most webservers, I guess one minute is already too long. Changing the max_execution_time seems like another good thing to do though. Almost forgot about that.

In this case, we don’t need to send (more or less) any data to the broker; we only need to receive the data from the broker. I wait[ed] for a (single) response only because of the test (development); in production, the client should stay connected to the broker until the tab/window is closed, and it would receive the initial data plus data any time there is a change (by a change I mean a change in the data, as we want to display realtime status data).

The use case you describe is totally something you'd want to use websockets for. Maybe combined with a background job which subscribes to MQTT and forwards messages to a websocket hub (e.g. soketi/pws, which implements the pusher protocol).
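
A background worker for that could look roughly like this, assuming the pusher/pusher-php-server package for talking to soketi (all credentials, host, channel and event names below are placeholders):

<?php
require 'vendor/autoload.php';

// Sketch of a long-running worker that forwards MQTT messages to a
// Pusher-protocol websocket hub such as soketi. Credentials, host, channel
// and event names are placeholders.
$pusher = new Pusher\Pusher('app-key', 'app-secret', 'app-id', [
    'host'   => '127.0.0.1',  // the soketi instance
    'port'   => 6001,
    'scheme' => 'http',
]);

$mqtt = new PhpMqtt\Client\MqttClient('localhost', 1883, 'websocket-forwarder');
$mqtt->connect();
$mqtt->subscribe('#', function ($topic, $message) use ($pusher) {
    // Forward every MQTT message as a websocket event.
    $pusher->trigger('status-updates', 'mqtt-message', [
        'topic'   => $topic,
        'payload' => $message,
    ]);
}, 0);
$mqtt->loop(true);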

[...] but the web browser had some troubles with outputting the data (it had trouble with Socket.io, websockets, or both).

Frankly, you should try solving this issue instead of looking for a workaround with the wrong technology. Websockets (and socket.io as sub protocol) are actually the way to go for such a use case.

There might be another solution involving long polling, a server-side background job and a server-side cache like Redis:

  • A background job subscribes using MQTT and stores retrieved messages using the receipt time as score in Redis. The background job also regularly removes too old entries from this set (e.g. everything older than 60 seconds).
  • (Browser based) clients regularly (e.g. every 5 seconds) fetch updates from the server (which reads from Redis), passing the timestamp at which they last polled data from the server.
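
The background job half of that could look roughly like this, assuming the phpredis extension (the key name and the 60 second window are arbitrary):

<?php
require 'vendor/autoload.php';

// Sketch: subscribe via MQTT and store each message in a Redis sorted set,
// using the receipt time as score. Entries older than 60 seconds are pruned.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$mqtt = new PhpMqtt\Client\MqttClient('localhost', 1883, 'redis-writer');
$mqtt->connect();
$mqtt->subscribe('#', function ($topic, $message) use ($redis) {
    $now = microtime(true);
    $redis->zAdd('mqtt:messages', $now, json_encode([
        'topic'   => $topic,
        'payload' => $message,
        'time'    => $now,
    ]));
    // Drop everything older than 60 seconds.
    $redis->zRemRangeByScore('mqtt:messages', 0, $now - 60);
}, 0);
$mqtt->loop(true);

// The endpoint polled by the browsers would then roughly do:
//   $redis->zRangeByScore('mqtt:messages', $lastPolledAt, '+inf');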

In this case, we don’t need to send (more or less) any data to the broker; we only need to receive the data from the broker. I wait[ed] for a (single) response only because of the test (development); in production, the client should stay connected to the broker until the tab/window is closed, and it would receive the initial data plus data any time there is a change (by a change I mean a change in the data, as we want to display realtime status data).

The use case you describe is totally something you'd want to use websockets for. Maybe combined with a background job which subscribes to MQTT and forwards messages to a websocket hub (e.g. soketi/pws, which implements the pusher protocol).

Well, Tizen 4.0 on our smart monitor actually does not like websockets. I tried to use both Socket.io and mqtt.js clients over websockets (ws://), but it does not work at all (it works in the terminal and in a non-Tizen browser, but not on the smart monitor with Tizen). That’s the primary reason for moving to a PHP client. The secondary reason is that we need to use MQTT for other (backend) stuff.

[...] but the web browser had some troubles with outputting the data (it had trouble with Socket.io, websockets, or both).

Frankly, you should try solving this issue instead of looking for a workaround with the wrong technology. Websockets (and socket.io as sub protocol) are actually the way to go for such a use case.

I did try to solve the issue, but WebSockets does not work at all in my tests on that monitor. Looking for a workaround or another possible solution was the next iteration of trying to fix it.

There might be another solution involving long polling, a server-side background job and a server-side cache like Redis:

* A background job subscribes using MQTT and stores retrieved messages using the receipt time as score in Redis. The background job also regularly removes _too old_ entries from this set (e.g. everything older than 60 seconds).

I’d rather not use Redis.

* (Browser based) clients regularly (e.g. every 5 seconds) fetch updates from the server (which reads from Redis), passing the timestamp at which they last polled data from the server.

Prior to using Socket.io, we did something like this (not exactly, though): we used CGI to load the changes every second (the data changes could occur every second and we need to have the data displayed on the website as current as possible). Switching to Socket.io was a good move for us, but now we need to display the data on a Samsung Smart Signage, which is (IMHO) more dumb than smart (it is actually inferior to regular Samsung smart TVs, IMHO). However, the hardware used is out of my control.


Anyway, thanks for your help! 🙏 I’d be very grateful if you could help me fix the proxy error, that would save my day. Otherwise, I’ll keep looking for a solution elsewhere. 😉

I don't use Apache myself, so I cannot really help you with that. I think ProxyTimeout is what you are looking for (after a quick search). But coming from Nginx, I'm used to having a few more options to work with when it comes to timeouts (see the Nginx docs).
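
Something along these lines in the Apache configuration might be what you need (just a sketch; the exact place and value depend on your setup, e.g. the VirtualHost doing the proxying):

# Sketch: allow the proxied connection to stay idle for up to 10 minutes
# before Apache gives up on the upstream (value in seconds, pick your own).
ProxyTimeout 600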

As already written, increasing max_execution_time is most likely required as well. Otherwise, the process will be killed on the server after 30 seconds (which is the default). Scripts with open streams have some special handling, but I'm not too familiar with that part.
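
If you go down that road, something like this at the top of client.php should do (a sketch; 0 means no limit at all, so make sure the script gets terminated in some other way):

<?php
// Sketch: lift the execution time limit for this long-running script only,
// instead of changing max_execution_time globally in php.ini.
set_time_limit(0);
// or, equivalently:
// ini_set('max_execution_time', '0');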