mtrudel/bandit

Longpolling creates unexpected Error

PJUllrich opened this issue · 3 comments

Hey folks,

first of all, thank you very much for this wonderful library ❤️

I switched from Cowboy to Bandit the other day and enabled long polling as described in the documentation.

Unfortunately, my server now logs one error every second. They all look like this:

19:22:38.901 [error] Bandit.HTTP1.Handler #PID<0.4320.0> received unexpected message in handle_info/2: {:now_available, #Reference<0.3504149931.1078460417.100701>}

19:22:39.986 [error] Bandit.HTTP1.Handler #PID<0.4322.0> received unexpected message in handle_info/2: {:now_available, #Reference<0.3504149931.1078460417.100882>}

19:22:41.044 [error] Bandit.HTTP1.Handler #PID<0.4324.0> received unexpected message in handle_info/2: {:now_available, #Reference<0.3504149931.1078460417.101069>}

Also, I now see that LiveView frequently falls back to long polling instead of establishing a WebSocket connection, even though my server is in the next city. Could this be related?

This is my config:

Versions

Elixir: 1.15.7
Erlang: 26.1.2
Phoenix: 1.7.11
LiveView: 0.20.5
Bandit: 1.2.0
# endpoint.ex
  @session_options [
    store: :cookie,
    key: "_my_key",
    signing_salt: "my-salt",
    same_site: "Lax"
  ]

  socket "/live", Phoenix.LiveView.Socket,
    websocket: [connect_info: [session: @session_options]],
    longpoll: [connect_info: [session: @session_options]]
# app.js

let liveSocket = new LiveSocket("/live", Socket, {
  params: { _csrf_token: csrfToken },
  longPollFallbackMs: 2500,
  hooks: Hooks,
});
# config/config.exs

config :my_app, MyAppWeb.Endpoint,
  adapter: Bandit.PhoenixAdapter,
  url: [host: "localhost"],
  render_errors: [
    formats: [html: MyAppWeb.ErrorHTML, json: MyAppWeb.ErrorJSON],
    layout: false
  ],
  pubsub_server: MyAppPubSub,
  live_view: [signing_salt: "my-salt"]

Thank you for your help in advance 🙏

The fallback to long polling is most likely related to being on the latest Phoenix: an improvement that shipped in 1.7.11 changes how sockets fall back to long polling in certain error cases; see the Phoenix changelog for details.

As for those errors, they're a known issue on long poll connections. They're benign and 'expected'; we're just not quite sure how to handle them yet.

The data point that you're seeing them a lot is helpful, though; do you have a rough idea of how the volume of those messages correlates with site traffic? Percentage-wise, or by whatever other measure you may have? It'll help in deciding how acute an issue this may be. Thanks!

Should be fixed in 1.2.1, just released. Logging these messages just isn't worth the support burden of answering this same question for the next thousand years. There's a `log_unknown_messages` config option in case people want the previous (verbose) behaviour.
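For anyone who does want the old verbose behaviour back, a sketch of how that might look in a Phoenix endpoint config (the exact placement of `log_unknown_messages` under Bandit's nested `http_options` is an assumption; check the Bandit.options docs for your version):

```elixir
# config/config.exs — sketch, not verified against every Bandit release.
# Assumes the endpoint's :http keyword list is passed through
# Bandit.PhoenixAdapter as Bandit options.
config :my_app, MyAppWeb.Endpoint,
  adapter: Bandit.PhoenixAdapter,
  http: [
    ip: {127, 0, 0, 1},
    port: 4000,
    # Restore pre-1.2.1 logging of unexpected handler messages
    http_options: [log_unknown_messages: true]
  ]
```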

Thanks for the issue!