Trivial app segfaults when there are more than two routes
This trivial served app below segfaults immediately when there are more than two routes. It works fine as long as only /route1 and /route2 are present, but as soon as I add /route3, it segfaults.
This does not happen if served is built with the build type set to Debug. I have also tried building served with the build type set to RelWithDebInfo and running the app in a debugger. The debugger pointed out multiplexer.cpp:139 as the culprit, which I find hard to believe since it's just std::vector::push_back:
_handler_candidates.push_back(
    path_handler_candidate(get_segments(path), served::methods_handler(_base_path + path, info), path));
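For what it's worth, that pattern looks harmless in isolation. Below is a standalone approximation (the stub_candidate, stub_handler, and split_segments names are hypothetical stand-ins, not served's real definitions) that appends three candidates without any trouble, which would point to corruption or a toolchain mismatch elsewhere rather than to push_back itself:

#include <string>
#include <vector>

// Hypothetical stand-ins for served's internal types; the real definitions
// inside served may differ.
struct stub_handler {
    std::string path;
};

struct stub_candidate {
    std::vector<std::string> segments;
    stub_handler handler;
    std::string path;
};

// Rough approximation of get_segments: split a path on '/'.
std::vector<std::string> split_segments(const std::string &path) {
    std::vector<std::string> segments;
    std::string current;
    for (char c : path) {
        if (c == '/') {
            if (!current.empty()) {
                segments.push_back(current);
                current.clear();
            }
        } else {
            current.push_back(c);
        }
    }
    if (!current.empty()) {
        segments.push_back(current);
    }
    return segments;
}

int main() {
    std::vector<stub_candidate> handler_candidates;
    const std::vector<std::string> paths = {"/route1", "/route2", "/route3"};
    for (const auto &path : paths) {
        // Same shape as multiplexer.cpp:139: push_back of a freshly built candidate.
        handler_candidates.push_back(stub_candidate{split_segments(path), stub_handler{path}, path});
    }
    return 0;
}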
The app:
#include <served/plugins.hpp>
#include <served/served.hpp>

int main(int, char **) {
    served::multiplexer mux;
    mux.use_after(served::plugin::access_log);
    mux.handle("/route1").get([](served::response &response, const served::request &) {
        response.set_body("1");
    });
    mux.handle("/route2").get([](served::response &response, const served::request &) {
        response.set_body("version2");
    });
    mux.handle("/route3").get([](served::response &response, const served::request &) {
        response.set_body("3");
    });
    served::net::server server("127.0.0.1", "8080", mux);
    server.run();
    return 0;
}
If I comment out
mux.handle("/route3").get([](served::response &response, const served::request &) {
response.set_body("3");
});
it works fine, but with three or more routes, boom!
Actually this is enough to reproduce:
#include <served/served.hpp>

int main(int, char **) {
    served::multiplexer mux;
    mux.handle("/version/1").get([](served::response &, const served::request &) {
    });
    mux.handle("/version/2").get([](served::response &, const served::request &) {
    });
    mux.handle("/version/3").get([](served::response &, const served::request &) {
    });
    return 0;
}
This looks like a stdlib bug on macOS. Compiling with a compiler other than the system default works just fine.
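If anyone wants to double-check which toolchain and standard library a given build actually uses, a small standalone program (nothing served-specific, just standard predefined macros) will print it:

#include <iostream>

int main() {
    // Report the compiler that built this translation unit.
#if defined(__clang_version__)
    std::cout << "clang: " << __clang_version__ << "\n";
#elif defined(__GNUC__)
    std::cout << "gcc: " << __GNUC__ << "." << __GNUC_MINOR__ << "\n";
#else
    std::cout << "unknown compiler\n";
#endif
    // Report the C++ standard library (these macros are defined once a standard header is included).
#if defined(_LIBCPP_VERSION)
    std::cout << "libc++ version: " << _LIBCPP_VERSION << "\n";
#elif defined(__GLIBCXX__)
    std::cout << "libstdc++ date: " << __GLIBCXX__ << "\n";
#else
    std::cout << "unknown standard library\n";
#endif
    return 0;
}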
Hey @krzysztofwos, I can't reproduce this myself, and the issue is pretty old now. I'm closing it, but feel free to reopen if this still impacts you.