aspnet/SignalR

Meeting notes: C++ Client

BrennanConroy opened this issue · 20 comments

  • Use STL Naming: snake cased
  • Look into using std::future wrapping pplx::task
  • Make a move ctor for hub_connection
  • Use hub_connection_builder pattern
  • Try to match the C# API; with_url takes a configure func
  • Explore: UTF8 strings everywhere (users convert wide to utf8 before passing in?)
  • Test out co_await usage (Possible C++20 feature, shouldn't depend on it)
  • on_{eventName} for callbacks: only have on_closed right now
  • Remove get_connection_id and get_connection_state
  • Investigate msgpack libraries to see parsing patterns
  • Hide low-level connection class
  • Sketch .h files without implementation to see how "ideal" API could look

Explore: UTF8 strings everywhere (users convert wide to utf8 before passing in?)

Why not _XPLATSTR, as it is being used now?

One interesting thing to look at is type conversion. In the old client everything was JSON, so the user was given a JSON object and was expected to turn it into an object/primitive value. This is no longer the case, but since there is no reflection, it is still the user who will need to provide the means to convert from the wire format to objects and vice versa.

Yep, we've been talking about both of those issues.

Regarding _XPLATSTR, yeah, I'm not sure either way and just don't know that we have the experience here yet, hence wanting to explore it a bit. If it is the case that all Windows applications use wide chars for all strings, then it makes sense, since we're providing the interface the application expects. That's one of the things we could talk to the C++ team about (since they're signed up to look at our plans/designs). SignalR is a UTF-8 protocol, even on Windows. We need wide chars for things that go into Windows APIs (like URLs, if we're using winhttp/cpprest/etc.), but when it comes to method names and other things we'll need UTF-8. Maybe that's just handled by the JSON/MessagePack layer though, and we should just always operate with "Platform Strings" (i.e. wide on Windows, narrow (?) on Linux).
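
To make the string boundary concrete, here is a minimal sketch (just an illustration, assuming cpprestsdk stays in the picture) of keeping UTF-8 std::string at the public surface and converting to platform strings only where a platform API needs them, using cpprestsdk's existing utility::conversions helpers:

#include <cpprest/asyncrt_utils.h> // utility::string_t, utility::conversions
#include <string>

// Sketch: accept UTF-8 from the caller and convert to the platform string type
// (wide on Windows, narrow elsewhere) only where a platform API requires it.
void start_connection(const std::string& utf8_url)
{
    // utility::string_t is std::wstring on Windows and std::string elsewhere.
    utility::string_t platform_url = utility::conversions::to_string_t(utf8_url);

    // ... hand platform_url to cpprestsdk / winhttp ...

    // Anything flowing back out (method names, payloads) can be normalized to UTF-8.
    std::string utf8_again = utility::conversions::to_utf8string(platform_url);
}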

For Type Conversion, that was the focus of a good part of this meeting. We can't really use web::json::value any more since we need to support a format for MessagePack and possibly other protocols. We have a few options that we talked about:

  • Require the caller to pre-serialize the message. We can give them access to a "writer" object that makes this easier though. This writer could be coupled to the active protocol and directly render the bytes that need to be embedded in the payload. It's a somewhat more awkward API but it would be efficient since there's no intermediate storage:
auto connection = /*...*/
auto writer = connection.create_message_writer();
writer.write(42);
writer.write_map_header(2);
writer.write("FirstName");
writer.write("Andrew");
writer.write("LastName");
writer.write("Stanton-Nurse");
connection.invoke("RegisterCustomer", writer);
  • Require the caller to use our own object model to format the message. Basically this means replicating web::json::value and its descendants, but in a form that is cross-protocol. This is a little more flexible since you're building up a graph of objects, but it means storing that intermediate graph and walking it to render it each time:
auto connection = /* ... */
auto id = signalr::msg::integer(42);
auto customer_record = signalr::msg::map()
  .set("FirstName", "Andrew")
  .set("LastName", "Stanton-Nurse");
connection.invoke("RegisterCustomer", customer_record);
  • Support JSON and MessagePack explicitly via overloads (that throw if the negotiated protocol doesn't match) based on their respective libraries. This would limit us to those protocols only. So effectively, we have:
std::future invoke(const char* method_name, web::json::value& value);
std::future invoke(const char* method_name, msgpack::sbuffer& value);

(MessagePack form based on https://github.com/berkus/msgpack, which is just a random library I found to illustrate the idea)

  • Keep using web::json::value and make that the primitive that gets passed around. AFAIK it's just an object model; there's nothing necessarily stopping us from walking that graph and rendering it as MessagePack. Seems pretty awkward though. This isn't really much better than creating signalr::msg except that we get to be lazy and not build the object model :)
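
To illustrate that last option, here is a rough sketch of walking a web::json::value graph into a protocol-specific writer; the writer here is the hypothetical one from the first option above, not an existing API:

#include <cpprest/asyncrt_utils.h>
#include <cpprest/json.h>

// Rough sketch: walk a web::json::value graph and feed it to a protocol-specific
// writer. writer_t is assumed to have the hypothetical interface sketched above
// (write, write_map_header, plus write_array_header/write_nil added here).
template <typename writer_t>
void render(const web::json::value& value, writer_t& writer)
{
    if (value.is_object())
    {
        const auto& object = value.as_object();
        writer.write_map_header(object.size());
        for (const auto& field : object)
        {
            writer.write(utility::conversions::to_utf8string(field.first));
            render(field.second, writer);
        }
    }
    else if (value.is_array())
    {
        const auto& array = value.as_array();
        writer.write_array_header(array.size());
        for (const auto& element : array)
        {
            render(element, writer);
        }
    }
    else if (value.is_string())
    {
        writer.write(utility::conversions::to_utf8string(value.as_string()));
    }
    else if (value.is_boolean())
    {
        writer.write(value.as_bool());
    }
    else if (value.is_integer())
    {
        writer.write(value.as_integer());
    }
    else if (value.is_double())
    {
        writer.write(value.as_double());
    }
    else
    {
        writer.write_nil(); // null
    }
}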

@anurse and I came up with an interesting approach that looks good:

template <typename T>
web::json::value to_json(T item)
{
    // generic fallback: relies on web::json::value having a constructor for T
    return web::json::value(item);
}

// explicit specialization to allow customization and adding support for more types
template <>
web::json::value to_json<int>(int item)
{
    // any custom conversion logic for int would go here
    return web::json::value(item);
}

// base case: convert a single value and append it to the JSON array
template <typename T>
void tryparse(web::json::value& json, T first)
{
    json[json.size()] = to_json(first);
}

// recursive case: peel one argument off the pack at a time
template <typename arg, typename ...T>
void tryparse(web::json::value& json, arg first, T... arguments)
{
    tryparse(json, first);
    tryparse(json, arguments...);
}
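
As a hypothetical usage, the variadic tryparse can pack heterogeneous invoke arguments into a JSON array:

// hypothetical usage: pack heterogeneous invoke arguments into a JSON array
auto arguments = web::json::value::array();
tryparse(arguments, 42, utility::string_t(_XPLATSTR("Andrew")), true);
// arguments now holds [42, "Andrew", true] and can be serialized for invoke(...)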

You get a compile-time error when you try to use an unsupported type:

c:\github\signalr\clients\cpp\samples\hubconnectionsample\hubconnectionsample.cpp(23): error C2440: '<function-style-cast>': cannot convert from 'T' to 'web::json::value'
        with
        [
            T=std::map<int,int,std::less<int>,std::allocator<std::pair<const int,int>>>
        ]

And you can add your own explicit specialization for that type to fix the issue, as shown above for to_json<int>.
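
For the std::map<int, int> case from the error above, a user-provided specialization could look roughly like this (purely illustrative; mapping the keys to strings is an arbitrary choice):

// hypothetical user-provided specialization: turn std::map<int, int> into a
// JSON object with stringified keys
template <>
web::json::value to_json<std::map<int, int>>(std::map<int, int> item)
{
    auto result = web::json::value::object();
    for (const auto& pair : item)
    {
        result[utility::conversions::to_string_t(std::to_string(pair.first))] =
            web::json::value(pair.second);
    }
    return result;
}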

For the pplx::task wrapper, it should be possible to do something like:

std::future<json::value> hub_connection_impl::invoke_json(...)
{
    // ...
    auto task = pplx::create_task(tce);
    return std::async(std::launch::async, [task]() { return task.get(); });
}

Not sure how efficient it is yet, but it's a possibility.
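
On the efficiency question, one possible alternative (just a sketch, not measured) is to complete a std::promise from the task's continuation instead of parking a std::async thread on task.get():

#include <cpprest/json.h>
#include <future>
#include <memory>
#include <pplx/pplxtasks.h>

// Sketch: bridge a pplx::task to std::future without dedicating a thread to it.
std::future<web::json::value> wrap_task(pplx::task<web::json::value> task)
{
    auto promise = std::make_shared<std::promise<web::json::value>>();
    auto future = promise->get_future();

    // task-based continuation so exceptions are observed and forwarded
    task.then([promise](pplx::task<web::json::value> completed)
    {
        try
        {
            promise->set_value(completed.get());
        }
        catch (...)
        {
            promise->set_exception(std::current_exception());
        }
    });

    return future;
}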

Metaprogramming!

https://media1.tenor.com/images/e367669952e6b7a69c2adb0c09de0b60/tenor.gif?itemid=5643831

Super simple hub_connection_builder

class hub_connection_builder
{
public:
    hub_connection_builder() : mUrl("")
    {
    }

    hub_connection build()
    {
        // return the temporary directly; std::move here would prevent copy elision
        return hub_connection(mUrl, "");
    }

    hub_connection_builder& configure_logging()
    {
        return *this;
    }

    hub_connection_builder& with_url(const std::string& url, std::function<void __cdecl()> configure = nullptr)
    {
        mUrl = url;
        if (configure)
        {
            configure();
        }

        return *this;
    }
private:
    std::string mUrl;
};

auto connection = hub_connection_builder()
        .with_url("http://example.com", []() {})
        .configure_logging()
        .build();

What do you guys think about using Boost ASIO and Boost Beast as the building blocks for networking, HTTP and WebSockets support? Boost ASIO is on track to become part of the C++ standard library, and Beast's author has the goal of getting it standardized as well. I feel like ASIO at least is more commonly used from the networking perspective, and it also brings a lot of async machinery that could be useful.

Also, a few other comments:

  • My opinion is that exposing std::string in the APIs, rather than a custom xplat string, would keep them the most accessible.
  • Do you guys have any plans for testing frameworks? Google Test works fairly well with Google Mock and is a very widely used test framework. There is also a Microsoft Test Adapter for discovering and running tests in VS.
  • Thoughts on supporting MSVC toolset v140 as the lowest supported version? I know it would be extremely beneficial in my own use case for the client, as I have a very large codebase that has been fixed to that compiler version for some time. Also, it seems to be a common target for many other large libraries.
  • Will the client be distributed on the likes of NuGet and Vcpkg? I also hope that a static library version may be available in addition to the DLL version.

Sorry for the long winded comment, just wanted to open up some more discussion on the C++ client. Thank you for all the great work! I am looking forward to bringing SignalR into the C++ world.

Do you guys have any plans for testing frameworks?

We used Google Test before, and we currently have it in the repository as a submodule.

Thoughts on supporting MSVC toolset v140 as the lowest supported version?

We will want the library to be broadly usable, so we'll have to look at exactly which toolset we will target. In the worst case, we can also cross-compile the code for multiple toolsets.

Will the client be distributed on the likes of NuGet and Vcpkg? I also hope that a static library version may be available in addition to the DLL version.

At this point, the only planned distribution is in source form, just because it's the most accessible format and we have very limited resources for getting this C++ client moving. We're looking at both NuGet and Vcpkg (in fact, we're probably going to be using Vcpkg to manage dependencies for the client) but have no specific plans yet. The primary advantage of shipping as source right now is that you can take the code into your repository as a submodule and build it however you need (static, dynamic, and using whatever toolset you require). The downside of course is that you have to do that :).


In general, this is a very early effort and we're not sure exactly when it will land and in what form. Also, we're not really experienced with distributing C++ libraries so we'll be looking to get a lot of feedback on this :). We're trying to make sure we get something broadly usable, even if it takes some work to use it, so that people can start trying it out and let us know where we need to make changes.

Source distribution works as well; like you said, having the flexibility to compile it however we need is beneficial, despite the extra steps. Thanks!

In terms of the C++ resources, I am happy to provide any feedback or review prototype implementations / API shape as well.

Awesome! Feel free to comment on any C++ PRs you see :). We'll be looking to have a "minimum viable product" version ready in the first preview (but that's subject to resources, scheduling, blah blah), and would love to get some feedback on it.

What do you guys think about using Boost ASIO and Boost Beast as the building blocks for networking, HTTP and WebSockets support?

Circling back to this. The current prototype uses cpprestsdk because that's what the ASP.NET SignalR C++ client uses, but we're definitely open to looking at other library options.

Here is a sketch of what the current hub_connection API could look like. The templates would mean we'd probably have to do source distribution.

class hub_connection
{
public:
    explicit hub_connection(const std::string& url, const std::string& query_string = "");

    ~hub_connection();

    hub_connection(hub_connection &&);

    hub_connection(const hub_connection&) = delete;

    hub_connection& operator=(const hub_connection&) = delete;

    std::future<void> __cdecl start();

    std::future<void> __cdecl stop();

    void __cdecl on_closed(const std::function<void __cdecl()>& closed_callback);

    void __cdecl set_client_config(const signalr_client_config& config);

    template <typename ...T>
    void __cdecl on(const std::string& name, const std::function<void __cdecl (T...)>& methodHandler);

    template <typename R, typename ...T>
    std::future<R> invoke(const std::string& method_name, T... args);

    template <typename ...T>
    std::future<void> send(const std::string& method_name, T... args);
private:
};
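
For context, hypothetical caller code against that sketch could look something like this (the hub method names are made up):

// hypothetical usage of the header sketch above
hub_connection connection("http://example.com/chat");

// register a handler before starting; template arguments pick the parameter types
connection.on<std::string, int>("receive_message",
    [](std::string user, int count)
    {
        // react to the server calling this client method
    });

connection.start().get();

// invoke a hub method that returns an int
int total = connection.invoke<int>("add", 1, 2).get();

// fire-and-forget style call with no return value
connection.send("broadcast", std::string("hello")).get();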

One problem with the templated on, invoke, and send methods is that we support multiple protocols, and in order to make that work the hub_connection would need to be templated on the protocol type.

That could look like:

auto connection = hub_connection_builder<JsonHubProtocol>()
    .with_url("http://example.com", []() {})
    .configure_logging()
    .build();

There were 3 main things that cpprestsdk offered that boost did not offer (at least at that time):

  • websockets
  • http stack
  • tasks (futures were in TR1 at that time)

I don't think boost gives you websockets. Websockets in cpprestsdk are implemented using websocketpp (unless you target WinRT), which is built on top of Boost ASIO. The HTTP client/stack is also built on top of Boost ASIO, unless you are running on Windows, where it wraps the native Win32 or WinRT APIs. I have not been following Boost recently, so I am not sure if something like this is provided out of the box now. One of the biggest advantages of cpprestsdk was that all of this is exposed via a consistent async API (tasks). The other advantage was that it was cross-platform: the old SignalR client worked on Windows, Linux and Mac.

Yep, it didn't have WebSockets or HTTP at the time, until Boost Beast was added recently (linked above). Here is their GitHub page. The reason I mentioned it is that both ASIO (networking) and Beast (WebSockets/HTTP) are planned to be standardized. However, ASIO is admittedly much further along in this process than Beast. Nevertheless, it is nice that both have been through the Boost review process.
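
For a feel of the API surface, here is a minimal synchronous Beast WebSocket client, adapted from Beast's published examples (host, port and target are placeholders, and a real SignalR connection would additionally need the negotiate/handshake steps):

#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <iostream>
#include <string>

namespace net = boost::asio;
namespace beast = boost::beast;
namespace websocket = beast::websocket;
using tcp = net::ip::tcp;

int main()
{
    // placeholder endpoint; a real client would take these from with_url(...)
    const std::string host = "example.com";
    const std::string port = "80";
    const std::string target = "/chat";

    net::io_context ioc;
    tcp::resolver resolver{ioc};
    websocket::stream<tcp::socket> ws{ioc};

    // resolve, connect the underlying TCP socket, then do the WebSocket upgrade
    auto const results = resolver.resolve(host, port);
    net::connect(ws.next_layer(), results);
    ws.handshake(host, target);

    ws.write(net::buffer(std::string("hello")));

    beast::flat_buffer buffer;
    ws.read(buffer);
    std::cout << beast::buffers_to_string(buffer.data()) << std::endl;

    ws.close(websocket::close_code::normal);
    return 0;
}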

As for the async machinery, I am not familiar with the differences between the pplx tasks and futures.

I am not inclined to favor one implementation over another, just thought it was something that merited further investigation. Especially if any of the concepts were to be exposed via the public API, it might make sense to lean towards something that is a de facto standard.

If boost provides the three things above now then it should definitely be looked at. I just wanted to point out what were the main things we needed that were not there at the time when we built the C++ client for the classic SignalR. I am sure that many more people are using boost than cpprestsdk so for them it would be a win if they did not have to bridge their code to use SignalR.

in order to make that work the hub_connection would need to be templated on the protocol type.

Could this be done via methods that apply the template? For example, some magical way to have .use_json_protocol() take this as a hub_connection_builder and return a hub_connection_builder<json_hub_protocol>. Unfortunately, I think the only way to do this in an "extensible" way would be a global function, which looks ugly.

I can make the following work with the help of another class that will be "hidden" from the user. It does have some annoying side effects, but the public API is nice :D

auto connection = hub_connection_builder()
    .use_protocol<ProtocolTest>()
    .with_url("http://example.com")
    .build();
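
A sketch (names made up) of how that "hidden" helper class could work: use_protocol<T>() hands the chain off to a protocol-typed builder, which is also where the annoying side effect comes from, since the builder's static type changes mid-chain.

#include <string>
#include <utility>

// hypothetical protocol-templated connection, per the earlier point that
// hub_connection would need to be templated on the protocol type
template <typename protocol_t>
class hub_connection
{
public:
    explicit hub_connection(std::string url) : m_url(std::move(url)) {}

private:
    std::string m_url;
};

// the "hidden" helper: a builder that already knows the protocol type
template <typename protocol_t>
class typed_hub_connection_builder
{
public:
    explicit typed_hub_connection_builder(std::string url) : m_url(std::move(url)) {}

    typed_hub_connection_builder& with_url(const std::string& url)
    {
        m_url = url;
        return *this;
    }

    hub_connection<protocol_t> build()
    {
        return hub_connection<protocol_t>(m_url);
    }

private:
    std::string m_url;
};

// the public entry point stays non-templated; use_protocol<T>() switches the
// chain over to the typed builder
class hub_connection_builder
{
public:
    hub_connection_builder& with_url(const std::string& url)
    {
        m_url = url;
        return *this;
    }

    template <typename protocol_t>
    typed_hub_connection_builder<protocol_t> use_protocol()
    {
        return typed_hub_connection_builder<protocol_t>(m_url);
    }

private:
    std::string m_url;
};

One downside of this shape is that with_url (and any other shared configuration) has to exist on both builder types, or be factored into a common base.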

We periodically close 'discussion' issues that have not been updated in a long period of time.

We apologize if this causes any inconvenience. We ask that if you are still encountering an issue, please log a new issue with updated information and we will investigate.