Tradias/asio-grpc

assertion failed: !started_

Closed this issue · 2 comments

neucer commented

I am trying to use asio-grpc to make requests to Triton Inference Server.
I tried to follow https://github.com/Tradias/asio-grpc/blob/master/example/hello-world-client.cpp for how to make a single request and https://github.com/triton-inference-server/client/blob/main/src/c%2B%2B/library/grpc_client.cc#L1377 for how to fill the request.

This is what I have:

TritonClient.h

#pragma once

#include <agrpc/asio_grpc.hpp>

#include "Client.h"
#include "grpc_client.h"  // triton's grpc client

class TritonClient {
   private:
    agrpc::GrpcContext grpc_context;
    std::unique_ptr<inference::GRPCInferenceService::Stub> stub;

   public:
    TritonClient(const std::string &server_host, int server_port);
    virtual ~TritonClient();
    virtual void SendRequest(std::unique_ptr<std::ifstream> sample,
                             long long id) override;
};

TritonClient.cpp

#include "TritonClient.h"

#include <boost/asio/co_spawn.hpp>
#include <boost/asio/detached.hpp>
#include <fstream>
#include <iostream>  // std::cout
#include <sstream>   // std::stringstream

TritonClient::TritonClient(const std::string& server_host, int server_port) {
    const auto host_port = server_host + ":" + std::to_string(server_port);

    stub = std::make_unique<inference::GRPCInferenceService::Stub>(
        grpc::CreateChannel(host_port, grpc::InsecureChannelCredentials()));
}

TritonClient::~TritonClient() {}

void TritonClient::SendRequest(std::unique_ptr<std::ifstream> sample, long long id) {
    boost::asio::co_spawn(
        grpc_context,
        [&]() -> boost::asio::awaitable<void> {
            // structure of inputs and outputs can be found at
            // https://github.com/triton-inference-server/common/blob/main/protobuf/grpc_service.proto
            using RPC =
                agrpc::ClientRPC<&inference::GRPCInferenceService::Stub::AsyncModelInfer>;
            grpc::ClientContext client_context;
            inference::ModelInferRequest request;
            request.set_model_name("model_name");
            request.set_id(std::to_string(id));

            // input tensor
            auto input = request.add_inputs();
            input->set_name("IMAGE");
            input->set_datatype("INT8");

            auto fsize = sample->tellg();
            sample->seekg(0, std::ios::end);
            fsize = sample->tellg() - fsize;
            sample->seekg(0, std::ios::beg);
            input->add_shape(fsize);

            std::stringstream buffer;
            buffer << sample->rdbuf();
            std::string* raw_contents = request.add_raw_input_contents();
            *raw_contents = buffer.str();

            // requested output tensors
            auto output_text = request.add_outputs();
            output_text->set_name("TEXT1");
            auto output_punctuated = request.add_outputs();
            output_punctuated->set_name("TEXT2");

            inference::ModelInferResponse response;
            grpc::Status status =
                co_await RPC::request(grpc_context, *stub, client_context, request,
                                      response, boost::asio::use_awaitable);
            std::cout << status.ok()
                      // << " response: " << response.Message()
                      << std::endl;
        },
        boost::asio::detached);

    grpc_context.run();

}

Output:


E1115 12:08:46.976544949 1268436 async_unary_call.h:236]     assertion failed: !started_

As far as I can tell, the problem is that ClientAsyncResponseReader::StartCall gets called twice.

At first glance the issue is on this line:

            using RPC =
                agrpc::ClientRPC<&inference::GRPCInferenceService::Stub::AsyncModelInfer>;

which should be

            using RPC =
                agrpc::ClientRPC<&inference::GRPCInferenceService::Stub::PrepareAsyncModelInfer>;
neucer commented

nice