Ready for production?
DaniGuardiola opened this issue · 12 comments
Hello there, thanks for your work on Nameko :)
We want to use nameko + grpc in our company and we're wondering about the state of this project. We are still far from a production version ourselves so we don't really need nameko-grpc to be production ready right now, but we would like to know about the roadmap so that we can decide whether to use it or not.
If we do decide to use it we could report any bugs and suggestions we find along the way, possibly even contribute with PRs at some point.
Thank you!
Hi @DaniGuardiola,
I put this project together as a prototype, but it quickly became more than that. The test coverage is extensive and the "equivalence" tests demonstrate that nameko-grpc and the official Python client behave in the same way -- at least at the application level (i.e. gRPC messages sent/received, status codes, etc.; there may be differences in terms of bytes on the wire).
I have not had an opportunity to use it in production yet, but it is likely that my company will be doing so in the next couple of months, so if you do decide to use it you will have someone along on the journey with you.
I think gRPC is fantastic and conceptually pairs very well with Nameko. I have a (draft) blog post on the subject that you might like to read: https://gist.github.com/mattbennett/1fdc9df9ccde3cd4798af5e47a714fce
Nevermind, typo in my code
I'm gonna post the little issues I find here to avoid spamming the issues tab, and also because most of them are probably gonna be little mistakes on my side, as I'm not very familiar with the project (if that's alright with you).
Some questions:
- Is TLS supported?
- Can you add some examples of asynchrony to the README?
- AMQP is replaced by gRPC, so there's no need for it anymore, am I right?
- What's stopping the nameko-tracer PR from being merged?
- Do you think it is feasible to have a more abstract interface (closer to the standard nameko) as an optional decorator like, idk, @grpc_simple? This could allow declaring the request fields as keyword arguments (making type annotations and defaults possible) and giving the response as a dictionary, or potentially a single value if the response message contains a single field. In fact, this could be the default decorator @grpc and a different one such as @grpc_context could become the current one, as the context message is not gonna be used most of the time. Of course, I didn't take streams into account, I'm just brainstorming, but maybe there's a nice way to fit them in.
- Could type annotations and RST docstrings be added to the nameko-grpc interface for easier development in IDEs and possibly also mypy static type-checking?
- What can we help with? I can personally help with the simpler decorator idea, maybe TLS integration, types/docstrings and overall testing.
Thanks a lot!!
- Is TLS supported?
Not yet. This would be a great PR :)
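For reference, this is roughly how the official grpc package handles secure connections today; a TLS feature in nameko-grpc would presumably wrap something like this (the certificate file names and addresses below are placeholders):

import grpc
from concurrent import futures

# Server side: load a key pair and bind a TLS port.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
with open("server.key", "rb") as f:
    private_key = f.read()
with open("server.crt", "rb") as f:
    certificate_chain = f.read()
server_credentials = grpc.ssl_server_credentials([(private_key, certificate_chain)])
server.add_secure_port("[::]:50051", server_credentials)

# Client side: verify the server against a CA certificate.
with open("ca.crt", "rb") as f:
    channel_credentials = grpc.ssl_channel_credentials(root_certificates=f.read())
channel = grpc.secure_channel("example.com:50051", channel_credentials)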
- Can you add some examples of asynchrony to the README?
Will do 👍
- AMQP is replaced by gRPC, so there's no need for it anymore, am I right?
AMQP-RPC isn't necessarily replaced by gRPC. You can use one or both or neither, up to you. If your service isn't using any of Nameko's "built-in" AMQP extensions, you don't need a broker.
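To make that concrete, here's a rough sketch of a service exposing both a gRPC and an AMQP-RPC entrypoint (exampleStub stands in for a generated stub; the method bodies are illustrative):

from nameko.rpc import rpc
from nameko_grpc.entrypoint import Grpc
from proto.service_pb2_grpc import exampleStub

grpc = Grpc.implementing(exampleStub)


class ExampleService:
    name = "example"

    @grpc
    def multiply(self, request, context):
        ...  # served over gRPC; no broker involved

    @rpc
    def add(self, x, y):
        return x + y  # served over AMQP; requires a broker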
- What's stopping the nameko-tracer PR from being merged?
@iky and I went around the houses on this one. gRPC is different to all the other Nameko extensions in that it supports streaming requests and responses. The approach I chose to take in #20 was to emit a new record for every message in a stream. In contrast, the opentracing standard (which supports gRPC) emits one span for every request/response (i.e. all messages in a stream end up in the same span). Since opentracing is a standard, we decided that this was a better approach, and #20 kind of died on the vine.
I think the next step for tracing is to expand nameko-tracer to support the opentracing standard, and then nameko-grpc can do whatever it needs to do to emit opentracing-compatible gRPC traces.
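To illustrate the opentracing model: a single span covers the whole call, and each streamed message becomes an event logged inside that span rather than a separate record. A sketch using the opentracing package (the operation name and the stand-in response stream are illustrative):

import opentracing

tracer = opentracing.global_tracer()
response_stream = ["msg1", "msg2", "msg3"]  # stand-in for a gRPC response stream

# One span for the whole streaming RPC...
with tracer.start_active_span("example/multiply_stream") as scope:
    for index, message in enumerate(response_stream):
        # ...with every streamed message logged inside that single span.
        scope.span.log_kv({"event": "message.received", "index": index})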
- Do you think it is feasible to have a more abstract interface (closer to the standard nameko) as an optional decorator like, idk, @grpc_simple? This could allow declaring the request fields as keyword arguments (making type annotations and defaults possible) and giving the response as a dictionary, or potentially a single value if the response message contains a single field. In fact, this could be the default decorator @grpc and a different one such as @grpc_context could become the current one, as the context message is not gonna be used most of the time. Of course, I didn't take streams into account, I'm just brainstorming, but maybe there's a nice way to fit them in.
You'll have to expand on this a bit for me.
Nameko's AMQP-RPC is completely dynamic, whereas gRPC uses a predefined protobuf specification. I don't think there's any scope for defining the interface dynamically. Perhaps I'm misunderstanding you?
- Could type annotations and RST docstrings be added to the nameko-grpc interface for easier development in IDEs and possibly also mypy static type-checking?
I've never actually done any work with type annotations. We aren't supporting Python 2.x so I don't see why we can't make use of this.
- What can we help with? I can personally help with the simpler decorator idea, maybe TLS integration, types/docstrings and overall testing.
Any help would be great. Support for secure connections would be particularly welcome, and overall testing and use is the most significant thing that would move the project forwards.
Hi @mattbennett thanks for your answers.
- (TLS) I'm getting familiar with the code; I'll try to implement it.
- (asynchrony examples) Thanks! :)
- (AMQP) Oh ok, got it.
- (tracer) Got it.
- (decorator) I'll include an example of what I mean below.
- (type annotations) Alright cool. I'll see if I can help with this as well.
- (helping out) Alright. I'll start with the TLS PR and testing and see where else I can contribute.
About the decorator, here's what I mean. Currently, you would declare a (unary-unary) method in a service like this:
from nameko_grpc.entrypoint import Grpc
from proto.service_pb2 import MultiplyResponse
from proto.service_pb2_grpc import exampleStub

grpc = Grpc.implementing(exampleStub)


class ExampleService:
    name = "example"

    @grpc
    def multiply(self, request, context):
        result = request.number1 * (request.number2 or 2)
        return MultiplyResponse(result=result)
What I'm proposing is something like the following:
from typing import Dict

from nameko_grpc.entrypoint import Grpc
from proto.service_pb2_grpc import exampleStub

grpc = Grpc.implementing(exampleStub)


class ExampleService:
    name = "example"

    @grpc
    def multiply(self, number1: int, number2: int = 2) -> Dict[str, int]:
        """Multiplies two numbers.

        :param number1: The first number to multiply.
        :param number2: The second number to multiply.
        :return: The product of number1 times number2.
        """
        result = number1 * number2
        return {"result": result}
I believe it should be achievable. The parameters can be passed by destructuring (or however this is called) the request object content (as a dict) as keyword args:
# wherever this is called...
returned_dict = example_service.multiply(**request.__dict__)
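One caveat with the sketch above: protobuf message objects don't keep their fields in __dict__, so the unpacking would more likely need to go through something like google.protobuf.json_format.MessageToDict. A hedged sketch:

from google.protobuf.json_format import MessageToDict

# Convert the message to a plain dict, keeping the original snake_case
# field names so they line up with the method's keyword arguments.
fields = MessageToDict(request, preserving_proto_field_name=True)
returned_dict = example_service.multiply(**fields)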
The response might be a little trickier but should be achievable as well by finding the response class from the corresponding _pb2.py file and passing the returned dictionary as keyword args to the constructor:
response_cls = get_response_class_somehow(method_name_or_something)
response_instance = response_cls(**returned_dict)
# then do whatever's done with the response instance
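One possible way to implement that lookup, assuming the generated proto.service_pb2 module and the "example" service from the earlier snippets (get_response_class is a hypothetical helper, and real proto method names may be cased differently):

from google.protobuf import symbol_database
from proto import service_pb2

def get_response_class(method_name):
    # Find the method's output type on the service descriptor...
    service = service_pb2.DESCRIPTOR.services_by_name["example"]
    output_type = service.methods_by_name[method_name].output_type
    # ...then resolve the generated message class by its full name.
    return symbol_database.Default().GetSymbol(output_type.full_name)

response_cls = get_response_class("multiply")
response_instance = response_cls(**returned_dict)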
I see some big advantages to this method:
- Greater abstraction. No proto-specific imports needed, except for the stub.
- Closer to nameko's standard @rpc decorator.
- Type hints and docstrings are feasible (great for documentation / IDEs).
- Code is more portable, as the functions themselves can be copied and pasted and used in an almost standalone way. Even the whole service class.
- Easier to understand.
- Makes generating a service-specific client automatically (with all the types, docstrings, parameters and return values) a way easier task. This is something I plan on doing in our product.
What do you think?
I should add that this isn't incompatible with having the current decorator as well, either with a different name (ex. @grpc_raw) or by having my suggested decorator have a different name (ex. @grpc_simple).
Another thing that could be useful is being able to specify the port(s) as well. Ports can be secure or insecure, and apparently gRPC places no limit on how many a server can bind.
I see what you mean about the decorator now. You're breaking up the request schema into its components and exposing each one as a separate argument.
It does look more similar to the Nameko AMQP RPC entrypoint, but I don't think it'll bring all the advantages you list. And/or there are other ways to achieve those advantages:
- Greater abstraction. No proto-specific imports needed, except for the stub.
Protobuf message definitions can use nested types (example), so you'd still need to import proto-defined classes. You can only really expose the first "layer" in the method signature.
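For instance, if the request carried a nested message field (the Options type and its field here are invented for illustration), the proto-generated classes are still needed even with flattened keyword arguments:

from proto.service_pb2 import MultiplyRequest  # still a proto-specific import

request = MultiplyRequest(
    number1=6,
    number2=7,
    # The nested type can't be expressed as a plain keyword argument.
    options=MultiplyRequest.Options(round_result=True),
)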
- Type hints and docstrings are feasible (great for documentation / IDEs).
You could still use type hints on the proto-defined classes. You'd need this anyway for any nested types. I haven't looked, but I would be surprised if there wasn't gRPC support in some IDEs.
- Code is more portable
I think this is probably invalidated by the nested types.
- Makes generating a service-specific client automatically an easier task
Perhaps I'm misunderstanding the goal here, but doesn't protobuf give you this out of the box? Given the (protobuf) definition for a service, you can generate a client in whatever language you choose. And you can inspect the definition for the types and (I assume) docstrings.
Another thing that could be useful is being able to specify the port(s) as well. Ports can be secure or insecure, and apparently gRPC places no limit on how many a server can bind.
Yep, this would be great :)
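For comparison, the official grpc server API already allows binding any mix of ports, so a nameko-grpc equivalent could presumably accept a list. A sketch with the plain grpc package (server_credentials as in the TLS sketch earlier in the thread):

import grpc
from concurrent import futures

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
# An insecure port and a TLS port, bound on the same server.
server.add_insecure_port("[::]:50051")
server.add_secure_port("[::]:50052", server_credentials)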
@mattbennett alright, makes sense. I've had some time to dive deeper into protobuf and I understand now why my idea doesn't really make sense. The only thing that could maybe be an improvement is removing the context argument from the default @grpc decorator factory and adding it as an option with False as default, something like this:
grpc = Grpc.implementing(exampleStub, context=True)
Ideally this would be possible as well:
grpc = Grpc.implementing(exampleStub)
grpc_context = Grpc.implementing(exampleStub, context=True)
In a way in which both decorators work without causing a significant performance or memory impact (maybe by using some dependency injection pattern, or some kind of memoization). This would make unused context arguments a thing of the past, as in my case I won't need it most of the time, if ever at all.
Thanks!
I considered this when building the prototype. context is kind of ugly and does not follow the normal dependency injection pattern that Nameko uses. But having context in the method signature is standard gRPC practice and in the end I didn't see the value in removing it for purely aesthetic reasons.
Keeping it the same as standard gRPC helps the code look familiar and would make it easier to transition existing Python gRPC servers to Nameko services, for example. Plus if nameko-grpc removed it from the signature, we'd have to provide some alternative mechanism to access it.
If you really wanted to remove context from the signature you could subclass nameko_grpc.entrypoint.Entrypoint and override the decorator classmethod.
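If subclassing feels heavyweight, a plain wrapper stacked under @grpc might achieve the same thing. This is an unverified sketch: it assumes the entrypoint invokes the method as (self, request, context), and it reuses grpc and MultiplyResponse from the earlier examples:

import functools

def without_context(fn):
    # Present a (self, request, context) signature to the entrypoint,
    # but invoke the wrapped method without the context argument.
    @functools.wraps(fn)
    def wrapper(self, request, context):
        return fn(self, request)
    return wrapper


class ExampleService:
    name = "example"

    @grpc
    @without_context
    def multiply(self, request):
        result = request.number1 * request.number2
        return MultiplyResponse(result=result)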