_InactiveRpcError while running with docker
AnnaSafaryan opened this issue · 3 comments
Hi!
I'm trying to run the base model with Docker, but my program fails with the following traceback:
Traceback (most recent call last):
File "<my_path>\corefhd\main.py", line 42, in <module>
result = ppl(text)
File "<my_path>\corefhd\venv\lib\site-packages\isanlp\pipeline_common.py", line 74, in __call__
results = proc(*[result[e] for e in proc_input])
File "<my_path>\corefhd\venv\lib\site-packages\isanlp\processor_remote.py", line 42, in __call__
response = self._stub.process(request)
File "<my_path>\corefhd\venv\lib\site-packages\grpc\_channel.py", line 1161, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "<my_path>\corefhd\venv\lib\site-packages\grpc\_channel.py", line 1004, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNAVAILABLE: ipv4:0.0.0.0:3334: WSA Error"
debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: ipv4:0.0.0.0:3334: WSA Error {created_time:"2023-11-22T20:49:06.9094603+00:00", grpc_status:14}"
>
I've noticed that the container's port is 3336 while the README says 3335, but after changing it the problem remains the same.
Hi!
The message says you are trying to reach port 3334. coref_address
should correspond to the host and port at which you run the Docker image.
UPD: The README mentions port 3334 for the dockerized spacy parser. Make sure you run the spacy image, or use the local spacy option if you run the parsers on your local machine anyway.
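Before digging into the pipeline itself, it can help to confirm that anything is listening at the address you pass as coref_address. A minimal stdlib-only sketch (the host and port below are placeholders for wherever you actually run the container):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values: substitute the host/port of your running container,
# e.g. 'localhost' and 3334 for the dockerized spacy parser.
# port_is_open('localhost', 3334)
```

If this returns False, the gRPC error is expected and the problem is the address, the port mapping, or the container not running, not the pipeline code.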
Looks fine. Wait until both containers are fully ready (CPU usage drops to about zero). If grpc_status 14 persists after the containers have fully started, please provide the logs of the containers themselves; error 14 only indicates that no connection is being made.
For debugging, try connecting only to the spacy container at port 3334; its model loads faster.
Also, you can try 'localhost' as the address, or check the Docker networking documentation for what address fits your setup (I assume it's not Linux).
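The "wait until the containers are ready" step can be automated instead of eyeballed: poll the port until it accepts connections, then build the pipeline. A sketch using only the standard library (ports and the usage lines are placeholders for your setup, not part of the isanlp API):

```python
import socket
import time

def wait_for_port(host: str, port: int,
                  timeout: float = 120.0, interval: float = 2.0) -> None:
    """Block until host:port accepts TCP connections, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return  # container is accepting connections
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f'{host}:{port} not reachable after {timeout}s')
            time.sleep(interval)

# Hypothetical usage: wait for both containers before building the pipeline.
# wait_for_port('localhost', 3334)  # spacy parser
# wait_for_port('localhost', 3336)  # coreference model
```

Note that the port accepting connections only means the server socket is up; a large model may still be loading, so a generous timeout on the first real request is still advisable.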