Can't run Docker and NodeJS
narmathamuthu opened this issue · 22 comments
Hi team,
I have run this command: (docker run -p=:3000 --env-file=variables.env wavesplatform/data-service)
I got the following error (PFA).
I have also run the Node.js command: (export $(cat variables.env | xargs) && NODE_ENV=production node src/index.js)
But the index.js file doesn't exist.
I have been struggling with this for the last 2 days.
So kindly provide a solution ASAP.
Thanks for the report, we're looking into it. Also, we'll update the Readme today/tomorrow.
@dvshur Okay, thank you!!!
Once it's updated, kindly let me know.
Hi, @narmathamuthu. We have added some logging to see your error more clearly. It's still in the develop branch; we'll publish it to the Docker registry after QA confirms it's stable.
However, you can try it right now by cloning this repo's develop branch and building the Docker image as in https://github.com/wavesplatform/data-service#docker, step 1. Then try to launch it as you did yesterday.
Once we have the additional logging output, it should be trivial to understand what is wrong.
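For reference, the full sequence could look roughly like this (a minimal sketch; the image tag and the 3000:3000 port mapping are my assumptions, so adjust them to whatever step 1 of the Readme actually says):
git clone https://github.com/wavesplatform/data-service.git
cd data-service
docker build -t wavesplatform/data-service .
docker run -p 3000:3000 --env-file=variables.env wavesplatform/data-service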
Also, we've updated the NodeJS part of Readme.md to include the necessary steps if you decide to go that way. Again, see the develop branch (it's the default one in this repo).
Can I clone this repo (https://github.com/wavesplatform/data-service.git)?
Where can I find the develop branch?
Yes, you can clone this repo. The develop branch is the default one; if you do a simple git clone https://github.com/wavesplatform/data-service.git, it should give you the branch you need.
@dvshur Okay, fine.
Now I've got another issue (PFA).
Error: TypeError [ERR_INVALID_PROTOCOL]: Protocol "http:" not supported. Expected "https:"
Hi, sorry for making you wait for the resolution. The problem with your launch is that you haven't specified a correct matcher address to load the matcher settings from. We provide a default matcher address for mainnet, so if you're running a custom network or a different matcher, that could have been the problem.
I've merged PR #276 with a fix that does the following:
- the MATCHER_ADDRESS_URL env var became optional
- if it is not specified, any matcher is considered unknown, so any assets order in all pairs is allowed
- a clearer error message is shown if MATCHER_ADDRESS_URL is specified but the service cannot load the settings on launch
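In your variables.env that means either removing the variable or pointing it at your own matcher. A minimal sketch (the URL below is a hypothetical placeholder, not a real endpoint):
# either omit MATCHER_ADDRESS_URL entirely, or point it at your own matcher, e.g.:
MATCHER_ADDRESS_URL=https://your-matcher.example.com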
Could you please try again on the latest develop branch (again, the default one for cloning) and let me know what output you see?
@dvshur
Okay, fine, now I got a response (PFA).
But I can't get the docsUrl. I have also run the docs command (docker run --rm -d -p 8080:8080 -e SWAGGER_JSON=/app/openapi.json wavesplatform/data-service-docs) on a separate port number (PFA).
I don't know how to combine both.
Then I tried the following URL (http://localhost:3000/v0) and got a Not Found error.
How can I run that functionality (like https://api.testnet.wavesplatform.com/v0/transactions/exchange?matcher=3N8aZG6ZDfnh8YxS6aNcteobN8eXTWHaBBd&amountAsset=WAVES&priceAsset=DWgwcZTMhSvnyYCoWLRUXXSH1RSkzThXLJhww9gwkqdn) locally?
Can you explain, please?
The data service itself doesn't know whether it is behind /v0 or not. That routing is done by an external router/proxy (you can use Nginx, for example). Routing to the documentation is also done by the same router.
To use the service functionality without a router, simply remove the /v0 part. Something like:
http://localhost:3000/transactions/exchange
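If you do want the /v0 prefix (and the docs under the same host) locally, a reverse proxy in front of both containers is one way to do it. A rough Nginx sketch, assuming the service listens on port 3000 and the docs container on 8080 as in your commands (the paths and ports here are assumptions, not an official config):
server {
    listen 80;
    # strip the /v0 prefix and forward API calls to the data service
    location /v0/ {
        proxy_pass http://localhost:3000/;
    }
    # expose the Swagger docs under /docs
    location /docs/ {
        proxy_pass http://localhost:8080/;
    }
}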
@narmathamuthu, how is it going with the service? Have you succeeded in launching it?
@dvshur sorry for the long delay.
Yes, we have launched it on our local machine.
Now we are going to configure the daemons (Candles, Pairs).
Can you please explain how to configure them (both the data-service and the Candles & Pairs daemons)?
Is it possible to run both on the same host & port?
Hi, @narmathamuthu.
Yes, you can configure the data-service and the daemons with the same host and port.
In fact, you can just run the daemons using the command docker run --env-file=variables.env wavesplatform/data-service-candles (or wavesplatform/data-service-pairs) with the same .env file that is used for the data-service.
If you want, you can add specific env variables to your .env file to fine-tune the daemons.
For the candles daemon:
CANDLES_UPDATE_INTERVAL=2500
CANDLES_UPDATE_TIMEOUT=20000
RECALCULATE_ALL_CANDLES_ON_START=false
The last variable should be true only when you start the candles daemon for the first time, or after a long delay since the candles daemon was last stopped. Otherwise the daemon will fail on CANDLES_UPDATE_TIMEOUT.
For the pairs daemon:
PAIRS_UPDATE_INTERVAL=2500
PAIRS_UPDATE_TIMEOUT=20000
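Putting it together, a single variables.env shared by the service and both daemons could look roughly like this (only the daemon-specific lines come from the values above; everything else is whatever your existing data-service configuration already contains):
# ...your existing data-service settings (Postgres connection, matcher, etc.) stay as they are...
# candles daemon
CANDLES_UPDATE_INTERVAL=2500
CANDLES_UPDATE_TIMEOUT=20000
RECALCULATE_ALL_CANDLES_ON_START=false
# pairs daemon
PAIRS_UPDATE_INTERVAL=2500
PAIRS_UPDATE_TIMEOUT=20000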
@Jlewka
Okay, thank you!!!
I have run the above two commands (data-service-candles & data-service-pairs).
I got the following responses (PFA):
data-service-candles
data-service-pairs
Are they correct or not?
May I know the next step, please?
Also, how can I check whether the above functionality is working or not?
Please tell me.
No, it's incorrect.
It looks like you don't have a running instance of PostgreSQL, or it isn't running where the daemons can reach it, or it's on a non-default port (not 5432).
Both the data-service and the daemons have to be able to connect to PostgreSQL; please check this.
When the daemons work correctly, you will see an endless stream of successful update messages in their output logs.
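A quick way to check connectivity from the machine the daemons run on (a sketch; the host, user and database below are placeholders for whatever is in your variables.env, and it assumes the PostgreSQL client tools are installed):
pg_isready -h localhost -p 5432
psql -h localhost -p 5432 -U your_db_user -d your_db_name -c "SELECT 1;"
If these fail, the daemons will fail in the same way.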
@Jlewka Okay, we will check it and let you know.
@narmathamuthu have you managed to launch the service?
@dvshur
Yes, we are running it on our local system only.
We still can't run data-service-candles & data-service-pairs.
Kindly check my .env file (PFA).
We are facing the following issues:
data-service-candles
data-service-pairs
Are data-service-candles and data-service-pairs mandatory or not?
@narmathamuthu, firstly, data-service-candles and data-service-pairs are not mandatory. You need them only if you want to use the corresponding endpoints: /candles and /pairs.
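Once the daemons are running and have produced some data, a quick smoke test from the command line could look like this (illustrative only; check the Swagger docs for the exact paths and required query parameters):
curl "http://localhost:3000/pairs"
curl "http://localhost:3000/candles/..."  # path parameters as per the docs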
Secondly, are you using the same env file for the service itself and for the daemons? It is strange if you're able to use the data service but unable to launch the daemons with the same Postgres config, since they're supposed to use the same database.
@narmathamuthu is the problem still relevant? The last comment was 24 days ago; I would like to either finish solving it or close the issue.
@dvshur sorry for the delay. Okay, fine, we will check and close the issue ASAP.
Closing because of long inactivity. Feel free to reopen if any more questions arise.