ParkAPI docker-compose issues
Opened this issue · 4 comments
I want to develop new scrapers and am having some trouble getting started.
The docker-compose file uses a db container (resulting in the hostname db on the internal network), while the Dockerfile defines the host as postgres.
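For reference, the relevant compose excerpt looks roughly like this (a sketch, trimmed; the image tag and other settings are illustrative). An alias would be an alternative to renaming the service outright:

```yaml
# sketch of the relevant docker-compose.yml excerpt (trimmed);
# the service name doubles as the hostname on the default network,
# so "postgres" only resolves if the service is renamed or aliased
services:
  db:                       # reachable as "db" from other containers
    image: postgres
    networks:
      default:
        aliases:
          - postgres        # alternative to renaming: alias the service
```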
Next, apparently no setupdb is run on container startup, so no tables are created in the database.
While I fixed these by renaming the service to postgres in the docker-compose.yml file and adding python bin/parkapi-setupdb to entrypoint.sh before starting parkapi-server (see the sketch after the log output), I now struggle because no static data can be fetched, according to the logs:
$ docker logs parkapi_api_1
* Serving Flask app "park_api.app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
[2020-12-21 21:47:16,017] WARNING in app: Failed to get static data for Konstanz
Failed to get static data for Konstanz
...
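For reference, my patched entrypoint.sh now looks roughly like this (a sketch, simplified; error handling and any wait-for-postgres logic omitted):

```sh
#!/bin/sh
# sketch of my patched entrypoint.sh (simplified)
set -e

# my added line: create the tables before the server comes up
python bin/parkapi-setupdb

# start the API server as before
exec python bin/parkapi-server
```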
What am I missing?
In the production environment the live_scrape option is off by default. In that case the database is only filled when bin/parkapi-scraper is called, so the error you get is most likely caused by an empty database. The usual mode of operation is to call the scraper at fixed intervals (we use a systemd timer with a 5-minute interval); the server then only serves the data from the database.
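For illustration, such a timer pair could look roughly like this; the unit names and paths are made up for the example, not our actual units:

```ini
# /etc/systemd/system/parkapi-scraper.service (sketch; paths hypothetical)
[Unit]
Description=Run the ParkAPI scraper once

[Service]
Type=oneshot
WorkingDirectory=/opt/parkapi
ExecStart=/usr/bin/python bin/parkapi-scraper

# /etc/systemd/system/parkapi-scraper.timer (sketch)
[Unit]
Description=Run the ParkAPI scraper every 5 minutes

[Timer]
OnBootSec=1min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
```

Enabled with systemctl enable --now parkapi-scraper.timer, this fills the database every five minutes while the server stays read-only.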
In docker-compose.yml, I'd still rename the db service to postgres, as this is the name under which it is referenced in the Dockerfile.
And in entrypoint.sh, I wonder why python bin/parkapi-setupdb is not executed before python bin/parkapi-server. Or am I getting something wrong here?
> And in entrypoint.sh, I wonder why python bin/parkapi-setupdb is not executed before python bin/parkapi-server.
When we started, using Docker was not a use case, so the database setup was intended to be run once on installation. Tbh I cannot say what would happen if bin/parkapi-setupdb is called on an existing database.
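If running it twice turned out to be a problem, the entrypoint could guard it, e.g. with a first-run marker; a sketch, where the marker path is an arbitrary choice for the example:

```sh
#!/bin/sh
# sketch: run the schema setup only on the first start; the marker
# file path is arbitrary and would need to live on a volume to
# survive container re-creation
set -e

MARKER=/var/lib/parkapi/.db-initialized

if [ ! -f "$MARKER" ]; then
    python bin/parkapi-setupdb
    mkdir -p "$(dirname "$MARKER")"
    touch "$MARKER"
fi

exec python bin/parkapi-server
```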