Is this at all suitable for production use behind a cache?
skylarmt opened this issue · 3 comments
I spent several weeks reading the entire planet.osm into a PostGIS database, only to find that the estimated time to create a vector .mbtiles package was longer than I am old. Is anyone using this in something resembling a production environment?
I generated the map tiles found on gbif.org using postserve.
My process needed a lot of customization; as you can see from #6, I rewrote half the scripts due to the change in map projections. Roughly, the process was:
- Prepare the layers and preview the map using postserve.
- Use a message broker (I used RabbitMQ) to coordinate:
  - a process which generates messages containing all required tile numbers;
  - many postserve-based processes, which read these messages, generate the tiles, then put the tiles on a different queue;
  - a small Python process to collect the generated tiles and put them into an MBTiles database.
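The producer and the MBTiles sink from the steps above can be sketched roughly as follows. This is an illustrative sketch, not the actual scripts: the function names are mine, and the schema just follows the published MBTiles convention (a `tiles` table keyed by zoom/column/row, with the row flipped to TMS order).

```python
import sqlite3

def all_tile_numbers(max_zoom):
    """Enumerate every (z, x, y) tile number up to max_zoom.

    This is the kind of message the producer process would put
    on the queue, one per tile to render.
    """
    for z in range(max_zoom + 1):
        n = 2 ** z  # tiles per axis at zoom z
        for x in range(n):
            for y in range(n):
                yield z, x, y

def open_mbtiles(path):
    """Create an MBTiles file (a SQLite database with a 'tiles' table)."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS tiles (
                    zoom_level INTEGER, tile_column INTEGER,
                    tile_row INTEGER, tile_data BLOB)""")
    db.execute("""CREATE UNIQUE INDEX IF NOT EXISTS tile_index
                  ON tiles (zoom_level, tile_column, tile_row)""")
    return db

def store_tile(db, z, x, y, data):
    """Insert one rendered tile. MBTiles uses TMS row order,
    so the XYZ y coordinate is flipped."""
    tms_y = (2 ** z - 1) - y
    db.execute("INSERT OR REPLACE INTO tiles VALUES (?, ?, ?, ?)",
               (z, x, tms_y, data))

# Example: collect placeholder "tiles" for zoom levels 0-2.
db = open_mbtiles(":memory:")
for z, x, y in all_tile_numbers(2):
    store_tile(db, z, x, y, b"\x1a\x00")  # stand-in for real tile bytes
db.commit()
count = db.execute("SELECT COUNT(*) FROM tiles").fetchone()[0]
print(count)  # 1 + 4 + 16 = 21 tiles
```

In the real pipeline the loop body would be split across the queue: the enumerator publishes the `(z, x, y)` messages, the postserve workers render each one, and only the sink process touches the SQLite file, since MBTiles writes are easiest to serialize in a single writer.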
If you intend to process the whole world, you will need powerful servers with large SSDs and plenty of RAM. I used a server with 384 GiB of RAM, 1 TB of SSD in RAID, and two 12-core Xeons, which produced 600 tiles/second.
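A back-of-the-envelope calculation shows why even that rate means days of work. Assuming tiles are generated up to zoom 14 (a common maximum for vector tile sets; the cap is my assumption, not stated above):

```python
# Rough ETA for rendering the full planet at 600 tiles/second.
tiles_per_second = 600
total_tiles = sum(4 ** z for z in range(15))  # 4^z tiles at zoom z, zooms 0-14
seconds = total_tiles / tiles_per_second
print(total_tiles)       # 357,913,941 tiles
print(seconds / 86_400)  # ≈ 6.9 days
```

Every extra zoom level quadruples the tile count, so pushing the cap from 14 to 15 would roughly quadruple the total.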
Short answer: probably not.
Long answer: while this project has many flaws and is not particularly good Python, it is somewhat groundbreaking in its showcase of a principle: generating tiles on the fly using PostGIS functions. Clean it up nice and proper, rewrite it in another language, or what have you... the basics are here, and with a little work and a cache in front it can give reasonable results in production on a decent server. But don't expect to render the whole planet in a reasonable time without a cluster of powerful servers.
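The on-the-fly principle boils down to one SQL query per tile, letting PostGIS clip and encode the layer with `ST_AsMVT`/`ST_AsMVTGeom` (available since PostGIS 2.4). A minimal sketch, where the table and column names (`osm_roads`, `geometry`) are hypothetical and not from this project:

```python
# Web Mercator half-circumference in metres (EPSG:3857 extent).
HALF_EXTENT = 20037508.342789244

def tile_bounds(z, x, y):
    """Web Mercator bounding box of XYZ tile (z, x, y)."""
    size = 2 * HALF_EXTENT / 2 ** z   # tile edge length at this zoom
    xmin = -HALF_EXTENT + x * size
    ymax = HALF_EXTENT - y * size
    return xmin, ymax - size, xmin + size, ymax

def mvt_query(z, x, y, layer="osm_roads"):
    """Build a PostGIS query returning the tile as a Mapbox Vector Tile."""
    xmin, ymin, xmax, ymax = tile_bounds(z, x, y)
    env = f"ST_MakeEnvelope({xmin}, {ymin}, {xmax}, {ymax}, 3857)"
    return (
        f"SELECT ST_AsMVT(q, '{layer}') FROM ("
        f" SELECT ST_AsMVTGeom(geometry, {env}) AS geom"
        f" FROM {layer} WHERE geometry && {env}) AS q"
    )

print(mvt_query(0, 0, 0))
```

A cache in front then only needs to key on `(layer, z, x, y)`; everything else is stateless, which is what makes the approach workable in production at all.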
Multithreading might also be an issue. Note that postserve has since been merged into openmaptiles-tools: https://github.com/openmaptiles/openmaptiles-tools#test-tiles