Idunn

Idunn is the main back-end API of Qwant Maps. It acts as the entry point in front of many other APIs and is in charge of aggregating data for geocoding, directions, POI details, and more.

  • Historically, Idunn was only an API to get point-of-interest information for Qwant Maps.
  • The POIs are taken from the mimir Elasticsearch database.
  • It also fetches POI data from the Wikipedia API and a custom Wikidata Elasticsearch source.
  • Why Idunn? Because she is the wife of Bragi, which is also the name of the main mimir API.
  • A simple workflow schema of Idunn is presented below.

Note: this diagram may be outdated:

[Idunn workflow diagram]

API

  • The API provides its OpenAPI schema with: GET /openapi.json

The main endpoints are:

  • /v1/places/{place_id}?lang={lang}&type={type}&verbosity={verbosity} to get the details of a place (admin, street, address or POI).
    • type: (optional) parameter belongs to the set {'admin', 'street', 'address', 'poi'}
    • verbosity parameter belongs to the set {'long', 'short'}. The default verbosity is long.
  • /v1/places?bbox={bbox}&category=<category-name>&size={size} to get a list of all points of interest matching the given bbox and categories
    • bbox: left,bot,right,top e.g. bbox=2.0,48.0,3.0,49.0
    • category: multiple values are accepted (e.g. category=leisure&category=museum)
    • size: maximum number of places in the response
    • verbosity: default verbosity is list (equivalent to long, except "information" and "wiki" blocks are not returned)
    • source: (optional) to force a data source (instead of automated selection based on coverage). Accepted values: osm, pages_jaunes
    • q: full-text query (optional, experimental)
  • /v1/places?bbox={bbox}&raw_filter=class,subclass&size={size} to get a list of all points of interest matching the given bbox and the raw filters
    • bbox: left,bot,right,top e.g. bbox=2,48,3,49
    • raw_filter: a class,subclass pair; multiple values are accepted (e.g. raw_filter=*,restaurant&raw_filter=shop,*&raw_filter=bakery,bakery)
  • /v1/categories to get the list of all the categories you can filter on.
  • /v1/directions See directions.md for details
  • /v1/events?bbox={bbox}&category=<category_name>&size={size} to get a list of all events matching the given bbox and category
    • bbox: left,bot,right,top e.g. bbox=2.0,48.0,3.0,49.0
    • category: a single value is accepted, one of concert, show, exhibition, sport or entertainment (e.g. category=concert)
    • size: maximum number of events in the response

  • /v1/status to get the status of the API and associated ES cluster.
  • /v1/metrics to get some metrics on the API that give statistics on the number of requests received, the duration of requests... This endpoint can be scraped by Prometheus.
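As an illustration of the list endpoint above, its query string can be assembled with standard URL encoding. This is a sketch, not part of Idunn itself; the host and port are assumptions (a local instance as described in the Running section):

```python
from urllib.parse import urlencode

# Hypothetical local instance; adjust host/port to your deployment.
BASE = "http://localhost:5000/v1/places"

# A list of pairs preserves repeated keys: the category
# parameter may be given several times.
params = [
    ("bbox", "2.0,48.0,3.0,49.0"),  # left,bot,right,top
    ("category", "leisure"),
    ("category", "museum"),
    ("size", 10),
]

url = f"{BASE}?{urlencode(params)}"
print(url)
# http://localhost:5000/v1/places?bbox=2.0%2C48.0%2C3.0%2C49.0&category=leisure&category=museum&size=10
```

Note that urlencode percent-encodes the commas in the bbox value; the API accepts either form.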

Running

Requirements

  • Python 3.10
  • Pipenv, to manage dependencies and virtualenv

Installation

  • Create the virtualenv and install dependencies:

    pipenv install
  • and then:

    IDUNN_MIMIR_ES=<url_to_MIMIR_ES> IDUNN_WIKI_ES=<url_to_WIKI_ES> pipenv run python app.py
  • you can then query the API on port 5000 (quote the URL so the shell does not interpret &):

    curl 'localhost:5000/v1/places/toto?lang=fr&type=poi'

Configuration

The configuration can be provided in several ways:

  1. default settings are available in utils/default_settings.yaml
  2. a YAML settings file can be given with the env var IDUNN_CONFIG_FILE (the default settings are still loaded and overridden)
  3. specific variables can be overridden with env vars. They need to be given as "IDUNN_{var_name}={value}", e.g. IDUNN_MIMIR_ES=... You can create a .env file with commonly used env variables; it will be loaded by pipenv by default.

Please note that you will need an API key from openweathermap in order to use the Weather block. You can then set it into the IDUNN_WEATHER_API_KEY environment variable or directly into the WEATHER_API_KEY inside the utils/default_settings.yaml file.
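The layering described above (defaults, then an optional config file, then IDUNN_-prefixed env vars) can be sketched as follows. This is an illustration of the precedence only, not Idunn's actual settings loader:

```python
import os

def load_settings(defaults, file_settings=None, environ=None):
    """Merge settings layers: defaults < config file < IDUNN_* env vars."""
    settings = dict(defaults)
    # The IDUNN_CONFIG_FILE settings override the defaults...
    settings.update(file_settings or {})
    # ...and IDUNN_-prefixed env vars override everything.
    for key, value in (environ or os.environ).items():
        if key.startswith("IDUNN_"):
            settings[key[len("IDUNN_"):]] = value
    return settings

defaults = {"MIMIR_ES": "http://localhost:9200", "VERBOSITY": "long"}
env = {"IDUNN_MIMIR_ES": "http://es.internal:9200"}
print(load_settings(defaults, environ=env))
# {'MIMIR_ES': 'http://es.internal:9200', 'VERBOSITY': 'long'}
```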

Run tests

To run tests, first make sure you have dev dependencies installed:

pipenv install --dev

Then, you can run the full testsuite using pytest:

pipenv run pytest -vv -x

If you are using a .env file, you need to make sure that pipenv won't load it, which can be done by setting the environment variable PIPENV_DONT_LOAD_ENV=1 (e.g. PIPENV_DONT_LOAD_ENV=1 pipenv run pytest -vv -x).

How to contribute?

  • Idunn comes with all the components needed to contribute as easily as possible: in particular, you don't need a running Elasticsearch instance. Idunn uses Docker images to simulate the Elasticsearch sources and Redis, so you will need a local Docker install to be able to spawn an ES cluster.

  • To contribute the common workflow is:

    1. install the dev dependencies: pipenv install --dev
    2. add a test in ./tests for the new feature you propose
    3. implement your feature
    4. run pytest: pipenv run pytest -vv -x
    5. check the linter output: pipenv run lint
    6. if everything is fixed, then check the format: pipenv run black --diff --check

Run it with Redis and Elasticsearch

You can run Idunn together with Redis and Elasticsearch using Docker. First, edit the docker-compose.yml file to set IDUNN_MIMIR_ES to the URL of your Elasticsearch instance (for example: https://somewhere.lost/).
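For reference, the relevant part of docker-compose.yml would look roughly like this; the service name, port mapping, and other keys are assumptions about the file's layout, only the IDUNN_MIMIR_ES variable comes from the step above:

```yaml
services:
  idunn:
    build: .
    ports:
      - "5000:5000"
    environment:
      # Point Idunn to your Elasticsearch instance (example URL)
      IDUNN_MIMIR_ES: "https://somewhere.lost/"
```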

Then you just need to run:

$ docker-compose up --build

If you need to clean the Redis cache, run:

$ docker-compose kill
$ docker image prune --filter "label=idunn_idunn"