developers-italia-backend

Crawler for the OSS catalog of Developers Italia


Description

Developers Italia provides a catalog of Free and Open Source software aimed at Public Administrations.

This crawler finds and retrieves the publiccode.yml files from the repositories of the organizations that have registered through the onboarding procedure.
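For reference, a publiccode.yml file might look like the following. This is an illustrative, non-exhaustive sketch based on version 0.2 of the publiccode.yml standard; all values are made up:

```yaml
publiccodeYmlVersion: "0.2"
name: Medusa                       # illustrative software name
url: "https://github.com/italia/medusa.git"
releaseDate: "2017-04-15"
platforms:
  - web
categories:
  - financial-reporting
developmentStatus: stable
softwareType: standalone/web
legal:
  license: AGPL-3.0-or-later
maintenance:
  type: internal
localisation:
  localisationReady: true
  availableLanguages:
    - it
```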

The generated YAML files are then used by the developers.italia.it build process to generate its static pages.

Setup and deployment processes

The crawler can either run manually on the target machine or be deployed as a Docker container, with its Helm chart, in Kubernetes.

Elasticsearch 6.8 is used to store the data and must be ready to accept connections before the crawler is started.
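You can verify that Elasticsearch is up before starting the crawler, for instance with its cluster health API (assuming the default endpoint on localhost:9200):

```shell
# Block until the cluster reports at least "yellow" health, or time out
curl -s 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=30s'
```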

Manually configure and build the crawler

  1. cd crawler

  2. Save the auth tokens to domains.yml.

  3. Rename config.toml.example to config.toml and set the variables

    NOTE: The application also supports environment variables in place of the config.toml file. Remember: environment variables take precedence over the ones in the configuration file.

  4. Build the crawler binary with make
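Putting the steps above together, a typical manual setup session might look like this (editor stands in for your editor of choice):

```shell
cd crawler
cp domains.yml.example domains.yml   # save your auth tokens here
editor domains.yml
mv config.toml.example config.toml   # set the variables
editor config.toml
make                                 # builds the crawler binary
```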

Docker

The repository has a Dockerfile, used to build the production image, and a docker-compose.yml file to set up the development environment.

  1. Copy the .env.example file to .env and edit the environment variables to suit your needs. .env.example has detailed descriptions for each variable.

    cp .env.example .env
  2. Save your auth tokens to domains.yml

    cp crawler/domains.yml.example crawler/domains.yml
    editor crawler/domains.yml
  3. Start the environment:

    docker-compose up
    

Run the crawler

Crawl mode (all items in the whitelists): bin/crawler crawl whitelist/*.yml

Gets the list of organizations in whitelist/*.yml and starts crawling their repositories.

If it finds a blacklisted repository, it will remove it from Elasticsearch, if it is present.

It also generates the YAML files used by the developers.italia.it build.

One mode (single repository URL): bin/crawler one [repo url] whitelist/*.yml

In this mode, a single repository at a time will be evaluated. If the organization is present in the whitelists, its iPA code will be matched against the ones there; otherwise it will be set to null and the slug will end with a random code (instead of the iPA code).

Furthermore, the iPA code validation, which is a simple check within the whitelists (to ensure that the code belongs to the selected PA), will be skipped.

If it finds a blacklisted repository, it will exit immediately.

Other commands

  • bin/crawler updateipa downloads iPA data and writes them into Elasticsearch

  • bin/crawler delete [URL] deletes software from Elasticsearch using its code hosting URL specified in publiccode.url

  • bin/crawler download-whitelist downloads organizations and repositories from the onboarding portal repository and saves them to a whitelist file

Crawler whitelists

The whitelist directory contains the lists of organizations to crawl from.

whitelist/manual-reuse.yml is a list of Public Administration repositories that for various reasons were not onboarded with developers-italia-onboarding, while whitelist/thirdparty.yml contains the non-PA repos.

Here's an example of how the files might look:

- id: "Comune di Bagnacavallo" # generic name of the organization.
  codice-iPA: "c_a547" # codice-iPA
  organizations: # list of organization urls.
    - "https://github.com/gith002"

Crawler blacklists

Blacklists are needed to exclude individual repositories that are not in line with our guidelines.

You can set BLACKLIST_FOLDER in config.toml to point to a directory where blacklist files are located. Blacklisting is currently supported by the one and crawl commands.
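For example, in config.toml (the directory name here is illustrative):

```toml
# Directory containing the blacklist files
BLACKLIST_FOLDER = "blacklists/"
```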

See also

Authors

Developers Italia is a project by AgID and the Italian Digital Team, which developed the crawler and maintains this repository.