
Anubis: a flexible policy enforcement solution for NGSI APIs (and beyond!)


Anubis


Welcome to Anubis!


What is the project about?


Anubis is a flexible Policy Enforcement solution that makes it easier to reuse security policies across different services, assuming the policies target the same resource. In short, we are dealing with policy portability :) What do we mean by that?

Let's think of a user who registers some data in platform A. To control who can access their data, they define a set of policies. If they move the data to platform B, they will most probably have to define the access control policies for that data again in platform B.

Anubis aims to avoid that :) or at least to simplify this as much as possible for the data owner. How? By leveraging open source solutions (e.g. Envoy, OPA) and reference standards (e.g. W3C WAC, W3C ODRL, OAuth2).

Of course, support for distributed policy management may also be valuable for a single platform deployed in a distributed fashion, e.g. to sync policies across the cloud-edge continuum.

Why this project?

Data portability often focuses on the mechanisms to exchange data and the formalisation of data representation: the emphasis is rarely put on the portability of security & privacy data policies. Enabling security and privacy data policy portability is clearly a step forward in enabling data sovereignty across different services.

This project aims at enabling data sovereignty by introducing data privacy and security policy portability and prototyping distributed data privacy and security policy management, thus contributing to increase trust toward data sharing APIs and platforms.

Approaches like the one proposed, which increase owners' control over their data and the portability of data assets, are key to boosting the establishment of trusted data spaces.

The project is looking into:

  • Open standardized security & privacy data policies vocabulary.
  • Linking an existing user profiling vocabulary to the security & privacy data policies vocabulary as a way to increase portability of policies and their compatibility to existing standards.
  • A middleware supporting decentralised control and audit of security & privacy data policies by data owners (in the context of RESTful APIs).
  • Translation from the security & privacy data policies vocabulary to other policy languages or APIs that are actually used for policy enforcement.

While Anubis is not subject to GDPR per se, it allows API owners to implement effective GDPR compliance in their solutions.

Why did you pick Anubis as name?

Anubis is an ancient Egyptian god with multiple roles in the mythology of ancient Egypt. In particular, we opted for this name because he decides the fate of souls: based on the weighing of their hearts, he either allows souls to ascend to a heavenly existence or condemns them to be devoured by Ammit. Indeed, Anubis was a Policy Enforcement system for souls :)

Architecture

Policy Enforcement

In terms of policy enforcement, Anubis adopts a standard architecture: a client requests a resource from an API and, based on the defined policies, is either allowed or denied access to the resource. The figure below shows the current architecture.

                            β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                            β”‚   Policy     β”‚   3    β”‚    Policy    β”‚
                            β”‚   Decision   β”œβ”€β”€β”€β”€β”€β”€β”€β–Ίβ”‚Administrationβ”‚
                            β”‚   Point      β”‚        β”‚    Point     β”‚
                            β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                   β–²
                                 2 β”‚
                                   β”‚
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”        β”Œβ”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚              β”‚   1    β”‚   Policy    β”‚   4    β”‚   Protected   β”‚
    β”‚    Client    β”œβ”€β”€β”€β”€β”€β”€β”€β–Ίβ”‚ Enforcement β”œβ”€β”€β”€β”€β”€β”€β”€β–Ίβ”‚               β”‚
    β”‚              β”‚        β”‚    Point    β”‚        β”‚      API      β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  1. A client requests a resource via the Policy Enforcement Point (PEP), implemented using Envoy's ext_authz filter.
  2. The PEP passes the request on to the PDP (Policy Decision Point), provided by OPA, which evaluates a set of rules that apply the abstract policies to the specific API to be protected,
  3. in combination with the policies stored in the PAP (Policy Administration Point), provided by the Policy Management API.
  4. If the policy evaluation returns allowed, the request is forwarded to the Protected API.
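The four steps above can be sketched in plain Python (a minimal illustration only: all function and variable names are hypothetical, and in the real deployment the PEP is Envoy's ext_authz filter and the PDP is OPA evaluating rego rules):

```python
# Hypothetical sketch of the PEP -> PDP -> PAP enforcement flow.

def fetch_policies(actor, resource):
    """Step 3: the PDP combines the request with policies from the PAP."""
    # A real deployment queries the Policy Management API; here a
    # hard-coded table stands in for it.
    table = {
        ("alice", "urn:entity:x"): {"acl:Read"},
    }
    return table.get((actor, resource), set())

def pdp_decide(actor, action, resource):
    """Step 2: the PDP evaluates the request against the policies."""
    return action in fetch_policies(actor, resource)

def pep_handle(request):
    """Steps 1 and 4: the PEP forwards the request only if allowed."""
    allowed = pdp_decide(request["actor"], request["action"], request["resource"])
    return "forwarded to Protected API" if allowed else "403 Forbidden"

# alice holds acl:Read on urn:entity:x, so her request goes through;
# anyone else gets a 403.
print(pep_handle({"actor": "alice", "action": "acl:Read", "resource": "urn:entity:x"}))
print(pep_handle({"actor": "bob", "action": "acl:Read", "resource": "urn:entity:x"}))
```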

Policy Management

Anubis currently supports only Role-Based Access Control policies. Policies are stored in the Policy Management API, which supports translation to WAC and to a data input format supported by OPA, the engine that performs the policy evaluation.

At the time being, API-specific rules have been developed for the NGSIv2 Context Broker, the Anubis Management API, and JWT-based authentication. You can see the Orion rules in this rego file.

Policy Distribution

The policy distribution architecture relies on the libp2p middleware to distribute policies across different Policy Administration Points. The architecture decouples the PAP from the distribution middleware. This allows:

  • different PAPs to share the same distribution node.
  • deployments without the distribution functionality (and hence with a smaller footprint) when it is not required.

The distribution middleware is called Policy Distribution Point.

    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚   Policy     β”‚        β”‚    Policy    β”‚
    β”‚ Distribution │◄──────►│Administrationβ”‚
    β”‚   Point 1    β”‚        β”‚    Point 1   β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
           β–²
         2 β”‚
           β–Ό
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚   Policy     β”‚        β”‚    Policy    β”‚
    β”‚ Distribution │◄──────►│Administrationβ”‚
    β”‚   Point 2    β”‚        β”‚    Point 2   β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

There are two distribution modalities:

  • public, i.e. when the different middleware instances belong to different organisations on the public internet. In this case:

    • resources are considered to be uniquely identifiable (if they have the same id, they are the same resource);

    • only user-specific policies are distributed;

    • only resource-specific policies are distributed.

  • private, i.e. when the different middleware instances belong to the same organisation. In this case:

    • resources are considered to be uniquely identifiable only within the same service and service path;

    • all policies are distributed (including those for roles and groups, and * and default resource policies).

Policies

The formal policy specification is defined by the oc-acl vocabulary as an extension to Web Access Control. The internal representation is JSON-based; see the policy management api for details.

In general, a policy is defined by:

  • actor: The user, group or role, that is linked to the policy
  • action: The action allowed on this resource (e.g. acl:Read for GET requests)
  • resource: The urn of the resource being targeted (e.g. urn:entity:x)
  • resource_type: The type of the resource.
  • constraint (to be implemented): The constraint that has to be satisfied to authorize access.

The authorization rules currently in place support the following resource types:

  • entity: NGSI entity
  • entity_type: NGSI entity type
  • subscription: NGSI subscription
  • policy: A policy of the Anubis Management API (to allow users to have control over the policies that are created)

This can be extended by creating new authorisation rules, and setting up the necessary filters in the envoy configuration.

Additionally, in relation to FIWARE APIs, a policy may also include:

  • tenant: The tenant this permission falls under
  • service_path: The service path this permission falls under
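Putting the fields together, a policy granting read access to a single entity under a tenant might look as follows (a sketch of the JSON-based internal representation: the field names are the ones listed above, but the exact payload shape is an assumption, not the schema of the Policy Management API):

```python
import json

# Hypothetical policy document: user "alice" may read entity
# urn:entity:x under Tenant1, service path /.
policy = {
    "actor": "alice",
    "action": "acl:Read",          # e.g. maps to GET requests
    "resource": "urn:entity:x",
    "resource_type": "entity",
    "tenant": "Tenant1",
    "service_path": "/",
}

print(json.dumps(policy, indent=2))
```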

Authentication

Authentication per se is not covered by the PEP; the assumption is that the client authenticates beforehand and obtains a valid JWT token.

Currently, the PEP only verifies that the token is valid by checking its expiration.

Of course, more complex validations are possible. See OPA Docs for additional examples.

Currently, the token, when decoded, should contain:

  • The ID of the user making the request
  • The groups the user belongs to and their respective tenants
  • The roles the user has under their respective tenants
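For illustration, the expiration check and the expected payload can be sketched with the standard library only (the claim names "sub", "groups" and "roles" are assumptions for illustration; the actual claims depend on the Keycloak client configuration):

```python
import base64
import json
import time

# Hypothetical decoded payload of an incoming token. The claim names
# are assumptions; what matters is that the payload carries the user
# ID, the groups and roles per tenant, and the "exp" timestamp.
payload = {
    "sub": "alice",                      # ID of the user making the request
    "groups": {"Tenant1": ["group1"]},   # groups per tenant
    "roles": {"Tenant1": ["reader"]},    # roles per tenant
    "exp": int(time.time()) + 3600,      # expires in one hour
}

# Encode the payload the way it appears as the second JWT segment
# (base64url without padding) ...
segment = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")

def is_expired(segment: bytes) -> bool:
    """... then decode it and check "exp", the validation the PEP performs."""
    padded = segment + b"=" * (-len(segment) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims["exp"] < time.time()

print(is_expired(segment))  # False: the token is still valid
```

Note that a real verifier must also check the token's signature against the issuer's keys; this sketch covers only the expiration step described above.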

Demo

Requirements

To run this demo you'll need to have the following installed:

Deployment

To enable tenant creation in both Anubis and Keycloak, for obvious security reasons, the hostname of the token issuer (Keycloak) must be the same in the Docker services and in your browser. To ensure that, add the following entry to your /etc/hosts:

127.0.0.1       keycloak

NOTE: If you don't want to edit your /etc/hosts and you are not interested in testing tenant creation and deletion, in the .env file replace REACT_APP_OIDC_ISSUER=http://keycloak:8080/realms/default with REACT_APP_OIDC_ISSUER=http://localhost:8080/realms/default.

To deploy the demo that includes the Auth API, OPA, Keycloak, and a Context Broker, run the following script:

$ source .env
$ cd scripts
$ ./run_demo.sh

You can now login with username admin@mail.com and password admin.

You can run a script to make a few test API calls. You can run the test script with:

$ cd scripts
$ ./test_context_broker.sh

To clean up the deployment after you're done, run:

$ cd scripts
$ ./clean.sh

Demo for distributed policy management

To deploy the demo that includes two instances of the Auth API, two instances of the distribution middleware (plus as well OPA, Keycloak, and a Context Broker), run the following script:

$ cd scripts
$ ./run_demo_with_middleware.sh

You can run a script to make a few test API calls. You can run the test script with:

$ cd scripts
$ ./test_middleware.sh

To clean up the deployment after you're done, run:

$ cd scripts
$ ./clean.sh

Installation

Anubis is available as a Docker container and as a Python package.

Requirements to allow policy enforcement using Anubis (PAP) are:

An example docker compose file is provided in this repository that deploys all the dependencies and demonstrates how to protect an Orion Context Broker instance.

To install the python package:

$ pip install anubis-policy-api

This allows you to reuse the Anubis APIs in other projects as well.

Configuration

The following environment variables are used by the rego policy for configuration (see the docker-compose file):

  • AUTH_API_URI: Specifies the URI of the auth management API.
  • OPA_ENDPOINT: Specifies the URI of the OPA API.
  • VALID_ISSUERS: Specifies the valid issuers of the auth tokens (coming from Keycloak). This can be a list of issuers, separated by ;.
  • VALID_AUDIENCE: The valid aud value for token verification.

For the policy API, the following env variables are also available:

  • CORS_ALLOWED_ORIGINS: A ; separated list of the allowed CORS origins (e.g. http://localhost;http://localhost:3000)
  • CORS_ALLOWED_METHODS: A ; separated list of the allowed CORS methods (e.g. GET;POST;DELETE)
  • CORS_ALLOWED_HEADERS: A ; separated list of the allowed CORS headers (e.g. content-type;some-other-header)
  • DEFAULT_POLICIES_CONFIG_FILE: Specifies the path of the configuration file of the default policies to create upon tenant creation.
  • DEFAULT_WAC_CONFIG_FILE: Specifies the path of the configuration file of the wac serialization.
  • KEYCLOACK_ENABLED: Enables the creation of tenants also in Keycloak.
  • TENANT_ADMIN_ROLE_ID: Role id for tenant admins (you need to retrieve it from a running keycloak using a different template).
  • KEYCLOACK_ADMIN_ENDPOINT: The endpoint of the admin api of Keycloak.
  • DB_TYPE: The database type to be used by the API. Valid options for now are postgres and sqlite.
  • MIDDLEWARE_ENDPOINT: The endpoint of the policy distribution middleware (if None the policy distribution is disabled).

If postgres is the database being used, the following variables are available as well:

  • DB_HOST: The host for the database.
  • DB_USER: The user for the database.
  • DB_PASSWORD: The password for the database user.
  • DB_NAME: The name of the database.
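For instance, a minimal environment for a Postgres-backed deployment could look as follows (all values are illustrative placeholders, not defaults; adapt the hostnames and credentials to your own set-up):

```shell
# Illustrative .env values only — not defaults of the project.
AUTH_API_URI=http://policy-api:8000
OPA_ENDPOINT=http://opa:8181
VALID_ISSUERS=http://keycloak:8080/realms/default
VALID_AUDIENCE=anubis

DB_TYPE=postgres
DB_HOST=postgres
DB_USER=anubis
DB_PASSWORD=changeme
DB_NAME=anubis
```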

The policy distribution middleware is an add-on to the basic Anubis deployment. The following environment variables can be configured:

  • SERVER_PORT: The port where the middleware API is exposed.
  • ANUBIS_API_URI: The anubis management api instance linked to the middleware.
  • LISTEN_ADDRESS: The multiaddress format address the middleware listens on.
  • IS_PRIVATE_ORG: Sets the middleware modality to public or private.

For customizing the default policies that are created alongside a tenant, see the configuration file that's mounted as a volume in the policy-api service from the docker-compose file.

Similarly, a configuration file for the wac serialization is available to configure the prefixes and URIs of the various resource types in relation to tenants.

Load Testing

To measure the overhead introduced by Anubis, we developed a simple script. From the scripts folder, launch the demo set-up and then execute the load test script:

cd scripts
./run_demo.sh
./test_load.sh
Obtaining token from Keycloak...

Create urn:ngsi-ld:AirQualityObserved:demo entity in ServicePath / for Tenant1
===============================================================
PASSED

Run load test with Anubis in front of Orion
===============================================================
Requests      [total, rate, throughput]         1300, 130.11, 129.67
Duration      [total, attack, wait]             10.026s, 9.992s, 33.83ms
Latencies     [min, mean, 50, 90, 95, 99, max]  26.254ms, 37.653ms, 34.068ms, 49.952ms, 57.357ms, 84.527ms, 135.019ms
Bytes In      [total, mean]                     170300, 131.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:1300  
Error Set:

Run load test without Anubis in front of Orion
===============================================================
Requests      [total, rate, throughput]         1300, 130.10, 130.08
Duration      [total, attack, wait]             9.994s, 9.992s, 2.052ms
Latencies     [min, mean, 50, 90, 95, 99, max]  1.699ms, 2.324ms, 2.059ms, 2.462ms, 3.111ms, 10.234ms, 16.981ms
Bytes In      [total, mean]                     170300, 131.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:1300  
Error Set:

Delete urn:ngsi-ld:AirQualityObserved:demo entity in ServicePath / for Tenant1
===============================================================
PASSED

As of today, Anubis introduces an average overhead of ~35 ms to upstream service requests, and supports up to 130 rps. Tests were run in a Docker setup with 4 CPUs / 8 GB RAM on a MacBook Pro 14. Compared to the previous release, the authorization overhead improved ~1.9x and the rps improved 2.6x (mainly thanks to #14).

We can consider the overhead as composed of two factors:

  1. The communication between Envoy Proxy and OPA.
  2. The evaluation of policies in OPA.

We measured the approximate overhead introduced by the communication between Envoy and OPA by using an always-allow policy, the simplest possible policy in OPA. The resulting measurements (with the same configuration as above) are:

Requests      [total, rate, throughput]         1300, 130.11, 129.97
Duration      [total, attack, wait]             10.002s, 9.992s, 10.096ms
Latencies     [min, mean, 50, 90, 95, 99, max]  8.94ms, 15.034ms, 10.349ms, 13.437ms, 23.021ms, 159.123ms, 215.597ms
Bytes In      [total, mean]                     170300, 131.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:1300  
Error Set:

This basically means that the basic Envoy + OPA set-up introduces an overhead of ~13 ms, and that the rest of the overhead (~22 ms) is due to policy evaluation. Policies could probably be optimized further (cf. #196).
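The decomposition follows directly from the mean latencies reported above:

```python
# Mean latencies from the three runs above (milliseconds).
with_anubis = 37.653   # Envoy + OPA + full policies
allow_all   = 15.034   # Envoy + OPA + always-allow policy
baseline    = 2.324    # Orion alone, without Anubis

communication = allow_all - baseline       # Envoy <-> OPA round trip
evaluation    = with_anubis - allow_all    # actual policy evaluation

print(f"communication ~{communication:.1f} ms")  # ~12.7 ms
print(f"evaluation    ~{evaluation:.1f} ms")     # ~22.6 ms
```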

NOTE: OPA is written in Go, and policy evaluation performance may be heavily affected by the Go garbage collector. The GOGC variable can be used to configure the trade-off between GC CPU usage and memory.

To measure this approximate overhead yourself, comment out all policies in the docker-compose.yaml and uncomment the nop.rego policy:

ext_authz-opa-service:
  ...
  command:
    - run
    - --log-level=error
    - --server
    - --config-file=/opa.yaml
    - --diagnostic-addr=0.0.0.0:8282
    - --set=plugins.envoy_ext_authz_grpc.addr=:9002
    #- /etc/rego/common.rego
    #- /etc/rego/context_broker_policy.rego
    #- /etc/rego/anubis_management_api_policy.rego
    - /etc/rego/nop.rego
    - /etc/rego/audit.rego

Test rego

To test the rego policy locally:

  1. Install the opa client, e.g.:

    cd scripts
    curl -L -o opa https://openpolicyagent.org/downloads/v0.37.2/opa_linux_amd64_static
    chmod 755 ./opa
  2. Run:

    $ source .env
    $ ./test_rego.sh

Status and Roadmap

Release Notes provide a summary of implemented features and fixed bugs.

For additional planned features you can check the pending issues and their mapping to milestones.

Related repositories

Credits

Sponsors

  • Anubis received funding as part of the Cascade Funding mechanisms of the EC project DAPSI - GA 871498.