topological_inventory-mock_source

Mock collector for the Topological Inventory service

It implements a collector with mock (generated) data, so it acts as both collector and server.
The collector sends data through a Kafka messaging server to the Topological Inventory Persister service, which parses the received data and saves it as an inventory in the database.

Usage:

Start the collector:

  • bin/mock-collector --source=<source> [--config=<type>] [--data=<type>]
  • bin/mock-collector --source=<source> [--config=<type>] [--data=openshift/<type>]
  • bin/mock-collector --source=<source> [--config=<type>] [--data=amazon/<type>]

@param source (ENV["SOURCE_UID"]) - the sources.uid from the topological_inventory DB (service https://github.com/ManageIQ/sources-api)

@param ingress_api (ENV["INGRESS_API"]) - URL to Ingress API service

  • default: "localhost:9292"
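The parameters above can be supplied either as CLI options or as environment variables. A minimal sketch of reading them from ENV with the documented defaults (the helper name is hypothetical; the variable names come from this README):

```ruby
# Sketch: collect the collector's parameters from ENV.
# SOURCE_UID is required (no default); INGRESS_API falls back to the
# documented default "localhost:9292"; CONFIG and DATA default to "default".
def collector_params
  {
    :source      => ENV.fetch("SOURCE_UID"),                    # required
    :ingress_api => ENV.fetch("INGRESS_API", "localhost:9292"),
    :config      => ENV.fetch("CONFIG", "default"),             # optional
    :data        => ENV.fetch("DATA", "default")                # optional
  }
end

ENV["SOURCE_UID"] = "31b5338b-685d-4056-ba39-d00b4d7f19cc"
puts collector_params[:ingress_api]
```

ENV.fetch raises a KeyError when SOURCE_UID is missing, which makes a forgotten required parameter fail fast.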

@param config (ENV["CONFIG"]) [optional] - name of a YAML file in the /config dir (without the ".yml" extension)

  • default (default value) - multithreading + full refresh + events
  • simple - single threading + full refresh (no events)

@param data (ENV["DATA"]) [optional] - name of a YAML file in /config/data/ (without the ".yml" extension).
It contains the numbers of inventory collections to be generated by a full refresh, and can also contain definitions of custom data values. Possible values:

  • default (default value)
  • small
  • large
  • openshift/default
  • amazon/default
  • ...
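A data file of this kind could look like the sketch below. The keys shown here are made up for illustration (loaded from an inline string); the real key names are documented in the files under /config/data:

```ruby
require "yaml"

# Hypothetical sketch of a /config/data YAML file: amounts of inventory
# collections to generate during a full refresh. Real key names live in
# the repository's config/data/*.yml files.
data_yaml = <<~YAML
  amounts:
    container_projects: 2
    container_groups: 10
    vms: 5
YAML

data = YAML.safe_load(data_yaml)
puts data["amounts"]["vms"]
```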

Example:

bin/mock-collector --source=31b5338b-685d-4056-ba39-d00b4d7f19cc --config=simple --data=small

Note: a Source is the manager for this provider (like ExtManagementSystem in ManageIQ)

The collector has two modes; both can be switched on/off in the config file (described later):

  • Full refresh
  • Event generator

Option in config: refresh_mode

OpenShift deployment

Full refresh

A full refresh generates a specified amount of each entity type (amounts come from the data config file) and sends them through the Swagger API to Kafka.

Implemented entity types:

          :container_images       => %i[container_image_tags],
          :container_groups       => %i[containers
                                        container_group_tags],
          :container_projects     => %i[container_project_tags],
          :container_nodes        => %i[container_node_tags],
          :container_templates    => %i[container_template_tags],
          :flavors                => nil,
          :service_instances      => nil,
          :service_offerings      => %i[service_offering_tags],
          :service_offering_icons => nil,
          :service_plans          => nil,
          :source_regions         => nil,
          :vms                    => %i[vm_tags],
          :volumes                => %i[volume_attachments],
          :volume_types           => nil

Hash values are subcollections; the generated amount of each subcollection equals the number of entities in its parent collection.
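The parent/subcollection relationship can be sketched as follows (a simplified model with a trimmed entity-type hash; the amounts hash stands in for the data config file):

```ruby
# Sketch: for each parent collection, look up its configured amount; each
# listed subcollection gets the same amount as its parent collection.
ENTITY_TYPES = {
  :container_projects => %i[container_project_tags],
  :vms                => %i[vm_tags],
  :volumes            => %i[volume_attachments],
  :flavors            => nil
}.freeze

def generated_counts(amounts)
  ENTITY_TYPES.each_with_object({}) do |(parent, subcollections), counts|
    counts[parent] = amounts.fetch(parent, 0)
    (subcollections || []).each do |sub|
      # Subcollection amount equals the number of parent entities.
      counts[sub] = counts[parent]
    end
  end
end

puts generated_counts(:vms => 5, :flavors => 3)
```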

Available options in config:

  • amounts/* - the amount of each entity type to generate
  • default_limit - batch size for data sent in one shot
  • multithreading - whether entities of different types are generated in threads or sequentially
  • full_refresh/repeats_count - number of sequential full refreshes (each should generate the same data in the DB)
  • full_refresh/send_order - whether entities of one type are generated after or before their references (links) exist in the DB
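The multithreading option boils down to a choice like the one sketched below (the generator is a placeholder stand-in, not the collector's real code):

```ruby
# Placeholder for the real per-type generator.
def generate(type)
  "generated #{type}"
end

# Sketch of the multithreading option: each entity type is generated in
# its own thread, or all types sequentially, depending on the flag.
def generate_all(entity_types, multithreading: true)
  if multithreading
    entity_types.map { |type| Thread.new { generate(type) } }.map(&:value)
  else
    entity_types.map { |type| generate(type) }
  end
end

puts generate_all(%i[vms volumes], multithreading: true)
```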

Event generator

Runs for a specified number of intervals and generates events of these types:

  • add, modify, delete

An event contains information about an entity in the same structure as an OpenShift server sends. Like entities in a full refresh, events are sent through the Swagger API to Kafka.

At this moment, events can contain only Pods.

Available options in config:

  • events/check_interval - number of seconds between ticks
  • events/checks_count - number of ticks
  • events/per_check/add - number of generated Add events per tick
  • events/per_check/modify - number of generated Modify events per tick
  • events/per_check/delete - number of generated Delete events per tick
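Those options describe a simple tick loop. A sketch of how they could drive the generator (the config keys mirror this README; the emit step is a placeholder):

```ruby
# Sketch of the event generator loop: checks_count ticks, check_interval
# seconds apart, emitting the configured number of add/modify/delete
# events per tick.
def run_event_generator(config, &emit)
  config[:checks_count].times do |tick|
    config[:per_check].each do |operation, count|   # :add, :modify, :delete
      count.times { emit.call(operation, tick) }
    end
    sleep(config[:check_interval]) unless tick == config[:checks_count] - 1
  end
end

events = []
run_event_generator(
  { :checks_count => 2, :check_interval => 0,
    :per_check => { :add => 2, :modify => 1, :delete => 1 } }
) { |op, tick| events << [op, tick] }
puts events.count { |op, _| op == :add }
```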

Add events contain Pods with incrementally generated IDs, starting from the full-refresh final state.
Modify events always modify the first active (non-archived) entity, changing the entity's resourceVersion.
Delete events start by archiving the Pod with ID == 0 and increment the ID as long as some active Pod remains.
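The three rules can be modeled with a simplified in-memory sketch (the Pod struct and class are illustrative stand-ins, not the collector's actual types):

```ruby
# Simplified model of the event rules: Add creates the next Pod ID after
# the full-refresh final state, Modify bumps resourceVersion of the first
# active Pod, Delete archives Pods starting from ID 0.
Pod = Struct.new(:id, :resource_version, :archived)

class PodEvents
  attr_reader :pods

  def initialize(full_refresh_count)
    @pods = Array.new(full_refresh_count) { |i| Pod.new(i, 1, false) }
  end

  def add
    @pods << Pod.new(@pods.size, 1, false)
  end

  def modify
    pod = @pods.find { |p| !p.archived }
    pod.resource_version += 1 if pod
  end

  def delete
    pod = @pods.find { |p| !p.archived }   # lowest active ID, starting at 0
    pod.archived = true if pod
  end
end
```

For example, after a full refresh of 2 Pods, add creates Pod 2, delete archives Pod 0, and a subsequent modify bumps Pod 1's resourceVersion.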

Config

All values are documented in the YAML files located under the config folder.

Development files

The following local files can be created:

  • bundler.d/Gemfile.dev.rb - specification of local gems
  • lib/require.dev.rb - specification of local requires