A tool for reading data from a third party via their API and publishing it to a broker.
|> SlotSync.Runner
  |> every X minutes
|> SlotSync.WIW
  |> Y days worth of shifts either side of today
  |> get slots on WIW
|> SlotSync.Dispatcher
  |> each shift
|> SlotSync.Processor.Shift (GenServer)
  |> if it matches the Redis cache then skip (the cache expires each slot after 1 week)
  |> if not, save it in Redis and publish
|> SlotSync.Publishers.Kafka
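The Dispatcher/Processor step above can be sketched as a pure function. This is an illustrative sketch, not SlotSync's actual API: the module and function names are hypothetical, and a plain map stands in for the Redis cache.

```elixir
defmodule SlotSyncSketch do
  # Hypothetical sketch of the per-shift decision: a shift is published
  # only when it is new or has changed since the last sync; the cache is
  # updated with the latest payload. A map stands in for Redis here.
  def sync(shifts, cache) do
    {to_publish, cache} =
      Enum.reduce(shifts, {[], cache}, fn shift, {acc, cache} ->
        case Map.get(cache, shift.id) do
          # cached copy matches exactly: nothing to publish
          ^shift -> {acc, cache}
          # new or changed: cache it and mark it for publishing
          _ -> {[shift | acc], Map.put(cache, shift.id, shift)}
        end
      end)

    {Enum.reverse(to_publish), cache}
  end
end
```

Only the second shift below differs from its cached copy, so only it would be handed to the Kafka publisher.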
The following environment variables can be set; some are required.
SYNC_DAYS_AHEAD
- default: 1 - How many days ahead of today to get data for

SYNC_DAYS_PRIOR
- default: 0 - How many days before today to get data for

SYNC_SLEEP_PERIOD_SECONDS
- default: 10 - How long to wait before the next sync
- NOTE: The wait is measured from when a sync finishes, so this is not a strict fixed-interval loop
- E.g. if a sync takes 2 seconds, with the default of 10 it will sync every 12 seconds
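SYNC_DAYS_PRIOR and SYNC_DAYS_AHEAD together define the window of shifts to fetch. A minimal sketch of how they could translate into a date range, with the documented defaults (the module and function names are illustrative, not SlotSync's actual code):

```elixir
defmodule SyncWindow do
  # Build the date window from days-prior/days-ahead counts.
  def range(today, days_prior, days_ahead) do
    Date.range(Date.add(today, -days_prior), Date.add(today, days_ahead))
  end

  # Read the two variables, falling back to the documented defaults.
  def from_env(today \\ Date.utc_today()) do
    range(today, env_int("SYNC_DAYS_PRIOR", 0), env_int("SYNC_DAYS_AHEAD", 1))
  end

  defp env_int(name, default) do
    case System.get_env(name) do
      nil -> default
      val -> String.to_integer(val)
    end
  end
end
```

With the defaults (0 prior, 1 ahead) the window spans today and tomorrow: two days.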
WIW_KEY
- The key to use in the header for WIW API requests
KAFKA_BROKERS
- Required - A list of Kafka hosts to publish data to

KAFKA_TOPIC_NAME
- Required - The name of the topic to publish each shift payload to
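A sketch of turning KAFKA_BROKERS into the `{host, port}` tuples many Elixir Kafka clients expect. The comma-separated `host:port` format is an assumption on my part, not something this README documents:

```elixir
defmodule KafkaEnv do
  # Hypothetical parser: "kafka1:9092,kafka2:9092" -> [{"kafka1", 9092}, ...].
  # The comma-separated host:port format is assumed, not documented.
  def parse_brokers(value) do
    value
    |> String.split(",", trim: true)
    |> Enum.map(fn entry ->
      [host, port] = String.split(entry, ":")
      {host, String.to_integer(port)}
    end)
  end
end
```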
You might already have the Kafka schemas set up, but if not, ktsllex
can perform the schema "migrations" for you.
See ktsllex (https://github.com/quiqupltd/ktsllex/) for more info.
To encode messages into an Avro schema we use event_serializer
(https://github.com/quiqupltd/event_serializer)
AVLIZER_CONFLUENT_SCHEMAREGISTRY_URL
- Required
Each shift downloaded is cached in Redis to compare against on the next sync run.
REDIS_HOST
- default: "redis://localhost:6379"

EXPIRE_CACHE
- default: 604_800 seconds (1 week)
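The cache writes can be pictured as Redis SET commands with an expiry. This sketch only builds the command lists (in the shape that `Redix.command/2` accepts); the `shift:<id>` key scheme is an assumption, while 604_800 matches the documented EXPIRE_CACHE default:

```elixir
defmodule ShiftCache do
  # Hypothetical command builders for the shift cache. Key naming is
  # assumed; the TTL default mirrors EXPIRE_CACHE (604_800 s = 1 week).
  @default_ttl 604_800

  def set_command(shift_id, payload, ttl \\ @default_ttl) do
    ["SET", "shift:#{shift_id}", payload, "EX", Integer.to_string(ttl)]
  end

  def get_command(shift_id), do: ["GET", "shift:#{shift_id}"]
end
```

On each sync run the Processor would issue the GET to compare against the cached payload, and the SET (refreshing the one-week expiry) whenever a shift is new or changed.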
Each sync and publish success or failure event is emitted to Datadog.
STATSD_HOST
- Required

STATSD_PORT
- default: 8125
Copyright (c) 2018 Quiqup LTD, MIT License. See LICENSE.txt for further details.