# Feature Flag - Configure Logging Demo
This demo shows how Optimizely Fullstack can be used to control the batch size of one (or several) logging services.
## Getting Started with this Example Application
- Clone this repo
- Navigate to the root of the repo
- Create and activate a virtual environment `venv`:

```shell
virtualenv venv
. venv/bin/activate
```
- Install the Optimizely Python SDK:

```shell
pip install optimizely-sdk
```
- To run a single logging service: `python run.py`
- To run multiple logging services: `python run_multiple.py`
## Getting Started with your Optimizely Fullstack Project
- Sign up for a free trial of Optimizely Fullstack here
- Follow all of the prompts
- Create your first Optimizely Fullstack Project, name it whatever you like
## Configure your Feature Flag
- Find your production SDK key under 'Settings'
- Update the SDK key on line 7 of `logging_service.py` (aside: move this to an environment variable if you plan on committing your example app)
- Create a Feature Flag: 'Features' > 'Create New Feature'
- Enter `log_batching` as the feature key
- Add a feature variable called `batch_size`:
  - Variable Key: `batch_size`
  - Type: Integer
  - Default Value: 20
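If you do move the SDK key out of the source as the aside above suggests, a minimal sketch could look like this (the `OPTIMIZELY_SDK_KEY` variable name and the `get_sdk_key` helper are assumptions for illustration, not part of the demo):

```python
import os


def get_sdk_key(default=None):
    """Read the Optimizely SDK key from the environment instead of
    hard-coding it in logging_service.py.

    OPTIMIZELY_SDK_KEY is a hypothetical variable name; use whatever
    fits your deployment setup.
    """
    return os.environ.get("OPTIMIZELY_SDK_KEY", default)
```

This keeps the key out of version control; `default` lets local runs fall back to a placeholder.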
## Toggle your Feature Flag On/Off
- Select 'Environment' as `Production`
- Toggle the feature on
- Roll it out to 100% of traffic
- See what happens :)
## Optional: Partial Rollout
- Roll out your feature to some percentage (try 50%) of traffic
- Maybe change your `batch_size` to 10 to speed up the demo
- Run `python run_multiple.py` to see what happens
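Behind a percentage rollout, each user ID is deterministically bucketed so the same ID always gets the same decision. A rough sketch of the idea (illustrative only; Optimizely's real bucketing algorithm uses MurmurHash and is more involved, and `in_rollout` is a hypothetical stand-in):

```python
import hashlib


def in_rollout(user_id: str, percentage: int) -> bool:
    """Return True if user_id falls inside the rollout percentage.

    Hashes the ID into one of 100 buckets; IDs below the cutoff are
    rolled in. Deterministic: the same ID always lands in the same
    bucket, so a 50% rollout is stable across runs.
    """
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percentage
```

This is why, at 50%, roughly half of your named services will see batching enabled, and the same half every time.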
## Under the Hood
### logging_service.py
A service that logs to the console, with Optimizely configured so that feature flags and variables can control the logging service's behavior.
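The batching behavior the flag controls can be sketched like this. This is a simplified stand-in, not the demo's actual implementation; in the real service the batch size would come from the `log_batching` feature's `batch_size` variable (e.g. via the classic Fullstack SDK's `get_feature_variable_integer`) rather than being passed in directly:

```python
class BatchingLogger:
    """Simplified sketch of a batching logging service.

    Messages are buffered and flushed in groups of batch_size, so a
    larger batch size means fewer, bigger flushes.
    """

    def __init__(self, batch_size=20):
        self.batch_size = batch_size
        self.buffer = []
        self.batches_flushed = 0

    def log(self, message):
        self.buffer.append(message)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            # Emit the whole batch at once, then reset the buffer.
            print(f"flushing {len(self.buffer)} log lines")
            self.buffer.clear()
            self.batches_flushed += 1
```

Toggling the flag or changing the variable then amounts to changing `batch_size` without redeploying the service.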
### services.py
Mock services that send fake logs to the logging service.
### run.py
Entrypoint for a simple 'one service' demo. Bootstraps one mock service called `app_server`.
### run_multiple.py
Entrypoint for running multiple services at once. This is useful for showing off targeting: enabling the feature for individual named services.
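The targeting idea can be sketched as follows: each service passes its own name as the user ID, so an audience rule can match individual services. The `decisions` dict and `configure_service` helper here are hypothetical stand-ins for the SDK calls noted in the comments (`is_feature_enabled` / `get_feature_variable_integer` from the classic Fullstack Python SDK):

```python
# Stand-in for per-service flag decisions. In the real demo this would be:
#   enabled = optimizely_client.is_feature_enabled('log_batching', name)
#   size = optimizely_client.get_feature_variable_integer(
#       'log_batching', 'batch_size', name)
def configure_service(name, decisions):
    """Return the batch size a named service should use."""
    enabled, size = decisions.get(name, (False, None))
    return size if enabled else 1  # 1 == effectively unbatched


# Only app_server is targeted by the (mock) audience rule.
decisions = {"app_server": (True, 10)}
sizes = {name: configure_service(name, decisions)
         for name in ["app_server", "web_worker"]}
```

Because the service name is the user ID, the same targeting UI you used for the rollout can single out `app_server` while leaving other services unbatched.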