Ludus is a gamification framework for GitHub and Trello. It can also be extended to other version control and project management tools such as GitLab, Jira, etc. The application currently ships with 11 events and 15 badges; you can design and contribute more events or badges as per your needs.
The Ludus project was completed as part of the Summer Internship program at Red Hat Inc.
According to the TalentLMS Gamification at Work Survey 2018, about 85% of employees said they would spend more time on gamified software. Our motivation behind Ludus is to bring a positive transformation to the software industry through the gamification of version control and project tracking tools.
These instructions will get you a copy of the project up and running on a live OpenShift Cluster.
You first need to set up Open Data Hub on an OpenShift cluster. For more information on the Open Data Hub architecture and installation, go here. You can also opt for a separate deployment of Kafka along with Elasticsearch, Logstash, and Kibana; refer to this tutorial.
Create a configuration file by copying the sample:
cp .env.sample .env
GITHUB_URL
: The URL of the forked GitHub repository of Ludus
KAFKA_TOPIC
: The name of the Kafka topic where the event listener will publish the incoming events
KAFKA_BOOTSTRAP_SERVER
: The hostname and port of the Kafka bootstrap server. The valid format is hostname:port
AWARDER_NAME
: The name of the awarder application. This should be unique per Kafka cluster. You can scale it to distribute event-processing load
AWARDER_PORT
: The port number of the awarder application
EVENTS_TABLE_NAME
: The table where the awarder will store the user's event data. This should be unique per Kafka cluster
BADGES_TABLE_NAME
: The table where the awarder will store all previously awarded badges for each user. This should be unique per Kafka cluster
ULTRAHOOK_API_KEY
: The API key unique to each UltraHook account
ULTRAHOOK_SUBDOMAIN
: A subdomain of your namespace
ULTRAHOOK_DESTINATION
: The hostname of the 'event-listener-route' Route on the OpenShift cluster
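For reference, a filled-in .env might look like the following. Every value below is a placeholder; substitute the details of your own cluster and accounts.

```
GITHUB_URL=https://github.com/<your-username>/ludus
KAFKA_TOPIC=ludus-events
KAFKA_BOOTSTRAP_SERVER=my-kafka.example.com:9092
AWARDER_NAME=ludus-awarder-1
AWARDER_PORT=8080
EVENTS_TABLE_NAME=ludus-events-1
BADGES_TABLE_NAME=ludus-badges-1
ULTRAHOOK_API_KEY=<your-ultrahook-api-key>
ULTRAHOOK_SUBDOMAIN=redhat
ULTRAHOOK_DESTINATION=event-listener-route-ludus.apps.example.com
```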
Deploy the Event Listener and the Awarder applications:
make deploy_event_listener
make deploy_awarder
If the Event Listener application is not behind a firewall, the hostname of the 'event-listener-route' Route on the OpenShift cluster will be the LUDUS_URL. This can be used to configure the webhooks.

If the Event Listener application is behind a firewall, we need to configure UltraHook to receive webhooks behind the firewall. Register and get your ULTRAHOOK_API_KEY here. Please remember the WEBHOOK_NAMESPACE; it will be unique to your UltraHook account.
To deploy Ultrahook on an OpenShift cluster use the following command with required parameters:
make deploy_ultrahook
If you registered your account with 'ludus.ultrahook.com' as your WEBHOOK_NAMESPACE and later deployed UltraHook with ULTRAHOOK_SUBDOMAIN set to 'redhat', your LUDUS_URL will be 'http://redhat.ludus.ultrahook.com'.
To set up a GitHub webhook, go to the settings page of your repository or organization. From there, click Webhooks, then Add webhook. Now enter/configure the following details:
Payload URL
: LUDUS_URL
Content type
: application/json
Which events would you like to trigger this webhook?
: Send me everything
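Once the webhook is saved, you can sanity-check the delivery path by posting a minimal payload to the LUDUS_URL yourself. The sketch below uses the requests library; the URL and the stripped-down payload are illustrative stand-ins for a real GitHub comment event.

```python
# Illustrative only: simulate a GitHub 'issue_comment' webhook delivery.
import requests

ludus_url = "http://redhat.ludus.ultrahook.com"  # replace with your LUDUS_URL
payload = {
    "sender": {"login": "octocat"},
    "comment": {"html_url": "https://github.com/org/repo/issues/1#issuecomment-1"},
}

resp = requests.post(
    ludus_url,
    json=payload,  # sends Content-Type: application/json, as configured above
    headers={"X-GitHub-Event": "issue_comment"},  # header GitHub sets on real deliveries
)
print(resp.status_code)  # the event listener should accept the request
```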
Get a TRELLO_API_KEY from here and a TRELLO_TOKEN by following the link to "manually generate a Token" on the same page. Put both of these into the .env file, along with your TRELLO_ORG_ID.
Then run:
make setup_trello
Before adding new events and badges, we need to fork this repository. Once you push the newly added events and badges, you can use the URL of the forked repository as the GITHUB_URL while deploying your application.
To configure a new event follow the steps given below:
- Create a schema for your new event using jsonschema, put it in a Python file, and add this file to the 'validators' directory. A sample validator file for the GitHub Comment event is given below:
schema = {
    "type": "object",
    "properties": {
        "comment": {
            "type": "object",
        }
    },
    "required": ["comment"]
}
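Before wiring the schema in, you can check it against a sample payload with the jsonschema library. This is a minimal sketch; the sample event below is a hypothetical, stripped-down GitHub comment payload.

```python
# Minimal sketch: validate a sample event against the schema above.
from jsonschema import ValidationError, validate

from validators import github_comment_validator  # the validator file created above

sample_event = {"comment": {"html_url": "https://github.com/org/repo/issues/1#issuecomment-1"}}

try:
    validate(instance=sample_event, schema=github_comment_validator.schema)
    print("payload matches the schema")
except ValidationError as err:
    print(f"payload rejected: {err.message}")
```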
- Create a Jinja template for your new event that formats the JSON event payload. Add this file to the 'formatters' directory. A sample formatter template for the GitHub Comment event is given below:
{
    "username": "{{ event['sender']['login'] }}",
    "timestamp": "{{ timestamp }}",
    "event_source": "github",
    "event_url": "{{ event['comment']['html_url'] }}",
    "event_type": "github_comment",
    "raw_github": {{ json_event }}
}
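To preview what the formatter emits, you can render the template by hand with jinja2. The variable names (event, timestamp, json_event) match the template above; the file name and sample values are assumptions for illustration.

```python
# Minimal sketch: render the formatter template with sample data.
import json

from jinja2 import Template

with open("formatters/github_comment_formatter") as f:  # assumed file name
    template = Template(f.read())

event = {
    "sender": {"login": "octocat"},
    "comment": {"html_url": "https://github.com/org/repo/issues/1#issuecomment-1"},
}

rendered = template.render(
    event=event,
    timestamp="2019-07-01T12:00:00Z",
    json_event=json.dumps(event),
)
print(json.loads(rendered)["event_type"])  # -> "github_comment"
```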
- Add this new event to the event_configuration.py file in the configs directory. You also have to add the validator's schema object and the name of the formatter template to it. A sample configuration for the GitHub Comment event is given below:
'github_comment': {
    'validator': github_comment_validator.schema,
    'formatter': 'github_comment_formatter'
}
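In context, the new entry sits alongside the existing events in the configuration dictionary. The dictionary name and import path below are assumptions based on the directory layout described above.

```python
# Sketch of configs/event_configuration.py with the new event added.
from validators import github_comment_validator

event_configuration = {
    # ... existing events ...
    'github_comment': {
        'validator': github_comment_validator.schema,
        'formatter': 'github_comment_formatter'
    },
}
```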
- Currently you can configure badges with 3 different types of criteria:
The every_event criteria is used when you want to award a badge on every occurrence of the event associated with the badge. A sample configuration for a badge with this criteria is given below:
'finisher': {
    'description': 'awarded for moving a card in the completed list',
    'event_type': 'task_completed',
    'criteria': {
        'type': 'every_event'
    },
    'image_file': None
}
description
: General information about the badge
event_type
: Type of the event for which the badge will be awarded
criteria.type
: Type of the criteria for awarding the badge. Here the criteria is to award the badge on every occurrence of event_type
image_file
: Path of an image associated with the badge. Not supported yet
The count criteria is used when you want to award a badge after a certain number of events associated with the badge have occurred. This badge is awarded only once per user. A sample configuration for a badge with this criteria is given below:
'first-github-comment': {
    'description': 'awarded for first github comment',
    'event_type': 'github_comment',
    'criteria': {
        'type': 'count',
        'value': 1
    },
    'image_file': None
}
description
: General information about the badge
event_type
: Type of the event for which the badge will be awarded
criteria.type
: Type of the criteria for awarding the badge. Here the criteria is to award the badge when the count of event_type for a particular user reaches criteria.value
criteria.value
: A count value that satisfies this criteria
image_file
: Path of an image associated with the badge. Not supported yet
The match criteria is used when you want to award a badge when certain events occur that are matched on a field in the event's JSON. This field's name can be different for every event, but its content is the same. A sample configuration for a homerun badge with this criteria is given below. It is awarded when a user creates an issue on GitHub, creates a pull request for the issue, and gets it reviewed and merged. The matching field here is the issue number:
'homerun': {
    'description': 'awarded for opening an issue, creating pull request, closing the issue',
    'criteria': {
        'type': 'match',
        'matching_events': [
            {
                'event_type': 'issue_closed',
                'field': 'raw_github.issue.number'
            },
            {
                'event_type': 'pull_request',
                'field': 'issue_closes'
            },
            {
                'event_type': 'issue',
                'field': 'raw_github.issue.number'
            }
        ]
    },
    'image_file': None
}
description
: General information about the badge
criteria.type
: Type of the criteria for awarding the badge. Here the criteria is to award the badge when certain events occur that are matched on criteria.matching_events.field in the event's JSON
criteria.matching_events.event_type
: Type of the event to be matched
criteria.matching_events.field
: The name of the field in the event's JSON to be matched
image_file
: Path of an image associated with the badge. Not supported yet
- You can create a badge with one of the above criteria and put it in the badge_configuration.py file in the configs directory, as sketched below
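For example, registering the count-based badge from earlier could look like this; the dictionary name badge_configuration is an assumption based on the file name.

```python
# Sketch of configs/badge_configuration.py with a new badge registered.
badge_configuration = {
    # ... existing badges ...
    'first-github-comment': {
        'description': 'awarded for first github comment',
        'event_type': 'github_comment',
        'criteria': {
            'type': 'count',
            'value': 1
        },
        'image_file': None
    },
}
```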
You need to create your own Kibana dashboard using the event and badge data stored in the Elasticsearch index. Please refer to this tutorial for the same. The following dashboard can be created by importing docs/kibana_dashboard.json
- Python - interpreted, high-level, general-purpose programming language
- Flask - web framework written in Python
- Kafka - distributed streaming platform
- Faust - stream processing library, porting the ideas from Kafka Streams to Python
- Elasticsearch - search engine based on the Lucene library
- Kibana - data visualization plugin for Elasticsearch
- Docker - tool to create, deploy, and run applications by using containers
- OpenShift - Kubernetes-based container orchestration platform
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details