Solver for kidney pair donation matching problems.
This is a tool that allows a kidney pair donation centre to find the best possible matching from a pool of patients.
It consists of a backend written in Python and an Angular frontend.
Here is the board containing the next development plan: https://github.com/orgs/mild-blue/projects/13
The project runs on macOS, Linux and Windows. To run it on Windows you need Docker and WSL2, as we are using the graph-tool package, which does not support Windows.
To build the frontend for the app, run `make build-fe`. If it does not work, you might have some dependencies missing; for details see README.md. You must do this every time something changes in the FE code in order to have an up-to-date FE. The command builds the FE code into the `txmatching/web/frontend/dist/frontend` directory, where it is picked up by Flask.
In case npm can't find some of the packages and you get an error like `ENOENT: no such file or directory, chmod '.../node_modules/...'`, try removing the `node_modules` folder from `txmatching/web/frontend/`, running `npm cache clean -f`, and then running the command again.
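A minimal sketch of that recovery sequence, run from the repository root:

```bash
rm -rf txmatching/web/frontend/node_modules  # remove the stale node_modules folder
npm cache clean -f                           # force-clean the npm cache
make build-fe                                # run the build again
```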
The backend is written in Python. We are using conda for dependency management, so to run the project you must have it installed.
If you are using macOS on an ARM-based computer, make sure that you install anaconda for the x86_64 architecture using Rosetta (some packages are still not built for ARM).
Once conda is ready and set up, execute `make conda-create`, which creates the conda environment for you. Finally, activate the environment with `conda activate txmatching`.
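The two steps together:

```bash
make conda-create          # create the conda environment
conda activate txmatching  # activate it
```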
For PDF generation, wkhtmltopdf is required to be installed. On macOS it can be installed via brew (`brew install wkhtmltopdf`); on Linux it is a bit more complicated:
```bash
sudo apt update
sudo apt install wget xfonts-75dpi
cd /tmp
# Newer versions of Ubuntu (e.g. 22.04 LTS) ship a newer libssl that is not
# compatible with wkhtmltopdf, so you may need to install libssl1.1 manually:
wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl-dev_1.1.1f-1ubuntu2.16_amd64.deb
wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
sudo dpkg -i libssl-dev_1.1.1f-1ubuntu2.16_amd64.deb
sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2.16_amd64.deb
# Download and install wkhtmltopdf (choose the version you want to install):
wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.focal_amd64.deb
sudo dpkg -i wkhtmltox_0.12.6-1.focal_amd64.deb
# Check that it worked:
wkhtmltopdf --version
```
You need to have Docker installed and the environment from the previous step activated. After that, simply run `make setup-small-non-empty-db`, or `make setup-non-empty-db` for a larger one. This starts a postgres database in Docker that already has some data inside.
If you want to remove the majority of patients with errors that often do not occur in real data, use the script `remove_inconsistent_patients.sql`.
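For example, starting from the activated environment:

```bash
conda activate txmatching      # the environment from the previous step
make setup-small-non-empty-db  # postgres in Docker with a small dataset
# or, for a larger dataset:
make setup-non-empty-db
```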
Simply run `make run`. This should start the app at localhost:8080. To log in, use the credentials admin@example.com with password admin.
We have a Swagger UI running on the `/doc/` route (so, for example, localhost:8080/doc/). How to use it, and some other useful info, can be found in the docs.
The swagger spec is also in the project; it is generated into `txmatching/web/swagger.yaml`. We always test that it is up to date, and in case any changes are made, one needs to regenerate it using `local_testing_utilities/generate-swagger.py` by running the `make generate-swagger-file` command.
We also automatically generate TypeScript files that are used by the FE. These files are generated from the swagger file using the `openapi-generator-cli` tool; to install it, please refer to README.md.
You can generate both the swagger file and the TS files by running `make generate-swagger-all`.
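The two make targets side by side:

```bash
make generate-swagger-file  # regenerate txmatching/web/swagger.yaml only
make generate-swagger-all   # regenerate the swagger file and the FE TypeScript files
```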
You should be able to create a user with some rights to some events and log in with the `/v1/user/login` endpoint. You should also be able to create a txm event with duplicate patients from another event via the swagger endpoints.
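A hypothetical login call against the local instance (the JSON field names are an assumption; check the exact schema in the Swagger UI at `/doc/`):

```bash
# Field names in the payload are assumed, not confirmed by this doc:
curl -X POST http://localhost:8080/v1/user/login \
  -H 'Content-Type: application/json' \
  -d '{"email": "admin@example.com", "password": "admin"}'
```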
We are using conda for managing dependencies, as `graph-tool` can be installed only from the `conda-forge` repository.
To add a new package, put `<package>` into `conda.yml` and execute `make conda-update`.
Please pin the specific version that you put into `conda.yml`, in order to guarantee that the whole team has the same versions of packages.
Put the package into the `dependencies` part of the yaml (those are the packages installed from the conda repository); if the package is not in the conda repo, put it into the `pip` part of the yaml.
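A minimal sketch of what that looks like (the package names and versions below are illustrative, not actual project dependencies):

```bash
# conda.yml (illustrative excerpt):
#   dependencies:
#     - numpy=1.24.3               # installed from the conda repository
#     - pip:
#       - some-pip-only-pkg==1.2.3 # not in the conda repo, so it goes under pip
# After editing conda.yml, update the environment:
make conda-update
```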
Note that before a PR with new dependencies is submitted, one must build and publish a new version of the `mildblue/txmatching-conda-dependencies` docker image. To do that, go to the `dockerbase` directory, log in to the container registry, and see further information in the related README.
When someone updates the dependencies and you pull the new version from git, run `make conda-update`.
There are some git hooks in this project, and it is advised to use them. After installing the dependencies in conda, it should be enough to run `pre-commit install`.
Then, every time you commit, the commit first fails if the pre-commit script made some changes; if you commit again, it will pass, because all the needed changes have already been made. This is especially so that we do not push notebooks containing data to git.
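A sketch of the typical flow (depending on your setup, you may need to stage the hooks' fixes before re-committing):

```bash
pre-commit install         # one-time installation of the git hooks
git commit -m "my change"  # may fail if the hooks modified files
git add -u                 # stage the fixes the hooks made
git commit -m "my change"  # passes now
```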
Right now the Flask web server tries to load configuration from the environment, with a fallback to loading from `local_config.py`. All current configuration can be found there.
To obtain the configuration in code, call `get_application_configuration()` from `application_configuration.py`.
To run the unit tests, use the `make test` command. The `make check` command runs the linter and the unit tests altogether.
We can set the logging directory in the .env file using the `LOGS_FOLDER` variable; otherwise, the logs will be collected in the `../logs` folder.
We also print logs with DEBUG and INFO levels to stdout and with WARNING, ERROR and CRITICAL levels to stderr.
We log every user action on the API endpoints, but you can disable this in the .env file by setting `SHOW_USERS_ACTIONS` to `false` (by default it is `true`).
It is also possible to activate logging of SQL queries (set the `LOG_QUERIES` environment variable to `true`; by default it is `false`).
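For example, a .env excerpt with these options (the LOGS_FOLDER value is illustrative; the other values are the documented defaults):

```bash
LOGS_FOLDER=./logs       # where log files are collected (falls back to ../logs)
SHOW_USERS_ACTIONS=true  # log every user action on the API endpoints
LOG_QUERIES=false        # set to true to also log SQL queries
```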
Please pay special attention to the confidential information of our users when creating or modifying endpoints. Currently, we have some automation that hides sensitive user data: if an incoming argument key contains 'password' in its name, the corresponding value is replaced with '********'. This is not perfect protection, so you may improve it if needed.
The logger configuration is located in `logging_config.py`; all filters, handlers and formatters are configured there.
You can interact with some of them through environment variables in .env; for example, you can disable colourful output in the terminal, SQL duration logging, and logger debug mode.
One of the most important options is `PRODUCTION_LOGGER`, which activates the JSON formatter. Logging this way in production is really useful, because we can easily filter all logs in real time.
P.S. The timezone in the logs is UTC+00:00.
Thanks to storing logs in JSON format on production and staging, you can easily work with them using jq.
- Connect to the staging or production machine according to the instructions in project-configuration.
- Get logs from the backend docker container with jq. Some useful cases:
  - Get all information: `docker logs be -f --tail 10 2>&1 | jq '.'`
  - Get only the user level, email, message and arguments: `docker logs be -f --tail 10 2>&1 | jq '. | {level: .levelname, user_email: .user_email, message: .message, args: .arguments}'`
  - Get only logs that have an authorized user: `docker logs be -f --tail 10 2>&1 | jq '. | select(.user_id != null)'`

You can also combine these methods and adapt them to fit your own situation.
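For instance, combining the selection and the projection from the examples above:

```bash
docker logs be -f --tail 10 2>&1 | jq 'select(.user_id != null) | {level: .levelname, user_email: .user_email, message: .message}'
```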
While testing, we disable logs up to a certain level, which is configurable in .env. Because of this design choice, you are not able to use `assertLogs` in unit tests.
Nevertheless, if you want to use this assertion, just set `LOGGING_DISABLE_LEVEL_FOR_TESTING` in .env.local and .env.pub to `NOTSET`.
If you find yourself needing this, it is probably time to find another design choice for logging while testing; discuss it with the development team.
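The relevant setting, with the same value in both files:

```bash
# In .env.local and .env.pub: allow assertLogs in unit tests
LOGGING_DISABLE_LEVEL_FOR_TESTING=NOTSET
```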
We are using Authentik for authentication; you can find more information about it here.
After you run `docker-compose up`, you must set up Authentik. You will do this only once, and it might take some time (a minute or two) before Authentik is ready.
- Go to http://localhost:9000/if/flow/initial-setup/
- Pick email and password
- Go to Admin interface
- Click on Applications / Providers
- Click Create
- Select OAuth2/OpenID Provider
- Input TXM as the name (it can be anything; it's used only as a label in Authentik)
- Client ID: copy from env.
- Client Secret: copy from env.
- Redirect URI: http://localhost:8080/v1/user/authentik-login
- Click on Applications
- Create new application
- Set the name as TXM and the slug as txm (they can be anything; they're used only as labels in Authentik)
- From the providers, select TXM (the provider you created in the previous steps)
- Scroll down and click on Create
You can test it by going to http://localhost:9000/application/o/authorize/?client_id=f5c6b6a72ff4f7bbdde383a26bdac192b2200707&response_type=code&redirect_uri=http://localhost:8080/v1/user/authentik-login&scope=null. In the future, the middleware should redirect you there.
A crossmatch happens when a recipient's antibody matches a donor's antigen. It means either that the transplant is completely forbidden, or that it is allowed but further tests are needed, in which case a warning is shown. There are six types of crossmatches; you will be able to read about them in the future in the TXM knowledge base.
To generate patients with crossmatch warnings, set the parameter CROSSMATCH to True in the `__main__` function in `generate_patients.py`.
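Then regenerate the test patients; the invocation below is an assumption about how the script is run, so check the script itself for the exact usage:

```bash
# Assumed invocation; the actual path or entry point may differ.
python generate_patients.py
```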
In the txm event attributes, there is an attribute called "strictness_type" that can be set to either STRICT or FORGIVING:
- STRICT = normal parsing, no exceptions made.
- FORGIVING = some parsing errors and warnings are overlooked for the sake of including recipients in a matching. This is used especially when we need to parse data from other KEPs that recognize a different set of HLAs.