Run Mojaloop on your local machine using docker-compose, without the need for a Kubernetes cluster.
Prerequisites:
- git
- docker
- docker-compose
Execute the following commands to run Mojaloop on your local machine:
git clone https://github.com/mojaloop/ml-core-test-harness.git
cd ml-core-test-harness
docker-compose --profile all-services --profile ttk-provisioning --profile ttk-tests up
Wait for some time for all the containers to come up and become healthy. You can check the status of the containers using the command docker ps.
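If you prefer a scripted check, here is a minimal sketch (using only standard docker CLI flags) that lists the containers currently reporting a healthy state:
# List only the containers whose health check currently reports healthy.
docker ps --filter "health=healthy" --format "table {{.Names}}\t{{.Status}}"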
You should see the following output after some time. It means that all your Mojaloop services are up and the test FSPs have been onboarded successfully. Now you can run a P2P transfer.
┌───────────────────────────────────────────────────┐
│ SUMMARY │
├───────────────────┬───────────────────────────────┤
│ Total assertions │ 27 │
├───────────────────┼───────────────────────────────┤
│ Passed assertions │ 27 │
├───────────────────┼───────────────────────────────┤
│ Failed assertions │ 0 │
├───────────────────┼───────────────────────────────┤
│ Total requests │ 4 │
├───────────────────┼───────────────────────────────┤
│ Total test cases │ 1 │
├───────────────────┼───────────────────────────────┤
│ Passed percentage │ 100.00% │
├───────────────────┼───────────────────────────────┤
│ Started time │ Wed, 15 Jun 2022 17:02:28 GMT │
├───────────────────┼───────────────────────────────┤
│ Completed time │ Wed, 15 Jun 2022 17:02:30 GMT │
├───────────────────┼───────────────────────────────┤
│ Runtime duration │ 2398 ms │
└───────────────────┴───────────────────────────────┘
You can see all the test reports at http://localhost:9660/admin/reports and the latest report should be available in the reports/ folder.
After all services have been started, if you want to execute the P2P transfer from the command line again, use the following command in a separate terminal session.
docker-compose --project-name ttk-test-only --profile ttk-tests up --no-deps
Note: This doesn't wait for any dependent services. You should make sure that all the services are up and healthy.
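To inspect the outcome of such a re-run afterwards, one option is sketched below; the container name is an assumption based on Docker Compose's <project>-<service>-<index> naming convention (the same convention behind ttk-func-ttk-tests-1 used in the CI/CD section further down).
# Assumed container name: <project>-<service>-<index> => ttk-test-only-ttk-tests-1
docker logs ttk-test-only-ttk-tests-1 | tail -n 40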
Alternatively, you can run the P2P test case manually from the TTK web UI:
- Open the URL http://localhost:9660 and go to Test Runner
- Go to Collection Manager and click on the button Import Folder
- Select the folder docker/ml-testing-toolkit/test-cases/collections/tests and import it
- Select p2p.json and click on the masked area on the right side of the screen
- The collection manager closes; click on the Run button at the top right corner
- You should see all the tests passed
- You can explore the requests and responses by clicking on the Edit button next to the test case
You can execute a transfer using the mobile simulator page, where you can see two virtual mobile applications: a sender and a receiver. By making a transfer from the sender mobile application, you can see all the Mojaloop requests and callbacks visually, by means of a live sequence diagram.
http://localhost:9660/mobilesimulator
Profile Name | Description | Dependent Profiles |
---|---|---|
all-services | All Mojaloop services including TTK | - |
ttk-provisioning | For setting up the Mojaloop switch and onboarding sample DFSPs | - |
ttk-tests | TTK tests | - |
debug | Debug utilities (kowl) | kafka |
central-ledger | Central Ledger service | kafka |
ml-api-adapter | ML API Adapter service | central-ledger |
quoting-service | Quoting service | central-ledger |
account-lookup-service | Account lookup service | central-ledger |
discovery | Services used for discovery | - |
agreement | Services used for agreement | - |
transfer | Services used for transfer | - |
Some examples of running different combinations of profiles:
Run all services:
docker-compose --profile all-services up
Run all services with debug utilities:
docker-compose --profile all-services --profile debug up
Run only the Central Ledger service:
docker-compose --profile central-ledger up
Run the quoting service:
docker-compose --profile quoting-service --profile central-ledger up
Note: We need to include the central-ledger profile here as well because it is a dependency of the quoting service.
Run the account lookup service:
docker-compose --profile account-lookup-service --profile central-ledger up
Note: We need to include the central-ledger profile here as well because it is a dependency of the account lookup service.
Run the ML API adapter:
docker-compose --profile ml-api-adapter --profile central-ledger up
Note: We need to include the central-ledger profile here as well because it is a dependency of ml-api-adapter.
Run the discovery services:
docker-compose --profile discovery up
Run the agreement services:
docker-compose --profile agreement up
Run the transfer services:
docker-compose --profile transfer up
TODO: Add settlement related services
TODO: Add bulk related services
You can use this repo to run functional tests inside the CI/CD pipeline of a core service.
The following commands can be added to the CI/CD pipeline:
git clone --depth 1 --branch v0.0.2 https://github.com/mojaloop/ml-core-test-harness.git
cd ml-core-test-harness
docker-compose --project-name ttk-func --profile all-services --profile ttk-provisioning --profile ttk-tests up -d
bash wait-for-container.sh ttk-func-ttk-tests-1
docker logs ttk-func-ttk-tests-1 > ttk-tests-console.log
docker-compose -p ttk-func down
cat ttk-tests-console.log
ls reports/ttk-func-tests-report.html reports/ttk-provisioning-report.html
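As a rough illustration, the steps above could be wrapped into a single CI step that fails the build when the tests fail. This is only a sketch; in particular, treating the ttk-tests container's exit code as the pass/fail signal is an assumption rather than something documented by the harness.
# Hedged sketch of a CI step built from the commands above.
set -euo pipefail
git clone --depth 1 --branch v0.0.2 https://github.com/mojaloop/ml-core-test-harness.git
cd ml-core-test-harness
docker-compose --project-name ttk-func --profile all-services --profile ttk-provisioning --profile ttk-tests up -d
bash wait-for-container.sh ttk-func-ttk-tests-1
# Keep the console log and the test container's exit code before tearing everything down.
docker logs ttk-func-ttk-tests-1 > ttk-tests-console.log
ttk_exit_code=$(docker inspect ttk-func-ttk-tests-1 --format '{{.State.ExitCode}}')
docker-compose -p ttk-func down
cat ttk-tests-console.log
exit "$ttk_exit_code"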
The commands below start and stop the service sets used for performance characterization.
Start services for the account lookup (als-test) scenario:
docker compose --project-name ml-core -f docker-compose-perf.yml --profile als-test --profile ttk-provisioning-als up -d
Stop Services
docker compose --project-name ml-core -f docker-compose-perf.yml --profile als-test down -v
NOTE: The -v argument is optional, and it will delete any volume data created by the monitoring docker compose.
Start services for the transfers (transfers-test) scenario:
docker compose --project-name ml-core -f docker-compose-perf.yml --profile transfers-test --profile 8dfsp --profile ttk-provisioning-transfers up -d
Stop Services
docker compose --project-name ml-core -f docker-compose-perf.yml --profile transfers-test --profile 8dfsp down -v
NOTE: The -v argument is optional, and it will delete any volume data created by the monitoring docker compose.
Start services for the quotes (quotes-test) scenario:
docker compose --project-name ml-core -f docker-compose-perf.yml --profile quotes-test --profile 8dfsp --profile ttk-provisioning-quotes up -d
Stop Services
docker compose --project-name ml-core -f docker-compose-perf.yml --profile quotes-test --profile 8dfsp down -v
NOTE: The -v argument is optional, and it will delete any volume data created by the monitoring docker compose.
- Set ALS_SWITCH_ENDPOINT to "http://central-ledger:3001" in perf.env (see the example below)
- Set QS_SWITCH_ENDPOINT to "http://central-ledger:3001" in perf.env
Start services for the end-to-end (all-services) scenario:
docker compose --project-name ml-core -f docker-compose-perf.yml --profile all-services --profile 8dfsp --profile ttk-provisioning-e2e up -d
Stop Services
docker compose --project-name ml-core -f docker-compose-perf.yml --profile all-services --profile 8dfsp down -v
NOTE: The -v argument is optional, and it will delete any volume data created by the monitoring docker compose.
Start services for the SDK scheme adapter scenario:
docker compose --project-name ml-core -f docker-compose-perf.yml --profile sdk-scheme-adapter up -d
Stop Services
docker compose --project-name ml-core -f docker-compose-perf.yml --profile sdk-scheme-adapter down -v
- Go to perf.env and comment out the inboundSDK variables. To change the test suite, you'll need to edit perf.env again and restart docker-compose.
To enable batch position processing (see the example settings below):
- Set CENTRAL_LEDGER_POSITION_BATCH_REPLICAS to the desired count in the .env file
- Enable the line CLEDG_KAFKA__EVENT_TYPE_ACTION_TOPIC_MAP__POSITION__PREPARE=topic-transfer-position-batch in the perf.env file
- Set CENTRAL_LEDGER_VERSION to v17.2.0 or higher
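A sketch of the settings described above, assuming a replica count of 4 purely as an example:
# .env (excerpt)
# Set to the desired replica count (4 is only an example)
CENTRAL_LEDGER_POSITION_BATCH_REPLICAS=4
# v17.2.0 or higher
CENTRAL_LEDGER_VERSION=v17.2.0
# perf.env (excerpt) -- enable (uncomment) this line
CLEDG_KAFKA__EVENT_TYPE_ACTION_TOPIC_MAP__POSITION__PREPARE=topic-transfer-position-batch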
Start the Monitoring Services stack, which uses:
- Prometheus for the time-series data store
- Grafana for visualization dashboards
- Node Exporter to instrument the host machine
- cAdvisor to instrument the Docker containers running on the host machine
docker compose --project-name monitoring -f docker-compose-monitoring.yml up -d
Stop Monitoring Services
docker compose --project-name monitoring --profile als-test --profile transfers-test -f docker-compose-monitoring.yml down -v
Start monitoring with the account lookup service MySQL exporter:
docker compose --project-name monitoring --profile als-test -f docker-compose-monitoring.yml up -d
Start monitoring with the central ledger MySQL exporter:
docker compose --project-name monitoring --profile transfers-test -f docker-compose-monitoring.yml up -d
or, since the quoting service uses the central ledger database:
docker compose --project-name monitoring --profile quotes-test -f docker-compose-monitoring.yml up -d
Start monitoring with all exporters:
docker compose --project-name monitoring --profile als-test --profile quotes-test --profile transfers-test -f docker-compose-monitoring.yml up -d
NOTE: The -v argument is optional, and it will delete any volume data created by the monitoring docker compose.
TODO: add a note about the network being created by docker-compose-perf.yml, or how it can be created manually.
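Until that note is written, a hedged sketch of the manual approach is below; the network name is a placeholder, so check docker-compose-perf.yml for the external network it actually declares.
# <shared-network-name> is a placeholder for the network declared in docker-compose-perf.yml.
docker network create <shared-network-name>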
K6 is used to execute performance tests, with metrics captured by Prometheus and displayed using Grafana.
Tests can be defined in ./packages/k6-tests/scripts/test.js; refer to the API load testing guide for more information.
Env configs are stored in the ./perf.env environment configuration file.
Note on transfer testing and quote testing: depending on the profiles you started the performance docker compose with (i.e. --profile transfers-test --profile {2/4/8}dfsp), you will need to edit the K6_SCRIPT_FSPIOP_FSP_POOL JSON string in ./perf.env to contain 2, 4, or 8 DFSPs depending on your test (an example follows the list below).
For reference, here are the provisioned DFSPs with an associated partyId available for use:
[
{"partyId":19012345001,"fspId":"perffsp1","wsUrl":"ws://sim-perffsp1:3002"},
{"partyId":19012345002,"fspId":"perffsp2","wsUrl":"ws://sim-perffsp2:3002"},
{"partyId":19012345003,"fspId":"perffsp3","wsUrl":"ws://sim-perffsp3:3002"},
{"partyId":19012345004,"fspId":"perffsp4","wsUrl":"ws://sim-perffsp4:3002"},
{"partyId":19012345005,"fspId":"perffsp5","wsUrl":"ws://sim-perffsp5:3002"},
{"partyId":19012345006,"fspId":"perffsp6","wsUrl":"ws://sim-perffsp6:3002"},
{"partyId":19012345007,"fspId":"perffsp7","wsUrl":"ws://sim-perffsp7:3002"},
{"partyId":19012345008,"fspId":"perffsp8","wsUrl":"ws://sim-perffsp8:3002"},
]
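For example, a hedged sketch of a 2-DFSP pool entry in ./perf.env (written as a single-line JSON string; the exact quoting used in perf.env may differ):
K6_SCRIPT_FSPIOP_FSP_POOL='[{"partyId":19012345001,"fspId":"perffsp1","wsUrl":"ws://sim-perffsp1:3002"},{"partyId":19012345002,"fspId":"perffsp2","wsUrl":"ws://sim-perffsp2:3002"}]'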
Start tests by running one of the following commands, depending on the scenario:
env K6_SCRIPT_CONFIG_FILE_NAME=fspiopTransfers.json docker compose --project-name load -f docker-compose-load.yml up
env K6_SCRIPT_CONFIG_FILE_NAME=fspiopTransfersUnidirectional.json docker compose --project-name load -f docker-compose-load.yml up
env K6_SCRIPT_CONFIG_FILE_NAME=fspiopDiscovery.json docker compose --project-name load -f docker-compose-load.yml up
env K6_SCRIPT_CONFIG_FILE_NAME=fspiopQuotes.json docker compose --project-name load -f docker-compose-load.yml up
env K6_SCRIPT_CONFIG_FILE_NAME=fspiopE2E.json docker compose --project-name load -f docker-compose-load.yml up
env K6_SCRIPT_CONFIG_FILE_NAME=inboundSDKDiscovery.json docker compose --project-name load -f docker-compose-load.yml up
env K6_SCRIPT_CONFIG_FILE_NAME=inboundSDKQuotes.json docker compose --project-name load -f docker-compose-load.yml up
env K6_SCRIPT_CONFIG_FILE_NAME=inboundSDKTransfer.json docker compose --project-name load -f docker-compose-load.yml up
Cleanup tests
docker compose --project-name load -f docker-compose-load.yml down -v
It's recommended that you do not modify the certificates and keys found in docker/security/. If you do need to regenerate them for whatever reason, these are the steps.
Run the following from the root ml-core-test-harness directory. Accept all defaults and enter y when prompted.
cd docker/security/payer/jws/ && . keygen.sh && cd ../tls/ && . createSecrets.sh && cd ../../payee/jws && . keygen.sh && cd ../tls/ && . createSecrets.sh && cd ../../../../
cp docker/security/payer/jws/publickey.cer docker/security/payee/jws/verification_keys/fspiopsimpayer.pem && cp docker/security/payee/jws/publickey.cer docker/security/payer/jws/verification_keys/fspiopsimpayee.pem
cd docker/security/payer/tls/ && openssl ca -config openssl-clientca.cnf -policy signing_policy -extensions signing_req -out ../../payee/tls/dfsp_client_cert.pem -infiles ../../payee/tls/dfsp_client.csr && cp dfsp_server_cacert.pem ../../payee/tls/payer_server_cacert.pem && cd ../../../../
cd docker/security/payee/tls/ && openssl ca -config openssl-clientca.cnf -policy signing_policy -extensions signing_req -out ../../payer/tls/dfsp_client_cert.pem -infiles ../../payer/tls/dfsp_client.csr && cp dfsp_server_cacert.pem ../../payer/tls/payee_server_cacert.pem && cd ../../../../
Here are more verbose, hands-on instructions describing what the above commands do.
- Run . keygen.sh and . createSecrets.sh in the /jws and /tls folders respectively, for both payer and payee.
- Move payee/jws/publickey.cer to payer/jws/verification_keys/fspiopsimpayee.pem and move payer/jws/publickey.cer to payee/jws/verification_keys/fspiopsimpayer.pem
- Switch directories to docker/security/payer/tls/
- Run openssl ca -config openssl-clientca.cnf -policy signing_policy -extensions signing_req -out ../../payee/tls/dfsp_client_cert.pem -infiles ../../payee/tls/dfsp_client.csr
- Switch directories to docker/security/payee/tls/
- Run openssl ca -config openssl-clientca.cnf -policy signing_policy -extensions signing_req -out ../../payer/tls/dfsp_client_cert.pem -infiles ../../payer/tls/dfsp_client.csr
- Move each other's dfsp_server_cacert.pem into the other's folder and rename them to payer_server_cacert.pem and payee_server_cacert.pem respectively
- Run docker compose --project-name security -f docker-compose-security.yml --profile security-sdk-scheme-adapter up
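As an optional sanity check after the signing steps (this is a convenience suggestion, not part of the documented flow), you can inspect the freshly signed client certificates:
# Print subject, issuer and validity dates of the newly signed client certificates.
openssl x509 -in docker/security/payee/tls/dfsp_client_cert.pem -noout -subject -issuer -dates
openssl x509 -in docker/security/payer/tls/dfsp_client_cert.pem -noout -subject -issuer -dates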
This section describes the process of automating the capture of Grafana-rendered dashboards after running the performance testing scenarios.
The main script containing this logic is automate_perf.sh. Before running it, the required variables must be provided as environment variables defined in automate_perf.env. As this file contains login credentials, a sample file called automate_perf_sample.env is provided at the root level to avoid credential exposure. Make a copy of this file, rename it to automate_perf.env, and update the variable values.
Once automate_perf.env is ready, the next step is to make sure that the services for the test harness and monitoring are up and running. The relevant docker-compose commands for these two steps are listed above in the Performance Characterization section.
Once the required services are up and running, run automate_perf.sh from a terminal. When the script completes successfully, a results folder is created at the root level. Inside it, a folder named after the date is created, with subfolders for the different scenarios that were executed. The dashboards to be collected are specified in the script itself.
Run the script:
./automate_perf.sh
To capture results without running tests, use the following command
./automate_perf.sh -c -f <From Time in Milliseconds> -t <To time in Milliseconds>
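For example, a sketch that captures the dashboards for the last hour without re-running the tests, computing the epoch timestamps in milliseconds with the standard date utility:
# From/To as Unix epoch timestamps in milliseconds (last 60 minutes).
FROM_MS=$(( ($(date +%s) - 3600) * 1000 ))
TO_MS=$(( $(date +%s) * 1000 ))
./automate_perf.sh -c -f "$FROM_MS" -t "$TO_MS"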
For executing performance test scenarios against a Mojaloop deployment, follow the steps below:
- Set Environment Variables: set perf.override.env with the proper endpoints of the Mojaloop services.
- Customize Configurations: edit the file docker/ml-testing-toolkit/test-cases/environments/remote-k8s-env.json to customize currencies and MSISDNs according to your requirements.
- Run Simulators and TTK Provisioning:
docker compose --project-name simulators -f docker-compose-perf.yml --profile 8dfsp --profile testing-toolkit --profile ttk-provisioning-remote-k8s --profile oracle up -d
- Run Monitoring Services:
docker compose --project-name monitoring --profile transfers-test -f docker-compose-monitoring.yml up -d
- Execute Single Transfer Test Case:
env K6_SCRIPT_CONFIG_FILE_NAME=fspiopSingleTransfer.json docker compose --project-name load -f docker-compose-load.yml up
- Stop Services:
docker compose --project-name simulators -f docker-compose-perf.yml --profile 8dfsp --profile testing-toolkit --profile ttk-provisioning-remote-k8s down -v
docker compose --project-name monitoring --profile transfers-test -f docker-compose-monitoring.yml down -v
Note: The -v argument is optional and will delete any volume data created by the monitoring Docker Compose.
The following helper scripts are available to allow easier execution of repetitive tasks.
- perf-test.sh - run various performance tests
- k8s-mojaloop-perf-tuning/patch.sh - patch k8s cluster to test with different releases and configurations
The easiest way to use these scripts is to create bash aliases for them:
alias p='./k8s-mojaloop-perf-tuning/patch.sh'
alias t='./perf-test.sh'
Then use one of the following commands:
- t discovery - Test single account discovery
- t discoveries - Test account discoveries
- t discoveries rate - Test account discoveries with ramping rates
- t quote - Test single quote
- t fx quote - Test single FX quote
- t quotes - Test quotes
- t fx quotes - Test FX quotes
- t quotes rate - Test quotes with ramping rates
- t fx quotes rate - Test FX quotes with ramping rates
- t transfer - Test single transfer
- t fx transfer - Test single FX transfer
- t transfers - Test transfers
- t fx transfers - Test FX transfers
- t transfers rate - Test transfers with ramping rates
- t fx transfers rate - Test FX transfers with ramping rates
- t dqt rate - Test account discoveries, quotes and transfers in parallel with constant rates
- t dfx rate - Test account discoveries, FX quotes and FX transfers in parallel with constant rates
- t e2e - Test multiple end to end
- t e2e single - Test single end to end
- t sim start - Start the simulators
- t sim stop - Stop the simulators
- t sim restart - Restart the simulators
- t sim update - Update the simulator images
- p audit - Configure direct events to Kafka without going through the sidecar, sending only audit and skipping others
- p direct - Configure direct events to Kafka without going through the sidecar
- p disabled - Configure no events to be produced
- p schema - Recreate central ledger schema object
- p init - Install RedPanda
- p - Restore baseline with events sent through the sidecar
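As an illustration only (the ordering is a hedged example, not a workflow prescribed by the repo), a typical tuning run with these aliases might look like:
t sim start        # start the simulators
p direct           # send events directly to Kafka, bypassing the sidecar
t transfers rate   # run transfers with ramping rates
p                  # restore the baseline (events sent through the sidecar)
t sim stop         # stop the simulators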