Maildev UI not opening in Browser
odidev opened this issue · 11 comments
Hi EdwinVW,
I have been running Pitstop with docker-compose, Kubernetes, etc. on the Ubuntu arm64 and amd64 platforms. However, it looks like with the MailDev image v1.2.0-beta1 the MailDev UI does not open when running through docker-compose.
I had to make the changes below in docker-compose.yml before the MailDev web page would open:
```
azureuser@Svm1:~/pitstop/src$ git diff docker-compose.yml
diff --git a/src/docker-compose.yml b/src/docker-compose.yml
index a2fe25a..67d004a 100644
--- a/src/docker-compose.yml
+++ b/src/docker-compose.yml
@@ -26,11 +26,11 @@ services:
       - SA_PASSWORD=8jkGh47hnDw89Haq8LN2
   mailserver:
-    image: maildev/maildev:1.1.0
+    image: maildev/maildev:1.1.1
     container_name: mailserver
     ports:
       - "25:25"
-      - "4000:80"
+      - "4000:1080"
```
Also, with the above changes only the MailDev UI opens in the browser; I still cannot receive any email when testing the invoice and notification flows.
Please find below the steps to reproduce the issue:
- `vim docker-compose.yml` (make the changes mentioned above)
- `docker-compose up`
- Open the MailDev UI in the browser at http://localhost:4000/
- Follow the steps mentioned in the wiki for testing notifications
- An email should be received after step 4, but none arrives

Please share any pointers on how to resolve this. Thanks in advance!
Hi @odidev. Thanks for your interest in the Pitstop sample application!
I have pinned the MailDev version to 1.1.0 because I had issues with the latest version (see commit 3cb01f5). So you should use 1.1.0 (and ports 25 and 80).
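For reference, that pinned setup corresponds to the following compose entry (image and port mappings taken from the diff above; MailDev 1.1.x serves its web UI on container port 80, which is why the `4000:1080` mapping was only needed with the newer image):

```yaml
services:
  mailserver:
    image: maildev/maildev:1.1.0
    container_name: mailserver
    ports:
      - "25:25"     # SMTP
      - "4000:80"   # MailDev web UI -> http://localhost:4000
```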
I was not able to reproduce this issue on Ubuntu AMD64 (using WSL2!).
Please double-check the log output of the services to see whether all messages are being sent over RabbitMQ and whether the NotificationService and InvoiceService receive events after planning or completing a maintenance job. Then send a DayHasPassed event and watch the log output of the NotificationService and InvoiceService.
Hi EdwinVW,
Thanks for the quick response! I followed your suggestion and tested the Pitstop application with MailDev v1.1.0, and it is working fine for me.
I have also been working on deploying the microservices on Docker Swarm, but I am facing an issue running the webapp and related services: their state is failed/rejected/shutdown. Please have a look at the logs below:
```
ubuntu@ip-172-31-25-158:~/pitstop/src$ sudo docker stack ps pitstop
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ibna98t1fcnb pitstop_auditlogservice.1 odidev/auditlogservice:latest ip-172-31-21-184 Running Running 3 hours ago
gm4blorwdv9p pitstop_customermanagementapi.1 odidev/customermanagementapi:latest ip-172-31-17-87 Running Starting 6 seconds ago
yiostvz6rzus \_ pitstop_customermanagementapi.1 odidev/customermanagementapi:latest ip-172-31-23-108 Shutdown Failed 11 seconds ago "task: non-zero exit (143): do…"
y4vkvt3eqa0l \_ pitstop_customermanagementapi.1 odidev/customermanagementapi:latest ip-172-31-23-108 Shutdown Failed 49 seconds ago "task: non-zero exit (143): do…"
80hbiqqwzr5g \_ pitstop_customermanagementapi.1 odidev/customermanagementapi:latest ip-172-31-17-87 Shutdown Failed about a minute ago "task: non-zero exit (143): do…"
luhdljb2aw8s \_ pitstop_customermanagementapi.1 odidev/customermanagementapi:latest ip-172-31-17-87 Shutdown Failed 2 minutes ago "task: non-zero exit (143): do…"
yw6lw8nxlsqb pitstop_invoiceservice.1 odidev/invoiceservice:latest ip-172-31-25-158 Running Running 3 hours ago
b7uhyn1r55l0 pitstop_logserver.1 datalust/seq:latest ip-172-31-19-51 Running Running 3 hours ago
unquzuc75yp5 pitstop_mailserver.1 maildev/maildev:1.1.1 ip-172-31-19-51 Running Running 3 hours ago
pjixxfcmh8bk pitstop_notificationservice.1 odidev/notificationservice:latest ip-172-31-21-184 Running Running 3 hours ago
k17o6106bblh pitstop_rabbitmq.1 rabbitmq:3-management-alpine ip-172-31-25-158 Running Running 3 hours ago
ziyi75r4avnp pitstop_sqlserver.1 mcr.microsoft.com/azure-sql-edge:latest ip-172-31-25-158 Running Running 3 hours ago
p87nfbd69nvo pitstop_timeservice.1 odidev/timeservice:latest ip-172-31-23-108 Running Running 3 hours ago
ng6mss96nbyd pitstop_vehiclemanagementapi.1 odidev/vehiclemanagementapi:latest ip-172-31-23-108 Running Starting 8 seconds ago
l9j8x9qkgsyk \_ pitstop_vehiclemanagementapi.1 odidev/vehiclemanagementapi:latest ip-172-31-17-87 Shutdown Failed 14 seconds ago "task: non-zero exit (143): do…"
drbic50nepaa \_ pitstop_vehiclemanagementapi.1 odidev/vehiclemanagementapi:latest ip-172-31-17-87 Shutdown Failed 52 seconds ago "task: non-zero exit (143): do…"
v8lw92u4sztf \_ pitstop_vehiclemanagementapi.1 odidev/vehiclemanagementapi:latest ip-172-31-23-108 Shutdown Failed about a minute ago "task: non-zero exit (143): do…"
m5q1fuq952hh \_ pitstop_vehiclemanagementapi.1 odidev/vehiclemanagementapi:latest ip-172-31-23-108 Shutdown Failed 2 minutes ago "task: non-zero exit (143): do…"
yl7629roadj4 pitstop_webapp.1 odidev/webapp:latest ip-172-31-17-87 Running Starting 12 seconds ago
rjzen38h64cd \_ pitstop_webapp.1 odidev/webapp:latest ip-172-31-17-87 Shutdown Complete 17 seconds ago
prsiliz52kvl \_ pitstop_webapp.1 odidev/webapp:latest ip-172-31-17-87 Shutdown Complete 56 seconds ago
29k7r2mekvj0 \_ pitstop_webapp.1 odidev/webapp:latest ip-172-31-17-87 Shutdown Complete about a minute ago
w5dt8ldpl6zy \_ pitstop_webapp.1 odidev/webapp:latest ip-172-31-17-87 Shutdown Complete 2 minutes ago
v6nupriwmtjb pitstop_workshopmanagementapi.1 odidev/workshopmanagementapi:latest ip-172-31-17-87 Running Starting 12 seconds ago
w69f7dc8t4f3 \_ pitstop_workshopmanagementapi.1 odidev/workshopmanagementapi:latest ip-172-31-21-184 Shutdown Complete 17 seconds ago
wxppkqut39ka \_ pitstop_workshopmanagementapi.1 odidev/workshopmanagementapi:latest ip-172-31-23-108 Shutdown Complete 56 seconds ago
pbvzzsdb9a1f \_ pitstop_workshopmanagementapi.1 odidev/workshopmanagementapi:latest ip-172-31-17-87 Shutdown Complete about a minute ago
rk2435i1kbef \_ pitstop_workshopmanagementapi.1 odidev/workshopmanagementapi:latest ip-172-31-17-87 Shutdown Complete 2 minutes ago
damklupetp75 pitstop_workshopmanagementeventhandler.1 odidev/workshopmanagementeventhandler:latest ip-172-31-19-51 Running Running 3 hours ago
```
Please find below the steps I used to deploy Pitstop through Docker Swarm:
- First created multiple Ubuntu instances on AWS. Added the inbound rules mentioned here, plus port 2377 for Docker Swarm. Performed this step on all the instances.
- Installed Docker and docker-compose on all the instances using the commands below:
  ```
  sudo apt update
  sudo apt install docker.io docker-compose -y
  ```
- Designated one of the instances as the manager/master node and ran `docker swarm init --advertise-addr <Manager Node IP>`.
- Used the join token shown in the output of the above command to add the worker nodes to the cluster (this command has to be run on each worker node): `docker swarm join --token <TOKEN> <Manager Node IP>:2377`.
- Checked that all nodes have joined the manager node with `docker node ls`.
- Created an overlay network over which the containers on the different nodes communicate: `docker network create -d overlay testservice`.
- Deployed the containers on the Docker Swarm cluster with `docker stack deploy -c docker-compose.yml pitstop`.
- Checked that all the containers are running with `docker stack ps pitstop`.

Note: I used docker-compose.yml as my docker-swarm.yml file and modified it with these changes (a sketch of the kind of swarm-specific modification this involves is shown below).
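As an illustration of the kind of swarm-specific modification this can involve (not necessarily the exact linked changes): the stack file has to reference the externally created overlay network and can carry a `deploy` section per service, roughly like this sketch (only the webapp service is shown; values are illustrative):

```yaml
# Sketch only: attach each service to the overlay network created with
# `docker network create -d overlay testservice`, and mark the network as
# external so `docker stack deploy` does not try to create it itself.
services:
  webapp:
    image: odidev/webapp:latest
    networks:
      - testservice
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

networks:
  testservice:
    external: true
```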
It would be really helpful if you could share your pointers here to resolve the issue.
Unfortunately, I have no experience with Docker Swarm. So I won't be able to really help you with the issue.
The logging also doesn't provide enough information to determine what the issue is: `task: non-zero exit (143): do…`. The truncated `do…` part could contain valuable information about the issue, but the full exception or error is not shown.
If I had to guess, I would look at what the Pitstop services do when they start: they try to connect to SQL Server and RabbitMQ. Perhaps these containers took too long to come up. However, the Pitstop services retry 10 times (with an interval of 5 seconds for RabbitMQ and 10 seconds for SQL Server), so that should be enough.
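(To illustrate the pattern I mean, not the actual Pitstop code, which is written in C#: the startup behavior is essentially a bounded retry loop along these lines.)

```python
import time

def connect_with_retry(connect, attempts=10, interval_seconds=5):
    """Call connect() up to `attempts` times, sleeping `interval_seconds`
    between tries; re-raise the last error if every attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(interval_seconds)

# RabbitMQ: 10 attempts, 5 seconds apart; SQL Server: 10 attempts, 10 seconds apart.
```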
So I'm sorry I can't be of more help to you.
Hi EdwinVW,
Thanks for your previous pointers. I have made progress with deploying Pitstop through Docker Swarm. Please find the steps below:
- Modify the CustomerManagementAPI, VehicleManagementAPI and WorkshopManagementAPI Dockerfiles to remove the HEALTHCHECK instruction on line 20.
- After modifying the Dockerfiles, build the images and push them to Docker Hub to use for testing.
- Follow the rest of the steps as mentioned in the previous comment.

Note: the Dockerfile HEALTHCHECK did not work for me with Docker Swarm, which is why it had to be removed when deploying the services through Swarm (a sketch of what such a health check looks like at the stack-file level is included below).
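For reference, the kind of health check that was removed could also be expressed in the stack/compose file itself, roughly like this sketch (the `/hc` path, port 5100 and the timings are assumptions, not necessarily what the Pitstop APIs actually expose, and `curl` has to be present in the image):

```yaml
# Sketch only: a compose-level health check instead of a Dockerfile HEALTHCHECK.
# The endpoint, port and timings are assumptions for illustration.
services:
  customermanagementapi:
    image: odidev/customermanagementapi:latest
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail http://localhost:5100/hc || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s   # give the service time to reach SQL Server and RabbitMQ first
```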
Now I am able to successfully deploy all the services through Docker Swarm. Please have a look at the logs:
```
ubuntu@ip-172-31-24-75:~/pitstop/src$ sudo docker stack ps pitstop
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
lhrvwp1jjvqv pitstop_auditlogservice.1 odidev/auditlogservice:latest ip-172-31-19-6 Running Running 5 minutes ago
2hsswba6tczy pitstop_customermanagementapi.1 odidev/customermanagementapi-health:latest ip-172-31-19-6 Running Running 2 minutes ago
fp38iguxrfde pitstop_invoiceservice.1 odidev/invoiceservice:latest ip-172-31-21-115 Running Running 5 minutes ago
phfmvekcfam2 pitstop_logserver.1 datalust/seq:latest ip-172-31-24-75 Running Running 5 minutes ago
9cgmisn6e6ad pitstop_mailserver.1 odidev/maildev-arm64:1.1.0 ip-172-31-24-75 Running Running 5 minutes ago
7vrfmyk6glis pitstop_notificationservice.1 odidev/notificationservice:latest ip-172-31-30-181 Running Running 5 minutes ago
4p45ekp2gdqd pitstop_rabbitmq.1 rabbitmq:3-management-alpine ip-172-31-24-75 Running Running 4 minutes ago
tsb7a64pvao8 pitstop_sqlserver.1 mcr.microsoft.com/azure-sql-edge:latest ip-172-31-30-181 Running Running 5 minutes ago
ypwnmg9h8hz5 pitstop_timeservice.1 odidev/timeservice:latest ip-172-31-19-6 Running Running 5 minutes ago
wwg7ph5te9vu pitstop_vehiclemanagementapi.1 odidev/vehiclemanagementapi-health:latest ip-172-31-30-181 Running Running 5 minutes ago
eb2ei2aanm6w pitstop_webapp.1 odidev/webapp-health:latest ip-172-31-21-115 Running Running 5 minutes ago
yx56uzhqlps8 pitstop_workshopmanagementapi.1 odidev/workshopmanagementapi-health:latest ip-172-31-21-115 Running Running 5 minutes ago
o9gl0iecurwq pitstop_workshopmanagementeventhandler.1 odidev/workshopmanagementeventhandler:latest ip-172-31-24-75 Running Running 5 minutes ago
```
However, when trying to test the services in the browser, some of the services show as offline.
It would be helpful if you could suggest some pointers on what might be the reason for the services being offline.
The services are probably not offline; the URI used to connect to them is probably not correct. Please check the following things:
- Have you set up networking correctly so that the services can communicate with each other?
- Have you configured the URIs for the services correctly in the `appsettings.Production.json` of the WebApp (use the hostnames of the services inside the Docker Swarm network)?
- Browse to the URI of a service (use the hostnames of the services inside the Docker Swarm network). You should see a link to the Swagger UI of the service.

I'm not familiar with Docker Swarm, but I guess that the hostnames are `pitstop_customermanagementapi.1`, `pitstop_vehiclemanagementapi.1` and `pitstop_workshopmanagementapi.1`.
Hi EdwinVW,
Thanks for your suggestion. I have implemented it, but all three services are still offline. I have also tried adding the networks in the config file, but made no progress.
Please do share if you have any other pointers.
Thanks.
I'm sorry @odidev, but I'm not able to help you any further (as I have no experience with Docker Swarm).
@EdwinVW Thanks for your previous input, it was helpful. I have been working on performance-testing the Pitstop application with locust.io on an AWS EC2 Ubuntu arm64 instance. Please find a detailed explanation below:
- To test the Pitstop application, we first created a Python file that stores the host IP and the number of users we want to create.
- We then created a Python script that creates multiple users. This script also stores the credentials of all the users it creates in `user_Credentials.json`; this file is used by another test to log into the accounts (a minimal sketch of this idea is shown after this list).
- The Python script produces a JSON file containing the user details; it is used to connect to the Pitstop webapp and register the customers, and Locust then runs the performance test against them.
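A minimal sketch of that idea (not the exact linked script; only the Faker-based generation and the JSON file are taken from the description above):

```python
# Sketch: generate a number of fake users and persist their credentials to
# user_Credentials.json so that a later test can log in with them.
import json
from faker import Faker

fake = Faker()

def generate_users(count):
    return [
        {"email": fake.email(), "password": fake.password(length=12)}
        for _ in range(count)
    ]

if __name__ == "__main__":
    users = generate_users(10)
    with open("user_Credentials.json", "w") as f:
        json.dump(users, f, indent=2)
```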
Please find below the steps to test this:
- Install the dependencies below to run the script on the Ubuntu EC2 instance:
  ```
  sudo apt install python3-locust
  pip3 install locust
  pip3 install faker
  ```
- Created a Python file that stores the host IP and the number of users we want to create. Please check Create environmentVariable.py · odidev/pitstop@6411bd0 (github.com) for this.
- Created a Python script that creates the users (for now only for the CustomerManagement API). Please check Create signUP.py · EdwinVW/pitstop@8ac6717 (github.com) for this.
- To run the script, use `locust -f signUP.py`.
- However, after running the above command I get the error below on the command line; please have a look:
```
ubuntu@ip-172-31-23-221:~/locust$ locust -f signUP.py
/usr/lib/python3/dist-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: 0.1.43ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(
/usr/lib/python3/dist-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: 1.1build1 is an invalid version and will not be supported in a future release
warnings.warn(
/usr/lib/python3/dist-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: 2.0.5-build-libtorrent-rasterbar-QyJODx-libtorrent-rasterbar-2.0.5-bindings-python is an invalid version and will not be supported in a future release
warnings.warn(
WARNING:root:You have tagged your on_stop/start function with @task. This will make the method get called both as a task AND on stop/start.
[2023-04-19 08:10:29,831] ip-172-31-23-221/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
[2023-04-19 08:10:29,842] ip-172-31-23-221/INFO/locust.main: Starting Locust 2.14.2
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 908, in gevent._gevent_cgreenlet.Greenlet.run
File "/home/ubuntu/.local/lib/python3.10/site-packages/locust/user/users.py", line 176, in run_user
user.run()
File "/home/ubuntu/.local/lib/python3.10/site-packages/locust/user/users.py", line 142, in run
self.on_start()
File "/home/ubuntu/sakshi/locust/signUP.py", line 58, in on_start
logging.info('Register Customer with %s email and %s password', self.email, self.password)
AttributeError: 'MyLoadTester' object has no attribute 'email'
2023-04-19T08:13:02Z <Greenlet at 0xffff9fa75760: run_user(<signUP.MyLoadTester object at 0xffffa4429e70>)> failed with AttributeError
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 908, in gevent._gevent_cgreenlet.Greenlet.run
File "/home/ubuntu/.local/lib/python3.10/site-packages/locust/user/users.py", line 176, in run_user
user.run()
File "/home/ubuntu/.local/lib/python3.10/site-packages/locust/user/users.py", line 142, in run
self.on_start()
File "/home/ubuntu/sakshi/locust/signUP.py", line 58, in on_start
logging.info('Register Customer with %s email and %s password', self.email, self.password)
AttributeError: 'MyLoadTester' object has no attribute 'email'
2023-04-19T08:13:02Z <Greenlet at 0xffff9fa756c0: run_user(<signUP.MyLoadTester object at 0xffffa4429ba0>)> failed with AttributeError
KeyboardInterrupt
2023-04-19T08:13:03Z
[2023-04-19 08:13:03,272] ip-172-31-23-221/INFO/locust.main: Shutting down (exit code 0)
```
- Also, to view Locust, open http://<EC2 instance IP>:8089 in your local browser. However, there is no output in the Locust UI for now because of the error mentioned above.
After exploring the above issue, it seems to be caused by the request verification token that is generated in the payload when we register a customer through the customer management portal; a rough sketch of how this might be handled in the Locust script is shown below.
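The sketch below (not the actual signUP.py) shows two changes that could address what is seen above: initialize `self.email` / `self.password` in `on_start` before logging them, so the `AttributeError` cannot occur, and fetch the hidden `__RequestVerificationToken` from the registration page so it can be posted back with the form. The `/RegisterCustomer` route and the form field names are assumptions, not the actual Pitstop WebApp routes:

```python
# Sketch only: not the actual signUP.py. The registration URL and form field
# names are assumptions; the anti-forgery token handling is the part that matters.
import logging
import re

from faker import Faker
from locust import HttpUser, task, between

fake = Faker()
TOKEN_RE = re.compile(r'name="__RequestVerificationToken"[^>]*value="([^"]+)"')

class MyLoadTester(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        # Set the attributes *before* logging them, so on_start never hits
        # AttributeError: 'MyLoadTester' object has no attribute 'email'.
        self.email = fake.email()
        self.password = fake.password(length=12)
        logging.info("Register customer with %s / %s", self.email, self.password)

    @task
    def register_customer(self):
        # 1. GET the registration page and pull the anti-forgery token that
        #    ASP.NET Core renders as a hidden input in the form.
        page = self.client.get("/RegisterCustomer")  # assumed route
        match = TOKEN_RE.search(page.text)
        if not match:
            logging.warning("No __RequestVerificationToken found on the page")
            return
        token = match.group(1)

        # 2. POST the form including the token; without it, ASP.NET Core's
        #    anti-forgery validation rejects the request.
        self.client.post(
            "/RegisterCustomer",                     # assumed route
            data={
                "__RequestVerificationToken": token,
                "EmailAddress": self.email,          # assumed field names
                "Password": self.password,
            },
        )
```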
It would be really helpful if you could share some pointers or thoughts on this.