OS incompatibility or Flask crash when there are too many peers (Original problem: unable to see / add peers inside a configuration: TypeError: '<' not supported between instances of 'int' and 'str')
ComradeCluck opened this issue · 14 comments
Describe The Problem
I installed the dashboard service on top of an existing WireGuard server with multiple peers. I'm unable to see anything inside the configuration. The log file shows a traceback: TypeError: '<' not supported between instances of 'int' and 'str'
Expected Error / Traceback
[22/Jun/2021 20:20:37] "GET /get_config/wg0 HTTP/1.1" 500 -
[2021-06-22 20:20:59,820] ERROR in app: Exception on /get_config/wg0 [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "dashboard.py", line 406, in get_conf
"peer_data": get_peers(config_name),
File "dashboard.py", line 186, in get_peers
result = sorted(result, key=lambda d: d['status'])
TypeError: '<' not supported between instances of 'int' and 'str'
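For context, the failing line sorts the peer list by each peer's status value, so this TypeError means at least one peer's status is an int while the others are strings. A minimal sketch of the failure and a defensive workaround; the coerced sort key is illustrative only, not necessarily the project's eventual fix:

# Minimal reproduction of the failure in get_peers(): sorting dicts by a
# "status" key that mixes types raises exactly this TypeError.
peers = [{"status": "running"}, {"status": 0}]

try:
    sorted(peers, key=lambda d: d["status"])
except TypeError as e:
    print(e)  # '<' not supported between instances of 'int' and 'str'

# One defensive workaround (an assumption, not the project's actual fix):
# coerce the key to str so mixed-type statuses still compare.
print(sorted(peers, key=lambda d: str(d["status"])))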
To Reproduce
The home page shows wg0. Selecting wg0 returns a blank page with the navigation on the left still visible. I can still activate / deactivate the interface.
OS Information:
- OS: CentOS Linux release 8.3.2011
- Python Version: 3.6.8
Sample of your .conf file
[Interface]
Address = 10.200.200.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens32 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens32 -j MASQUERADE
ListenPort = <Listen port>
PrivateKey = <Private Key here>
[Peer]
PublicKey = <Public Key here>
AllowedIPs = 10.200.200.3/32
Endpoint = <Client Generated>
[Peer]
PublicKey = <Public Key here>
AllowedIPs = 10.200.200.4/32
Endpoint = <Client Generated>
[Peer]
PublicKey = <Public Key here>
AllowedIPs = 10.200.200.6/32
Endpoint = <Client Generated>
Dashboard configuration file:
[Account]
username = ABCD
password = ABCD
[Server]
wg_conf_path = /etc/wireguard
app_ip = 0.0.0.0
app_port = 10086
auth_req = true
version = v2.0
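As a side note, this is a standard INI file, so you can sanity-check what the dashboard will read using Python's configparser. A minimal sketch, assuming the file is named wg-dashboard.ini as in the project layout:

import configparser

# Sketch: load the dashboard settings shown above and print a few values.
# The filename wg-dashboard.ini is an assumption based on the project layout.
config = configparser.ConfigParser()
config.read("wg-dashboard.ini")
print(config.get("Server", "wg_conf_path"))     # /etc/wireguard
print(config.getboolean("Server", "auth_req"))  # True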
Hi! Could you please provide the .json file created under wireguard-dashboard/db? It seems like the database file is causing the problem. You can remove any private information, such as IP addresses, but please leave the status key unchanged. Thank you!
Thanks for getting back to me!
Here is a sample of the file, since we have a lot of peers. The only fields I have changed are the IDs, to protect the public keys, and the private IP addresses.
{"_default": { "1": { "id": "<Peer Public Key>", "name": "", "total_receive": 0, "total_sent": 0, "total_data": 0, "endpoint": "<Peer IP Address>", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.3/32", "traffic": [] }, "2": { "id": "<Peer Public Key>", "name": "", "total_receive": 0, "total_sent": 0, "total_data": 0, "endpoint": "<Peer IP Address>", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.4/32", "traffic": [] }, "3": {" id": "<Peer Public Key>", "name": "", "total_receive": 11.0934, "total_sent": 303.8359, "total_data": 314.9293, "endpoint": "<Peer IP Address>", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.6/32", "traffic": [] }, "4": { "id": "<Peer Public Key>", "name": "", "total_receive": 0, "total_sent": 0, "total_data": 0, "endpoint": "<Peer IP Address>", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.7/32", "traffic": [] }, "5": { "id": "<Peer Public Key>", "name": "", "total_receive": 0, "total_sent": 0, "total_data": 0, "endpoint": "<Peer IP Address>", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.8/32", "traffic": [] }, "6": { "id": "<Peer Public Key>", "name": "", "total_receive": 0.556, "total_sent": 3.1699, "total_data": 3.7259, "endpoint": "<Peer IP Address>", "status": "running", "latest_handshake": "0:00:09", "allowed_ip": "10.200.200.9/32", "traffic": [] }, "7": { "id": "<Peer Public Key>", "name": "", "total_receive": 15.4949, "total_sent": 260.1104, "total_data": 275.6053, "endpoint": "<Peer IP Address>", "status": "running", "latest_handshake": "0:01:38", "allowed_ip": "10.200.200.10/32", "traffic": [] }, "8": { "id": "<Peer Public Key>", "name": "", "total_receive": 0, "total_sent": 0, "total_data": 0, "endpoint": "<Peer IP Address>", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.11/32", "traffic": [] }, "9": { "id": "<Peer Public Key>", "name": "", "total_receive": 0, "total_sent": 0, "total_data": 0, "endpoint": "<Peer IP Address>", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.12/32", "traffic": [] }, "10": { "id": "<Peer Public Key>", "name": "", "total_receive": 0, "total_sent": 0, "total_data": 0, "endpoint": "(none)", "status": "stopped", "latest_handshake": "(None)", "allowed_ip": "10.200.200.14/32", "traffic": [] }
Your database file seems normal. If you remove it and restart the dashboard, does it work, or is it still not working?
Also, did you have any content other than the WireGuard configuration in the .conf file, like comments?
Removing it and restarting the dashboard did not work. There are PostUp and PostDown iptables entries in the conf file for the server. I spun up a test system running Ubuntu 18.04.5 and copied the conf over. I was running into the same issue until I removed the PostUp and PostDown lines. I'm guessing that's it? I changed:
[Interface]
Address = 10.200.200.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens32 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens32 -j MASQUERADE
ListenPort = <Listen port>
PrivateKey = <Private Key here>
to
[Interface]
Address = 10.200.200.1/24
SaveConfig = true
ListenPort = <Listen port>
PrivateKey = <Private Key here>
Then I deleted the db JSON file, restarted the dashboard, and toggled the tunnel off and back on.
This is weird, because I'm using PostUp and PostDown myself too, the dashboard runs with no problem, and our configurations look similar. I will look into this problem and try to fix it. Thank you for reporting it!
Thank you for your help!
Quick update: the peers appear while the tunnel is off, but while it is active the peers do not load. This is on the test system.
Would you mind telling me whether the user that runs the dashboard has the privilege to execute wg show?
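For anyone debugging the same thing, here is a minimal sketch for checking that from Python. The interface name wg0 is an assumption, and the dashboard's actual invocation may differ; written against Python 3.6 to match the reporter's environment:

import subprocess

# Sketch: verify the current user can execute `wg show wg0`. Without root
# (or CAP_NET_ADMIN) this typically fails with a permission error.
try:
    out = subprocess.check_output(
        ["wg", "show", "wg0"],
        stderr=subprocess.STDOUT,
        universal_newlines=True,
    )
    print(out or "interface is up but has no peers")
except subprocess.CalledProcessError as e:
    print("wg show failed:", e.output.strip())
except FileNotFoundError:
    print("wg binary not found on PATH")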
I'm using the default admin user from the ini file and haven't modified any permissions whatsoever. Having said that, I removed a bunch of entries from the conf file (~80 down to 3) and now the dashboard is running as expected. Could it be a memory issue? It seemed like the dashboard would get hosed up when I tried to look at the peers.
Hmmmmm, that might be an issue. I'm gonna test it on my side, since I'm just using it with 10 or fewer peers.
Sounds good. top isn't showing much utilization, but the web page seems to be timing out.
Quick follow-up:
The error is not present in the test machine's logs, so I'm guessing it has to do with the OS or how I set up the other machine.
The web interface works perfectly when the tunnel is not active, but becomes unresponsive once the tunnel is active.
I think this has to do with the large number of peers in our config file; when I reduce the peer count to under ten, the website works as expected.
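If the slowdown really does track the peer count, one plausible culprit (purely an assumption, not a confirmed cause) is issuing a subprocess call per peer. Here is a sketch that reads every peer's stats in a single wg show <interface> dump call instead:

import subprocess

# Sketch: gather all peers in ONE `wg show <iface> dump` call instead of one
# subprocess per peer, which scales poorly as the peer count grows. This is a
# guess at the slowdown's cause, not necessarily what the dashboard does.
def peer_stats(interface="wg0"):
    dump = subprocess.check_output(
        ["wg", "show", interface, "dump"], universal_newlines=True
    )
    peers = {}
    for line in dump.splitlines()[1:]:  # first line describes the interface itself
        fields = line.split("\t")
        public_key = fields[0]
        latest_handshake = int(fields[4])  # unix epoch; 0 means no handshake yet
        rx_bytes, tx_bytes = int(fields[5]), int(fields[6])
        peers[public_key] = {
            "latest_handshake": latest_handshake,
            "rx": rx_bytes,
            "tx": tx_bytes,
        }
    return peers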
I'm gonna close this issue out, since it has been fixed by changing to the required OS.
Hi! Thank you for replying back. I'm gonna re-open this bug report since it's gonna remind me to fix it lol. I'm gonna test on multiple OSes and also simulate more peers running to figure out what is going on ;)
Bug fixed in the newest release ;) Running 80+ peers should be fine now; please file another bug report if it is still causing bugs in the newest version :)