Account/Login POST not reaching the controller
nathantfrank opened this issue · 4 comments
I am attempting to create a Kubernetes template from the master branch, and I will be happy to open a PR when I complete it. However, I am running into an issue that does not appear in the local docker-swarm version.
Whenever I click the login button for local login, I receive a 400 error from https://sso.mydomain.com
My Setup:
- Running in a cloud setup
- Nginx acting as a reverse proxy in front of the project (forces HTTPS)
- sso -reachable-at-> https://sso.mydomain.com
- api -reachable-at-> https://sso.mydomain.com/api
- user-ui-management -reachable-at-> https://user-ui.mydomain.com
- admin-ui -reachable-at-> https://admin-ui.mydomain.com
- database accessible from inside the kube cluster; I also updated the clients to use the new URLs instead of localhost
I have compared the POST requests between the docker-swarm and kube deployments. The differences I have noticed are:
- All of the services are running remote from each other
- I'm using HTTPS
- My request also carries the following headers: Sec-Fetch-Mode: navigate, Sec-Fetch-Site: same-origin, Sec-Fetch-User: ?1
I have tried to capture the 400 POST by overriding OnActionExecuting on the Account controller and logging the request. Sadly, the action is never selected, so the override never runs on the POST.
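For reference, this is roughly the shape of what I tried (a minimal sketch; the logger wiring is illustrative, not the template's actual code):

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

public class AccountController : Controller
{
    private readonly ILogger<AccountController> _logger;

    public AccountController(ILogger<AccountController> logger)
    {
        _logger = logger;
    }

    // Action filters only run after routing has selected an action and the
    // authorization filters (including antiforgery validation, which returns
    // 400 on failure) have passed, so a request rejected at that earlier
    // stage never reaches this override.
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        _logger.LogInformation("Request reached action: {Method} {Path}",
            context.HttpContext.Request.Method,
            context.HttpContext.Request.Path);
        base.OnActionExecuting(context);
    }
}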
Any thoughts on why this is occurring would be appreciated.
Hi @nathantfrank,
At first glance, nothing comes to mind. Maybe it could be something in the nginx config, but it's hard to say, since I believe you can access the other endpoints.
Anyway, I'll share my config; see if it helps. And count on me to help you fix it.
I'm helping a team with a docker-swarm environment that is pretty close to yours. The only difference is that sso and api have different domain names.
- Nginx as reverse-proxy
- sso at: sso.mydomain.com
- api at: api.mydomain.com
...
And everything works fine. Below is my nginx file:
worker_processes 16;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    large_client_header_buffers 4 16k;

    upstream api {
        least_conn;
        server 172.1.1.1:5003 max_fails=3 fail_timeout=5s;
        server 172.1.1.2:5003 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 80;
        server_name api.mydomain.com;

        location / {
            proxy_pass http://api;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    upstream sso {
        least_conn;
        server 172.1.1.1:5000 max_fails=3 fail_timeout=5s;
        server 172.1.1.2:5000 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 80;
        server_name sso.mydomain.com;

        location / {
            proxy_pass http://sso;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            fastcgi_buffers 16 16k;
            fastcgi_buffer_size 32k;
        }
    }

    upstream admin {
        least_conn;
        server 172.1.1.1:4300 max_fails=3 fail_timeout=5s;
        server 172.1.1.2:4300 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 80;
        server_name admin.mydomain.com;

        location / {
            proxy_pass http://admin;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    upstream users {
        least_conn;
        server 172.1.1.1:4200 max_fails=3 fail_timeout=5s;
        server 172.1.1.2:4200 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 80;
        server_name users.mydomain.com;

        location / {
            proxy_pass http://users;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
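One guess, comparing this to your setup: my config only forwards X-Forwarded-For and everything stays on plain HTTP, while your nginx terminates HTTPS. If the app never learns the original scheme, ASP.NET Core's antiforgery and secure-cookie checks can reject the login POST with a 400 before the action runs. The usual fix is to add proxy_set_header X-Forwarded-Proto $scheme; to the sso location and honour it in the app. A minimal sketch of the standard ASP.NET Core forwarded-headers middleware (not code from the template itself):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;

public void Configure(IApplicationBuilder app)
{
    // Must run before authentication/antiforgery so the request scheme is
    // already rewritten to https when they inspect it.
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor
                         | ForwardedHeaders.XForwardedProto
        // By default only loopback proxies are trusted; a proxy elsewhere
        // in the cluster also needs KnownProxies or KnownNetworks set.
    });

    // ... rest of the pipeline
}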
I managed to get it working in development mode over HTTP. I'll continue to look into a production version.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this error still occurs, feel free to reopen!