It's impossible to proxy two ports, 80 and 443, to a single internal server
It's impossible to proxy two ports, 80 and 443, to a single internal server. If I create a proxy host pointing to port 443, then when I try to create another one pointing to port 80, I get a message that the server already exists.
same question here.
Can you give an actual use case for this? If you have a single host name (mysite.domain.com), there typically wouldn't be any reason to forward to both ports internally. Nginx might be listening on both 80 and 443, but it's just going to use the defined proxy upstream for its requests, which almost always means you pick one of the two ports you mentioned.
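To illustrate the point: roughly speaking (this is a simplified sketch, not the exact file Nginx Proxy Manager writes out; the hostname and the 192.168.1.10:8080 upstream are placeholders), a proxy host boils down to one server block with a single upstream, even though it listens on both ports:

```nginx
# Simplified sketch of what a proxy host amounts to -- not the literal
# config Nginx Proxy Manager generates. Hostname and upstream are placeholders.
server {
  listen 80;
  listen 443 ssl;
  server_name mysite.domain.com;

  # ssl_certificate / ssl_certificate_key directives omitted for brevity

  location / {
    # Whatever arrives on 80 or 443 is forwarded to this one upstream;
    # there is no second "slot" for another internal port.
    proxy_pass http://192.168.1.10:8080;
  }
}
```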
If you're trying to maybe send mysite.domain.com to port 80 and mysite.domain.com/cooldemo to port 443, you'd use the custom locations tab.
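Under the hood a custom location amounts to something along these lines (again a sketch; the hostname, the /cooldemo path and the 192.168.1.10 upstream are just examples):

```nginx
# Sketch of a default forward plus one custom location; addresses are examples.
server {
  listen 80;
  server_name mysite.domain.com;

  location / {
    proxy_pass http://192.168.1.10:80;    # default forward target
  }

  location /cooldemo {
    proxy_pass https://192.168.1.10:443;  # the custom location overrides it
  }
}
```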
But almost every time someone asks this, the answer comes down to a misunderstanding of what they actually want to achieve.
If I understand what they're asking correctly, they're talking about the situation where you may need to serve frontend and backend at different ports.
For example, 10.0.0.5:3000 serves the frontend while the backend is served via 10.0.0.5:8000.
Maybe so, but that's not really something nginx is going to fix; that's an architectural issue.
You can partly work around it by making the backend root URL something like /api and then adding a Nginx Proxy Manager definition for the /api route that forwards it to the proper backend (port 8000 in your example).
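For the 10.0.0.5 example above, that workaround would come out to roughly this (a sketch; the /api prefix is an assumption, and the backend has to actually serve its routes under that prefix):

```nginx
# Sketch for the 10.0.0.5 example: everything under /api goes to the
# backend, everything else to the frontend. The /api prefix is an
# assumption, not something Nginx Proxy Manager decides for you.
server {
  listen 443 ssl;
  server_name myapp.com;

  # certificate directives omitted for brevity

  location / {
    proxy_pass http://10.0.0.5:3000;  # frontend
  }

  location /api {
    proxy_pass http://10.0.0.5:8000;  # backend
  }
}
```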
You could also serve the API on a different domain altogether (api.myapp.com, for example). But either way, you can't turn a single address (myapp.com on port 443) into two internal addresses without knowing something about the architecture of the setup.
It is an architectural problem for sure.
I think they're using a low-code framework like Reflex, so they might not have much control over the ports or endpoints.
Reflex has its own Docker setup where a separate nginx proxy coordinates the URL endpoints and decides where requests should go. Using something like that would solve the issue: Nginx Proxy Manager could then sit in front and proxy the connections.
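As a rough illustration (the container names, ports and the /backend path here are made up, not Reflex's actual defaults), that kind of internal proxy fans a single port out to the two containers, so Nginx Proxy Manager only has to forward to one upstream:

```nginx
# Illustrative only -- container names, ports and paths are not taken from
# Reflex's real Docker setup. The point is that one internal proxy splits
# a single port between the frontend and backend containers, and Nginx
# Proxy Manager just forwards everything to this proxy.
server {
  listen 80;

  location / {
    proxy_pass http://frontend:3000;
  }

  location /backend {
    proxy_pass http://backend:8000;

    # WebSocket upgrade headers, since app frameworks often rely on them
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}
```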