Proxy SSL CN FQDN mapping feature breaks client registration in multi-proxy scenario with wildcard certificates
nsneck opened this issue · 1 comment
Problem description
We have multiple Uyuni proxies that all use the same *.company.com SSL wildcard certificate. After a recent Uyuni upgrade, new client registration broke for one of the proxies. The root cause appears to be PR #8627, which causes duplicate rows in the rhnserverfqdn table for our proxy.
What seems to have happened: proxy1.company.com was upgraded first, and the SSL CN FQDN mapping feature introduced in that PR mapped the SSL CN "*.company.com" to proxy1.company.com. When we then upgraded proxy2.company.com, the feature found the same "*.company.com" SSL CN on that proxy and created a new rhnserverfqdn entry "proxy2.company.com" pointing at the server_id already used by the "*.company.com" CN entry, which is the server_id of proxy1.company.com. We therefore end up with proxy1.company.com, proxy2.company.com and the company.com SSL CN FQDN entries all sharing the same server_id, and with proxy2.company.com having duplicate rows: the older row is correct and carries proxy2's own server_id, while the newer row is incorrect and carries the other proxy's server_id.
This in turn makes client registration via proxy2.company.com fail after the salt key is accepted: because of the duplicate rows, the Server.findByFqdn query (called from lookupProxyServer) fails to find the correct proxy FQDN, so the corresponding server entry never gets created for the salt key.
Here are the relevant logs from the failing client registration:
2024-08-30 11:56:10,845 [salt-event-thread-3] ERROR com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction - Error registering minion id: server123.company.com
com.redhat.rhn.common.hibernate.HibernateRuntimeException: Executing query Server.findByFqdn with params {name=proxy2.company.com} failed
at com.redhat.rhn.common.hibernate.HibernateFactory.lookupObjectByNamedQuery(HibernateFactory.java:203) ~[rhn.jar:?]
at com.redhat.rhn.common.hibernate.HibernateFactory.lookupObjectByNamedQuery(HibernateFactory.java:176) ~[rhn.jar:?]
at com.redhat.rhn.domain.server.ServerFactory.lambda$findByFqdn$14(ServerFactory.java:1038) ~[rhn.jar:?]
at java.util.Optional.map(Optional.java:265) ~[?:?]
at com.redhat.rhn.domain.server.ServerFactory.findByFqdn(ServerFactory.java:1038) ~[rhn.jar:?]
at com.redhat.rhn.domain.server.ServerFactory.lookupProxyServer(ServerFactory.java:211) ~[rhn.jar:?]
at com.redhat.rhn.domain.server.MinionServer.updateServerPaths(MinionServer.java:279) ~[rhn.jar:?]
at com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction.finalizeMinionRegistration(RegisterMinionEventMessageAction.java:595) ~[rhn.jar:?]
at com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction.lambda$registerMinion$7(RegisterMinionEventMessageAction.java:256) ~[rhn.jar:?]
at com.suse.utils.Opt.consume(Opt.java:91) ~[rhn.jar:?]
at com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction.lambda$registerMinion$9(RegisterMinionEventMessageAction.java:253) ~[rhn.jar:?]
at com.suse.utils.Opt.consume(Opt.java:91) ~[rhn.jar:?]
at com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction.lambda$registerMinion$18(RegisterMinionEventMessageAction.java:252) ~[rhn.jar:?]
at com.suse.utils.Opt.consume(Opt.java:91) ~[rhn.jar:?]
at com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction.registerMinion(RegisterMinionEventMessageAction.java:250) ~[rhn.jar:?]
at com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction.lambda$registerMinion$5(RegisterMinionEventMessageAction.java:212) ~[rhn.jar:?]
at com.suse.utils.Opt.consume(Opt.java:94) ~[rhn.jar:?]
at com.suse.manager.reactor.messaging.RegisterMinionEventMessageAction.lambda$registerMinion$6(RegisterMinionEventMessageAction.java:210) ~[rhn.jar:?]
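The HibernateRuntimeException above is presumably a non-unique result: the registration code expects Server.findByFqdn to return at most one server, so the query blows up as soon as two rhnserverfqdn rows carry the same name. As a deliberately simplified illustration in plain JPA (hypothetical, not the actual Uyuni code; the entityManager variable is an assumption, while the query and parameter names are taken from the log above):

    // Hypothetical illustration: a lookup expecting a single row fails once
    // rhnserverfqdn contains two rows with the same name.
    TypedQuery<Server> query = entityManager
            .createNamedQuery("Server.findByFqdn", Server.class)
            .setParameter("name", "proxy2.company.com");
    // With the duplicate rows shown below, a single-result call like this throws
    // NonUniqueResultException, which then surfaces as the HibernateRuntimeException in the trace.
    Server proxy = query.getSingleResult();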
Here are the proxies in the rhnserverfqdn table; notice how proxy2 has a duplicate row with the same server_id as proxy1:
uyuni=# select * from rhnserverfqdn where name like '%proxy-%.company%';
id | name | server_id | is_primary | created | modified
-----+-------------------------------------+------------+------------+-------------------------------+-------------------------------
79 | proxy1.company.com | 1000010027 | N | 2023-06-06 15:22:05.718447+03 | 2023-06-06 15:22:05.718447+03
250 | proxy2.company.com | 1000010088 | N | 2024-06-04 14:18:32.473069+03 | 2024-06-04 14:18:32.473069+03
322 | proxy2.company.com | 1000010027 | N | 2024-08-13 14:18:47.978935+03 | 2024-08-13 14:18:47.978935+03
(3 rows)
Here's what the "company.com" wildcard SSL certificate CN looks like in the rhnserverfqdn table; notice that it has the same server_id as proxy1 and as the newer proxy2 row. One can also observe that the GlobalSign CA has, for some reason, been mapped to the same server_id as the older proxy2.company.com row. Not sure why.
uyuni=# select * from rhnserverfqdn where name like 'company.com';
id | name | server_id | is_primary | created | modified
-----+-----------+------------+------------+-------------------------------+-------------------------------
299 | company.com | 1000010027 | N | 2024-07-25 09:49:57.035761+03 | 2024-07-25 09:49:57.035761+03
(1 row)
uyuni=# select * from rhnserverfqdn where name like '%Global%';
id | name | server_id | is_primary | created | modified
-----+------------------------------------+------------+------------+-------------------------------+-------------------------------
321 | GlobalSign GCC R6 AlphaSSL CA 2023 | 1000010088 | N | 2024-08-13 14:18:28.017058+03 | 2024-08-13 14:18:28.017058+03
(1 row)
Two questions:
- Can this problem be fixed? Or, at the very least, could we get an option to disable the SSL CN FQDN mapping feature if it is not strictly necessary?
- In our case, is it safe to simply remove the row with id 322 from the rhnserverfqdn table? Will that row be repopulated on proxy boot/upgrade or something similar before a fix is implemented and released in Uyuni?
Sorry for the complicated explanation; I can try to explain this better if something is still unclear.
Pinging @cbosdo as the author of the PR that seems to have caused this issue.
Steps to reproduce
- Install Uyuni server
- Install proxy1 using a shared wildcard certificate and register the proxy to Uyuni
- Install proxy2 using the same shared wildcard certificate and register the proxy to Uyuni
- Attempt to register a new salt client via proxy2; after the salt key is accepted, the server object does not get created and an error is logged
Uyuni version
2024.07
Uyuni proxy version (if used)
2024.07
Useful logs
No response
Additional information
No response
It smells like this should rule out wildcard FQDNs:
private Optional<Server> findByAnyFqdn(Set<String> fqdns) {
    for (String fqdn : fqdns) {
        Optional<Server> server = ServerFactory.findByFqdn(fqdn);
        if (server.isPresent()) {
            return server;
        }
    }
    return Optional.empty();
}
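A minimal sketch of what that could look like, assuming it is enough to simply skip any candidate name containing a wildcard before doing the lookup (an illustration of the idea, not a tested fix):

private Optional<Server> findByAnyFqdn(Set<String> fqdns) {
    for (String fqdn : fqdns) {
        // Wildcard names such as "*.company.com" come from the certificate CN,
        // not from a real host, so never let them resolve to a server row.
        if (fqdn.contains("*")) {
            continue;
        }
        Optional<Server> server = ServerFactory.findByFqdn(fqdn);
        if (server.isPresent()) {
            return server;
        }
    }
    return Optional.empty();
}

Whether filtering here alone would also prevent the duplicate rhnserverfqdn rows from being created in the first place is less clear.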