hakwerk/labca

Service does not start

Ksdmg opened this issue · 12 comments

Ksdmg commented

Hello,

The service does not start anymore. I updated to the latest version, but I get the following error:

labca-boulder-1  | Error validating config file "labca/config/wfe2.json" for command "boulder-wfe2": Key: 'Config.WFE.DirectoryCAAIdentity' Error:Field validation for 'DirectoryCAAIdentity' failed on the 'fqdn' tag

Because I'm using different networks, I called my home network "home". This is what's inside the config:
"directoryCAAIdentity": "home"
This was working a while ago, but I can't remember on what version. How can I resolve this?

hakwerk commented

Looks like LabCA v23.03 is the last version that still works, as Let's Encrypt added strict validations in their boulder release-2023-03-22. That field now needs to be a domain of at least two parts (aaa.bbb).

I'll see if I can remove that validation again.
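
For reference, a minimal sketch of what that check amounts to, assuming the go-playground/validator "fqdn" tag that the error message points at (the struct below is a stand-in for illustration, not the actual boulder config type):

package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

// Stand-in for the validated part of the WFE config; the real boulder
// struct is much larger and may use additional tags on this field.
type wfeConfig struct {
	DirectoryCAAIdentity string `validate:"fqdn"`
}

func main() {
	v := validator.New()
	for _, id := range []string{"home", "my.home"} {
		err := v.Struct(wfeConfig{DirectoryCAAIdentity: id})
		// "home" fails the fqdn tag (no second label), "my.home" passes
		fmt.Printf("%-8s -> %v\n", id, err)
	}
}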

Ksdmg commented

Thanks for the quick response! So if I understood you correctly, my.home would work? Or does it need to be an official TLD?

hakwerk commented

As far as I can tell it is only a very generic test, so yes, I think my.home would work.
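
So changing the relevant fragment of labca/config/wfe2.json along these lines should get past the validation (assuming the key sits under the top-level "wfe" section as in the stock config; all other keys omitted):

{
  "wfe": {
    "directoryCAAIdentity": "my.home"
  }
}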

Ksdmg commented

@hakwerk, so I just tested it, and manually changing the config to my.home did work. Thanks!
I guess you should implement some kind of warning during the upgrade or creation so other users do not make the same mistake as I did.

Ksdmg commented

@hakwerk, it was working for a few days, but after a reboot I found that the value in the config file had been reset. When I now try to change it back to my.home, it doesn't work. I cannot find any information that helps me debug the issue. The following logs just keep repeating, so it's stuck in an endless loop.

labca-boulder-1  | goroutine 1 [running]:
labca-boulder-1  | github.com/letsencrypt/boulder/cmd.Fail({0xc000178c80, 0x3a})
labca-boulder-1  | 	/opt/boulder/cmd/shell.go:352 +0x59
labca-boulder-1  | main.main()
labca-boulder-1  | 	/opt/boulder/test/health-checker/main.go:87 +0x38a
labca-boulder-1  | Starting service nonce-service-zinc
labca-boulder-1  | Starting service ct-test-srv
labca-boulder-1  | Starting service akamai-test-srv
labca-boulder-1  | Starting service boulder-publisher-1
labca-boulder-1  | Starting service s3-test-srv
labca-boulder-1  | Starting service boulder-sa-2
labca-boulder-1  | Starting service boulder-publisher-2
labca-boulder-1  | Starting service log-validator
labca-boulder-1  | Starting service boulder-sa-1
labca-boulder-1  | Starting service boulder-remoteva-b
labca-boulder-1  | Starting service mail-test-srv
labca-boulder-1  | Starting service nonce-service-taro
labca-boulder-1  | Starting service crl-storer
labca-boulder-1  | Starting service akamai-purger
labca-boulder-1  | Starting service boulder-remoteva-a
labca-boulder-1  | Starting service boulder-ca-b
labca-boulder-1  | Error starting service boulder-ca-b: Command '['./bin/health-checker', '-addr', 'ca2.service.consul:9093', '-config', 'labca/config/health-checker.json']' returned non-zero exit status 2.
labca-boulder-1  | 2023-07-07T13:41:02.267095+00:00Z akamai-purger[825]: 6 akamai-purger 2f7Ciwo Shutting down; queue is already empty.
labca-boulder-1  |  * Starting enhanced syslogd rsyslogd
labca-boulder-1  |    ...done.
labca-boulder-1  | Connected to boulder-mysql:3306
labca-boulder-1  | 
labca-boulder-1  | boulder_sa_test
labca-boulder-1  | Already exists - skipping create
labca-boulder-1  | Applied 0 migrations
labca-boulder-1  | Added users from ../db-users/boulder_sa.sql
labca-boulder-1  | 
labca-boulder-1  | boulder_sa_integration
labca-boulder-1  | Already exists - skipping create
labca-boulder-1  | Applied 0 migrations
labca-boulder-1  | Added users from ../db-users/boulder_sa.sql
labca-boulder-1  | 
labca-boulder-1  | incidents_sa_test
labca-boulder-1  | Already exists - skipping create
labca-boulder-1  | Applied 0 migrations
labca-boulder-1  | Added users from ../db-users/incidents_sa.sql
labca-boulder-1  | 
labca-boulder-1  | incidents_sa_integration
labca-boulder-1  | Already exists - skipping create
labca-boulder-1  | Applied 0 migrations
labca-boulder-1  | Added users from ../db-users/incidents_sa.sql
labca-boulder-1  | 
labca-boulder-1  | database setup complete
labca-boulder-1  | CKR_SLOT_ID_INVALID: Slot 0 does not exist.
labca-boulder-1  | Found slot 1708694314 with matching token label.
labca-boulder-1  | The key pair has been imported.
labca-boulder-1  | CKR_SLOT_ID_INVALID: Slot 1 does not exist.
labca-boulder-1  | Found slot 1738539299 with matching token label.
labca-boulder-1  | The key pair has been imported.
labca-boulder-1  | echo bin/crl-updater bin/boulder-wfe2 bin/ocsp-responder bin/id-exporter bin/boulder-va bin/log-validator bin/crl-checker bin/boulder-sa bin/boulder bin/contact-auditor bin/admin-revoker bin/cert-checker bin/boulder-observer bin/akamai-purger bin/notify-mailer bin/rocsp-tool bin/crl-storer bin/expiration-mailer bin/boulder-ra bin/caa-log-checker bin/nonce-service bin/orphan-finder bin/bad-key-revoker bin/boulder-publisher bin/boulder-ca bin/reversed-hostname-checker bin/mail-tester bin/ceremony
labca-boulder-1  | bin/crl-updater bin/boulder-wfe2 bin/ocsp-responder bin/id-exporter bin/boulder-va bin/log-validator bin/crl-checker bin/boulder-sa bin/boulder bin/contact-auditor bin/admin-revoker bin/cert-checker bin/boulder-observer bin/akamai-purger bin/notify-mailer bin/rocsp-tool bin/crl-storer bin/expiration-mailer bin/boulder-ra bin/caa-log-checker bin/nonce-service bin/orphan-finder bin/bad-key-revoker bin/boulder-publisher bin/boulder-ca bin/reversed-hostname-checker bin/mail-tester bin/ceremony
labca-boulder-1  | GOBIN=/opt/boulder/bin GO111MODULE=on go install -mod=vendor -buildvcs=false -tags "integration" ./...
labca-boulder-1  | ./link.sh
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Creating HTTP-01 challenge server on 10.77.77.77:80
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Creating HTTPS HTTP-01 challenge server on 10.77.77.77:443
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Creating TCP and UDP DNS-01 challenge server on :8053
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Creating TCP and UDP DNS-01 challenge server on :8054
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Creating TLS-ALPN-01 challenge server on 10.88.88.88:443
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Answering A queries with 10.77.77.77 by default
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Starting challenge servers
labca-boulder-1  | pebble-challtestsrv - 2023/07/07 13:41:35 Starting management server on :8055
labca-boulder-1  | 2023-07-07T13:41:35.923467+00:00Z boulder-sa[668]: 6 boulder-sa 7NqW9AQ Versions: boulder-sa=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | 2023-07-07T13:41:35.928189+00:00Z boulder-sa[668]: 6 boulder-sa 4OjY7Q8 transitioning health of "sa.StorageAuthorityReadOnly" from "NOT_SERVING" to "SERVING"
labca-boulder-1  | 2023-07-07T13:41:35.929116+00:00Z boulder-sa[668]: 6 boulder-sa wovCVQA transitioning health of "sa.StorageAuthority" from "NOT_SERVING" to "SERVING"
labca-boulder-1  | Connecting to sa1.service.consul:9095 health service
labca-boulder-1  | 2023-07-07T13:41:36.049920+00:00Z nonce-service[684]: 6 nonce-service 6rCI0As Versions: nonce-service=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to nonce2.service.consul:9101 health service
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4606 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE31BxBVCdehxOC35jJzvAPNrU4ZjNXbmxS+zSN5DSkpJWQUp5wUHPGnXiSCtx7jXnTYLVzslIyXWpNN8m8BiKjQ== and log ID Oqk/Tv0cUSnEJ4bZa0eprm3IQQ4XgNcv20/bXixlxnQ=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4501 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEKtnFevaXV/kB8dmhCNZHmxKVLcHX1plaAsY9LrKilhYxdmQZiu36LvAvosTsqMVqRK9a96nC8VaxAdaHUbM8EA== and log ID 3Zk0/KXnJIDJVmh9gTSZCEmySfe1adjHvKs/XMHzbmQ=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4608 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsHFSkgrlrwIY0PG79tOZhPvBzrnrpbrWa3pG2FfkLeEJQ2Uvgw1oTZZ+oXcrm4Yb3khWDbpkzDbupI+e8xloeA== and log ID ck+wYNY31I+5XBC7htsdNdYVjOSm4YgnDxlzO9PouwQ=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4512 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEFRu37ZRLg8lT4rVQwMwh4oAOpXb4Sx+9hgQ+JFCjmAv3oDV+sDOMsC7hULkGTn+LB5L1SRo/XIY4Kw5V+nFXgg== and log ID NvR3OcSRDDWwwb0Hg+t9aKCpL3+tDuk99WrHkTwabYo=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4607 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEAjRx6Mhc/U4Ye7NzsZ7bbKMGhKVpGZHpZJMzLzNIveBAPh5OBDHpSdn9RY58t4diH8YLjqCi9o+k1T5RwiFbfQ== and log ID e90gTyc4KkZpHv2pgeSOS224Md6/21UmWIxRF9mXveI=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4510 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEyw1HymhJkuxSIgt3gqW3sVXqMqB3EFsXcMfPFo0vYwjNiRmCJDXKsR0Flp7MAK+wc3X/7Hpc8liUbMhPet7tEA== and log ID FuhpwdGV6tfD+Jca4/B2AfeM4badMahSGLaDfzGoFQg=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4609 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEMVjHUOxzh2flagPhuEYy/AhAlpD9qqACg4fGcCxOhLU35r21CQXzKDdCHMu69QDFd6EAe8iGFsybg+Yn4/njtA== and log ID FWPcPPStmIK3l/jogz7yLYUtafS44cpLs6hQ3HrjdUQ=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4600 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExhriVaEwBOtdNzg5EOtJBHl/u+ua1FtCR/CBXQ1kvpFelcP3gozLNXyxV/UexuifpmzTN31CdfdHv1kK3KDIxQ== and log ID OJiMlNA1mMOTLd/pI7q68npCDrlsQeFaqAwasPwEvQM=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4601 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE7uzW0zXQpWIk7MZUBdTu1muNzekMCIv/kn16+ifndQ584DElobOJ0ZlcACz9WdFyGTjOCfAqBmFybX2OJKfFVg== and log ID 2OHE0zamM5iS1NRFWJf9N6CWxdJ93je+leBX371vC+k=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4605 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEzmpksKS/mHgJZ821po3ldwonsz3K19jwsZgNSGYvEuzAVtWbGfY+6aUXua7f8WK8l2amHETISOY4JTRwk5QFyw== and log ID EOPWVkKfDlS3lQe5brFUMsEYAJ8I7uZr7z55geKzv7c=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4603 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE2EFdA2UBfbJ2Sw1413hBN9YESyABmTGbdgcMh0l/GyV3eFrFjcVS0laNphkfRZ+qkcMbeF+IIHqVzxHAM/2mQQ== and log ID HRrTQca8iy14Qbrw6/itgVzVWTcaENF3tWnJP743pq8=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4602 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE/s5W5OHfowdLA7KerJ+mOizfHJE6Snfib8ueoBYl8Y12lpOoJTtCmmrx4m9KAb9AptInWpGrIaLY+5Y29l2eGw== and log ID z7banNzwEtmRiittSviBYKjWmVltXNBhLfudmDXIcoU=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4500 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEYggOxPnPkzKBIhTacSYoIfnSL2jPugcbUKx83vFMvk5gKAz/AGe87w20riuPwEGn229hKVbEKHFB61NIqNHC3Q== and log ID KHYaGJAn++880NYaAY12sFBXKcenQRvMvfYE9F1CYVM=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4604 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEAMSHwrzvr/KvNmUT55+uQo7CXQLPx1X+qEdKGekUg1q/InN/E37bCY/x45wC00qgiE0D3xoxnUJbKaCQcAX39w== and log ID UtToynGEyMkkXDMQei8Ll54oMwWHI0IieDEKs12/Td4=
labca-boulder-1  | 2023/07/07 13:41:36 ct-test-srv on :4511 with pubkey MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEFRu37ZRLg8lT4rVQwMwh4oAOpXb4Sx+9hgQ+JFCjmAv3oDV+sDOMsC7hULkGTn+LB5L1SRo/XIY4Kw5V+nFXgg== and log ID NvR3OcSRDDWwwb0Hg+t9aKCpL3+tDuk99WrHkTwabYo=
labca-boulder-1  | 2023-07-07T13:41:36.296559+00:00Z boulder-publisher[707]: 6 boulder-publisher l_3thQ8 Versions: boulder-publisher=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to publisher1.service.consul:9091 health service
labca-boulder-1  | 2023-07-07T13:41:36.404645+00:00Z log-validator[723]: 6 log-validator 8uCB5gQ Waiting for /var/log/crl-updater.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.404819+00:00Z log-validator[723]: 6 log-validator rsLnrwg Waiting for /var/log/bad-key-revoker.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.405235+00:00Z log-validator[723]: 6 log-validator g5CP7Ac Waiting for /var/log/boulder-wfe2.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.405426+00:00Z log-validator[723]: 6 log-validator x8_tqgs Waiting for /var/log/ocsp-responder.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.405685+00:00Z log-validator[723]: 6 log-validator ucH95gQ Waiting for /var/log/boulder-ca.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.405978+00:00Z log-validator[723]: 6 log-validator xdaeiQk Waiting for /var/log/boulder-observer.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.406373+00:00Z log-validator[723]: 6 log-validator 67f_wwg Waiting for /var/log/boulder-ra.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.406404+00:00Z log-validator[723]: 6 log-validator _uqR4wM Waiting for /var/log/boulder-remoteva.log to appear...
labca-boulder-1  | 2023-07-07T13:41:36.509885+00:00Z boulder-remoteva[732]: 6 boulder-remoteva rd2Trgk Versions: boulder-remoteva=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to rva1.service.consul:9097 health service
labca-boulder-1  | 2023-07-07T13:41:36.805752+00:00Z mail-test-srv[754]: 6 mail-test-srv 3dLodQA mail-test-srv: Got connection from 127.0.0.1:49122
labca-boulder-1  | 2023-07-07T13:41:36.805817+00:00Z mail-test-srv[754]: 6 mail-test-srv hfHH5Qg 2023/07/07 13:41:36 mail-test-srv: 127.0.0.1:49122: readline: EOF
labca-boulder-1  | 2023-07-07T13:41:36.824852+00:00Z boulder-remoteva[761]: 6 boulder-remoteva rd2Trgk Versions: boulder-remoteva=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to rva1.service.consul:9098 health service
labca-boulder-1  | 2023-07-07T13:41:37.071212+00:00Z boulder-publisher[783]: 6 boulder-publisher l_3thQ8 Versions: boulder-publisher=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to publisher2.service.consul:9091 health service
labca-boulder-1  | 2023-07-07T13:41:37.179325+00:00Z boulder-va[798]: 6 boulder-va 6uWpyAo Versions: boulder-va=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to va1.service.consul:9092 health service
labca-boulder-1  | 2023-07-07T13:41:37.303279+00:00Z boulder-sa[814]: 6 boulder-sa 7NqW9AQ Versions: boulder-sa=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | 2023-07-07T13:41:37.307062+00:00Z boulder-sa[814]: 6 boulder-sa 4OjY7Q8 transitioning health of "sa.StorageAuthorityReadOnly" from "NOT_SERVING" to "SERVING"
labca-boulder-1  | 2023-07-07T13:41:37.307525+00:00Z boulder-sa[814]: 6 boulder-sa wovCVQA transitioning health of "sa.StorageAuthority" from "NOT_SERVING" to "SERVING"
labca-boulder-1  | Connecting to sa2.service.consul:9095 health service
labca-boulder-1  | 2023-07-07T13:41:37.425301+00:00Z akamai-purger[830]: 6 akamai-purger j8S8_Ao Versions: akamai-purger=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | 2023-07-07T13:41:37.531120+00:00Z nonce-service[838]: 6 nonce-service 6rCI0As Versions: nonce-service=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to nonce1.service.consul:9101 health service
labca-boulder-1  | 2023-07-07T13:41:37.661794+00:00Z boulder-va[860]: 6 boulder-va 6uWpyAo Versions: boulder-va=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | Connecting to va2.service.consul:9092 health service
labca-boulder-1  | 2023-07-07T13:41:37.888175+00:00Z crl-storer[885]: 6 crl-storer l9z3xQc Versions: crl-storer=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | 2023-07-07T13:41:38.012169+00:00Z boulder-ca[893]: 6 boulder-ca m_DExQ0 Versions: boulder-ca=(Unspecified Unspecified) Golang=(go1.20.4) BuildHost=(Unspecified)
labca-boulder-1  | 2023-07-07T13:41:38.012272+00:00Z boulder-ca[893]: 6 boulder-ca lLmugQE loading hostname policy, sha256: 5476058d953ee5182d7d3b7d2e7b45e1c4b52a8ed976ea62387ff94a87b3139e
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1000 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1001 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1000 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1001 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1000 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1001 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1000 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1000 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 1000 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 891 ms
labca-boulder-1  | Connecting to ca1.service.consul:9093 health service
labca-boulder-1  | got error connecting to health service ca1.service.consul:9093: grpc.health.v1.Health.Check timed out after 0 ms
labca-boulder-1  | 2023-07-07T13:41:48.004915+00:00Z health-checker[894]: 3 health-checker tOCy-gI [AUDIT] timed out waiting for ca1.service.consul:9093 health check
labca-boulder-1  | panic: timed out waiting for ca1.service.consul:9093 health check

hakwerk commented

Is the labca-bconsul-1 container running ok? I've seen this behaviour when that container is not running.
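
Something like this (assuming the default labca container names) should show whether it keeps restarting and what it logs:

docker ps --filter name=labca-bconsul-1
docker logs --tail 50 labca-bconsul-1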

Ksdmg commented

@hakwerk, this is the output of docker ps:

root@homeca:~# docker ps
CONTAINER ID   IMAGE                                           COMMAND                  CREATED        STATUS          PORTS                                                                      NAMES
493bd416bb8a   letsencrypt/boulder-tools:go1.20.4_2023-05-02   "./setup.sh"             24 hours ago   Up 43 seconds   3000/tcp                                                                   labca-gui-1
fe8c6fe9fdc6   letsencrypt/boulder-tools:go1.20.4_2023-05-02   "labca/entrypoint.sh"    24 hours ago   Up 42 seconds   4001-4003/tcp                                                              labca-boulder-1
ab1c9d6e9d48   mariadb:10.5                                    "docker-entrypoint.s…"   24 hours ago   Up 43 seconds   3306/tcp                                                                   labca-bmysql-1
56db6ce97a26   letsencrypt/boulder-tools:go1.20.4_2023-05-02   "./control.sh"           11 days ago    Up 43 seconds   3030/tcp                                                                   labca-control-1
1d051d9e5a32   hashicorp/consul:1.14.2                         "docker-entrypoint.s…"   11 days ago    Up 42 seconds   8300-8302/tcp, 8500/tcp, 8301-8302/udp, 8600/tcp, 8600/udp                 labca-bconsul-1
43af47a1b7f0   nginx:1.21.6                                    "/docker-entrypoint.…"   2 months ago   Up 42 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   labca-nginx-1

Ksdmg commented

@hakwerk, and the one that keeps restarting is labca-boulder-1:

root@homeca:~# docker ps
CONTAINER ID   IMAGE                                           COMMAND                  CREATED        STATUS              PORTS                                                                      NAMES
493bd416bb8a   letsencrypt/boulder-tools:go1.20.4_2023-05-02   "./setup.sh"             24 hours ago   Up About a minute   3000/tcp                                                                   labca-gui-1
fe8c6fe9fdc6   letsencrypt/boulder-tools:go1.20.4_2023-05-02   "labca/entrypoint.sh"    24 hours ago   Up 3 seconds        4001-4003/tcp                                                              labca-boulder-1
ab1c9d6e9d48   mariadb:10.5                                    "docker-entrypoint.s…"   24 hours ago   Up About a minute   3306/tcp                                                                   labca-bmysql-1
56db6ce97a26   letsencrypt/boulder-tools:go1.20.4_2023-05-02   "./control.sh"           11 days ago    Up About a minute   3030/tcp                                                                   labca-control-1
1d051d9e5a32   hashicorp/consul:1.14.2                         "docker-entrypoint.s…"   11 days ago    Up About a minute   8300-8302/tcp, 8500/tcp, 8301-8302/udp, 8600/tcp, 8600/udp                 labca-bconsul-1
43af47a1b7f0   nginx:1.21.6                                    "/docker-entrypoint.…"   2 months ago   Up About a minute   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   labca-nginx-1

Ksdmg commented

@hakwerk, the container logs from labca-bconsul-1 show the following in an endless loop:

==> Starting Consul agent...
              Version: '1.14.2'
           Build Date: '2022-11-30 19:54:31 +0000 UTC'
              Node ID: '6345c1e0-ac34-1217-caed-6cabfcf29571'
            Node name: '1d051d9e5a32'
           Datacenter: 'dc1' (Segment: '<all>')
               Server: true (Bootstrap: false)
          Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, gRPC-TLS: 8503, DNS: 53)
         Cluster Addr: 10.55.55.10 (LAN: 8301, WAN: 8302)
    Gossip Encryption: false
     Auto-Encrypt-TLS: false
            HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2
             gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
     Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2

==> Log data will now stream in as it occurs:

2023-07-07T13:49:32.722Z [ERROR] agent.server: error performing anti-entropy sync of federation state: error="context canceled"
==> Starting Consul agent...
              Version: '1.14.2'
           Build Date: '2022-11-30 19:54:31 +0000 UTC'
              Node ID: 'fd016cac-7c80-e85d-9fde-b5d17ee720d6'
            Node name: '1d051d9e5a32'
           Datacenter: 'dc1' (Segment: '<all>')
               Server: true (Bootstrap: false)
          Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, gRPC-TLS: 8503, DNS: 53)
         Cluster Addr: 10.55.55.10 (LAN: 8301, WAN: 8302)
    Gossip Encryption: false
     Auto-Encrypt-TLS: false
            HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2
             gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
     Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2

==> Log data will now stream in as it occurs:

hakwerk commented

With release v23.07 the original value of "home" will be accepted again.

No idea why the consul container isn't running properly (and, as a consequence, neither is the boulder container). You can try stopping, removing and recreating the container, which often resolves weird issues like this:

docker compose stop bconsul
docker compose rm bconsul
docker compose up -d bconsul
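
If the boulder container still does not come up after that, something along these lines (run from the compose directory) should show the combined startup logs of both containers:

docker compose logs -f --tail 100 bconsul boulder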

Ksdmg commented

@hakwerk, I restored a working backup of my VM and started the update. It works for me, thanks!

hakwerk commented

This error from the bconsul container:

[ERROR] agent.server: error performing anti-entropy sync of federation state: error="context canceled"

and this error in the boulder container:

bind: cannot assign requested address

seem to be caused by a bug in the docker compose plugin v2.19.x related to containers with multiple networks.

The workaround for anyone else hitting that issue, while plugin version v2.20.0 is not yet available in the Debian/Ubuntu repositories (check with docker compose version after running sudo apt update), is to downgrade to version v2.18.1:

sudo apt list docker-compose-plugin -a

Look up the exact version name for your specific OS (e.g. 2.18.1-1~debian.12~bookworm) and downgrade like this:

sudo apt install docker-compose-plugin=2.18.1-1~debian.12~bookworm

Now remove and start those containers:

cd /home/labca/boulder
docker compose stop boulder bconsul
docker compose rm -f boulder bconsul
docker compose up -d boulder bconsul
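
To keep apt from pulling the broken 2.19.x version back in before v2.20.0 becomes available, the package can optionally be put on hold (and released again later with sudo apt-mark unhold):

sudo apt-mark hold docker-compose-plugin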