kroxylicious/kroxylicious-junit5-extension

Unable to stand up a 3-controller KRaft-based cluster

Closed · 1 comment

Describe the bug

I'm unable to stand up a 3-controller KRaft-based cluster.

To Reproduce

    @KRaftCluster(numControllers = 3)
    KafkaCluster cluster;

    @Test
    public void shouldWork() throws ExecutionException, InterruptedException {
        // Describe the cluster to see which nodes actually came up
        try (var admin = Admin.create(Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, cluster.getBootstrapServers()))) {
            DescribeClusterResult describeClusterResult = admin.describeCluster();
            System.out.printf("Nodes %s%n", describeClusterResult.nodes().get());
            System.out.printf("Controller %s%n", describeClusterResult.controller().get());
        }
    }

Declaring the cluster as TestcontainersKafkaCluster does not make a difference.

Expected behavior

The cluster should come up and the test should pass after printing the nodes and the controller.

Logs

The test ultimately fails with this stack trace:

org.awaitility.core.ConditionTimeoutException: Condition with io.kroxylicious.testing.kafka.invm.InVMKafkaCluster was not fulfilled within 30 seconds.

	at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
	at org.awaitility.core.CallableCondition.await(CallableCondition.java:78)
	at org.awaitility.core.CallableCondition.await(CallableCondition.java:26)
	at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:985)
	at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:954)
	at io.kroxylicious.testing.kafka.invm.InVMKafkaCluster.tryToStartServerWithRetry(InVMKafkaCluster.java:198)
	at io.kroxylicious.testing.kafka.invm.InVMKafkaCluster.lambda$start$5(InVMKafkaCluster.java:186)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:290)
	at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:754)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
	at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:686)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
	at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:765)
	at io.kroxylicious.testing.kafka.invm.InVMKafkaCluster.start(InVMKafkaCluster.java:184)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.createCluster(KafkaClusterExtension.java:956)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.lambda$getCluster$12(KafkaClusterExtension.java:814)
	at org.junit.jupiter.engine.execution.ExtensionValuesStore.lambda$getOrComputeIfAbsent$4(ExtensionValuesStore.java:86)
	at org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.computeValue(ExtensionValuesStore.java:223)
	at org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.get(ExtensionValuesStore.java:211)
	at org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.evaluate(ExtensionValuesStore.java:191)
	at org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.access$100(ExtensionValuesStore.java:171)
	at org.junit.jupiter.engine.execution.ExtensionValuesStore.getOrComputeIfAbsent(ExtensionValuesStore.java:89)
	at org.junit.jupiter.engine.execution.ExtensionValuesStore.getOrComputeIfAbsent(ExtensionValuesStore.java:93)
	at org.junit.jupiter.engine.execution.NamespaceAwareStore.getOrComputeIfAbsent(NamespaceAwareStore.java:61)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.getCluster(KafkaClusterExtension.java:812)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.lambda$injectFields$6(KafkaClusterExtension.java:552)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1092)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.injectFields(KafkaClusterExtension.java:546)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.injectInstanceFields(KafkaClusterExtension.java:534)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.lambda$beforeEach$4(KafkaClusterExtension.java:530)
	at java.base/java.util.Collections$SingletonList.forEach(Collections.java:4966)
	at io.kroxylicious.testing.kafka.junit5ext.KafkaClusterExtension.beforeEach(KafkaClusterExtension.java:530)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachCallbacks$2(TestMethodTestDescriptor.java:163)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:199)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:199)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachCallbacks(TestMethodTestDescriptor.java:162)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:129)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:57)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
	at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
Caused by: java.util.concurrent.TimeoutException
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
	at org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:101)
	at org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:81)
	at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:103)
	... 91 more


Process finished with exit code 255

It doesn't look like the JUnit extension is actually starting all of the controllers:

2023-08-18 11:45:52 INFO  QuorumState:480 - [RaftManager id=0] Completed transition to Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1501) from null
2023-08-18 11:45:52 INFO  KafkaRaftClient:2150 - [RaftManager id=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@806650533
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  QuorumController:1921 - [QuorumController id=0] Creating new QuorumController with clusterId 28fACCbEQAycWXEXZ2gfig, authorizer Optional.empty.
2023-08-18 11:45:52 INFO  KafkaRaftClient:2150 - [RaftManager id=0] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@914503624
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:52 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  QuorumState:480 - [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1285) from Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1501)
2023-08-18 11:45:53 INFO  QuorumController:1094 - [QuorumController id=0] In the new epoch 1, the leader is (none).
2023-08-18 11:45:53 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:53 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:53 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:53 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:53 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:53 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:53 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:53 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:53 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:53 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:53 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:54 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:54 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:54 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:54 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:54 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:54 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  KafkaRaftClient:1989 - [RaftManager id=0] Election has timed out, backing off for 100ms before becoming a candidate again
2023-08-18 11:45:54 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:54 INFO  KafkaRaftClient:1982 - [RaftManager id=0] Re-elect as candidate after election backoff has completed
2023-08-18 11:45:54 INFO  QuorumState:480 - [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=2, retries=2, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1536) from CandidateState(localId=0, epoch=1, retries=1, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1285)
2023-08-18 11:45:54 INFO  QuorumController:1094 - [QuorumController id=0] In the new epoch 2, the leader is (none).
2023-08-18 11:45:54 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:54 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:55 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:55 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:55 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:55 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:55 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:55 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:56 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:56 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  KafkaRaftClient:1989 - [RaftManager id=0] Election has timed out, backing off for 100ms before becoming a candidate again
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  KafkaRaftClient:1982 - [RaftManager id=0] Re-elect as candidate after election backoff has completed
2023-08-18 11:45:56 INFO  QuorumState:480 - [RaftManager id=0] Completed transition to CandidateState(localId=0, epoch=3, retries=3, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1568) from CandidateState(localId=0, epoch=2, retries=2, voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}, highWatermark=Optional.empty, electionTimeoutMs=1536)
2023-08-18 11:45:56 INFO  QuorumController:1094 - [QuorumController id=0] In the new epoch 3, the leader is (none).
2023-08-18 11:45:56 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:56 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:56 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:56 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:56 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:57 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:57 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:57 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:57 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  NetworkClient:977 - [RaftManager id=0] Node 2 disconnected.
2023-08-18 11:45:57 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 2 (localhost/127.0.0.1:24038) could not be established. Broker may not be available.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:57 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:58 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:58 INFO  NetworkClient:977 - [RaftManager id=0] Node 1 disconnected.
2023-08-18 11:45:58 WARN  NetworkClient:814 - [RaftManager id=0] Connection to node 1 (localhost/127.0.0.1:20281) could not be established. Broker may not be available.
2023-08-18 11:45:58 INFO  KafkaRaftClient:1989 - [RaftManager id=0] Election has timed out, backing off for 500ms before becoming a candidate again
2023-08-18 11:45:58 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:58 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:58 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:58 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2023-08-18 11:45:58 INFO  MetadataLoader:231 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
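The vote states above (voteStates={0=GRANTED, 1=UNRECORDED, 2=UNRECORDED}) are consistent with that: nodes 1 and 2 are never reachable, so node 0 can never assemble a majority and the election loops forever. A minimal illustration of the Raft arithmetic (not kroxylicious code, just the majority rule):

```java
// Raft leader election needs a strict majority of the voter set.
class QuorumMajority {
    // floor(n/2) + 1 votes are required to win an election among n voters
    static int votesNeeded(int voters) {
        return voters / 2 + 1;
    }

    public static void main(String[] args) {
        int voters = 3;    // controllers 0, 1 and 2
        int reachable = 1; // only node 0 ever started
        System.out.println("votes needed: " + votesNeeded(voters));
        System.out.println("quorum possible: " + (reachable >= votesNeeded(voters)));
    }
}
```

With a 3-voter quorum, 2 votes are needed; a single self-granted vote from node 0 can never be enough, which is why every election epoch times out.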

Additional context

Turning on Kafka debug logging (logger.Kafka = debug, STDOUT) shows:

2023-08-22 12:19:14 DEBUG org.apache.kafka.clients.ClientUtils:113 - Resolved host localhost as 127.0.0.1
2023-08-22 12:19:14 DEBUG org.apache.kafka.clients.NetworkClient:1031 - [RaftManager id=0] Initiating connection to node localhost:28797 (id: 1 rack: null) using address localhost/127.0.0.1
2023-08-22 12:19:14 DEBUG org.apache.kafka.clients.ClientUtils:113 - Resolved host localhost as 127.0.0.1
2023-08-22 12:19:14 DEBUG org.apache.kafka.clients.NetworkClient:1031 - [RaftManager id=0] Initiating connection to node localhost:17847 (id: 2 rack: null) using address localhost/127.0.0.1
2023-08-22 12:19:14 DEBUG org.apache.kafka.common.network.Selector:606 - [RaftManager id=0] Connection with localhost/127.0.0.1 (channelId=2) disconnected
java.net.ConnectException: Connection refused
	at sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
	at sun.nio.ch.Net.pollConnectNow(Net.java:673) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:973) ~[?:?]
	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) ~[kafka-clients-3.5.1.jar:?]
	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) ~[kafka-clients-3.5.1.jar:?]
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) [kafka-clients-3.5.1.jar:?]
	at org.apache.kafka.common.network.Selector.poll(Selector.java:481) [kafka-clients-3.5.1.jar:?]
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:571) [kafka-clients-3.5.1.jar:?]
	at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:78) [kafka_2.13-3.5.1.jar:?]
	at kafka.common.InterBrokerSendThread.doWork(InterBrokerSendThread.scala:98) [kafka_2.13-3.5.1.jar:?]
	at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:127) [kafka-server-common-3.5.1.jar:?]
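The Connection refused above means nothing was listening on the controller ports for nodes 1 and 2 at all. A quick standalone probe to confirm (a sketch; 20281 is node 1's port from this run's logs and will differ on every run):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if something accepts a TCP connection on host:port
    static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // connection refused or timed out
        }
    }

    public static void main(String[] args) {
        int port = 20281; // node 1's raft port from this run's logs
        System.out.println("localhost:" + port + " listening? " + isListening("localhost", port, 500));
    }
}
```

Running this while the test is stuck in its election loop would show whether the other controllers' listeners were ever bound.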