kubernetes-client/java

The Wait.poll() method does not automatically destroy the thread pool

JifeiMei opened this issue · 4 comments

Describe the bug
I created a Job (job.batch) and used Wait.poll() to watch the status of the pod created by that Job. When pod.status.phase is "Succeeded", I save the full log and exit. But my main function does not exit. I attached jconsole to the JVM and found that one thread pool had not been destroyed; I had initialized the ScheduledExecutorService with a NamedThreadFactory, and it is indeed this pool.
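For anyone who cannot attach jconsole: a rough way to confirm the leak is to list the live non-daemon threads right after poll() returns. This is only a diagnostic sketch, not part of the client library; the leaked scheduler thread shows up alongside "main", under whatever name the thread factory gave it:

for (Thread t : Thread.getAllStackTraces().keySet()) {
    if (t.isAlive() && !t.isDaemon()) {
        // prints "main" plus the leftover scheduler worker thread,
        // e.g. "k8sWailPool..." in my case
        System.out.println("live non-daemon thread: " + t.getName());
    }
}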

Client Version
19.0.0

Kubernetes Version
1.27.0

Java Version
Java 17

To Reproduce

Wait.poll(
        Duration.ofSeconds(5),
        Duration.ofSeconds(5),
        Duration.ofMinutes(5),
        () -> {
            try {
                V1Pod pod = findPod(namespace, jobName);
                String phase = pod.getStatus().getPhase();
                if("Succeeded".equals(phase)) {
                    // save the container log
                    return true;
                }
            } catch (Exception e) {
                // save the exception log
            }
            return false;
        }
);
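The reason main never returns: the ScheduledExecutorService created inside poll() is never shut down, and its worker thread is non-daemon, so the parked thread keeps the JVM alive after the condition has already been met. Purely as an illustration (this is not the library's code or API), a daemon thread factory would also stop a forgotten pool from blocking JVM exit, although an explicit shutdown() is the cleaner fix:

// Illustration only: a factory that creates daemon threads, so a pool that
// is never shut down no longer prevents the JVM from exiting.
ThreadFactory daemonFactory = runnable -> {
    Thread t = new Thread(runnable, "k8s-wait-poll");
    t.setDaemon(true); // daemon threads do not keep the JVM running
    return t;
};
ScheduledExecutorService executorService =
        Executors.newSingleThreadScheduledExecutor(daemonFactory);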

Expected behavior

I used executorService.shutdown(); to destroy the thread pool, and the problem was resolved. This is my code; I added a finally block.

public static boolean poll(
            Duration initialDelay, Duration interval, Duration timeout, Supplier<Boolean> condition) {
        ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("k8sWailPool"));
        AtomicBoolean result = new AtomicBoolean(false);
        long dueDate = System.currentTimeMillis() + timeout.toMillis();
        ScheduledFuture<?> future =
                executorService.scheduleAtFixedRate(
                        () -> {
                            try {
                                result.set(condition.get());
                            } catch (Exception e) {
                                result.set(false);
                            }
                        },
                        initialDelay.toMillis(),
                        interval.toMillis(),
                        TimeUnit.MILLISECONDS);
        try {
            while (System.currentTimeMillis() < dueDate) {
                if (result.get()) {
                    future.cancel(true);
                    return true;
                }
            }
        } catch (Exception e) {
            return result.get();
        } finally {
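            // shut the scheduler down so its non-daemon worker thread terminates
            // and the JVM can exit once poll() returns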
            executorService.shutdown();
        }
        future.cancel(true);
        return result.get();
    }
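With the shutdown() in the finally block, the helper is called exactly like Wait.poll() and the process now terminates once the condition is met or the timeout expires. A hypothetical caller (MyWait is my own class holding the patched poll() above; findPod() is my helper from the reproduce snippet):

boolean succeeded = MyWait.poll(
        Duration.ofSeconds(5),
        Duration.ofSeconds(5),
        Duration.ofMinutes(5),
        () -> {
            V1Pod pod = findPod(namespace, jobName);
            return pod != null && "Succeeded".equals(pod.getStatus().getPhase());
        });
System.out.println("job finished: " + succeeded);
// main() returns here and the JVM exits, because the scheduler thread was shut down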

Server (please complete the following information):

  • OS: Windows
  • Environment: my development environment (local PC)
  • Cloud: none

Additional context

This is my JVM thread dump when executorService.shutdown(); is not called:

[screenshot: JVM thread dump]

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.