DelegatingScheduler singletons in modern style
magicprinc opened this issue · 9 comments
DelegatingScheduler uses an old singleton idiom: double-checked locking with a volatile field and synchronized.
The Bill Pugh (initialization-on-demand holder) singleton is better, shorter, and in some cases uses less memory.
Plus, the fields become static final, so the JVM can apply further optimizations.
Plus, it creates (if needed) a ForkJoinPool with asyncMode=true (which is recommended for plain Runnables and Callables) and uses Math.max(Runtime.getRuntime().availableProcessors(), 2) for parallelism.
ForkJoinPool.commonPool() can have lower parallelism when 'java.util.concurrent.ForkJoinPool.common.parallelism' is set, even though the actual number of CPUs is high.
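For illustration, the holder idiom combined with the pool-construction policy described above might look roughly like this (a sketch with hypothetical class and method names, not the actual PR code):

```java
import java.util.concurrent.ForkJoinPool;

public final class SchedulerHolder {
    private SchedulerHolder() {}

    // The JVM guarantees the holder class is initialized exactly once, on
    // first access, with no volatile field or synchronized block needed.
    private static final class LazyHolder {
        static final ForkJoinPool POOL =
            ForkJoinPool.getCommonPoolParallelism() > 1
                ? ForkJoinPool.commonPool()
                // Fall back to a dedicated pool when commonPool() is
                // effectively serial. asyncMode=true selects FIFO scheduling,
                // which suits plain Runnables/Callables.
                : new ForkJoinPool(
                      Math.max(Runtime.getRuntime().availableProcessors(), 2),
                      ForkJoinPool.defaultForkJoinWorkerThreadFactory,
                      null, /* asyncMode = */ true);
    }

    public static ForkJoinPool scheduler() {
        return LazyHolder.POOL;
    }

    public static void main(String[] args) {
        // Same instance on every call; parallelism is always at least 1.
        System.out.println(scheduler() == scheduler()
                && scheduler().getParallelism() >= 1);
    }
}
```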
The second commit contains:
- a bug fix in DelegatingScheduler#schedule: (delay == 0) → (delay <= 0)
- a fix for the case where the user's executor is actually commonPool() without parallelism, similar to CompletableFuture.screenExecutor
- less memory use: don't capture a variable when the executor is the common pool (the lambda can reach it through a static field without capturing)
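The screening idea in the second bullet, borrowed from CompletableFuture's private screenExecutor helper, could be sketched roughly like this (hypothetical names; the fallback executor here is only a stand-in):

```java
import java.util.Objects;
import java.util.concurrent.Executor;
import java.util.concurrent.ForkJoinPool;

public final class ScreenDemo {
    // Stand-in fallback: one thread per task. Real code would use a pool.
    static final Executor FALLBACK = r -> new Thread(r).start();

    // If the caller hands us commonPool() on a machine where it cannot run
    // tasks in parallel, substitute the fallback instead.
    static Executor screen(Executor e) {
        Objects.requireNonNull(e);
        if (e == ForkJoinPool.commonPool()
                && ForkJoinPool.getCommonPoolParallelism() <= 1) {
            return FALLBACK; // the common pool is effectively serial here
        }
        return e;
    }

    public static void main(String[] args) {
        Executor chosen = screen(ForkJoinPool.commonPool());
        // Either the common pool passed the screen, or it was replaced.
        System.out.println(chosen == ForkJoinPool.commonPool()
                || chosen == FALLBACK);
    }
}
```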
The associated PR is a lot to take on faith, and it's very hard to test. Is anyone reporting problems due to the current implementation?
There are only 225 lines of code.
Now with almost 100% test coverage (I have written tests to cover all lines, even those that aren't "mine").
Which line is unclear?
I work with large data sets, so every byte of RAM counts.
With the last commit there is no more "fat" in the code.
Every byte is required and does work.
I am reporting a problem, so to speak :-)
There were too many moving parts:
- the double-checked locking singleton pattern is outdated;
- 8+4+1 extra (unused) bytes per task (in reality even more, due to the JVM's 8-byte object alignment, and more still if uncompressed 64-bit pointers are used);
- a full row of checks on every call;
- the delay bug: if (delay == 0);
- accepting a commonPool() that cannot provide parallelism (now guarded against, similar to CompletableFuture).
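The memory point (no captured executor field when the common pool is used) relies on the fact that, in current OpenJDK, a non-capturing lambda is cached and reused while a capturing one is allocated per call. A sketch, with illustrative names; note the caching behavior is an OpenJDK implementation detail, not a language-spec guarantee:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ForkJoinPool;

public final class CaptureDemo {
    static final Executor COMMON = ForkJoinPool.commonPool();

    static Runnable nonCapturing() {
        // Reads a static field, captures nothing: in OpenJDK the same cached
        // lambda instance is returned on every call.
        return () -> COMMON.execute(() -> {});
    }

    static Runnable capturing(Executor e) {
        // Closes over `e`: a fresh lambda object is allocated per call.
        return () -> e.execute(() -> {});
    }

    public static void main(String[] args) {
        System.out.println(nonCapturing() == nonCapturing());
        System.out.println(capturing(COMMON) == capturing(COMMON));
    }
}
```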
Extra bonus: some fields are now static final, so the JVM can inline them and apply other optimizations.
One last possible optimization:
Replace the heavyweight DelegatingScheduler.ScheduledCompletableFuture with a lightweight, almost empty implementation. CompletableFuture is quite fat when used only as a dumb "placeholder".
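That last idea could look roughly like the following sketch: a bare Future that tracks cancellation and forwards to the real scheduled task, instead of dragging in CompletableFuture's result field and completion-stack machinery (hypothetical names, not the PR code):

```java
import java.util.concurrent.*;

final class LightweightPromise<T> implements Future<T> {
    private volatile boolean cancelled;
    private volatile Future<T> delegate; // the real task, attached once scheduled

    void attach(Future<T> real) { delegate = real; }

    @Override public boolean cancel(boolean mayInterruptIfRunning) {
        cancelled = true;
        Future<T> d = delegate;
        return d == null || d.cancel(mayInterruptIfRunning);
    }
    @Override public boolean isCancelled() { return cancelled; }
    @Override public boolean isDone() {
        Future<T> d = delegate;
        return cancelled || (d != null && d.isDone());
    }
    @Override public T get() throws InterruptedException, ExecutionException {
        Future<T> d = delegate;
        if (d == null) throw new IllegalStateException("not scheduled yet");
        return d.get();
    }
    @Override public T get(long timeout, TimeUnit unit)
            throws InterruptedException, ExecutionException, TimeoutException {
        Future<T> d = delegate;
        if (d == null) throw new IllegalStateException("not scheduled yet");
        return d.get(timeout, unit);
    }
}

public final class PromiseDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newSingleThreadExecutor();
        LightweightPromise<Integer> p = new LightweightPromise<>();
        p.attach(es.submit(() -> 42));
        System.out.println(p.get());
        es.shutdown();
    }
}
```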
Simply having tests to cover lines of code doesn't give me confidence in the correctness of the code. The risk involved in replacing this machinery without feedback from experts could be substantial, but getting that feedback takes time and attention.
I could understand undertaking it if people were reporting actual problems, e.g., "I ran out of memory because the tasks were too big." "I'm seeing significant performance degradation due to the extra checks." Has anyone experienced problems that might stem from a need to optimize DelegatingScheduler?
If not, this could be premature optimization.
Side note: if (delay == 0) is more of a specification problem in Scheduler, right? Should negative delay values be accepted in the first place?
JDK's ScheduledThreadPoolExecutor accepts them (treating negative values as 0, i.e. run immediately).
In your case it is a small bug: the task gets scheduled on the wrong executor.
This is what I mean when I say "too many moving parts".
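The routing bug can be reduced to a one-line predicate (a hypothetical sketch; the real method dispatches to an executor or a scheduler rather than returning a string):

```java
public final class DelayRouting {
    static String route(long delay) {
        // buggy version: delay == 0, which sends negative delays down the
        // scheduler path; fixed version: delay <= 0, so every non-positive
        // delay runs directly on the user's executor.
        return delay <= 0 ? "executor" : "scheduler";
    }

    public static void main(String[] args) {
        System.out.println(route(0));   // executor
        System.out.println(route(-5));  // executor (was "scheduler" with ==)
        System.out.println(route(100)); // scheduler
    }
}
```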
Maybe you could show the code to the devs?
premature optimization
Greta Thunberg won't love you: all this extra work adds to global warming :-)
I am not a maintainer of this project, so I can't "show the code to the devs" beyond making comments here that interested parties might pick up on.
How about making the if (delay == 0) -> if (delay <= 0) change into a separate PR?
Greta Thunberg would probably be in favor of thinking about whether the extra work isn't dominated by other concerns.
I did a good job, and I hope the maintainer will find my changes interesting.
It is easy to see the difference: simply generate a billion scheduled "events" and compare.