krakjoe/pthreads

php-zts with pthreads not allocating tasks to free workers

blacktek opened this issue · 3 comments

Hi,
I have the same problem reported here:
https://stackoverflow.com/questions/54217345/php-multi-threading-and-pools

Basically, I have PHP 7.2.19 with ZTS enabled:
[root@server tmp]# php -i | grep -i thread
/etc/php-zts.d/pthreads.ini
Thread Safety => enabled
pthreads

[root@server tmp]# php -v
PHP 7.2.19 (cli) (built: May 29 2019 11:10:45) ( ZTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.2.19, Copyright (c) 1999-2018, by Zend Technologies

I have the same problem on both Windows 10 and CentOS 7, so it is not OS related.

If I run this code:


$pool = new Pool(2);

foreach ([0, 1, 2, 3] as $count) {
    $pool->submit(
        new class ($count) extends Threaded
        {
            private $count;

            public function __construct(int $count)
            {
                $this->count = $count;
            }

            public function run()
            {
                if ($this->count == 0) {
                    sleep(3);
                    echo $this->count . " is ready\n";
                } else {
                    echo $this->count . " is ready\n";
                }
            }
        }
    );
}

while ($pool->collect());

$pool->shutdown();

I get as output:
1 is ready
3 is ready
0 is ready
2 is ready

I would expect "0 is ready" to be printed as the last line, because task 2 should be picked up by the worker that is not busy with the sleep.

Where is my mistake, and how do I create a real pool model in which no worker stays free while tasks are still waiting in the queue?

Thank you

Tasks are submitted to the workers in a round-robin fashion, so task 2 gets submitted to the same worker as task 0. If you need a custom scheduling algorithm, you need a custom pool implementation.
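To illustrate what that means for this example: with a pool of size 2, the stock round-robin dispatch behaves roughly like the explicit mapping below, written against the documented Pool::submitTo() method. The modulo-by-size choice is only an illustration of the observable behaviour, not the pool's literal internal code; the point is that tasks 0 and 2 end up on the same worker, so the sleep in task 0 delays task 2.

$pool = new Pool(2);

foreach ([0, 1, 2, 3] as $count) {
    // With two workers, round-robin dispatch means tasks 0 and 2 go to
    // worker 0 and tasks 1 and 3 go to worker 1, regardless of which
    // worker happens to be free at the time.
    $pool->submitTo($count % 2, new class ($count) extends Threaded
    {
        private $count;

        public function __construct(int $count)
        {
            $this->count = $count;
        }

        public function run()
        {
            if ($this->count == 0) {
                sleep(3);
            }
            echo $this->count . " is ready\n";
        }
    });
}

while ($pool->collect());

$pool->shutdown();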

Yes, I've made a custom implementation with queues. In my opinion, tasks should be submitted round-robin only to free workers, not to all of them; that way, workers that finish earlier can pick up more tasks. A sketch of that queue-based approach follows.
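For reference, here is a minimal sketch of that pull model, assuming pthreads v3: the tasks go into a shared Threaded container and each Thread drains it itself, so whichever thread is free simply takes the next task. The class names (TaskQueue, WorkItem, QueueWorker) are illustrative and not part of pthreads; only documented calls (Threaded::shift(), Thread::start(), Thread::join()) are used, and the assumption that shift() returns null on an empty container is noted in the code.

// Shared, thread-safe queue of tasks.
class TaskQueue extends Threaded
{
}

// The same work as in the original report, as a named class.
class WorkItem extends Threaded
{
    private $count;

    public function __construct(int $count)
    {
        $this->count = $count;
    }

    public function run()
    {
        if ($this->count == 0) {
            sleep(3);
        }
        echo $this->count . " is ready\n";
    }
}

// Each thread pulls from the shared queue until it is empty, so a free
// thread is never idle while tasks remain, whatever the submission order.
class QueueWorker extends Thread
{
    private $queue;

    public function __construct(Threaded $queue)
    {
        $this->queue = $queue;
    }

    public function run()
    {
        // shift() removes and returns the first member; assumed here to
        // return null once the queue is empty.
        while (($task = $this->queue->shift()) !== null) {
            $task->run();
        }
    }
}

$queue = new TaskQueue();
foreach ([0, 1, 2, 3] as $count) {
    $queue[] = new WorkItem($count);
}

$threads = [];
for ($i = 0; $i < 2; $i++) {
    $threads[$i] = new QueueWorker($queue);
    $threads[$i]->start();
}

foreach ($threads as $thread) {
    $thread->join();
}

With this layout the expected output is 1, 2 and 3 first and "0 is ready" last, because the thread that is not sleeping keeps pulling the remaining tasks.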

This question has been asked many times before on this very same issue tracker. The bottom line is that it's not a bug.