STMicroelectronics/STM32CubeU5

CMSIS layer never free()'s memory associated with a thread that has been detached and terminated

Opened this issue · 7 comments

We are using ThreadX through the CMSIS-RTOS2 adaptation layer provided here, with USE_DYNAMIC_MEMORY_ALLOCATION defined.

When we delete a task, we call osThreadDetach() and then osThreadTerminate(), expecting this to free the memory that was allocated when we called osThreadNew(). However, that memory is never freed, and we soon run out of memory.
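Roughly, the pattern is the following (a minimal sketch with illustrative names, not our production code):

#include "cmsis_os2.h"

static void workerTask(void *argument)
{
    (void) argument;
    /* Do some work, then block; illustrative only. */
    for (;;) {
        osDelay(100);
    }
}

void createAndDeleteWorker(void)
{
    /* NULL attributes: with USE_DYNAMIC_MEMORY_ALLOCATION the CMSIS layer
       allocates the control block and stack itself, so we expect it to
       free them again once the thread is detached and terminated. */
    osThreadId_t id = osThreadNew(workerTask, NULL, NULL);

    /* ...later, when the task is no longer needed... */
    osThreadDetach(id);
    osThreadTerminate(id);

    /* Expectation: the memory allocated by osThreadNew() is released here.
       Observation: it never is, so repeated create/delete cycles eventually
       exhaust the heap. */
}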

You can see this if you set break-points here:

osStatus_t osThreadTerminate(osThreadId_t thread_id)

...here:

_tx_thread_system_preempt_check();

...and here:

if (thread_ptr->tx_thread_detached_joinable == osThreadDetached)

The first two break-points are reached when osThreadTerminate() is called, and status is 0 at the second break-point, but the last break-point is never reached: presumably, once _tx_thread_system_preempt_check() has done its work and a task switch has occurred, the code that follows the call to tx_thread_terminate() is never going to run.
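To put that in context, the code path implied by those break-points looks roughly like this (a simplified paraphrase of the CMSIS-RTOS2 ThreadX wrapper, not the actual source):

osStatus_t osThreadTerminate(osThreadId_t thread_id)               /* break-point 1 */
{
  TX_THREAD *thread_ptr = (TX_THREAD *)thread_id;
  osStatus_t status = osError;

  /* tx_thread_terminate() finishes by calling
     _tx_thread_system_preempt_check()                                break-point 2
     which can trigger an immediate context switch. */
  if (tx_thread_terminate(thread_ptr) == TX_SUCCESS)
  {
    status = osOK;
    if (thread_ptr->tx_thread_detached_joinable == osThreadDetached) /* break-point 3 */
    {
      /* The stack and control block allocated by osThreadNew() should be
         freed here, but in our tests execution never reaches this point. */
    }
  }
  return status;
}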

How is this meant to work?

FYI, we have tried calling tx_thread_priority_change() and tx_thread_preemption_change() with zero on the current thread before terminating the target thread, to give the current thread top priority (0 in ThreadX terms), which this post suggests should effectively create a critical section. That does not change the behaviour: _tx_thread_system_preempt_check() still chooses to switch to what is now a lower-priority thread (priority 54 in ThreadX terms).
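For reference, the attempted workaround looked roughly like this (a sketch; error handling omitted and deleteTaskAtTopPriority() is an illustrative name):

#include "tx_api.h"
#include "cmsis_os2.h"

void deleteTaskAtTopPriority(osThreadId_t target)
{
    TX_THREAD *self = tx_thread_identify();
    UINT oldPriority;
    UINT oldThreshold;
    UINT dummy;

    /* Priority 0 is the highest priority in ThreadX; a preemption
       threshold of 0 should prevent any preemption of this thread. */
    tx_thread_priority_change(self, 0, &oldPriority);
    tx_thread_preemption_change(self, 0, &oldThreshold);

    osThreadDetach(target);
    osThreadTerminate(target);

    /* Restore the original scheduling parameters (priority first, since
       changing priority also adjusts the preemption threshold). */
    tx_thread_priority_change(self, oldPriority, &dummy);
    tx_thread_preemption_change(self, oldThreshold, &dummy);
}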

This was tested on a Nucleo STM32U575ZI board, in case it matters.

Any thoughts on this? I would like to publish the code to customers next week.

ST Internal Reference: 180641

Hi @RobMeades,

Please excuse this delayed reply. Thank you for the detailed report. It has been forwarded to our development teams. We will try our best to get back to you by next week.

With regards,

Hi @RobMeades,

Our development teams have acknowledged the issue and said they are already aware of it. They are working on a solution. Unfortunately, they still need time, so a fix will probably not be ready this week, and they cannot share a date for the moment.

I will keep you informed should there be any update. We apologize for the inconvenience and thank you for your patience and understanding.

With regards,

@ALABSTM: understood, thanks for the update; as soon as you have a rough date let me know and I will let our customers know.

Any update on this one (one of our customers is asking)?

<bump />