flexivrobotics/flexiv_rdk

[BUG] Observed scheduler memory leak

Closed this issue · 4 comments

acf986 commented

Version information

  • RDK: v0.8
  • Robot software: NA
  • OS: Ubuntu 20.04 - x86_64

Describe the bug
Valgrind reports a possible memory leak on my computer when using the Flexiv Scheduler.
We later modified the code slightly (see the code in the comment section) and observed a continuous increase in memory usage.

Steps to reproduce

  1. Compile the following minimal reproducible example:
#include <flexiv/Scheduler.hpp>
#include <flexiv/Exception.hpp>
#include <functional>

// Periodic task that immediately stops the scheduler
void periodicTask(flexiv::Scheduler& s)
{
  s.stop();
}

int main(int argc, char* argv[])
{
  try
  {
    {
      // Scheduler is constructed, runs the task once, and is destroyed
      // at the end of this scope
      flexiv::Scheduler scheduler;
      scheduler.addTask(std::bind(periodicTask, std::ref(scheduler)),
                        "HP", 1000, scheduler.maxPriority());
      scheduler.start();
    }
  }
  catch (const flexiv::Exception& e)
  {
    return 1;
  }
  return 0;
}

  2. Then run Valgrind on the executable with the following flags:
    valgrind --leak-check=full --num-callers=100

Expected behavior
Valgrind returns no error.

Screenshots
Valgrind instead returns:


==468323== 
==468323== HEAP SUMMARY:
==468323==     in use at exit: 2,399 bytes in 34 blocks
==468323==   total heap usage: 460 allocs, 426 frees, 122,653 bytes allocated
==468323== 
==468323== 304 bytes in 1 blocks are possibly lost in loss record 34 of 34
==468323==    at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==468323==    by 0x40149DA: allocate_dtv (dl-tls.c:286)
==468323==    by 0x40149DA: _dl_allocate_tls (dl-tls.c:532)
==468323==    by 0x4961322: allocate_stack (allocatestack.c:622)
==468323==    by 0x4961322: pthread_create@@GLIBC_2.2.5 (pthread_create.c:660)
==468323==    by 0x3683E4: fvr::PosixThread::create(std::function<FvrSt ()>) (in /home/sp/Workspace/moveit_ws/devel/lib/moveit_tutorials/moveit_cpp_direct_robot)
==468323==    by 0x365B3E: fvr::SchedTask::start() (in /home/sp/Workspace/moveit_ws/devel/lib/moveit_tutorials/moveit_cpp_direct_robot)
==468323==    by 0x3606F0: fvr::Scheduler::start(bool) (in /home/sp/Workspace/moveit_ws/devel/lib/moveit_tutorials/moveit_cpp_direct_robot)
==468323==    by 0x35E510: flexiv::Scheduler::start(bool) (in /home/sp/Workspace/moveit_ws/devel/lib/moveit_tutorials/moveit_cpp_direct_robot)
==468323==    by 0x2C3BE1: main (direct_robot_node.cpp:25)
==468323== 
==468323== LEAK SUMMARY:
==468323==    definitely lost: 0 bytes in 0 blocks
==468323==    indirectly lost: 0 bytes in 0 blocks
==468323==      possibly lost: 304 bytes in 1 blocks
==468323==    still reachable: 2,095 bytes in 33 blocks
==468323==         suppressed: 0 bytes in 0 blocks
==468323== Reachable blocks (those to which a pointer was found) are not shown.
==468323== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==468323== 
==468323== For lists of detected and suppressed errors, rerun with: -s
==468323== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)

Additional context
It seems the underlying pthread that gets created is never joined or detached.
See here: link to stackoverflow
Could you suggest a robust way to achieve the following (see the sketch after this list):

  1. Set the robot to RT Joint Position mode and do something
  2. Set the robot to RT Cartesian mode and do something else

Then repeat steps 1 and 2 forever.
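For context, the control flow we are after is roughly the following. This is only a sketch: doJointPositionWork() and doCartesianWork() are hypothetical stand-ins for our own mode-switching logic, not RDK functions.

// Hypothetical stand-ins for our own logic; not part of the RDK API
void doJointPositionWork() { /* set RT Joint Position mode, do something */ }
void doCartesianWork() { /* set RT Cartesian mode, do something else */ }

int main()
{
  // Repeat steps 1 and 2 forever
  while (true) {
    doJointPositionWork();
    doCartesianWork();
  }
  return 0;
}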
acf986 commented

By slightly modifying the code to:

void periodicTask(flexiv::Scheduler& s)
{
  s.stop();
}

int main(int argc, char* argv[])
{
  try
  {
    // Construct, start, and destroy the scheduler 10000 times
    for (int i = 0; i < 10000; i++)
    {
      flexiv::Scheduler scheduler;
      scheduler.addTask(std::bind(periodicTask, std::ref(scheduler)),
                        "HP", 200, 10);
      scheduler.start();
    }
  }
  catch (const flexiv::Exception& e)
  {
    return 1;
  }
  return 0;
}

We observed an obvious, continuous increase in the memory usage of the test process.
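For reference, growth like this can also be checked programmatically on Linux by sampling VmRSS from /proc/self/status inside the loop. A minimal sketch (not the exact instrumentation we used):

#include <fstream>
#include <string>

// Read the resident set size (VmRSS, in kB) of the current process on Linux
long currentRssKb()
{
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line)) {
    if (line.rfind("VmRSS:", 0) == 0) {
      return std::stol(line.substr(6));  // field value is reported in kB
    }
  }
  return -1;  // field not found
}

Printing currentRssKb() every few hundred loop iterations makes the increase easy to see.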

pzhu-flexiv commented

@acf986 Please try using a unique pointer, i.e.
std::unique_ptr<flexiv::Scheduler> scheduler = std::make_unique<flexiv::Scheduler>();
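Applied to the repro loop above, that would read roughly as follows (a sketch; requires #include <memory>):

for (int i = 0; i < 10000; i++)
{
  // Heap-allocate the scheduler; the unique_ptr destroys it at scope exit
  auto scheduler = std::make_unique<flexiv::Scheduler>();
  scheduler->addTask(std::bind(periodicTask, std::ref(*scheduler)),
                     "HP", 200, 10);
  scheduler->start();
}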

acf986 commented

@pzhu-flexiv We have tried, but it seems the leaking problem is still there.

pzhu-flexiv commented

@acf986 Thanks for the report. We have located a bug in the destructor of flexiv::Scheduler and fixed it; the fix will be included in the v0.9 release.
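For illustration, the usual RAII pattern that avoids this kind of leak is to join the worker thread in the destructor. The following is a generic sketch of that pattern, not the actual fvr::SchedTask implementation:

#include <atomic>
#include <thread>

class Worker
{
public:
  void start()
  {
    m_running = true;
    m_thread = std::thread([this] {
      while (m_running) { /* periodic work */ }
    });
  }

  void stop() { m_running = false; }

  // Joining in the destructor releases the thread's stack and TLS block,
  // which is what Valgrind reported above as "possibly lost"
  ~Worker()
  {
    stop();
    if (m_thread.joinable()) {
      m_thread.join();
    }
  }

private:
  std::atomic<bool> m_running{false};
  std::thread m_thread;
};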