NOSALRO/robot_dart

Time-consuming Filling of Glyph Cache when Initialising Base Applications

Closed this issue · 9 comments

Hi, :)

When I launch the magnum_contexts example, it seems that the following lines (filling the glyph cache) take around 2 seconds to execute:
https://github.com/resibots/robot_dart/blob/59d779a5400bd3c1596e32a133dc0ef249be2238/src/robot_dart/gui/magnum/base_application.cpp#L180-L183

As this also leads to a significant bottleneck in my experiments using sferes2, I was wondering if there is an easy way to deactivate it completely.

Thank you very much for your help!

@Lookatator thanks for using our library.

This is a very interesting comment/issue.

> I was wondering if there is an easy way to deactivate it completely.

There's none at the moment, but I will create one tomorrow. Ping me if I haven't...

Nevertheless, I would like to know what your system is (OS, graphics drivers, etc.), as we have never experienced this delay.

@costashatz Thank you very much for your reply!

We are using a Singularity container (airl_env: base_2.0) based on Ubuntu 18.04. The host machine is also on Ubuntu 18.04 and uses NVIDIA driver 440.44.

I have just tested the same container on another machine (running Ubuntu 20.04 with NVIDIA driver 450.66), and now the same instruction takes 500 ms to execute (instead of 2 seconds) ^^

If you require any additional information, feel free to ask!

> I have just tested the same container on another machine (running on Ubuntu 20.04 with nvidia drivers 450.66) and now the same instruction takes 500ms to be executed (instead of 2 seconds) ^^

500ms in a virtual machine makes more sense. But indeed this functionality needs to be optional. I'll make it optional soon.

We can further investigate how to make this faster, although I doubt it can be done, unless I create one global instance that the threads access with mutexes; I do not see the utility of making this effort.

Thanks @costashatz.
Quick note: containers add only a very marginal overhead (1-2% performance loss in most benchmarks I have seen), as they connect directly to the host's kernel without virtualising the hardware. So I don't think this is the root of the problem.

That said, I agree with you: I don't see the reason to investigate this further; just a way to disable it will solve the problem.

(You would have guessed that @Lookatator and @ limbryan are my PhD students :) ).

> (You would have guessed that @Lookatator and @ limbryan are my PhD students :) ).

That was easy! ;) Happy to see more people using robot_dart.

> So I don't think this is the root of the problem.

Sure, I agree with you, but I have seen many strange things with NVIDIA drivers, containers, and Docker images; I never fully trust them. It might be just me, though :)

> That said, I agree with you, I don't see the reason for investigating this further, just a way to disable it will solve the problem.

Yes, I will do that. The code as is also "breaks" with some NVIDIA drivers in windowless mode; i.e., no text is rendered, although the render calls are produced and processed without error. This calls even more for making this easy to disable (or even disabled by default in windowless mode).

Any update on this? I also do not see any 2 second delay, but this sounds like a significant problem.

> Any update on this? I also do not see any 2 second delay, but this sounds like a significant problem.

It is on my TODO list for this week. I will make this optional as a quick fix (with all text off by default in windowless mode) and open an issue to examine whether we can generate the cache only once per application and share it across threads (instead of re-creating it for each thread); I do not have time at the moment to look into this more deeply.

@jbmouret @Lookatator @Aneoshun I am waiting for #123 to be merged (to avoid conflicts), and then I will make a PR for this one. Sorry for the delay. Busy times!

@costashatz Absolutely no worries! I have made a quick (& dirty) fix in my own code for the moment. So it is not urgent.
Thank you very much for your help! :)