Inlyne-Project/inlyne

Stream image rendering

Opened this issue · 3 comments

Opening this since @trimental is much more knowledgeable about the GPU-related parts

After #74, the final piece of the lower-peak-memory-usage puzzle is avoiding decoding the entire compressed image blob into memory just to render it. The reasonable solution here, to me at least, seems to be chunking the image horizontally and rendering N rows at a time instead of rendering the whole thing at once. This would lower memory usage since we could cap the number of rows and work with a smaller buffer. Does this seem workable @trimental?
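For illustration, here is a minimal sketch of what the decode side could look like, using the `png` crate's row-by-row API directly (PNG-only for simplicity; the `CHUNK_ROWS` constant and the `upload_chunk` callback are placeholders I made up, not existing code):

```rust
use std::fs::File;

/// Illustrative sketch only: decode a PNG `CHUNK_ROWS` rows at a time so the
/// whole decoded image never has to sit in memory at once. Ignores interlaced
/// PNGs; `upload_chunk(first_row, bytes)` stands in for whatever uploads the
/// rows to the GPU.
const CHUNK_ROWS: u32 = 64;

fn decode_in_chunks(
    path: &str,
    mut upload_chunk: impl FnMut(u32, &[u8]),
) -> Result<(), Box<dyn std::error::Error>> {
    let decoder = png::Decoder::new(File::open(path)?);
    let mut reader = decoder.read_info()?;

    let mut chunk = Vec::new();
    let mut first_row = 0u32;
    let mut rows = 0u32;

    while let Some(row) = reader.next_row()? {
        chunk.extend_from_slice(row.data());
        rows += 1;
        if rows == CHUNK_ROWS {
            upload_chunk(first_row, &chunk); // e.g. a write_texture at y = first_row
            first_row += rows;
            rows = 0;
            chunk.clear();
        }
    }
    if rows > 0 {
        upload_chunk(first_row, &chunk); // trailing partial chunk
    }
    Ok(())
}
```

The same idea should work with any decoder that can hand back scanlines incrementally; the point is just that the buffer is bounded by `CHUNK_ROWS` rows instead of the full image height.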

Right now the most efficient way to upload our image data from normal CPU memory to GPU texture memory is write_texture, which stores the data in staging memory and submits it to the GPU together with other queued GPU commands.
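For reference, the current single-shot upload looks roughly like this (sketched against a recent wgpu API; the texture, queue, and decoded buffer are assumed to already exist):

```rust
// Rough shape of the single-shot upload (wgpu ~0.19 API). The entire decoded
// RGBA8 buffer is handed to `write_texture` in one call; wgpu copies it into
// staging memory and submits it alongside other queued commands.
fn upload_whole_image(
    queue: &wgpu::Queue,
    texture: &wgpu::Texture,
    width: u32,
    height: u32,
    rgba_pixels: &[u8], // width * height * 4 bytes
) {
    queue.write_texture(
        wgpu::ImageCopyTexture {
            texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        rgba_pixels,
        wgpu::ImageDataLayout {
            offset: 0,
            bytes_per_row: Some(4 * width), // 4 bytes per RGBA8 pixel
            rows_per_image: Some(height),
        },
        wgpu::Extent3d {
            width,
            height,
            depth_or_array_layers: 1,
        },
    );
}
```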

Even if we made multiple write_texture calls, one per row (or band of rows), I believe the data would still sit in staging memory until the submit, so there would be no point. It might be possible to stream an upload into a GPU buffer and then copy that to a GPU texture, but this would come with performance hits.
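To make that concrete, a hypothetical chunked version could look like the following, shifting `origin.y` per band and forcing an empty `submit` between bands so each band's staged data gets pushed out instead of accumulating; those extra submits are exactly where the performance hit would come from. The `Chunk` type is made up for the sketch:

```rust
// Hypothetical chunked upload (wgpu ~0.19 API). Each horizontal band gets
// its own `write_texture` with `origin.y` shifted down by the rows already
// written.
struct Chunk {
    rows: u32,
    data: Vec<u8>, // tightly packed RGBA8, `rows * width * 4` bytes
}

fn upload_in_chunks(
    queue: &wgpu::Queue,
    texture: &wgpu::Texture,
    width: u32,
    chunks: &[Chunk],
) {
    let mut y = 0;
    for chunk in chunks {
        queue.write_texture(
            wgpu::ImageCopyTexture {
                texture,
                mip_level: 0,
                origin: wgpu::Origin3d { x: 0, y, z: 0 },
                aspect: wgpu::TextureAspect::All,
            },
            &chunk.data,
            wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(4 * width),
                rows_per_image: Some(chunk.rows),
            },
            wgpu::Extent3d {
                width,
                height: chunk.rows,
                depth_or_array_layers: 1,
            },
        );
        // Submitting nothing should still flush the pending write_texture
        // staging data to the GPU so it can be reclaimed early; this extra
        // per-band submit is the performance trade-off.
        queue.submit(std::iter::empty());
        y += chunk.rows;
    }
}
```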

I think the only proper way to get even lower memory usage would be to use a compressed texture format and write that to the GPU.
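Something like BC1 is the kind of thing I mean. A rough sketch of the GPU side, assuming the adapter exposes BC support and that the image has somehow already been re-encoded into BC1 blocks (which is the hard part):

```rust
// Rough sketch: uploading an already-compressed BC1 image (wgpu ~0.19 API).
// BC1 packs each 4x4 pixel block into 8 bytes, so both the CPU-side blob and
// the GPU texture are ~1/8 the size of RGBA8. Assumes width and height are
// multiples of 4 to keep the sketch simple.
fn create_bc1_texture(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    width: u32,
    height: u32,
    bc1_blocks: &[u8],
) -> wgpu::Texture {
    // BC formats are an optional feature; they must have been requested at
    // device creation.
    assert!(device
        .features()
        .contains(wgpu::Features::TEXTURE_COMPRESSION_BC));

    let texture = device.create_texture(&wgpu::TextureDescriptor {
        label: Some("bc1 image"),
        size: wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
        mip_level_count: 1,
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        format: wgpu::TextureFormat::Bc1RgbaUnorm,
        usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
        view_formats: &[],
    });

    queue.write_texture(
        wgpu::ImageCopyTexture {
            texture: &texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        bc1_blocks,
        wgpu::ImageDataLayout {
            offset: 0,
            // For block-compressed formats this is bytes per row of 4x4
            // blocks: (width / 4) blocks * 8 bytes per BC1 block.
            bytes_per_row: Some((width / 4) * 8),
            rows_per_image: None, // single image, so this can be omitted
        },
        wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    );
    texture
}
```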

Using compressed textures sounds exactly like what I would want 👀

I'm not very familiar with stuff in the GPU space. Got a link or anything I can look at?

Yep, sure: here. I'm not sure how much memory these compressed textures save, though, or how to re-encode images into them.
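(At least for fixed-rate formats like BC1 the arithmetic is straightforward: 8 bytes per 4x4 block is 0.5 bytes per pixel versus 4 bytes per pixel for uncompressed RGBA8, so a 1920x1080 image would drop from roughly 8.3 MB to roughly 1 MB of texture memory.)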