gscept/nebula

Better support for large resources

Closed this issue · 1 comment

A major issue with large resources, apart from the obvious memory footprint, is the way we currently handle resource uploading to the GPU. The resource loader always assumes a large resource can be uploaded in one go, and there is no mechanism to defer a resource to be loaded later.

Another limitation is the upload heap, a fixed-size block of memory we use as a transient staging resource for copying data from CPU to GPU. The upload heap can create new buffers to handle multiple resources, but its base size must be large enough to hold a single large resource.
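To illustrate the constraint, here is a minimal sketch of a fixed-size upload heap as a bump allocator over a CPU-visible staging buffer. The class and method names are hypothetical, not Nebula's actual API; the point is that a request larger than the heap's capacity can never succeed, regardless of how the heap is recycled between frames.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Hypothetical sketch: a fixed-size staging heap with bump allocation.
class UploadHeap
{
public:
    explicit UploadHeap(size_t capacity) : buffer(capacity), offset(0) {}

    // Try to stage `size` bytes; returns the offset on success, or nullopt
    // if the request does not fit in the space remaining this frame.
    std::optional<size_t> Allocate(size_t size)
    {
        if (offset + size > buffer.size())
            return std::nullopt; // a resource bigger than the heap never fits
        size_t at = offset;
        offset += size;
        return at;
    }

    // Recycled once the GPU has consumed the pending copies.
    void Reset() { offset = 0; }

    size_t Capacity() const { return buffer.size(); }

private:
    std::vector<unsigned char> buffer;
    size_t offset;
};
```

Under this scheme, an 85MB texture forces `Capacity()` to be at least 85MB, which is exactly the standing memory cost described above.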

An example of the above is a texture that is around 85MB compressed, such as an 8K terrain material source. That kind of resource forces the upload heap to be at least 85MB in size to accommodate it, which means we permanently reserve 85MB of CPU memory just in case we want to upload a large texture.

Here are a few solutions to the problem.

  1. Large textures are automatically handled as sparse resources. This means one has to explicitly request a tile and mip to load when using the texture. This method has a smaller memory footprint but requires tons of extra bookkeeping.
  2. The resource loader has a budget. When a resource is loaded, that budget is decremented, and if the resulting value goes below 0, the load is postponed to the next frame, where it is attempted again. This method gracefully handles the memory restrictions, preventing large and sudden allocations, but requires a lot of work.
  3. Both 1 and 2. We implement a resource loader budget, allow the resource loader to reject a load, and add support for either designating or automatically deciding that some images should be sparse.

Fixed in #208 by using an upload heap that can defer a resource for future frames on a per-mip/lod basis.