GameTechDev/MaskedOcclusionCulling

Why use 1/w for depth test, not z/w

SungJJinKang opened this issue · 2 comments

I don't know how this works...
As far as I can tell, 1/w is used for the depth test, not z/w.

This is mentioned briefly in the readme and in more detail in MaskedOcclusionCulling.h (copied below). This is an optimization that uses z = 1/w as a proxy for z instead of calculating the actual z value. I will update the documentation to make this more clear,
but (please check my math :) ) here is a simplified case of transforming a point, with constant values c1 and c2 derived from the near and far planes:

projZ = z*c1 - c2 and projW = z;
so, projZ/projW = c1 - c2/z.

Because we are interpolating the z value during rasterization, we still need to express it in terms of projW; so,

z = projZ/projW = c1 - c2/projW.

We can use 1/projW for ordering because c1 and c2 are constant. Note that z = c1 - c2/projW grows as projW grows, while 1/projW shrinks, so the ordering is reversed: a larger 1/projW means a closer point. This is why the GREATER depth function mentioned below is needed.
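
To make that concrete, here is a minimal sketch (my own, not code from the library) assuming a standard D3D-style projection where c1 = f/(f-n) and c2 = n*f/(f-n); it checks that comparing 1/projW with GREATER agrees with comparing the conventional projZ/projW with LESS:

```cpp
// Sketch: ordering by 1/w is the reverse of ordering by z/w.
// Assumes c1 = f/(f-n), c2 = n*f/(f-n) (standard D3D-style projection).
#include <cstdio>

int main()
{
    const float n = 1.0f, f = 100.0f;          // near/far planes (assumed values)
    const float c1 = f / (f - n);
    const float c2 = n * f / (f - n);

    // Two view-space depths; after projection, projW equals the view-space z.
    const float wNear = 2.0f, wFar = 50.0f;

    const float zOverW_near = c1 - c2 / wNear; // conventional depth, smaller = closer
    const float zOverW_far  = c1 - c2 / wFar;
    const float invW_near   = 1.0f / wNear;    // 1/w proxy depth, larger = closer
    const float invW_far    = 1.0f / wFar;

    // Conventional test uses LESS; the 1/w proxy uses GREATER.
    printf("z/w: near %f < far %f -> %d\n", zOverW_near, zOverW_far, zOverW_near < zOverW_far);
    printf("1/w: near %f > far %f -> %d\n", invW_near, invW_far, invW_near > invW_far);
    return 0;
}
```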

  • Input to all API functions are (x,y,w) clip-space coordinates (x positive left, y positive up, w positive away from camera).
    We entirely skip the z component and instead compute it as 1 / w, see next bullet. For TestRect the input is NDC (x/w, y/w).
  • We use a simple z = 1 / w transform, which is a bit faster than OGL/DX depth transforms. Thus, depth is REVERSED and z = 0 at
    the far plane and z = inf at w = 0. We also have to use a GREATER depth function, which explains why all the conservative
    tests will be reversed compared to what you might be used to (for example zMaxTri >= zMinBuffer is a visibility test)
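
As an illustration of that reversed conservative test, here is a hypothetical helper (the names zMaxTri and zMinBuffer come from the comment above; this is a sketch, not the library's actual implementation):

```cpp
// With z = 1/w, larger values are closer, so visibility tests compare
// with GREATER instead of LESS.
static inline bool IsTriangleVisible(float zMaxTri, float zMinBuffer)
{
    // The triangle may be visible if its closest point (max 1/w) is at
    // least as close as the farthest depth stored in the buffer region.
    return zMaxTri >= zMinBuffer;
}
```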

Thanks for your reply!