Screenspace vs. mip-mapping

Just spent half a day debugging this, so here it is for the future reference of the internets.

In a deferred rendering setup (see Game Angst for a good discussion of deferred shading & lighting), lights are applied using data from screen-space buffers. Position, normal and other things are reconstructed from buffers and lighting is computed “in screen space”.
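For reference, reconstructing view-space position in the light pass can look roughly like this (just a sketch; the sampler and matrix names are made up, and platform-specific half-texel offsets and y flips are ignored):

```hlsl
// Sketch of position reconstruction in a deferred light pass.
sampler2D _DepthBuffer;     // depth stored or resolved into a texture
float4x4  _InverseProj;     // inverse of the camera projection matrix

float3 ReconstructViewPos(float2 uv)
{
    float depth = tex2D(_DepthBuffer, uv).r;            // depth in [0,1]
    float4 clip = float4(uv * 2.0 - 1.0, depth, 1.0);   // back to clip space
    float4 view = mul(_InverseProj, clip);              // unproject
    return view.xyz / view.w;                           // perspective divide
}
```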

Because each light is applied to a portion of the screen, the pixels it touches can belong to different objects. If you use mipmapped textures anywhere in the lighting computation, be careful. The most common use for mipmapped light textures is light “cookies” (aka “gobos”).

Let’s say we have a very simple scene with a spot light:

Light’s angular attenuation comes from a texture like this:

If the texture has mipmaps and you sample it in the “obvious” way (e.g. tex2Dproj), you can get something like this:

Black stuff around the sphere is no good! It’s not the infamous half-texel offset in D3D9, not a driver bug, not a shader compiler bug and not the nature trying to prevent you from writing a deferred renderer.

It’s the mipmapping.
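For reference, the “obvious” sampling path in the light pass is something like this (a sketch; _LightMatrix and _CookieTexture are placeholder names):

```hlsl
// Naive cookie sample in a deferred light pass (sketch).
sampler2D _CookieTexture;   // mipmapped cookie/gobo texture
float4x4  _LightMatrix;     // view space -> light cookie space

half CookieAttenuation(float3 viewPos)
{
    float4 cookieUV = mul(_LightMatrix, float4(viewPos, 1.0));
    // Regular projective sample: the GPU picks the mip level from the
    // screen-space derivatives of cookieUV, which can jump wildly across
    // object edges in a deferred pass.
    return tex2Dproj(_CookieTexture, cookieUV).a;
}
```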

Mipmaps of your cookie texture look like this (128x128, 16x16, 8x8, 4x4 shown):

Now, take two adjacent pixels, one on the edge of the sphere and the other on the background object (technically the GPU works on a 2x2 block of pixels, but two are enough to illustrate the point). When the light is applied, cookie texture coordinates are computed for those pixels. The coordinates can end up very different, especially when the pixels “belong” to entirely different surfaces that are far away from each other.

What does the GPU do when texture coordinates of adjacent pixels are very different? It chooses a lower mipmap level so that texel-to-pixel density roughly matches 1:1. At the edges in the “wrong” screenshot above, a very small mip level ends up being sampled, and there the texture is just plain black or plain white (see the 4x4 mip level).
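Conceptually the hardware picks the mip level from the screen-space derivatives of the texture coordinates, roughly like this (a simplified model that ignores anisotropic filtering):

```hlsl
// Simplified model of hardware mip selection.
// uv is the cookie texture coordinate, texSize the cookie resolution in texels.
float MipLevel(float2 uv, float2 texSize)
{
    float2 dx = ddx(uv) * texSize;               // texel-space change per pixel in x
    float2 dy = ddy(uv) * texSize;               // texel-space change per pixel in y
    float maxSqr = max(dot(dx, dx), dot(dy, dy));
    return 0.5 * log2(maxSqr);                   // log2 of the larger footprint length
}
```

When two pixels in the same 2x2 quad land on surfaces far apart, those derivatives become huge and the formula lands on one of the tiny mip levels.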

What to do here? You could disable mip-mapping, which is bad for both performance and image quality. You could drop the smallest mip levels, which might be enough and is not that bad for performance. Another option is to manually supply the LOD level or the derivatives to the sampling instructions, computed from something other than the cookie texture coordinates; for example, derivatives of the view-space position, or something like that. This might not be possible on lower shader models though.
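A sketch of that last option, assuming shader model 3.0 style instructions (tex2Dlod / tex2Dgrad); the names and the derivative scale are made up:

```hlsl
sampler2D _CookieTexture;   // same mipmapped cookie texture as before

// Option 1: supply the mip level explicitly; screen-space derivatives are ignored.
half CookieAttenuationLod(float4 cookieUV, float lod)
{
    return tex2Dlod(_CookieTexture, float4(cookieUV.xy / cookieUV.w, 0, lod)).a;
}

// Option 2: supply derivatives taken from something smoother than the cookie UVs,
// e.g. view-space position (derivScale is a hand-tuned factor).
half CookieAttenuationGrad(float4 cookieUV, float3 viewPos, float derivScale)
{
    float2 uv = cookieUV.xy / cookieUV.w;
    float2 dx = ddx(viewPos.xy) * derivScale;
    float2 dy = ddy(viewPos.xy) * derivScale;
    return tex2Dgrad(_CookieTexture, uv, dx, dy).a;
}
```

Either way the point is the same: stop the hardware from deriving the mip level from cookie coordinates that jump across object edges.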