D3D9 GPU Hacks

I've been trying to catch up on what hacks GPU vendors have exposed in Direct3D 9, and it turns out there are a lot of them!

If you know more hacks or more details, please let me know in the comments!

Most hacks are exposed as custom ("FOURCC") formats, so you check for them with IDirect3D9::CheckDeviceFormat. Here's the list (Usage column codes: DS=DepthStencil, RT=RenderTarget, 0=no usage flags; Resource column codes: tex=texture, surf=surface). The three vendor columns list the earliest hardware generation that supports each hack.

| Format | Usage | Resource | Description | NVIDIA GeForce | AMD Radeon | Intel |
|--------|-------|----------|-------------|----------------|------------|-------|
| **Shadow mapping** | | | | | | |
| D3DFMT_D16 | DS | tex | Sample depth buffer directly as a shadow map. | 3+ | HD 2xxx+ | 965+ |
| D3DFMT_D24X8 | DS | tex | Same as above. | 3+ | HD 2xxx+ | 965+ |
| **Depth buffer as texture** | | | | | | |
| DF16 | DS | tex | Read depth buffer as a texture. | — | 9500+ | G45+ |
| DF24 | DS | tex | Same as above. | — | X1300+ | SB+ |
| INTZ | DS | tex | Same as above. | 8+ | HD 4xxx+ | G45+ |
| RAWZ | DS | tex | Same as above. | 6 & 7 | — | — |
| **Anti-aliasing related** | | | | | | |
| RESZ | RT | surf | Resolve MSAA'd depth stencil surface into a non-MSAA'd depth texture. | — | HD 4xxx+ | G45+ |
| ATOC | 0 | surf | Transparency anti-aliasing. | 7+ | all DX9+ hardware (9500+) | SB+ |
| SSAA | 0 | surf | Same as above. | 7+ | — | — |
| n/a | | | Coverage Sampled Anti-Aliasing [5]. | 8+ | — | — |
| **Other** | | | | | | |
| ATI1 | 0 | tex | ATI1n & ATI2n texture compression formats. | 8+ | X1300+ | G45+ |
| ATI2 | 0 | tex | Same as above. | 6+ | 9500+ | G45+ |
| DF24 | DS | tex | Fetch4: when sampling a one-channel texture, return the four touched texel values [1]. Check for DF24 support. | — | X1300+ | SB+ |
| NULL | RT | surf | Dummy render target surface that does not consume video memory. | 6+ | HD 4xxx+ | HD+ |
| NVDB | 0 | surf | Depth Bounds Test. | 6+ | — | — |
| R2VB | 0 | surf | Render into vertex buffer. | 6 & 7 | 9500+ | — |
| INST | 0 | surf | Geometry instancing on pre-SM3.0 hardware. | — | 9500+ | — |

Native Shadow Mapping

Native support for shadow map sampling & filtering was introduced ages ago (GeForce 3) by NVIDIA. Turns out AMD also implemented the same feature for their DX10-level cards. Intel supports it on the 965 (aka GMA X3100, the Shader Model 3 part) and later (G45/X4500/HD) chips.

The usage is quite simple: create a texture with a regular depth/stencil format and render into it. When reading from the texture, the extra component in the texture coordinates is the depth value to compare against; the hardware returns the compared & filtered result.

Also useful:

  • Creating NULL color surface to keep D3D runtime happy and save on video memory.

Depth Buffer as Texture

For some rendering schemes (anything with “deferred” in the name) or some effects (SSAO, depth of field, volumetric fog, …) access to the depth buffer is needed. If the native depth buffer can be read as a texture, this saves both memory and either an extra rendering pass or an extra output channel when using MRTs.

Depending on hardware, this can be achieved via INTZ, RAWZ, DF16 or DF24 formats:

  • INTZ is for recent (DX10+) hardware. With recent drivers, all three major IHVs expose this. According to AMD [1], it also allows using the stencil buffer while rendering, and reading from the depth texture while it’s still being used for depth testing (but not depth writing). This seems to apply to NVIDIA & Intel parts as well.
  • RAWZ is for GeForce 6 & 7 series only. Depth is specially encoded into the four channels of the returned value.
  • DF16 and DF24 are for AMD and Intel cards, including older cards that don’t support INTZ. Unlike INTZ, they allow neither using the stencil buffer nor using the surface for both sampling & depth testing at the same time.
Also useful when using depth textures:
  • Creating NULL color surface to keep D3D runtime happy and save on video memory.
  • RESZ allows resolving multisampled depth surfaces into non-multisampled depth textures (result will be sample zero for each pixel).
  • Using INTZ for both depth/stencil testing and sampling at the same time seems to have performance problems on AMD cards (checked Radeon HD 3xxx to 5xxx, Catalyst 9.10 to 10.5). A workaround is to render to INTZ depth/stencil first, then use RESZ to “blit” it into another surface. Then do sampling from one surface, and depth testing on another.

Depth Bounds Test

Direct equivalent of GL_EXT_depth_bounds_test OpenGL extension. See [3] for more information.

Transparency Anti-Aliasing

NVIDIA exposes two controls: transparency multisampling (ATOC) and transparency supersampling (SSAA) [4]. The whitepaper does not explicitly say it, but for the ATOC render state (D3DRS_ADAPTIVETESS_Y set to ATOC) to actually work, D3DRS_ALPHATESTENABLE must also be set to TRUE.

AMD says that all Radeons since 9500 support “alpha to coverage” [1].

Intel supports ATOC (same as NVIDIA) with SandyBridge (GMA HD 2000/3000) GPUs.

Render Into Vertex Buffer

Similar to “stream out” or “memexport” in other APIs/platforms. See [2] for more information. Apparently some NVIDIA GPUs (or drivers?) support this as well.

Geometry Instancing

Instancing is supported on all Shader Model 3.0 hardware by Direct3D 9.0c, so no extra hacks are necessary there. AMD has exposed a capability to enable instancing on their Shader Model 2.0 hardware as well: check for “INST” support, and call dev->SetRenderState(D3DRS_POINTSIZE, kFourccINST); at startup to enable it.

I can’t find any document on instancing from AMD now. Other references: [6] and [7].

ATI1n & ATI2n Compressed Texture Formats

Compressed texture formats. ATI1n is known as BC4 format in DirectX 10 land; ATI2n as BC5 or 3Dc. Since they are just DX10 formats, support for this is quite widespread, with NVIDIA exposing it a while ago and Intel exposing it recently (drivers 15.17 or higher).

Thing to keep in mind: when D3D9 allocates a mip chain, it checks whether the format is a known compressed format and, if so, allocates the appropriate space for the smallest mip levels. For example, a 1x1 DXT1-compressed level actually takes up 8 bytes, because the block size is fixed at 4x4 texels; this is true for all block-compressed formats. With the hacked formats, D3D9 doesn’t know it’s dealing with block compression and only allocates the number of bytes the mip would have taken if it weren’t compressed. For example, a 1x1 ATI1n level gets only 1 byte allocated. So you need to stop the mip chain before either dimension shrinks below the block dimensions, otherwise you risk memory corruption.

Another thing to keep in mind: on the Vista+ (WDDM) driver model, textures in these formats still consume application address space, whereas most regular textures like DXT5 don’t take up additional address space under WDDM. For some reason, ATI1n and ATI2n textures on D3D9 are deemed lockable.


All this information gathered mostly from:

  1. Advanced DX9 Capabilities for ATI Radeon Cards (pdf)
  2. ATI R2VB Programming (pdf)
  3. NVIDIA GPU Programming Guide (pdf)
  4. NVIDIA Transparency AA
  5. NVIDIA Coverage Sampled AA
  6. Humus' Instancing Demo
  7. Arseny's article on particles


  • 2016 01 06: Updated links to NV/AMD docs since they like to move pages around making old links invalid! Renamed ATI to AMD. Clarified ATOC gotcha.
  • 2013 06 11: One more note on ATI1n/ATI2n format virtual address space issue (thanks JSeb!).
  • 2013 04 09: Turns out since sometime 2011 Intel has DF24 and Fetch4 for SandyBridge and later.
  • 2011 01 09: Intel implemented ATOC for SandyBridge, and NULL for GMA HD and later.
  • 2010 08 25: Intel implemented DF16, INTZ, RESZ for G45+ GPUs!
  • 2010 08 25: Added note on INTZ performance issue with ATI cards.
  • 2010 08 19: Intel implemented ATI1n/ATI2n support for G45+ GPUs in the latest drivers!
  • 2010 07 08: Added note on ATI1n/ATI2n texture formats, with a caveat pointed out by Henning Semler (thanks!)
  • 2010 01 06: Hey, shadow map hacks are also supported on Intel 965!
  • 2009 12 09: Shadow map hacks are supported on Intel G45!
  • 2009 11 21: Added instancing on SM2.0 hardware.
  • 2009 11 20: Added Fetch-4, CSAA.
  • 2009 11 20: Initial version.