Shaders must die, part 3

Continuing the series (see Part 1, Part 2)…

Got different lighting models (BRDFs) working. Without further ado, here are code snippets that produce real, working shaders that play nicely with lights, shadows and whatnot:

Simple Lambert (single color):

 Properties
     Color _Color
 EndProperties
 Surface
     o.Albedo = _Color;
 EndSurface
 Lighting Lambert
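
For reference, the “Lighting Lambert” line is what picks the lighting model; per light it boils down to a plain N·L diffuse term, roughly like this (a Cg-style sketch, not the generated code; the *2 attenuation scale mirrors the Unity 2.x convention visible in Part 2 below, and _LightColor is an illustrative name):

 uniform half4 _LightColor;   // the current light's color (illustrative name)
 // Sketch of a Lambert lighting model, evaluated once per light.
 // lightDir and atten come from the "light shader" side: attenuation, shadows etc.
 half4 LightingLambert (half4 albedo, half3 normal, half3 lightDir, half atten) {
     half ndotl = max (0, dot (normal, normalize (lightDir)));
     half4 c;
     c.rgb = albedo.rgb * _LightColor.rgb * (ndotl * atten * 2);
     c.a = albedo.a;
     return c;
 }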

Let’s add a texture:

 Properties
     2D _MainTex
     Color _Color
 EndProperties
 Surface
     o.Albedo = SAMPLE(_MainTex) * _Color;
 EndSurface
 Lighting Lambert

Change the lighting model to Half-Lambert (a.k.a. wrapped diffuse):

 // ...everything the same
 Lighting HalfLambert
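
The only change is in the lighting model: instead of clamping N·L at zero, it is remapped from [-1,1] to [0,1] (and usually squared), so surfaces facing away from the light don’t go completely black. A sketch, using the same conventions as the Lambert sketch above:

 // Wrapped diffuse, a sketch: remap N·L instead of clamping it.
 // The squaring is the usual Valve-style variant of the trick.
 half4 LightingHalfLambert (half4 albedo, half3 normal, half3 lightDir, half atten) {
     half diff = dot (normal, normalize (lightDir)) * 0.5 + 0.5;
     diff *= diff;
     half4 c;
     c.rgb = albedo.rgb * _LightColor.rgb * (diff * atten * 2);
     c.a = albedo.a;
     return c;
 }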

Blinn-Phong, with a constant exponent & constant specular color, modulated by a gloss map in the main texture’s alpha:

 Properties
     2D _MainTex
     Color _Color
     Color _SpecColor
     Float _Exponent
 EndProperties
 Surface
     half4 col = SAMPLE(_MainTex);
     o.Albedo = col * _Color;
     o.Specular = _SpecColor.rgb * col.a;
     o.Exponent = _Exponent;
 EndSurface
 Lighting BlinnPhong
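
The Blinn-Phong model adds a specular term on top of the diffuse one: the normal dotted with the half-vector between the light and view directions, raised to the exponent and tinted by the specular color. Roughly (again a sketch with the same conventions as above, not the generated code):

 // Blinn-Phong per light, a sketch. spec and exponent are the o.Specular and
 // o.Exponent values written in the Surface block above.
 half4 LightingBlinnPhong (half4 albedo, half3 spec, half exponent,
                           half3 normal, half3 lightDir, half3 viewDir, half atten) {
     half3 h = normalize (lightDir + viewDir);
     half diff = max (0, dot (normal, lightDir));
     half nh = max (0, dot (normal, h));
     half4 c;
     c.rgb = (albedo.rgb * diff + spec * pow (nh, exponent)) * _LightColor.rgb * (atten * 2);
     c.a = albedo.a;
     return c;
 }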

The same Blinn-Phong, with added normal map:

 Properties
     2D _MainTex
     2D _BumpMap
     Color _Color
     Color _SpecColor
     Float _Exponent
 EndProperties
 Surface
     half4 col = SAMPLE(_MainTex);
     o.Albedo = col * _Color;
     o.Specular = _SpecColor.rgb * col.a;
     o.Exponent = _Exponent;
     o.Normal = SAMPLE_NORMAL(_BumpMap);
 EndSurface
 Lighting BlinnPhong

I also made an illustrative-style BRDF (see Illustrative Rendering in Team Fortress 2), but that only requires the above sample to end with “Lighting TF2”.
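
The interesting bit of that model (going by the paper rather than by my exact code) is that a half-Lambert diffuse value is warped through a 1D ramp texture instead of being used directly; something like this, where _Ramp is just an illustrative name:

 // Diffuse warping as described in the TF2 paper, a sketch.
 uniform sampler2D _Ramp;   // 1D warping ramp, illustrative name
 half4 LightingTF2 (half4 albedo, half3 normal, half3 lightDir, half atten) {
     half hl = dot (normal, normalize (lightDir)) * 0.5 + 0.5;
     half3 warped = tex2D (_Ramp, half2 (hl, hl)).rgb;
     half4 c;
     c.rgb = albedo.rgb * _LightColor.rgb * warped * atten;
     c.a = albedo.a;
     return c;
 }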

Another thing I tried is a surface whose Albedo depends on the viewing angle, similar to the Layered Car Paint Shader. It works:

 Properties
     2D _MainTex
     2D _BumpMap
     2D _SparkleTex
     Float _Sparkle
     Color _PrimaryColor
     Color _HighlightColor
 EndProperties
 Surface
     half4 main = SAMPLE(_MainTex);
     half3 normal  = SAMPLE_NORMAL(_BumpMap);
     half3 normalN = normalize(SAMPLE_NORMAL(_SparkleTex));
     half3 ns = normalize (normal + normalN * _Sparkle);
     half3 nss = normalize (normal + normalN);
     i.viewDir = normalize(i.viewDir);
     half nsv = max(0,dot(ns, i.viewDir));
     half3 c0 = _PrimaryColor.rgb;
     half3 c2 = _HighlightColor.rgb;
     half3 c1 = c2 * 0.5;
     half3 cs = c2 * 0.4;    
     half3 tone =
         c0 * nsv +
         c1 * (nsv*nsv) +
         c2 * (nsv*nsv*nsv*nsv) +
         cs * pow(saturate(dot(nss,i.viewDir)), 32);
     main.rgb *= tone;
     o.Albedo = main;
     o.Normal = normal;
 EndSurface
 Lighting Lambert

Up next:

  • How and where emissive terms should be placed. I cautiously omitted all emissive terms from the above examples (so my layered car shader is without reflections right now).

  • Where should things like rim lighting go? I’m not sure if it’s a surface property (increasing albedo/emission with angle) or a lighting property (a back light).
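
For illustration, the “surface property” take would just be a view-angle-dependent emission boost, assuming an Emission output (which I’ve omitted so far) and hypothetical _RimColor/_RimPower properties:

 // Rim lighting as a surface property, a sketch: brighten the emission as the
 // normal turns away from the viewer. _RimColor and _RimPower are hypothetical.
 half rim = 1 - saturate (dot (normalize (i.viewDir), o.Normal));
 o.Emission += _RimColor.rgb * pow (rim, _RimPower);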

My impressions so far:

  • I like that I don’t have to write down vertex-to-fragment structures or the vertex shader. In most cases all the vertex shader does is transform stuff and pass it on to later stages, plus the occasional computation that is linear across the triangle. There’s no good reason to write that by hand.

  • I like that the above shaders don’t deal with how the rendering is actually done. In Unity’s case I’m compiling them into a single-pass-per-light forward renderer, but they should work just as well with multiple lights per pass, deferred rendering and so on. Of course, that still has to be proven!

So far so good.

Series index: Shaders must die, Part 1, Part 2, Part 3.


Shaders must die, part 2

I started playing around with the idea of “shaders must die”. I’m experimenting with extracting “surface shaders” for now.

Right now my experimental pipeline is:

  1. Write a surface shader file
  2. A Perl script transforms it into a Unity 2.x shader file
  3. Unity then compiles that into all lighting/shadow permutations for the D3D9 and OpenGL backends; Cg is used for the actual shader compilation.

I have very simple cases working. For example:

 Properties
     2D _MainTex
 EndProperties
 Surface
     o.Albedo = SAMPLE(_MainTex);
 EndSurface

This is the “no bullshit” source for a simple Diffuse (Lambertian) shader: 87 bytes of text.

The Perl script produces a Unity 2.x shader. This will be long, but bear with me - I’m trying to show how much stuff has to be written right now, when we’re operating at the vertex/pixel shader level. See Attenuation and Shadows for Pixel Lights in the Unity docs for how this system works.

 Shader "ShaderNinja/Diffuse" {
 Properties {
   _MainTex ("_MainTex", 2D) = "" {}
 }
 SubShader {
   Tags { "RenderType"="Opaque" }
   LOD 200
   Blend AppSrcAdd AppDstAdd
   Fog { Color [_AddFog] }
   Pass {
     Tags { "LightMode"="PixelOrNone" }
 CGPROGRAM
 #pragma fragment frag
 #pragma fragmentoption ARB_fog_exp2
 #pragma fragmentoption ARB_precision_hint_fastest
 #include "UnityCG.cginc"
 uniform sampler2D _MainTex;
 struct v2f {
     float2 uv_MainTex : TEXCOORD0;
 };
 struct f2l {
     half4 Albedo;
 };
 half4 frag (v2f i) : COLOR0 {
     f2l o;
     o.Albedo = tex2D(_MainTex,i.uv_MainTex);
     return o.Albedo * _PPLAmbient * 2.0;
 }
 ENDCG
   }
   Pass {
     Tags { "LightMode"="Pixel" }
 CGPROGRAM
 #pragma vertex vert
 #pragma fragment frag
 #pragma multi_compile_builtin
 #pragma fragmentoption ARB_fog_exp2
 #pragma fragmentoption ARB_precision_hint_fastest
 #include "UnityCG.cginc"
 #include "AutoLight.cginc"
 struct v2f {
     V2F_POS_FOG;
     LIGHTING_COORDS
     float2 uv_MainTex;
     float3 normal;
     float3 lightDir;
 };
 uniform float4 _MainTex_ST;
 v2f vert (appdata_tan v) {
     v2f o;
     PositionFog( v.vertex, o.pos, o.fog );
     o.uv_MainTex = TRANSFORM_TEX(v.texcoord, _MainTex);
     o.normal = v.normal;
     o.lightDir = ObjSpaceLightDir(v.vertex);
     TRANSFER_VERTEX_TO_FRAGMENT(o);
     return o;
 }
 uniform sampler2D _MainTex;
 struct f2l {
     half4 Albedo;
     half3 Normal;
 };
 half4 frag (v2f i) : COLOR0 {
     f2l o;
     o.Normal = i.normal;
     o.Albedo = tex2D(_MainTex,i.uv_MainTex);
     return DiffuseLight (i.lightDir, o.Normal, o.Albedo, LIGHT_ATTENUATION(i));
 }
 ENDCG
   }
 }
 Fallback "VertexLit"
 }

Phew, that is quite some typing to get a simple diffuse shader (1607 bytes)! Well, at least all the lighting/shadow combinations are handled by Unity’s macros here. When Unity takes this shader and compiles it into all the permutations, the result is 58 kilobytes of shader assembly (D3D9 + OpenGL, 17 light/shadow combinations).

Let’s try something slightly different: bumpmapped, with a detail texture:

 Properties
     2D _MainTex
     2D _Detail
     2D _BumpMap
 EndProperties
 Surface
     o.Albedo = SAMPLE(_MainTex) * SAMPLE(_Detail) * 2.0;
     o.Normal = SAMPLE_NORMAL(_BumpMap);
 EndSurface

This is 173 bytes of text. Generated Unity shader is 2098 bytes, which compiles into 74 kilobytes of shader assembly.

In this case, the processing script detects that the surface shader modifies the normal per pixel, and does the necessary tangent-space light transformations. It all just works!
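
Concretely, the extra per-vertex work amounts to roughly this (it is what Unity’s TANGENT_SPACE_ROTATION macro does; the generated code may differ in details):

 // Build the object-space-to-tangent-space rotation from the vertex tangent
 // basis, then move the light direction into tangent space so it matches the
 // normals sampled from the bump map.
 float3 binormal = cross (v.normal, v.tangent.xyz) * v.tangent.w;
 float3x3 rotation = float3x3 (v.tangent.xyz, binormal, v.normal);
 o.lightDir = mul (rotation, ObjSpaceLightDir (v.vertex));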

So this is where I am now. Next up: detecting which lighting model to use based on the surface parameters (right now it always uses Lambertian). Fun!


Shaders must die

It came in as a simple thought, and now I can’t shake it off. So I say:

Shaders Must Die

Ok, now that the controversial bits are done, let’s continue.

Most of this can be (and probably is) wrong, and I haven’t given it enough thought yet. But here’s my thinking about shaders for “regular scene objects”. Everything below is about things that need to interact with lighting; I’m not talking about shaders for postprocessing, one-off uses, special effects, GPGPU or kitchen sinks.

Operating on vertex/pixel shader level is a wrong abstraction level

Instead, it should be separated out into “surface shader” (albedo, normal, specularity, …), “lighting model” (Lambertian, Blinn Phong, …) and “light shader” (attenuation, cookies, shadows).

  • Probably 90% of the cases would only touch the surface shader (mostly mix textures/colors in various ways), and choose from some precooked lighting models.

  • 9% of the cases would tweak the lighting model. Most of the things would settle for “standard” (Blinn-Phong or similar), with some stuff using skin or anisotropic or …

  • The “light shader” only needs to be touched once in a blue moon by ninjas. Once the shadowing and attenuation systems are implemented, there’s almost no reason for shader authors to see all the dirty bits.

Yes, current hardware operates on vertex/geometry/pixel shaders, which is a logical thing for hardware to do. After all, those are the primitives it works with when rendering. But they are not the primitives you think in when authoring how a surface should look or how it should react to light.
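
To make the split concrete, here is roughly how the three pieces could look as plain Cg functions (a sketch only; the names are made up, not an existing API):

 uniform sampler2D _MainTex;
 // Surface shader: what the surface looks like at this point. Authors write this.
 struct MySurface { half3 Albedo; half3 Normal; };
 void MySurf (float2 uv, inout MySurface o) {
     o.Albedo = tex2D (_MainTex, uv).rgb;
 }
 // Lighting model: how the surface reacts to one light. Usually picked from a
 // library (Lambertian here), occasionally written by hand.
 half3 MyLambert (MySurface s, half3 lightDir, half3 lightColor, half atten) {
     return s.Albedo * lightColor * max (0, dot (s.Normal, lightDir)) * atten;
 }
 // Light shader: attenuation, cookies, shadows. The engine computes lightDir,
 // lightColor and atten and passes them in; shader authors never see this part.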

Simple code; no redundant info; sensible defaults

In the ideal world, here’s a simple surface shader (the syntax is deliberately stupid):

Haz Texture;
Albedo = sample Texture;

Or with bump mapping added:

Haz Texture;
Haz NormalMap;
Albedo = sample Texture;
Normal = sample_normal NormalMap;

And this should be all the information you have to provide. The lighting model would be chosen based on what is actually used (in this case, Lambertian). It would somehow just work with all kinds of lights, shadows, ambient occlusion and whatnot.

Compare that to how much has to be written to implement a simple surface in your current shader technology so that it works “with everything”.

From the above shader, proper hardware shaders can be generated for DX9, DX11, DX1337, OpenGL, next-gen and next-next-gen consoles, mobile platforms with capable hardware, etc.

It can be used in accumulative forward rendering, forward rendering with multiple lights per pass, hybrid (light pre-pass / prelight) rendering, deferred rendering etc. Heck, even for a raytracer if you have one at hand.

I want!

Now of course, it won’t be as nice once more complex materials have to be expressed. Some might not even be possible. But shader text complexity should grow with material complexity; and all information that is redundant, implied, inferred or useless should be eliminated. There’s no good reason to stick to the conventions and limits of current hardware just because it happens to operate like that.

Shaders must die!


Google O3D - it's going to be interesting

A couple of weeks ago Google announced O3D: an open source web browser plugin for low level accelerated 3D graphics. The website for the O3D project is here.

Of course this created some buzz (hey, it’s Google after all). And it is, in some ways, a technology that competes with Unity. I think it’s going to be interesting, so I say “welcome competition!”

Preemptive blah blah: this website is my personal opinion and does not represent the views of my employer, former employers or anyone else other than myself.

Unity is one of the players in the “3D on the web” space. 3D graphics in the browser are in fact nothing new. Unity’s browser plugin has existed since 2005 and now has an install count in the eight digits. There are VRML, X3D, Adobe Shockwave, 3DVIA/Virtools, software rendering approaches on top of Flash and so on.

In my view, major advantages that Unity has compared to O3D:

  • It’s not only about the graphics. Unity has physics, audio, input, scripting, streaming, networking, asset pipeline and whatnot. O3D is only about the graphics, and at a lower level.

  • Unity runs on a wider range of hardware. O3D requires Shader Model 2.0 or later hardware, so about 30% of the “machines on the internet” can’t run O3D (based on our 2009Q1 data). Couple that with the many compatibility workarounds we have, and it’s probably safe to say that Unity is more stable and mature at this point.

  • Unity is not only about the web. There’s support for iPhone, Nintendo Wii and standalone games, and more console and mobile platforms will come with time.

  • Creating and improving Unity is our primary and only focus as a company. In Google’s case, O3D is just another technology in their vast portfolio.

Of course, O3D also has advantages:

  • It’s done by Google! When Google does anything, people notice immediately :)

  • O3D is free and open source. It’s hard to beat a free price, and open source does have its benefits. O3D is not a “standard” of any sort right now, but it looks like Google would like it to become one.

  • Focusing only on low level graphics has its benefits: it’s lightweight, and it appeals to hackers and graphics programmers who want to be in control. Unity’s higher level approach is much easier and faster to use, but low level hacking can be fun.

Of course there are tons of other differences (I might have missed something important as well).

For me as a rendering guy, it’s interesting to see O3D making similar decisions here and there (e.g. they don’t use GLSL on OpenGL either, because it does not really work in the real world).

So… we’ll see where things will go. It’s going to be interesting!


All games in one short paragraph

Here, ryg nails it:

why would you want sound and physics when you can have sparsely clothed ninja space marine amazon secret agents riding on chainsaw-hoofed flying pink stealth space unicorns through a brightly colored dystopian african urban jungle fantasy wasteland island state populated with mutated propaganda-spewing gas mask-wearing alien nazi zombie demons that entered this island planet dimension through a hellgate portal invasion triggered by a black magic freak teleportation experiment resonance cascade accident caused by a power-hungry mad scientist wizard evil genius working for a multinational corporation conspiracy of lawyers and weapons manufacturers without morals, and all that in its proper realtime dynamically lit globally illuminated deferred-shaded parallax-occlusion-mapped ambient-occluded shadow-buffered high dynamic range silky smooth glory?

Pretty much sums up the mainstream game industry!