During development of Unity 3.0, I was not-so-pleasantly surprised to see that our cross-compiled shaders ran slow on iPhone 3GS. And by “slow”, I mean SLOW; at the speeds of “stop the presses, we cannot ship brand new OpenGL ES 2.0 support with THAT performance”.
Take this HLSL pixel shader for particles, which does nothing but multiply a texture with the per-vertex color:
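A minimal sketch of such a shader (reconstructed for illustration; names like `_MainTex` and the exact `v2f` layout are assumptions, not necessarily the original listing):

```hlsl
// Reconstructed sketch of a minimal particle pixel shader (not the exact
// original listing); _MainTex and the v2f layout are assumed names.
struct v2f {
    half4 color : COLOR0;    // per-vertex particle color
    half2 uv    : TEXCOORD0; // texture coordinates
};

sampler2D _MainTex;

half4 frag (v2f i) : COLOR
{
    // one texture fetch, one multiply
    return tex2D (_MainTex, i.uv) * i.color;
}
```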
This is about as simple as it gets; it should be one texture fetch and one multiply for the GPU.
Now of course, when HLSL gets cross-compiled into GLSL, it is augmented by some dummy functions/moves to match GLSL’s semantics of “a function called main that takes no arguments and returns no value”. So you get something like this in GLSL:
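The cross-compiled output looks roughly like this (a sketch of the pattern described, not the verbatim compiler output):

```glsl
// Sketch of the kind of GLSL the cross-compiler emits (not verbatim):
// the translated function, plus a wrapper main() that shuffles data around.
uniform sampler2D _MainTex;

vec4 xlat_main (in vec4 color, in vec2 uv)
{
    return texture2D (_MainTex, uv) * color;
}

void main ()
{
    vec4 xl_retval;
    xl_retval = xlat_main (gl_Color, gl_TexCoord[0].xy);
    gl_FragData[0] = xl_retval;
}
```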
Makes sense. The original function was translated, and a main() got added that fills in the input structure, calls the function and writes the result to the output.
Lo and behold, the above (with some OpenGL ES 2.0 specific stuff added, like precision qualifiers, definitions of varyings etc.) runs like sh*t on a mobile platform.
Which probably means mobile platform drivers are quite bad at optimizing GLSL. I mostly tested iOS, but some tests on Android indicate the situation is the same (maybe even worse, depending on the exact kind of Android you have). Which is sad, since said platforms also do not have any way to precompile shaders offline, where they could afford good but slow compilers.
Now of course, if you’re writing GLSL shaders by hand, you’re probably writing close to optimal code, with no redundant data moves or wrapper functions. But if you’re cross-compiling them from Cg/HLSL, or generating from some shader fragments, or from visual shader editors, you probably depend on shader compiler being decent at optimizing redundant bits.
Around the same time I accidentally discovered that the Mesa 3D guys are working on a new GLSL compiler, dubbed GLSL2. I looked at the code and I liked it a lot; very hackable, with a “no bullshit” approach. So I took Mesa’s GLSL compiler and made it output GLSL back after it has done all the optimizations.
Here it is: http://github.com/aras-p/glsl-optimizer
It reads GLSL, does some architecture independent optimizations (dead code removal, algebraic simplifications, constant propagation, constant folding, inlining, …) and spits out “optimized” GLSL back.
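Driving it from code is a small C-style API (paraphrased from the project’s `glsl_optimizer.h`; check the header for the exact signatures and enum names, as they may differ between versions):

```cpp
// Sketch of using the glsl-optimizer library; names paraphrased from
// glsl_optimizer.h and may differ slightly between versions.
#include "glsl_optimizer.h"
#include <string>

std::string OptimizeFragmentShader (const char* source)
{
    glslopt_ctx* ctx = glslopt_initialize (true); // true = target OpenGL ES
    glslopt_shader* shader =
        glslopt_optimize (ctx, kGlslOptShaderFragment, source, 0);

    std::string result;
    if (glslopt_get_status (shader))
        result = glslopt_get_output (shader); // optimized GLSL text

    glslopt_shader_delete (shader);
    glslopt_cleanup (ctx);
    return result;
}
```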
Take the simple particle shader example from above; the GLSL optimizer turns it into:
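Roughly this (a sketch matching the description below, not the verbatim optimizer output):

```glsl
// Roughly what the optimizer produces (sketch, not verbatim output);
// note the redundant .xyzw output swizzle mentioned below.
uniform sampler2D _MainTex;

void main ()
{
    gl_FragData[0].xyzw =
        (texture2D (_MainTex, gl_TexCoord[0].xy) * gl_Color).xyzw;
}
```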
Save for redundant swizzle outputs (on my todo list), this is pretty much what you’d be writing by hand. No redundant moves, function call inlined, no extra temporaries, sweet!
How much difference does this make?
Lots of particles, non-optimized GLSL on the left; optimized GLSL on the right (click for larger image). Yep, it’s 236 vs. 36 milliseconds/frame (4 vs. 27 FPS).
This result is for iPhone 3GS running iOS 4.1. Some Android results: Motorola Droid (some PowerVR GPU): 537 vs. 223 ms; Nexus One (Snapdragon 8250 w/ Adreno GPU): 155 vs. 155 ms (yay! good drivers!); Samsung Galaxy S (some PowerVR GPU): 200 vs. 60 ms. All tests were run at native device resolutions, so do not take this as a performance comparison between devices.
What about a more complex shader example? Let’s try a per-pixel lit Diffuse shader (which is quite simple, but will do ok as a “complex shader” example for a mobile platform). You can see that the GLSL code below is mostly auto-generated; writing it by hand wouldn’t produce that many data moves, unused struct members etc. Cg compiles the original shader code into 10 ALU and 1 TEX instructions for a D3D9 pixel shader 2.0 target, and is able to optimize away all the redundant stuff.
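The generated code follows the same wrapper pattern as before, just bigger; a condensed, illustrative sketch of its flavor (the actual listing is much longer, and all names here are assumptions):

```glsl
// Condensed, illustrative sketch of auto-generated per-pixel diffuse GLSL
// (not the actual listing); struct members and uniform names are assumed.
uniform sampler2D _MainTex;
uniform vec4 _LightColor0;

struct v2f_surf {
    vec4 pos;        // unused in the pixel shader, but still declared
    vec2 hip_pack0;
    vec3 normal;
    vec3 lightDir;
};

vec4 frag_surface (in v2f_surf IN)
{
    vec4 albedo = texture2D (_MainTex, IN.hip_pack0.xy);
    float diff = max (0.0, dot (normalize (IN.normal),
                                normalize (IN.lightDir)));
    vec4 c;
    c.xyz = albedo.xyz * _LightColor0.xyz * (diff * 2.0);
    c.w = albedo.w;
    return c;
}

void main ()
{
    v2f_surf xlt_IN;
    xlt_IN.pos = vec4 (0.0);              // redundant move of an unused member
    xlt_IN.hip_pack0 = gl_TexCoord[0].xy;
    xlt_IN.normal = gl_TexCoord[1].xyz;
    xlt_IN.lightDir = gl_TexCoord[2].xyz;
    vec4 xl_retval = frag_surface (xlt_IN);
    gl_FragData[0] = xl_retval;
}
```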
Running the above through GLSL optimizer produces this:
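Along these lines (a sketch matching the description below: everything inlined, dead assignments gone, temporaries given cryptic compiler-generated names):

```glsl
// Illustrative sketch of the optimized output (not verbatim): functions
// inlined, unused members eliminated, temporaries renamed by the compiler.
uniform sampler2D _MainTex;
uniform vec4 _LightColor0;

void main ()
{
    vec4 tmpvar_1 = texture2D (_MainTex, gl_TexCoord[0].xy);
    float tmpvar_2 = max (0.0, dot (normalize (gl_TexCoord[1].xyz),
                                    normalize (gl_TexCoord[2].xyz)));
    vec4 c;
    c.xyz = tmpvar_1.xyz * _LightColor0.xyz * (tmpvar_2 * 2.0);
    c.w = tmpvar_1.w;
    gl_FragData[0] = c;
}
```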
All functions got inlined, all unused variable assignments got eliminated, and most of the redundant moves are gone. There are some redundant moves left though (again, on my todo list), and the variables are assigned cryptic names after inlining. But otherwise, writing the equivalent shader by hand would be pretty close.
Difference between non-optimized and optimized GLSL in this case:
Non-optimized vs. optimized: 350 vs. 267 ms/frame (2.9 vs. 3.7 FPS). Not bad either!
Pulling off this GLSL optimizer quite late in Unity 3.0 release cycle was a risky move, but it did work.
Hats off to the Mesa folks (Eric Anholt, Ian Romanick, Kenneth Graunke et al) for making an awesome GLSL compiler codebase! I haven’t merged up the latest GLSL compiler developments from the Mesa tree; they’ve implemented quite a few new compiler optimizations, but I was too busy shipping Unity 3 already. Will try to merge them in soon-ish.
I’ve tested non-optimized vs. optimized GLSL a bit on a desktop platform (MacBook Pro, GeForce 8600M, OS X 10.6.4) and there is no observable speed difference. Which makes sense, and I would have expected mobile drivers to be good at optimization as well, but apparently that’s not the case.
Now of course, mobile drivers will improve over time, and I hope offline “GLSL optimization” step will become obsolete in the future. I still think it makes perfect sense to fully compile shaders offline, so at runtime there’s no trace of GLSL at all (just load binary blob of GPU microcode into the driver), but that’s a story for another day.
In the meantime, you’re welcome to try GLSL Optimizer out!