US Vacation Report 2014

This April I had a vacation in the USA, so here’s a write-up and a bunch of photos. Our trip: 12 days, group of five (myself, my wife, our two daughters and my sister), rented a car and drove around. Made the itinerary ourselves; tried to stay out of big cities and hotel chains – used airbnb where possible. For everyone except me, this was the first trip to the USA; and even I had never ventured outside of conference cities before.

TL;DR: Grand Canyon and Death Valley are awesome.

Summary of what we wanted:

  • See nature. Grand Canyon, Death Valley, Yosemite, Highway 1 etc. were on the potential list. Decided to skip Yosemite this time.
  • No serious hiking; that’s hard to do with kids and we’re lazy :)

Asking friends and reading the internets (wikitravel, wikipedia, lonely planet, random blogs), I came up with a list of “places I’d like to go”. Thanks to everyone on Facebook for telling me that my initial plan was way too ambitious. This is what we ended up with:

Flew to Vegas, did a couple of round trips from there (to Valley of Fire and Grand Canyon), and then headed off towards the ocean. Through Death Valley, and then up north to San Francisco; flew back from there. In total we ended up driving close to 3000 kilometers; a pretty good balance between “want to see lots of stuff” and “gotta always be driving”.

Almost all photos below are my wife’s. Equipment: Canon EOS 70D with Canon 24-70mm f/2.8 L II and Sigma 8-16mm f/4.5-5.6. A couple photos by me, taken with iPhone 4S :)

Day 1: Valley of Fire

Close to Vegas, there’s this amazing formation of red sandstone. It’s fairly big, so that took the whole day. A good short trip for a jetlagged day :)

This is Elephant Rock, if you squint enough:

Day 2: Vegas to Grand Canyon via Route 66

Stela (my 4yo daughter) is a huge fan of Pixar’s Cars, so a short detour through the actual Route 66 and the Hackberry General Store was a joy for her.

Yellow Arizona landscapes:

Arrived at the Grand Canyon towards the evening, at the South Rim. It is touristy, so we picked a less-central trail (Kaibab Trail). The views are absolutely breathtaking; something that is very hard to convey via photos. The scale is hard to comprehend: the height from the rim to the bottom is 1.6 km!

We walked a bit below the rim on Kaibab Trail. Would be cool to get to the bottom, but that is a full-day hike one way. Maybe next time.

Day 3: Grand Canyon, and back to Vegas through Hoover Dam

It’s next to impossible to find lodging at the Grand Canyon Village itself (unless you book half a year in advance?), so we slept in Tusayan, 10 km south. Next morning, we went to the canyon rim again.

Visited Hoover Dam on the way back to Vegas:

Day 4: Death Valley

Death Valley is the lowest and driest place in North America, and the hottest place on Earth. This was April, so the temperature was a mild 40°C in the shade :) Death Valley is amazing because lots and lots of very different geological features sit close to each other.

Here, Zabriskie Point, a fantastic erosional landscape. If you’re into obscure movies, you might know a film of the same name.

Next up, Devil’s Golf Course, a salt pan. Apparently almost a century ago one guidebook described it with “Only the devil could play golf here”, and the name stuck. The salt crystals on these rocks are really sharp; there are signs advising “do not trip or you’ll cut yourself”.

Natural Bridge in the mountains right next to it. It was also getting very, very hot.

Badwater Basin, lowest point in North America. Aistė (my wife) walking on pure salt.

Artist’s Palette, with rocks of any color you like.

Remains of a borax mine, Harmony Borax Works:

Mesquite Flat Sand Dunes. They say some scenes from Star Wars were shot here!

Day 5: Death Valley to Pacific

We stayed at Panamint Springs Resort in the Panamint Valley. Tried to catch a sunrise next morning, but it was all cloudy :( Ground here is covered with salt as well:

Leaving Panamint Valley, and on into much greener California:

Day 6: San Luis Obispo to Big Sur

Spent a night in San Luis Obispo (lovely little town!), and took Route 1 up north towards Big Sur.

Turns out, the coast redwoods are quite tall!

Day 7: Big Sur to Santa Cruz

Impressive scenery of ocean and clouds and rocks; and Bixby Creek Bridge.

Jellyfish at Monterey Aquarium:

Big waves and Santa Cruz beach:

Days 8, 9, 10: To San Francisco and there

Rented an apartment on the western side (Sunset District), so that it would be possible to find some parking space :) Moraga Steps and the view towards sunset.

Obligatory Golden Gate Bridge.

Muir Woods just north of SF. This is again a redwood park, but much more crowded than Big Sur.

Random places in SF: wall art at Broadway/Columbus, and a block of Lombard Street, “the crookedest street in the world”.

Day 11: back home

A long flight back home. 5AM in the airport :)

Rant About Rants About OpenGL

Oh boy, do people talk about the state of OpenGL lately! Some exhibits: Joshua Barczak’s “OpenGL is Broken”, Timothy Lottes’ reply to that, Michael Marks’ reply to Timothy’s reply. Or, earlier, Rich Geldreich’s “Things that drive me nuts about OpenGL” and again Timothy’s reply.

Edit: Joshua’s followup

In all this talk, one side (the one that says GL is broken) frequently brings up Mantle or Direct3D 12. The other side (the one that says GL is just fine, and indeed better) frequently brings up AZDO (“Almost Zero Driver Overhead”) approaches. There are long twitter and reddit and hackernews threads on all this.

It might seem weird – why would OpenGL get such a bashing all of a sudden? But this is a much better state than some 7 years ago… Back then almost no one cared about OpenGL at all! If people complain, that at least means they do care!

But you know what, both of these sides are right.

OpenGL has issues

Trying to flat-out deny that would be ignorant. Too many ways to do things; too much legacy; integer names instead of pointer handles; bind-to-edit; poor multi-threading; lack of offline shader compilation; the list goes on – all of these are real, actual issues. And yes, most or all of them are being worked on, or are indeed fixed if you can use the latest GL versions.
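
To make “bind-to-edit” concrete, here’s a minimal sketch in plain GL (nothing Unity-specific; the texture and its settings are made up for illustration): to change a property of an object you must first bind it, which silently disturbs whatever was bound before.

    // Classic bind-to-edit; assumes a current GL context and loaded entry points.
    #include <GL/gl.h> // or your GL loader of choice (glew, glad, ...)

    GLuint CreateMyTexture()
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        // To edit the texture we must bind it, clobbering whatever the active unit had bound:
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        // With direct state access (e.g. EXT_direct_state_access) the same edit needs no bind:
        //   glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        return tex;
    }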

These technical issues are important, I think. And no, saying “state changes are expensive? don’t do them” – which is what much of AZDO advocacy ends up being – is not a real answer IMHO. Yes, moving to a “pull data from the GPU” model is perhaps the future, and it is great. But that does not mean you can completely ignore CPU-side things and pretend inefficiencies do not exist there. CPUs are still great at doing some things!

However, the biggest issue in my view is not a technical one, but a political one. On Windows, out of the box, you do not get an OpenGL driver (but you do get a D3D one for most GPUs). And no, actual people out there do not update their drivers. Ever.

I know that you do. And your technically savvy gamer friends do. But for each one of you, there are 10 people who don’t. We have hardware stats from hundreds of millions of machines; the most popular driver versions on Windows are the ones that ship with the OS.

On Mac, OpenGL is “somewhat behind” (GL 4.1 right now). But compared to Windows the practical situation is much better, since GL drivers do come with the OS. And OS updates are free, and somehow Mac users update their OSes at a much faster rate than Windows users do.

I’ve no idea about user behaviour on Linux. I know there are binary drivers for NV/AMD (tracking the latest GL), open source ones for Intel (behind the latest GL), and nouveau and gallium etc. But I have no idea whether Linux people update their drivers, or whether drivers come with the OS etc. So no informed opinion on this particular part from me.

So, on Windows we have the problem that GL drivers aren’t shipped with the OS, and that people generally don’t update their drivers. How to solve that? Perhaps all of us should try to persuade Microsoft to change this, and ship GL drivers with the OS & Windows updates. Maybe it’s not as crazy as it sounds these days (hey, no one believed MS would ever support WebGL… but they do! kind of).

On Mac we don’t have that particular problem, but we do have a problem that GL implementation is lagging behind the latest version. How to solve that? Perhaps try to persuade Apple to not lag behind. And/or make such kickass games that the advantage of latest GL tech would be too obvious to ignore (this one is a bit of a chicken-and-egg problem, sure).

OpenGL also has potential

Modern OpenGL does indeed have some crazy-awesome features. Check out AZDO again - that’s a whole new level of thinking there. The combination of persistent mapping, fine-grained fences, bindless resources and multi-draw-indirect does enable building substantially different rendering pipelines.
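
For a taste of what that combination looks like on the application side, here’s a heavily simplified sketch (GL 4.4-level calls; the buffer size, kMaxDraws and BuildDrawCommands are made up for illustration, and the synchronization is deliberately naive):

    // Persistent mapping + multi-draw-indirect, sketched; assumes a current GL 4.4 context
    // and a loader (glew/glad/...) providing the entry points.
    struct DrawElementsIndirectCommand // layout defined by the GL spec
    {
        GLuint count, instanceCount, firstIndex, baseVertex, baseInstance;
    };

    // Hypothetical scene traversal that fills the command array and returns the draw count.
    extern int BuildDrawCommands(DrawElementsIndirectCommand* cmds, int maxDraws);

    static const int kMaxDraws = 4096; // made-up capacity
    static const GLbitfield kMapFlags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    static DrawElementsIndirectCommand* s_Cmds; // stays mapped for the buffer's whole lifetime

    void InitIndirectBuffer()
    {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, buf);
        // Immutable storage that may stay mapped while the GPU reads from it:
        glBufferStorage(GL_DRAW_INDIRECT_BUFFER, kMaxDraws * sizeof(DrawElementsIndirectCommand), nullptr, kMapFlags);
        s_Cmds = (DrawElementsIndirectCommand*)glMapBufferRange(
            GL_DRAW_INDIRECT_BUFFER, 0, kMaxDraws * sizeof(DrawElementsIndirectCommand), kMapFlags);
    }

    void DrawFrame()
    {
        // The CPU writes draw commands straight into GPU-visible memory...
        int drawCount = BuildDrawCommands(s_Cmds, kMaxDraws);
        // ...and the GPU consumes them all with a single call:
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, drawCount, 0);
        // Naive sync for the sketch: wait right away before touching s_Cmds again.
        // A real renderer would ring-buffer the commands and wait on fences from a few frames back.
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, GLuint64(-1));
        glDeleteSync(fence);
    }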

The extension mechanism is an excellent vehicle for bleeding-edge capability delivery. Bindless and sparse textures, flexible indirect draw, persistent buffer mapping, stencil export and so on – all these things appeared as extensions in OpenGL, long before Direct3D picked them up.
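
As an aside, checking whether one of those extensions is actually present at runtime is straightforward (GL 3.0+ style enumeration; the extension picked below is just an example):

    // Query the extension list to decide whether a faster path is available.
    #include <cstring>

    bool HasGLExtension(const char* name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i)
            if (std::strcmp((const char*)glGetStringi(GL_EXTENSIONS, i), name) == 0)
                return true;
        return false;
    }

    // e.g. if (HasGLExtension("GL_ARB_bindless_texture")) { /* take the bindless path */ }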

OpenGL is an API that spans the most platforms. On some of them, it is the only API. This is a valuable thing.

This is where I forgot what I wanted to say

I’m not sure where I’m going with this, really. Maybe “make love, instead of fighting over whether OpenGL is good or bad”. It will all be alright in the end. Or something like that.

If you really want OpenGL fixed, perhaps joining Khronos is a good idea. That particular piece is a topic for another rant, I guess… We (Unity) are in Khronos, but there’s too much bureaucracy for my taste, so I just can’t be bothered. Thankfully Christophe often carries the flag.

I’m actually quite happy that Mantle and the upcoming DX12 have caused quite a stir of discussions (to be fair, PS3’s libGCM was probably the first “modern, to the metal” API, but everyone who knows anything about it can’t talk about it). Once things shake out, we’ll be left with a better world of graphics APIs. Maybe that will be a world with more than two non-console APIs, who knows. In any case, competition is good!

Shader Compilation in Unity 4.5

A story in several parts: 1) how shader compilation is done in the upcoming Unity 4.5; and 2) how it was developed. The first part is probably interesting to Unity users; the second to those curious about how we work and develop stuff.

Short summary: Unity 4.5 will have a “wow, many shaders, much fast” shader importing and better error reporting.

Current state (Unity <=4.3)

When you create a new shader file (.shader) in Unity or edit an existing one, we launch a “shader importer” – just like for any other changed asset. That shader importer does some parsing, and then compiles the whole shader into all the platform backends we support.

Typically, when you create a simple surface shader, it internally expands into 50 or so internal shader variants (the classic “preprocessor driven uber-shader” approach). And typically there are 7 or so platform backends to compile into (d3d9, d3d11, opengl, gles, gles3, d3d11_9x, flash – more if you have console licenses). This means that each time you change anything in the shader, a couple hundred shaders are being compiled. And all that is assuming you have a fairly simple shader – if you throw in some multi_compile directives, you’ll be looking at thousands or tens of thousands of shaders being compiled. Each. And. Every. Time.
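
To put rough numbers on that multiplication (the figures below are illustrative, not exact Unity internals):

    // Back-of-the-envelope variant math; all numbers are illustrative.
    #include <cstdio>

    int main()
    {
        int surfaceVariants = 50;    // what a simple surface shader roughly expands into
        int backends        = 7;     // d3d9, d3d11, opengl, gles, gles3, d3d11_9x, flash
        int multiCompile    = 2 * 4; // say, one two-keyword and one four-keyword multi_compile line

        std::printf("simple surface shader: %d compiles per edit\n", surfaceVariants * backends);
        std::printf("with multi_compile:    %d compiles per edit\n", surfaceVariants * multiCompile * backends);
        return 0; // prints 350 and 2800
    }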

Does it make sense to do that? Not really.

Like most of “why are we doing this?” situations, this one also evolved organically, and can be explained with “it sounded like a good idea at the time” and “it does not fix itself unless someone works on it”.

A long time ago, Unity only had one or two shader platform backends (opengl and d3d9), and the number of shader variants people were using was much lower. With time we got both more backends and more variants, and it became very apparent that someone needed to solve this problem.

In addition to the above, there were other problems with shader compilation, for example:

  • Errors in shaders were reported, well, “in a funny way”. Sometimes the line numbers did not make any sense – which is quite confusing.
  • Debugging generated surface shader code involved quite some voodoo tricks (#pragma debug etc.).
  • Shader importer tried to multi-thread compilation of these hundreds of shaders, but some backend compilers (Cg) have internal global mutexes and do not parallelize well.
  • Shader importer process was running out of memory for really large multi_compile variant counts.

So we’re changing how shader importing works in Unity 4.5. The rest of this post will be mostly dumps of our internal wiki pages.

Shader importing in Unity 4.5

  • No runtime/platform changes compared to 4.3 – all changes are editor only.
  • No shader functionality changes compared to 4.3.
  • Shader importing is much faster; especially complex surface shaders (Marmoset Skyshop etc.).
    • Reimporting all shaders in graphics tests project: 3 minutes with 4.3, 15 seconds with this.
  • Errors in shaders are reported on correct lines; errors in shader include (.cginc) files are reported with the filename & line number correctly.
    • Was mostly “completely broken” before, especially when include files came into play.
    • On the d3d11 backend we were reporting the error column as the line, hah :) At some point during a d3dcompiler DLL upgrade it changed its error printing syntax and we were parsing it wrong. Now we’ve added unit tests, so hopefully it will never break again.
  • Surface shader debugging workflow is much better.
    • No more “add #pragma debug, open compiled shader, remove tons of assembly” nonsense. Just one button in inspector, “Show generated code”.
    • Generated surface shader code has some comments and better indentation. It is actually readable code now!
  • Shader inspector improvements:
    • Errors list has scrollview when it’s long; can double click on errors to open correct file/line; can copy error text via context click menu; each error clearly indicates which platform it happened for.
    • Investigating compiled shader is saner. One button to show compiled results for currently active platform; another button to show for all platforms.
  • Misc bugfixes
    • Fixed multi_compile preprocessor directives in surface shaders sometimes producing very unexpected results.
    • UTF8 BOM markers in .shader or .cginc files don’t produce errors.
    • Shader include files can be at non-ASCII folders and filenames.

Overview of how it works

  • Instead of compiling all shader variants for all possible platforms at import time:
    • Only do minimal processing of the shader (surface shader generation etc.).
    • Actually compile the shader variants only when needed (a rough sketch of this lookup follows after this list).
    • Instead of typical work of compiling 100-1000 internal shaders at import time, this usually ends up compiling just a handful.
  • At player build time, compile all the shader variants for that target platform
    • Cache identical shaders under Library/ShaderCache.
    • So at player build time, only the not-yet-ever-compiled shaders are compiled, and always only for the platforms that need them. If you never use Flash, for example, then none of the shaders will be compiled for Flash (as opposed to 4.3, where all shaders are compiled for all platforms, even if you never need them).
  • The shader compiler (CgBatch) changes from being invoked on each shader import to running as a “service process”
    • Inter-process communication between the compiler process & Unity, using the same infrastructure as the version control plugin integration.
    • At player build time, go wide and use all CPU cores to do shader compilation. The old compiler tried to multithread internally, but couldn’t, since some backend compilers are not thread-safe. Now we just launch one compiler process per core and they can go fully parallel.
    • Helps with out-of-memory crashes as well, since the shader compiler process never needs to hold a bazillion shader variants in memory all at once – it sees one variant at a time.
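
To make the “compile only when needed” flow concrete, here’s a rough sketch of the lookup (simplified C++, not actual Unity source; the names loosely follow the wiki notes further down):

    // On-demand variant compilation with an in-memory lookup and a disk cache, sketched.
    #include <string>
    #include <unordered_map>

    // Stand-ins so the sketch is self-contained; the real versions hash properly,
    // talk to Library/ShaderCache on disk, and to the CgBatch compiler process.
    static std::string MakeVariantKey(const std::string& src, const std::string& platform, const std::string& keywords)
    {
        return src + "|" + platform + "|" + keywords;
    }
    static bool LoadFromShaderCache(const std::string& /*key*/, std::string* /*out*/) { return false; }
    static void SaveToShaderCache(const std::string& /*key*/, const std::string& /*program*/) {}
    static std::string CompileViaCgBatch(const std::string& /*src*/, const std::string& platform, const std::string& keywords)
    {
        return "compiled(" + platform + "," + keywords + ")"; // pretend bytecode
    }

    class Pass
    {
    public:
        Pass(const std::string& source, const std::string& platform) : m_Source(source), m_Platform(platform) {}

        const std::string& GetMatchingSubProgram(const std::string& keywords)
        {
            const std::string key = MakeVariantKey(m_Source, m_Platform, keywords);

            // 1) Variant already compiled during this session? Use it.
            std::unordered_map<std::string, std::string>::iterator it = m_GpuProgramLookup.find(key);
            if (it != m_GpuProgramLookup.end())
                return it->second;

            // 2) Compiled at some earlier time? Pull it from the disk cache.
            std::string program;
            if (!LoadFromShaderCache(key, &program))
            {
                // 3) Never compiled before: compile just this one variant, then cache it on disk.
                program = CompileViaCgBatch(m_Source, m_Platform, keywords);
                SaveToShaderCache(key, program);
            }
            return m_GpuProgramLookup[key] = program;
        }

    private:
        std::string m_Source, m_Platform;
        std::unordered_map<std::string, std::string> m_GpuProgramLookup;
    };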

How it was developed

This was mostly a one-or-two-person effort, developed in several “sprints”. For this one we used our internal wiki for detailed task planning (Confluence “task lists”), but we could just as well have used Trello or something similar. Overall this was probably around two months of actual work – but spread out over a much longer time. The initial sprint started in 2013 March, and the work landed in the 4.5 codebase in a “we think we can ship this tomorrow” state just in time for the 1st alpha build (2013 October). Minor tweaks and fixes were done during the 4.5 alpha & beta period. It should ship any day now, fingers crossed :)

Surprisingly (or perhaps not), the largest piece of work was in the “how do you report errors in shaders?” area. Since shader variants are now imported only on demand, some errors can be discovered only “some time after the initial import”. This is a by-design change, however – the previous approach of “let’s compile all possible variants for all possible platforms” clearly does not scale in terms of iteration time. Still, this “the shader seemed like it did not have any errors, but whoops, now it has” is clearly a potential downside. Oh well; as with almost everything, there are upsides & downsides.

Most of the development was done on a Unity 4.3-based branch, and once something was working we were sending custom “4.3 + new shader importer” builds to the beta testing group. We were doing this before any 4.5 alpha even existed, to get early feedback. Perhaps the nicest feedback I ever got:

I’ve now used the build for about a week and I’m completely blown away with how it has changed how I work with shaders.

I can try out things way quicker.
I am no longer scared of making a typo in an include file.
These two combine into making me play around a LOT more when working.
Because of this I found out how to do fake HDR with filmic tonemapping [on my mobile target].

The thought of going back to regular beta without this [shader compiler] really scares me ;)

Anyhoo, here’s a dump of tasks from our wiki (all of them had little checkboxes that we’d tick off when done). As usual, “it basically works and is awesome!” was achieved after the first week of work (1st sprint). What was left after that was “fix all the TODOs, do all the boring remaining work” etc.

2013 March Sprint:

  • Make CgBatch a DLL
    • Run unit tests
    • Import shaders from DLL
    • Don’t use temp files all over the place
  • Shader importer changes
    • Change surface shader part to only generate source code and not do any compilation
    • Make a “Open surface compiler output” button
    • At import time, do surface shader generation & cache the result (serialize in Shader, editor only)
    • Also process all CGINCLUDE blocks and actually do #includes at import time, and cache the result (after this, left with CGPROGRAM blocks, with no #include statements)
    • ShaderLab::Pass needs to know it will have yet-uncompiled programs inside, and able to find appropriate CGPROGRAM block:
      • Add syntax to shaderlab, something like Pass { GpuProgramID int }
      • Make CgBatch not do any compilation, just extract CGPROGRAM blocks, assign IDs to them, and replace them with “GpuProgramID xxx”
      • “cache the result” as editor-only data in shader: map of snippet ID -> CGPROGRAM block text
    • CgBatch, add function to compile one shader variant (cg program block source + platform + keywords in, bytecode + errors out)
    • Remove all #include handling from actual shader compilers in CgBatch
    • Change output of single shader compilation to not be in shaderlab program/subprogram/bindings syntax, but to produce data directly. Shader code as a string, some virtual interface that would report all uniforms/textures/… for the reflection data.
  • Compile shaders on demand
    • Data file format for gpu programs & their params
    • ShaderLab Pass has map: m_GpuProgramLookup (keywords -> GPUProgram).
    • GetMatchingSubProgram:
      • return one from m_GpuProgramLookup if found. Get from cache if found
      • Compile program snippet if not found
      • Write into cache

2013 July Sprint:

  • Pull and merge last 3 months of trunk
  • Player build pipeline
    • When building player/bundle, compile all shader snippets and include them
    • exclude_renderers/include_renderers, trickle down to shader snippet data
    • Do that properly when building for a “no target” (everything in) platforms
      • Snippets are saved in built-in resource files (needed? not?)
    • Make building built-in resource files work
      • DX11 9.x shaders aren’t included
      • Make building editor resource file work
    • Multithread the “missing combinations” compilation while building the player.
      • Ensure thread safety in snippet cache
  • Report errors sensibly
  • Misc
    • Each shader snippet needs to know keyword permutation possibly needed: CgBatch extracts that, serialized in snippet (like vector< vector >)
    • Fix GLSLPROGRAM snippets
    • Separate “compiler version” from “cgbatch version”; embed compiler version into snippet data & hash
    • Fix UsePass

2013 August Sprint:

  • Move to a 4.3-based branch
  • Gfx test failures
    • Metro, failing shadow related tests
    • Flash, failing custom lightmap function test
  • Error reporting: Figure out how to deal with late-discovered errors. If there’s bad syntax, typo etc.; effectively shader is “broken”. If a backend shader compiler reports an error:
    • Return pink “error shader” for all programs – i.e. if any of vertex/pixel/… had an error, we need to use the pink shaders for all of them.
    • Log the error to console.
    • Add error to the shader, so it’s displayed in the editor. Can’t serialize shader at that time, so add shaders to some database under Library (guid->errors).
      • SQLite database with shader GUID -> set of errors.
    • Add shader to list of “shaders with errors”; after rendering loop is done go over them and make them use pink error shader. (Effectively this does not change current (4.2) behavior: if you have a syntax error, shader is pink).
  • Misc
    • Fix shader Fallback when it pulls in shader snippets
    • “Mesh components required by shader” part at build time - need to figure them out! Problem; needs to compile the variants to even know it.
    • Better #include processing, now includes same files multiple times
  • Make CgBatch again into an executable (for future 64 bit mac…)
    • Adapt ExternalProcess for all communication
    • Make unit tests work again
    • Remove all JobScheduler/Mutex stuff from CgBatch; spawn multiple processes instead
    • Feels like is leaking memory, have to check
  • Shader Inspector
    • Only show “open surface shader” button for surface shaders
    • “open compiled shader” is useless now, doesn’t display shader asm. Need to redo it somehow.

2013 September Sprint:

  • Make ready for 4.5 trunk
    • Merge with current trunk
    • Make TeamCity green
    • Land to trunk!
  • Make 4.3-based TeamCity green
    • Build Builtin Resources, fails with shader compiler RPC errors
    • GL-only gfx test failures (CgProps test)
    • GLSLPROGRAM preprocessing broken, add tests
    • Mobile gfx test failures in ToonyColors
  • Error reporting and #include handling
    • Fixing line number reporting once and for all, with tests.
    • Report errors on correct .cginc files and correct lines on them
    • Solve multiple includes & preprocessor affecting includes this way: at snippet extraction time, do not do include processing! Just hash include contents and feed that into the snippet hash.
    • UTF8 BOM in included files confusing some compilers
    • Unicode paths to files confusing some compilers
    • After shader import, immediately compile at least one variant, so that any stupid errors are caught & displayed immediately.
  • Misc
    • Make flags like “does this shader support shadows?” work with new gpu programs coming in
    • Check up case 550197
    • multi_compile vs. surface shaders, fix that
  • Shader Inspector
    • Better display of errors (lines & locations)
    • Button to “exhaustively check shader” - compiles all variants / platforms.
    • Shader snippet / total size stats

What’s next?

Some more work in shader compilation land will go into Unity 5.0 and 5.x. An outline of another of our wiki pages, describing the 5.x related work:

  • 4.5 fixes “compiling shaders is slow” problem.
  • Need to fix “New standard shader produces very large shader files” (due to lots of variants - 5000 variants, 100MB) problem.
  • Need to fix “how to do shader LOD with new standard shader” problem.

Visuals in Some Great Games

I was thinking about the visuals of the best games I’ve played recently. Now, I’m not a PC/console gamer, and I am somewhat biased towards playing Unity-made games, so almost all of these examples will be iPad & Unity games. However, even taking my bias into account, I think they are amazing games.

So here’s a list (Unity games):

  • Monument Valley by ustwo.
  • DEVICE 6 by Simogo.
  • Year Walk by Simogo (also for PC).
  • Gone Home by The Fullbright Company.
  • Kentucky Route Zero by Cardboard Computer.
  • The Room by Fireproof Games.

And just to make it slightly less biased, some non-Unity games:

  • Papers, Please by Lucas Pope.
  • The Stanley Parable by Galactic Cafe.

Now for the strange part. At work I’m busy with physically based shading and similar things, but take a look at the games above. Five out of eight are not “realistic looking” games at all! Lights, shadows, BRDFs, energy conservation and linear color spaces don’t apply at all to a game like DEVICE 6 or Papers, Please.

But that’s okay. I’m happy that Unity is flexible enough to allow these games, and we’ll certainly keep it that way. I was looking at our game reel from GDC 2014 recently, and my reaction was “whoa, they all look different!”. Which is really, really good.

Cross Platform Shaders in 2014

A while ago I wrote a Cross platform shaders in 2012 post. What has changed since then?

Short refresher on the problem: people need to do 3D things on multiple platforms, and different platforms use different shading languages (big ones are HLSL and GLSL). However, no one wants to write their shaders twice. It would be kind of stupid if one had to write different C++ for, say, Windows & Mac. But right now we have to do it for shader code.

Most of the points from my previous post still stand; here I’ll just link to some new tools that have appeared since then:

#1. Manual / Macro approach

Write some helper macros to abstract away HLSL & GLSL differences, and make everyone aware of all the differences. Examples: Valve’s Source 2 (DevDays talk), bkaradzic’s bgfx library (shader helper macros), FXAA 3.11 source etc.

Pros: Simple to do.

Cons: Everyone needs to be aware of that macro library and other syntax differences.
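
For illustration, here’s a tiny made-up macro header (the names are invented, not taken from any of the libraries above); both HLSL and GLSL use a C-style preprocessor, so the same trick works for either target:

    // Hypothetical shared shader header; define TARGET_HLSL or TARGET_GLSL before compiling.
    #if defined(TARGET_HLSL)
        #define VEC4                float4
        #define LERP(a, b, t)       lerp(a, b, t)
        #define FRAC(x)             frac(x)
        #define SAMPLE_2D(tex, uv)  tex2D(tex, uv)
    #else // TARGET_GLSL
        #define VEC4                vec4
        #define LERP(a, b, t)       mix(a, b, t)
        #define FRAC(x)             fract(x)
        #define SAMPLE_2D(tex, uv)  texture2D(tex, uv)
    #endif

    // Shader code is then written once against the macros, e.g.:
    //   VEC4 color = SAMPLE_2D(mainTex, uv) * LERP(colorA, colorB, FRAC(time));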

#2. Invent your own language with HLSL/GLSL backends

Or generate HLSL/GLSL from some graphical shader editor, and so on.

#3. Translate compiled shader bytecode into GLSL

Pros: Simpler to do than full language translation. Microsoft’s HLSL compiler does some decent optimizations, so resulting GLSL would be fairly optimized.

Cons: Closed compiler toolchain (HLSL) that only runs on Windows. HLSL compiler in some cases does too many optimizations that don’t make much sense these days.

Tools:

  • HLSLCrossCompiler by James Jones. Supports DX10-11 bytecode and produces various GLSL versions as output. Under active development.
  • MojoShader by Ryan Gordon. Supports DX9 (shader model 1.1-3.0).
  • TOGL from Valve. Again DX9 only, and only partial one at that (some shader model 3.0 features aren’t there).

#4. Translate HLSL into GLSL at source level, or vice versa

  • hlsl2glslfork from Unity. DX9-level HLSL in, GLSL 1.xx / OpenGL ES (including ES3) out. It does work (used in production at Unity and some other places); however, it’s quite a bad codebase and we haven’t shoehorned DX10/11-style HLSL support into it yet.
  • ANGLE from Google. OpenGL ES 2.0 (and possibly 3.0?) shaders in, DX9/DX10 HLSL out. This is a whole OpenGL ES emulation layer on top of Direct3D that also happens to have a shader cross-compiler.
  • OpenGL Reference Compiler from Khronos. While it’s only a GLSL validator itself, it has a full GLSL parser (including partial support for GL 4.x at this point). It should be possible to make it emit HLSL with some work. A bit weird that the source is on some Subversion server though – not an ideal platform for contributing changes or filing bugs.
  • HLSL Cross Compiler from Epic. This is in Unreal Engine 4, and built upon Mesa’s GLSL stack (or maybe glsl optimizer), with HLSL parser in front. Note that this isn’t open source, but hey one can dream!
  • hlslparser from Unknown Worlds. Converts DX9-style HLSL (with constant buffers) into GLSL 3.1.
  • MojoShader by Ryan Gordon. Seems to have some code for parsing DX9-style HLSL; not quite sure how production-ready it is.

I thought about doing a similar thing to what the Epic folks did for UE4: take glsl optimizer and add an HLSL parser in front. These days Mesa’s GLSL stack already has support for compute & geometry shaders, and I think tessellation shaders will be coming soon. This would be a much better codebase than hlsl2glslfork. However, I never had time to actually do it, beyond thinking about it for a few hours :(

Call to action?

Looks like we’ll be staying with two shading languages for a while (Windows and all relevant consoles use HLSL; Mac/Linux/iOS/Android use GLSL). So each and every graphics developer who does cross-platform stuff is facing this problem.

I don’t think IHVs will solve this problem. NVIDIA did try once with Cg (perhaps too early), but Cg is pretty much dead now.

DX9-level shader translation is probably a solved problem (hlsl2glslfork, mojoshader, ANGLE). However, we need a DX10/11-level translation - with compute shaders, tessellation and all that goodness.

We have really good collaboration tools in forms of github & bitbucket. Let’s do this. Somehow.