Daily Pathtracer Part 0: Intro

As mentioned before, I realized I’ve never written a path tracer. Given that I suggest everyone who asks “how should I graphics” to start with one, this seemed wrong. So I started making a super-simple one. And when I say super simple, I mean it! It’s not useful for anything; think of it as [smallpt] with more lines of code :)

However, I do want to make one in C++, in C#, and perhaps something else, and to run into various LOLs along the way. All the code is at github.com/aras-p/ToyPathTracer.
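
To give an idea of the scale: the core of a toy path tracer is basically one recursive function. Here’s a minimal sketch of that idea (all names & types below are made up for illustration; this is not the actual ToyPathTracer code):

// Sketch of a path tracer core: shoot a ray, scatter it off whatever it hits,
// and recurse until the ray escapes the scene or a depth limit is reached.
// Hypothetical types & helpers, for illustration only.
struct float3 { float x, y, z; };
float3 operator+(float3 a, float3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
float3 operator*(float3 a, float3 b) { return {a.x*b.x, a.y*b.y, a.z*b.z}; }

struct Ray { float3 orig, dir; };
struct Hit { float3 pos, normal, emissive; };

// These would be implemented by the actual tracer:
bool HitScene(const Ray& r, Hit& outHit);
bool Scatter(const Hit& hit, const Ray& r, float3& attenuation, Ray& scattered);
float3 SkyColor(const Ray& r);

const int kMaxDepth = 10;

float3 Trace(const Ray& r, int depth = 0)
{
    Hit hit;
    if (depth < kMaxDepth && HitScene(r, hit))
    {
        Ray scattered;
        float3 attenuation;
        if (Scatter(hit, r, attenuation, scattered))
            return hit.emissive + attenuation * Trace(scattered, depth + 1);
        return hit.emissive; // ray got absorbed
    }
    return SkyColor(r); // ray flew off into the sky
}

Average a bunch of these per pixel, with randomized scatter directions, and an image comes out.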

Now, all that said: sometimes it can be useful (or at least fun) to watch someone who’s clueless in the area go through parts of it, bump into things, and head into dead ends or wrong approaches. That is what I shall do in this blog series! Let’s see where this path will lead us.

Actually useful resources

If you want to actually learn something about path tracing or raytracing, I’d suggest these:


Random Thoughts on Raytracing

The big graphics news at this GDC seems to be DirectX Raytracing. Here are some incoherent (ha!) thoughts about it.

Raytracing: Yay

“Traditional” rasterized graphics is hard. When entire books are written on how to deal with shadows, and some of the aliasing/efficiency problems are still unsolved, it would be nice to throw something as elegant as a raytracer at it. Or screen-space reflections, another “kinda works, but zomg piles upon piles of special cases, tweaks and fallbacks” area.

There’s a reason the movie industry has, over the last 10 years, moved almost exclusively to path tracing renderers. Even Pixar’s RenderMan – where Reyes was in fact an acronym for “Renders Everything You Ever Saw” – switched to full path tracing in 2013 (for Monsters University), and Reyes was completely removed from RenderMan in 2016.

DirectX Raytracing

Mixed thoughts about having raytracing in DirectX as it is now.

At a quick glance, the API overall seems to make sense. You get different sorts of ray shaders, acceleration structures, tables to index resources, zero-cost interop with “the rest” of graphics or compute, etc. etc.

Conceptually it’s not that much different from what Imagination Tech has been trying to do for many years with OpenRL & the Wizard chips. Poor Imgtec: inventing so much, being so ahead of its time, and failing to capitalize on that in a fair way. Capitalism is hard, yo :| Fun fact: the Wizard GPU pages are under the “Legacy GPU Cores” section of their website now…

On the other hand, as Intern Department quipped, DirectX has a long history of “revolutionary” features that turned out to be duds too. DX7 retained mode, DX8 Matrox tessellation, DX9 ATI tessellation, DX10 geometry shaders & removal of FP16, DX11 shader interfaces, deferred contexts etc.

Yes, predicting the future is hard, and once in a while you place a bet on something that turns out to be not that good, or not that needed, or something else entirely happens that forces everyone to go in another direction. So that’s fair enough: in the best case the raytracing abstraction & APIs become a ubiquitous & loved thing, in the worst case no one will use them.

I’m not concerned about the “ohh, vendor lock-in!” aspect of DXR; Khronos is apparently working on something in the area too. So that will cover the “other platforms” part, but whether it will be a conceptually similar API or not remains to be seen.

What I am slightly uneasy about, however, is…

Black Box Raytracing

The API, as it is now, is a bit of a “black box” one.

  • What acceleration structure is used, what are the pros/cons of it, the costs to update it, memory consumption etc.? Who knows!
  • How is scheduling of work done; what is the balance between lane utilization vs latency vs register pressure vs memory accesses vs (tons of other things)? Who knows!
  • What sort of “patterns” is the underlying implementation (GPU + driver + DXR runtime) good or bad at? Raytracing, or path tracing, can get super bad for performance with divergent rays (while staying conceptually elegant); what and how is that mitigated by any sort of ray reordering, bundling, coalescing (insert N other buzzwords here)? Is that done in some part of the hardware, the driver, or the DXR runtime? Who knows!
  • The “oh we have BVHs of triangles that we can traverse efficiently” part might not be enough. How do you do LOD? As Sebastien and Brian point out, there are quite a few open questions in that area.

There’s been massive work with modern graphics APIs like Vulkan, D3D12 and partially Metal to move away from black boxes in graphics. DXR seems to be a step in the opposite direction, with a bunch of “ohh, you never know! might be your GPU, might be your driver, might be your executable name lacking a quake3.exe” in it.

It would probably be better to expose/build whatever “magics” the upcoming GPUs might have, so that people can build efficient tracers themselves: the ability to spawn GPU work from other GPU work; whatever instructions/intrinsics GPUs might have for efficient tracing/traversal/intersection math; whatever fixed-function hardware might exist for scheduling, re-scheduling and reordering of work packets for improved coherency & memory accesses; etc.

I have a suspicion that the above is probably not done “because patents”. Maybe Imagination has an imperial ton of patents in the area of ray reordering, and Nvidia has a metric ton of patents in all the raytracing research they’ve been doing for decades by now, and so on. And if that’s true, then indeed “just expose these bits to everyone” is next to impossible, and DXR type approach is “best we can do given the situation”… Sad!

Anyway, I’ll get back to my own devices :)

So, yeah. It will be interesting to see where this all goes. It’s exciting, but also a bit worrying, with a whole bunch of open questions. Here’s hoping it all unfolds in a good way. Good luck, everyone!

And I just realized I’ve never written even a toy path tracer myself; the only raytracer I’ve done was for an OCaml course at university, some 17 years ago. So I got myself Peter Shirley’s Ray Tracing in One Weekend and the two other minibooks, and will play around with them. Maybe as a test case for Unity’s new Job System, ECS & Burst compiler, or as an excuse to learn Rust, or whatever.


Unity in 2018

I don’t remember if I was ever as excited about what’s coming to Unity as I am right now. And I have been through quite some times, all the way from Unity 1.5! (That was in 2006, or somewhere in the middle of the Priabonian age.)

A lot of exciting things are falling into place: the Package Manager, the new Entity Component System, the Scriptable Render Pipelines, the C# Job System & Burst compiler, and more.

A lot of other stuff is happening too; many pieces that were considered “experimental/preview” before will soon drop their experimental labels (e.g. Progressive Lightmapper or .NET 4.6 Scripting Runtime).

And then way more stuff is being developed; some of it fairly close to shipping and I hope will ship this year; some still a bit further out. I wish I could tell more… suffice to say, among other things we have this custom emoji – whatever it might mean – in the company Slack, and it’s getting quite a lot of usage lately.

This is all very exciting!

But what is perhaps even better is that I think we’ve found a way to make the big jump from “where we are today” to “where we want to be in 5 years”.

This is one of the hardest problems in evolving a fairly popular product; it’s very hard to realize how hard it is without actually trying to do it. Almost every day there’s something you’d want to change, but a lot of possible changes would break some existing content. It’s the “damned if you do, damned if you don’t” type of situation that @mcclure111 described so brilliantly:

Library design is this: You have made a mistake. It is too late to fix it. There is production code depending on the mistake working exactly the way the mistake works. You will never be able to fix it. You will never be able to fix anything. You wrote this code nine seconds ago. [source]

It’s easy to make neat tech that barely anyone uses. It’s moderately simple to make a technically brilliant engine that gets two dozen customers, and then declare a ground-up 100% rewrite or a whole new engine that This Time Will Be Even More Brilliant-er-er. Get two dozen customers, rinse & repeat.

Doing a re-architecture of an engine (or anything, really) while hundreds of thousands of projects are in flight, and trying to disrupt them as little as possible, is a hundred times harder. And I’m not exaggerating; it’s easily a hundred times harder. When I was doing customer-facing features, improvements & fixes, this was the hardest part of it all.

So I’m super happy that we seem to have a good plan for how to tackle this! The Package Manager is a huge part of that. The new Entity Component System is the first big piece of this “re-architect the whole thing” effort. You can opt in to using it, or you can ignore it for a bit… but we hope the benefits are too big to ignore. You can also adopt it piece by piece, transitioning your knowledge & production to it.

Many other systems are likely to follow in a similar fashion. For example, the current Scriptable Render Pipeline approach replaces the high-level rendering code with C#, but the underlying “graphics platform” layer stays more or less the same. Some parts of it are in a less-than-ideal state or design… I had been thinking it would be possible to “upgrade” it in place to be way more modern, but by now it feels like maybe parts of it should be started anew. And so at some point a new graphics platform layer will be built, a new material/shader runtime will happen, and so on. It will live side by side with the “old stuff” for a while, similar to how the new ECS and the old GameObject/Component system will live together.

And this time I feel like we will be able to pull it off, more so than previous times :) Wish us luck!


UWP/WinRT Headers are Fun (not)

As established before, <windows.h> is a bit of a mess that has accumulated over 30+ years. Symbols in global namespace, preprocessor macros, ugh:

#include <windows.h>
// my code
void* GetObject(...);
// welp, GetObject is actually GetObjectW now
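
That happens because the Windows headers effectively do this (simplified; the real definition lives in wingdi.h):

#ifdef UNICODE
#define GetObject  GetObjectW
#else
#define GetObject  GetObjectA
#endif

Any later use of the identifier GetObject – including your own, completely unrelated declaration – gets textually replaced by the preprocessor.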

So naturally, someone at Microsoft decided it’s time to make a “v2” API set for programming Windows, without any of the horrors of the past, using more modern approaches, and so on. And so in 2012 Windows Runtime was born.

No more preprocessor hijacking identifiers! No more global namespaces! …hmm, or is it? Try this (tested with Windows SDK 10.0.16299.0, VS2017):

class Plane;
#include <windows.ui.core.h>

What does the compiler say?

Windows.Foundation.Numerics.h(490): error C2371: 'ABI::Windows::Foundation::Numerics::Plane': redefinition; different basic types
Windows.Foundation.Numerics.h(317): note: see declaration of 'ABI::Windows::Foundation::Numerics::Plane'

Compiling with /W3 gives more detail on why that happens:

Windows.Foundation.Numerics.h(317): warning C4099: 'Plane': type name first seen using 'class' now seen using 'struct'
test.cpp(2): note: see declaration of 'Plane'

Lo and behold, it turns out that Windows.Foundation.Numerics.h (which is included by a lot of WinRT headers) has this:

namespace ABI {
    namespace Windows {
        namespace Foundation {
            namespace Numerics {
                typedef struct Plane Plane;
            }
        }
    }
}

The headers try to be namespace-aware, but that typedef struct Plane Plane apparently is not enough: with a Plane already forward-declared at global scope, the struct Plane in the typedef refers to that global declaration instead of introducing a new type in the namespace, and the later struct definition then clashes with the typedef. Why the header has that typedef in the first place? No idea!

But this means you can’t forward-declare classes/structs whose names match WinRT structs/classes (even ones inside namespaces!) before including the WinRT headers. Your own forward declarations have to come after the WinRT header inclusion.
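
In other words, flipping the order compiles fine:

#include <windows.ui.core.h> // WinRT headers first
class Plane;                 // our own forward declaration afterwards: no conflict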

That’s because the pattern in the headers essentially creates code like this, which is a compile error on all C++ compilers:

class Plane;
namespace Test
{
    typedef struct Plane Plane;
    struct Plane { int a; };
}
// clang 5.0.0:
// warning: struct 'Plane' was previously declared as a class [-Wmismatched-tags]
// error: definition of type 'Plane' conflicts with typedef of the same name
//
// vs2017:
// warning C4099: 'Plane': type name first seen using 'class' now seen using 'struct'
// error C2371: 'Test::Plane': redefinition; different basic types
//
// gcc 7.2:
// error: using typedef-name 'Test::Plane' after 'struct'

“Great” job, WinRT headers. At least in WinAPI times I could undo most of the damage with some #undefs.
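
Something like this, for instance:

#include <windows.h>

#undef GetObject // undoes the "#define GetObject GetObjectW"; call GetObjectW explicitly where needed
#undef min       // windows.h also defines min/max macros, unless NOMINMAX is set
#undef max

void* GetObject(...); // now this declares exactly what it says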

:sadpanda:


Header Hero Improvements

There’s a neat little tool for optimizing C++ codebase header #include dependencies: Header Hero (thanks Niklas for making it!).

It can give an estimate of how many lines of code end up being parsed by the compiler once all the header files have been included. I suggest you read the original post about it. A more recent post from Niklas on how they approach the header file problem now is very interesting too, though I’m not convinced it scales beyond “a handful of people” team sizes.
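
The “lines of code parsed” estimate is conceptually simple: each file costs its own line count plus that of everything it transitively includes, with every header counted only once (as include guards would ensure). A sketch of the idea (not Header Hero’s actual code):

#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct File { int lineCount = 0; std::vector<std::string> includes; };

// Lines the compiler would parse for "name", visiting each file only once.
int CountLines(const std::string& name,
               const std::unordered_map<std::string, File>& files,
               std::unordered_set<std::string>& seen)
{
    if (!seen.insert(name).second)
        return 0; // already included earlier; include guards make it free
    auto it = files.find(name);
    if (it == files.end())
        return 0; // a file outside the scanned codebase
    int total = it->second.lineCount;
    for (const std::string& inc : it->second.includes)
        total += CountLines(inc, files, seen);
    return total;
}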

Anyway. I just made some small improvements to Header Hero while using it on our codebase:

Precompiled Headers

I added a new field at the bottom of the main UI where a “precompiled header” file can be indicated. Everything included by that header file will not be counted in the “lines of code parsed” lists that the UI shows in the report.
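
In terms of the counting sketch above, this amounts to pre-seeding the “seen” set with everything the precompiled header pulls in (again, just the idea, not the tool’s actual code; the file names are made up):

// Mark the PCH and everything it includes as already "seen"...
std::unordered_set<std::string> pchSeen;
CountLines("MyPrecompiled.h", files, pchSeen);

// ...then start each file's count from a copy of that set,
// so PCH contents are "free" and don't get counted again.
std::unordered_set<std::string> seen = pchSeen;
int parsedLines = CountLines("MyFile.cpp", files, seen);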

What goes into the precompiled header itself is shown at the bottom of the report window.

Small UI tweaks

Added quick links to the “list of largest files” and the “list of most included files” (hubs) at the top of the build report. Initially I did not even know the UI had the “hubs” list, since it was so far away on the scrollbar :)

Switched the “Includes” tab to have file lists with columns (listing include count / line count), and added a “go to previous file” button for navigation.

That’s it! Get them on github here.