Encoding floats to RGBA, again

Hey, it looks like the quest for encoding floats into RGBA textures (part 1, part 2) has not ended yet.

Here’s the “best available” code that I have now:

inline float4 EncodeFloatRGBA( float v ) {
  // bias is a small hardware-dependent constant, around -0.5/255; see below
  return frac( float4(1.0, 255.0, 65025.0, 16581375.0) * v ) + bias;
}
inline float DecodeFloatRGBA( float4 rgba ) {
  return dot( rgba, float4(1.0, 1/255.0, 1/65025.0, 1/16581375.0) );
}
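To sanity-check the scheme away from the GPU, here is a small Python sketch (helper names are mine, not from the shader above) that emulates the round trip: encode, quantize each channel to 8 bits the way an RGBA8 render target would, then decode:

```python
def frac(x):
    # fractional part, like HLSL frac() (for non-negative x)
    return x - int(x)

def encode_float_rgba(v, bias=-0.5 / 255.0):
    # spread v across four channels via successive powers of 255,
    # plus the small hardware-dependent bias discussed below
    return [frac(v * s) + bias for s in (1.0, 255.0, 65025.0, 16581375.0)]

def quantize8(rgba):
    # simulate storing into an 8-bit-per-channel render target
    return [round(min(max(c, 0.0), 1.0) * 255.0) / 255.0 for c in rgba]

def decode_float_rgba(rgba):
    weights = (1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0)
    return sum(c * w for c, w in zip(rgba, weights))

v = 0.4737
roundtrip = decode_float_rgba(quantize8(encode_float_rgba(v)))
# with the right bias, roundtrip comes back very close to v
```

This is just a model of what the hardware does, of course; the whole trouble described below is that the real rounding behavior varies per GPU.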

Earlier I thought the bias should normally be +0.5/255.0, except around -0.55/255.0 on Radeon cards older than the Radeon HD series. Well, it turns out I was wrong: the bias mostly has to be around -0.5/255.0.

Here’s the list (same bias on Windows/D3D9 and OS X/OpenGL, so it seems to be hardware dependent, and not something in API/drivers):

  • Radeon 9500 to X850: -0.61/255
  • Radeon X1300 to X1900: -0.66/255
  • Radeon HD 2xxx/3xxx: -0.49/255
  • GeForce FX, 6, 7, 8: -0.48/255
  • Intel 915, 945, 965: -0.5/255

Those are the best bias values I could find. Still, every once in a while (rarely) encoding a value to an RGBA texture and reading it back would produce a result where one channel is half a bit off. That is not a problem if the numbers you were encoding were originally in the 0..1 range, but if you were encoding, say, something that spans the whole camera range, then the 0..1 range gets expanded into 0..FarPlane…
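To put a rough number on that (the 1000-unit far plane is just a made-up example): a half-bit error in the first channel is negligible while the value lives in 0..1 space, but it blows up once the decoded value is remapped to world units:

```python
half_bit = 0.5 / 255.0            # half of one 8-bit step in the first channel
far_plane = 1000.0                # hypothetical camera far plane

error_in_01 = half_bit            # roughly 0.002 while the value is in 0..1
error_in_world = error_in_01 * far_plane  # same error after remapping to 0..far
# almost two world units of depth error - easily enough to ruin
# depth of field or soft particles
```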

And all of a sudden there are huge precision errors, up to the point of being unusable. I just tried doing a quick’n’dirty depth of field and soft particles implementation using depth encoded this way… not good.

Oh well. Has anyone successfully used encoding of high precision number into RGBA channels before?


Depth bias and the power of deceiving yourself

In Unity we very often mix the fixed function and programmable vertex pipelines. In our lighting model, a number of the brightest lights per object are drawn in pixel-lit mode, and the rest are drawn using fixed function vertex lighting. Naturally, the pixel lights most often use vertex shaders, as they want to calculate texcoords for light cookies, do something with tangent space, calculate texcoords for shadow mapping, and so on. The vertex lighting pass uses the fixed function pipeline because it’s the easiest way. It is possible to implement an equivalent of fixed function lighting in vertex shaders, but we haven’t done that yet because of the complexities of Direct3D and OpenGL, the need to support shader model 1.1, and various other issues. Call me lazy.

And herein lies the problem: most often the precision of vertex transformations is not the same in the fixed function and programmable vertex pipelines. If you just draw some objects in multiple passes, mixing fixed function and programmable paths, this is roughly what you get (excuse my programmer’s art):

Mixing fixed function and vertex shaders

Not pretty at all! This should have looked like this:

All good here

So what do we do to make it look like this? We “pull” (bias) some rendering passes slightly towards the camera, so there is no depth fighting.

Now, at the moment the Unity editor runs only on Macs, which use OpenGL. There, most hardware configurations do not need this depth bias at all - they generate the same results in the fixed function and programmable pipelines. Only Intel cards need the depth bias on Mac OS X (on Windows, AMD and Intel cards need it). So people author their games using OpenGL, where the depth bias is mostly not needed.

How do you apply depth bias in OpenGL? Enable GL_POLYGON_OFFSET_FILL and set glPolygonOffset to something like -1, -1. This works.

How do you apply depth bias in Direct3D 9? Conceptually, you do the same. There are DEPTHBIAS and SLOPESCALEDEPTHBIAS render states that do just that. And so we did use them.

And people complained about funky results on Windows.

And I’d look at their projects, see that they are using something like 0.01 for camera’s near plane and 1000.0 for the far plane, and tell them something along the lines of “increase your near plane, stupid!” (well ok, without the “stupid” part). And I’d explain all the above about mixing fixed function and vertex shaders, and how we do depth bias in that case, and how on OpenGL it’s often not needed but on Direct3D it’s pretty much always needed. And yes, how sometimes that can produce “double lighting” artifacts on close or intersecting geometry, and how the only solution is to increase the near plane and/or avoid close or intersecting geometry.

Sometimes this helped! I was so convinced that their too-low-near-plane was always the culprit.

And then one day I decided to check. This is what I’ve got on Direct3D:

Depth bias artefacts

Ok, this scene is intentionally using a low near plane, but let me stress this again. This is what I’ve got:

Epic fail!

Not good at all.

What happened? It happened in roughly this way:

  1. First, the depth bias documentation for Direct3D is wrong. Depth bias is not in a 0..16 range; it is in a 0..1 range which corresponds to the entire range of the depth buffer.
  2. Back then, our code was always using 16 bit depth buffers, so the equivalent of a -1,-1 depth bias in OpenGL was multiplied by something like 1.0/65535.0 and fed into Direct3D. Hey, it seemed to work!
  3. Later on, the device setup code was modified to do proper format selection, so most often it ended up using a 24 bit depth buffer. Of course, no one (that is, me) ever modified the depth bias code to account for this change…
  4. And it stayed there. And I kept deceiving myself that the users’ content was to blame, and not some stupid code of mine.

It’s good to check your assumptions once in a while.

So yeah, the proper multiplier for depth bias on Direct3D with a 24 bit depth buffer should be not 1.0/65535.0 but something like 1.0/(2^24-1). Except that this value is really small, so something like 4.8e-7 should be used instead (see Lengyel’s GDC 2007 talk). For some reason that is still not enough in practice, so something like 2.0*4.8e-7 should be used (tested so far on GeForce 8600, Radeon HD 3850, Radeon 9600, Intel 945 and the reference rasterizer). And the same value should be used even with a 16 bit depth buffer; the 1.0/65535.0 multiplier produces way too large a bias there.
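The fix boils down to a single conversion constant. Here is a Python sketch (the function name is mine) of the mapping described above, from OpenGL-style glPolygonOffset units to a value for the D3D9 DEPTHBIAS render state:

```python
# Per-unit step as described above: 4.8e-7 (instead of the theoretical
# 1/(2^24-1)), doubled because that is what worked on the tested hardware,
# and used for both 16 and 24 bit depth buffers.
D3D9_BIAS_PER_UNIT = 2.0 * 4.8e-7

def gl_units_to_d3d9_bias(units):
    # e.g. units = -1.0 for the glPolygonOffset(-1, -1) equivalent
    return units * D3D9_BIAS_PER_UNIT

# for comparison: the old buggy multiplier, 1/65535 (~1.5e-5), was roughly
# 16x larger than 9.6e-7, hence the massively over-biased image above
```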

With proper bias values the image is good on Direct3D again. Yay for that (the fix is coming soon in Unity 2.1).

…and yes, I know that real men fudge projection matrix instead of using depth bias… someday maybe.


OpenCL?

Okay, so Apple just announced OpenCL (Open Computing Language) technology in upcoming OS X 10.6. This is starting to get interesting.

My prediction? OpenCL should be something along the lines of CUDA or BrookGPU. It will work on various DX10-level graphics cards, and on the CPU. I think trying to target older graphics cards does not make sense - real integer types are useful in general purpose computing (DX10 tech), and Apple will probably be shipping only DX10-level graphics cards in a year (at the moment only the Intel cards in Macs are DX9 level; the rest are GeForce 8s and Radeon HDs). With a multithreaded CPU fallback, older machines will be taken care of anyway (and it leaves the future open for Larrabee). So yeah, quite similar to BrookGPU actually.

It has “open” in the title, so maybe they will make it available for other platforms as well. I doubt they will ship an implementation though; perhaps they will just make it royalty/patent/whatever free and publish the spec. Which is about the same level of “openness” as other technologies with “open” in their name (OpenGL, OpenAL, OpenMP, OpenCV, …) - not exactly open, but not the worst kind either.

Oh, and suddenly there are new uses for other technologies recently developed at Apple, like LLVM or clang.

We’ll see how it goes.


The problem with Vista

Jeff Atwood notes the lack of polish in Windows Vista UI. Long Zheng has started Windows UI Taskforce. I agree - Vista’s UI has tons of polish problems.

You know, little things that would seem unimportant, but together scream something like “I was made in a hurry by people who don’t really love me”. Aliased shield icon overlays? Check. Horrible screen flickering when logging in or when a UAC prompt pops up? Check. The infamous Shut Down menu? Check. Awful file copy progress dialogs? Check. The Explorer window title bar sometimes displaying a green progress bar inside it, for some reason I can never understand? Check. A general lack of unified UI style? Check. The list goes on.

But still, I wonder whether lack of polish is the real problem with Vista. From my point of view, lack of direction or lack of vision seems to be a problem of similar size, if not larger. What is the vision for Vista?

“Security!” is not a vision. However hard it is to make something secure, “more security” is an improvement in one area, not a vision of what a product should be. And besides, “security” does not explain everything else about Vista. At the start, it looked like some architecture astronauts had fancy visions, like “all your filesystem is a database now!”… Well, that did not end up in Vista, and it is something users genuinely don’t care about anyway.

I might sound like an Apple fanboy (and indeed, OS X grows on you after a while), but when upgrading from OS X 10.4 (Tiger) to 10.5 (Leopard) I had a pretty clear list of what would be more useful to me:

  • The new version feels faster (on the same machine). I am not sure whether it is actually faster or it’s only a perceived improvement. Maybe they optimized something, maybe they multithreaded something; I don’t really care. It feels faster and smoother. That’s good.

  • Quick Look is amazing. A seemingly simple feature - press Space over a file to preview it. With added polish, like going into slideshow mode when you press Space with multiple images selected. Simple, yet highly effective.

  • Spotlight (desktop search) that is fast.

  • …and so on.

Those are the things I, as a user, care about. I want the computer to feel faster. I want to instantly preview files. I want search to be fast.

A filesystem that is a database? I can almost see the regular user salivating over that… Yeah right. Users don’t want a platform, users want useful features.

And this is where Vista fails - it does not have obvious new useful features or improvements. Aside from Direct3D 10 - which I am not using yet - all the so-called “improvements” just feel like gimmicks.

  • It feels slower (I don’t care whether it actually is faster; it just feels sluggish). And yes, it feels slower on a quad-core CPU with 4 gigs of RAM and a fast graphics card, so no “Vista runs circles around XP on a new box” please.

  • The reorganized menus, title bars and layout of Explorer just scream “I totally don’t understand what users need” at you. Previews are too small to be usable, the organization of menus and buttons is horrible, and the constantly fading-in-and-out user interface elements (the folder tree view) are just distracting. I dig the new Office 2007 UI and I can see some understanding of users and a vision behind it (see Jensen Harris), but Vista’s UI feels like it was designed by a bunch of people who never talked to each other. And it’s not just lack of polish; the “design” itself is wrong.

  • The Sidebar? Again, an attempt at doing something that seemed good, but without any real understanding. Yes, I know Apple might have taken the idea and implemented it right, but that does not make the Sidebar itself useful.

  • The new skin? Oh come on. How many users upgraded because the window close buttons now glow red when you hover over them?

  • Was there anything else new in Vista? I didn’t notice anything.

So this pretty much sums up my view on Vista: zero new useful things, many annoyances. Microsoft, here’s your chance to execute better next time around.


Amazing! Demoscene news that actually makes sense!

There’s a news item on next-gen.biz on Plastic’s Linger in Shadows PS3 demo.

What is totally amazing is that the news item actually makes sense. It does not treat the demo as a game, or as some “what the f?” thing. Kudos.

About the demo itself - I totally dig the insane amount of work put in there, but I was quite confused about the “story” or “meaning”. The visuals are good, the tech is good, it is impressive, but I just did not get the message of the demo. Still, great work - go Plastic!