It must be a bug in OS/compiler/...

Ever looked at code that is absolutely correct, yet runs incorrectly? Sometimes it looks like a genuine compiler bug. “I swear, mister! The compiler corrupts my code!”

Look again. And again. Eventually you’ll find where your code is broken.

(Of course, in some areas the compiler quite often is broken… GLSL, anyone?)

“Pimp my code, part 15: The Greatest Bug of All” says the above in a much nicer way:

Maybe the problem was there was some huge bug in Apple’s Mach, where if you open too many files in a short period of time, the filesystem tried to, like, cache the results, and the cache blew up, and as a result the filesystem incorrectly just would fail to open any more files, instead of flushing the cache.

I’ve also been around long enough to know that whenever I know the operating system must be bugged, since my code is correct, I should take a damn close look at my code. The old adage (not mine) is that 99% of the time operating system bugs are actually bugs in your program, and the other 1% of the time they are still bugs in your program, so look harder, dammit.

A post well worth reading… about the process of investigating tricky bugs. And a sincere one as well. It’s so good that I’ll just quote it again:

It’s a bug we should have caught. We should have spent the time to get the images in the 10,000 item file. I messed up.

Software is written by humans. Humans get tired. Humans become discouraged. They aren’t perfect beings. As developers, we want to pretend this isn’t so, that our software springs from our head whole and immaculate like the goddess Athena. Customers don’t want to hear us admit that we fail.

The measure of a man cannot be whether he ever makes mistakes, because he will make mistakes. It’s what he does in response to his mistakes. The same is true of companies.

We have to apologize, we have to fix the problem, and we have to learn from our mistakes.

So very true.



Invincible shutdown buttons!

I booted into Vista yesterday to test something. It offered a bunch of updates to install. After they were installed, I got this:

Invincible buttons

I am not sure what shutdown buttons do when they look like this. I guess they are invincible, or something. Ha, I’m your log off button! You can’t kill me!

Yes, one of the installed updates was an ATI driver update, so I guess there’s a glitch somewhere in it that makes some buttons display like this… But hey, this is not some random driver I found on the net; it’s the one officially suggested by Vista’s own update mechanism!


I can has vertex?

You know something has become a cultural phenomenon when hardware review sites start putting up images like this…

From AnandTech’s Radeon HD 4850 & 4870 review: I can has vertex data?

Edit: gee, nowadays the reviews have funny performance measures. Like, FPS per square centimeter (of GPU die size)! It does actually make sense, but it’s still funny. Frames per second per square centimeter… mmm… delicious.


Encoding floats to RGBA, again

Hey, it looks like the quest for encoding floats to RGBA textures (part 1, part 2) has not ended yet.

Here’s the “best available” code that I have now:

// Encode a float in the 0..1 range into the four 8-bit channels of an
// RGBA texture (255^2 = 65025, 255^3 = 16581375). "bias" compensates
// for how the hardware rounds when writing the 8-bit values; see the
// per-hardware values below.
inline float4 EncodeFloatRGBA( float v ) {
  return frac( float4(1.0, 255.0, 65025.0, 16581375.0) * v ) + bias;
}
// Decode the four channels back into a single float.
inline float DecodeFloatRGBA( float4 rgba ) {
  return dot( rgba, float4(1.0, 1/255.0, 1/65025.0, 1/16581375.0) );
}
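
For context, here’s a minimal sketch of how this might be wired into a depth pass. This is my illustration, not code from the original quest: FarPlane, DepthTex and the entry point names are made up, and bias is assumed to be a shader constant set by the application.

uniform float FarPlane;   // camera far plane distance (illustrative)
uniform float bias;       // per-hardware value, see the table below

// Pass 1: write eye-space depth, normalized into 0..1 by the far
// plane, into a regular RGBA8 render target.
float4 EncodeDepthPS( float eyeDepth : TEXCOORD0 ) : COLOR {
  return EncodeFloatRGBA( saturate( eyeDepth / FarPlane ) );
}

// Pass 2 (depth of field, soft particles, …): read it back and
// reconstruct eye-space depth.
uniform sampler2D DepthTex;

float DecodeDepth( float2 uv ) {
  return DecodeFloatRGBA( tex2D( DepthTex, uv ) ) * FarPlane;
}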

Previously I thought the bias should normally be +0.5/255.0, except on Radeon cards older than the Radeon HD series, where it had to be around -0.55/255.0. Well, it turns out I was wrong; the bias mostly has to be around -0.5/255.0.

Here’s the list (same bias on Windows/D3D9 and OS X/OpenGL, so it seems to be hardware dependent, and not something in API/drivers):

  • Radeon 9500 to X850: -0.61/255
  • Radeon X1300 to X1900: -0.66/255
  • Radeon HD 2xxx/3xxx: -0.49/255
  • GeForce FX, 6, 7, 8: -0.48/255
  • Intel 915, 945, 965: -0.5/255
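
For what it’s worth, here’s how one might hunt for the best bias on a given card. This is my own sketch, not necessarily how the values above were measured: do a real round trip through an RGBA8 render target (the whole point is that each GPU quantizes differently, so simulating the quantization won’t do) and visualize the magnified error.

// Pass 1: encode a 0..1 ramp into an RGBA8 render target, letting the
// hardware do its actual quantization on write.
float4 BiasTestEncodePS( float2 uv : TEXCOORD0 ) : COLOR {
  return EncodeFloatRGBA( uv.x );
}

// Pass 2: decode and display the magnified round-trip error; with a
// good bias the screen stays almost black. Names are illustrative.
uniform sampler2D EncodedTex;

float4 BiasTestErrorPS( float2 uv : TEXCOORD0 ) : COLOR {
  float err = abs( DecodeFloatRGBA( tex2D( EncodedTex, uv ) ) - uv.x );
  return float4( err * 10000.0, err * 10000.0, err * 10000.0, 1.0 );
}

Sweep the bias between runs and keep the value that leaves the error image darkest.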

Those are the best bias values I could find. Still, every once in a while (rarely) encoding a value into an RGBA texture and reading it back produces a result where one channel is half a bit off. That’s not a problem if the numbers you’re encoding are originally in the 0..1 range, but if, for example, you’re encoding something that spans the whole range of the camera, then the 0..1 range gets expanded into 0..FarPlane…
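
To put rough numbers on that (my example: a far plane at 1000 units, one unit being a meter): half a bit of error in the first channel is 0.5/255 ≈ 0.002 of the 0..1 range, which decodes to about two meters of depth error; even in the second channel it’s 0.5/65025 ≈ 0.0000077, still nearly 8 millimeters.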

And all of a sudden there are huge precision errors, up to the point of being unusable. I just tried doing a quick’n’dirty depth of field and soft particles implementation using depth encoded this way… not good.

Oh well. Has anyone successfully encoded high-precision numbers into RGBA channels before?