Usability depends on context!

Here’s a little story on how usability decisions need to depend on context.

In the Unity editor, pretty much any window can be “detached” from the main window. An obvious use case is putting it onto a separate monitor. But of course, you can also end up with a ton of detached windows overlapping each other.

Here I have four windows in total on OS X:

Overlapped Windows on OS X

Here I have four windows on Windows:

Overlapped Windows on Windows

However, users of OS X and Windows are used to applications behaving differently.

On OS X, it is very common for a single application to have many overlapping windows. Usually users don’t have problems finding their windows either, thanks to Exposé. Press a key, voilà, here they are:

Exposé on OS X

On Windows, there is no Exposé. So there’s a problem: when a detached window is obscured by another window, how do you get to it? One might ask “well, what’s wrong with having windows partially overlap, like in the screenshot above?”, to which I’d say “you’re a Mac user”.

Windows users do not keep a ton of windows on screen. They tend to maximize the application they are currently working with. I did this myself all the time, and it took three years of Mac laptop usage before I stopped maximizing everything on my Windows box!

So what a typical Windows user might see when using Unity is this. Now, where are the other three detached windows?

Maximized

On Windows, it is very uncommon for a single application to have many overlapping windows. When an application does that, the “detached” windows are always positioned on top of the main window. There are some applications that do not do this (yes, I’m looking at you, GIMP), and almost no one is happy with their usability.

So we decided to take this context into account. Windows users do not have Exposé, and they expect “detached” windows to always stay on top of the main window. Unity 2.6 will do just that.

In Front on Windows

Of course, you can still dock all the windows together, and this whole “windows are obscured by other windows” issue goes away:

Docked on Windows

Hmm… I think the screenshots above show two big new features in the upcoming Unity 2.6. Preemptive note: the UI shown above is not final. Anything might change, so don’t get attached to any particular pixel!


Talks & Demos from Assembly 2009

I went to the Assembly 2009 demoparty this year.

I didn’t submit any demos, but I did a seminar presentation on developing graphics technology for small games (PDF slides), mostly covering hardware statistics, GPU features, testing and stability:

Asm'09 seminar: developing gfx tech for small games from Unity3D on Vimeo.

However, the truly awesome talk was given by ReJ: low-level iPhone (pre-3GS) rendering details (PDF slides). Inner workings of the iPhone’s GPU, OpenGL ES drivers, command buffers, VFP assembly and so on. Bringing assembly back to the Assembly, yeah!

ReJ’s Asm'09 seminar on low level iPhone rendering, from Unity3D on Vimeo.

If you’re going to watch some demos from Assembly 2009, make sure to see:

  • Frameranger (1st place demo). Rocked the big screen! Seems somewhat unfinished though.

  • The Golden Path (3rd place demo) - for something fresh. Also, a good way to disprove the saying that “the winners don’t take drugs” :)

  • Muon Baryon (1st place 4 kilobyte intro) - that’s what kids do with sphere marching on the GPU these days.


Compact Normal Storage for small g-buffers

I’ve been experimenting with compact storage of view space normals for small g-buffers. Think about storing depth and normal in a single 8 bit/channel RGBA texture.
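
To give a flavor of the candidates, here’s one well known option - the spheremap transform - which packs a unit view space normal into two channels, leaving the rest for depth. This is a quick Cg/HLSL sketch of mine, just one of the methods; see the article for how they actually compare:

half2 EncodeNormalSpheremap( half3 n ) {
  // degenerate when n.z == -1, i.e. a normal pointing exactly away from the camera
  return n.xy / sqrt( n.z * 8.0 + 8.0 ) + 0.5;
}
half3 DecodeNormalSpheremap( half2 enc ) {
  half2 fenc = enc * 4.0 - 2.0;
  half f = dot( fenc, fenc );
  half g = sqrt( 1.0 - f / 4.0 );
  half3 n;
  n.xy = fenc * g;
  n.z = 1.0 - f / 2.0;
  return n;
}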

Here are my findings - with error visualization and shader performance numbers for some GPUs.

If you know any other method to encode/store normals in a compact way, please let me know!


Encoding floats to RGBA - the final?

The saga continues! In short, I need to pack a floating point number in the [0..1) range into several channels of an 8 bit/channel render texture. My previous approach is not ideal.

Turns out some folks have figured out an approach that finally seems to work.

So, for my own reference, here’s the proper way:

inline float4 EncodeFloatRGBA( float v ) {
  // each channel holds the value at 255x the precision of the previous one
  float4 enc = float4(1.0, 255.0, 65025.0, 16581375.0) * v;
  enc = frac(enc);
  // subtract what the next channel already stores, so that quantizing
  // each channel to 8 bits does not count the same bits twice
  enc -= enc.yzww * float4(1.0/255.0,1.0/255.0,1.0/255.0,0.0);
  return enc;
}
inline float DecodeFloatRGBA( float4 rgba ) {
  // reassemble: weight each channel back down by its precision factor
  return dot( rgba, float4(1.0, 1/255.0, 1/65025.0, 1/16581375.0) );
}

That is, the difference from the previous approach is that the “magic” (read: hardware dependent) bias is replaced with subtracting the next component’s encoded value from the previous component’s. After the subtraction, each channel holds a value that quantizes to 8 bits exactly, and the remainder is carried by the next channel.
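
A quick sanity check of my own, with v = 0.3: frac(255·0.3) = 0.5, so the first channel becomes 0.3 − 0.5/255 = 76/255, which an 8 bit channel stores exactly. The subtracted 0.5 reappears in the second channel (scaled up by 255 into its slot), and so on down the chain, so the decode dot product reassembles 0.3 exactly, up to whatever fraction is left over in the last channel.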


Implementing fixed function T&L in vertex shaders

Almost half a year ago I was wondering how to implement T&L in vertex shaders.

Well, I finally implemented it for the upcoming Unity 2.6. I wrote some sort of a technical report here.

In short, I’m combining assembly fragments and doing simple temporary register allocation, which seems to work quite well. Performance is very similar to using fixed function (which I know is implemented as vertex shaders internally by the runtime/driver anyway) on the several different cards I tried (Radeon HD 3xxx, GeForce 8xxx, Intel GMA 950).
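
For illustration, here’s roughly what the simplest generated combination boils down to - one directional light, no texture coordinate generation - written as a hand-made Cg/HLSL sketch. All names here are made up for the example; the real thing is composed from assembly fragments:

// Sketch only: illustrative names, not the actual generated code.
float4x4 matMVP;    // model*view*projection matrix
float4x4 matMV;     // model*view matrix
float3 lightDir;    // normalized view space direction towards the light
float4 lightColor;  // light diffuse color
float4 ambient;     // ambient light * material ambient
float4 matDiffuse;  // material diffuse color

struct appdata { float4 vertex : POSITION; float3 normal : NORMAL; };
struct v2f { float4 pos : POSITION; float4 color : COLOR0; };

v2f vert( appdata v ) {
  v2f o;
  o.pos = mul( matMVP, v.vertex );
  // view space normal; assumes uniform scale, otherwise needs the inverse transpose
  float3 n = normalize( mul( (float3x3)matMV, v.normal ) );
  float diff = max( 0.0, dot( n, lightDir ) );
  o.color = saturate( ambient + lightColor * matDiffuse * diff );
  return o;
}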

What was unexpected: the most complex piece is not the vertex lighting! Most of the complexity is in routing/generating texture coordinates and transforming them - each texture unit can take its coordinates from a mesh UV channel, or generate them from positions, normals or reflection vectors, optionally transformed by a texture matrix. All of that multiplies up into a huge combinatorial explosion.

Otherwise - I like! Here’s a link to the article again.