Work, work, work

Demo demo demo! Beware!

We still have a chance to finish it on time. I’ve taken a day off work tomorrow and want it really finished by tomorrow evening (or night). The real deadline is in 4 days, but I’ve got to test it on various hardware…

It’s going pretty well, I think. It works, other team members say it looks pretty good (myself - I’m not sure, I think I’ve been watching it for too long). It does 100-150 FPS on my GeForce 6800GT at 1024x768 with some FSAA. Again, we’re under-utilizing the available hardware :)

Back to work.


24 / deadline's near

It’s my 24th birthday today, yay!

On the other hand, I won’t “celebrate” (whatever that means) it now, maybe after a week. The deadline for the ImagineCup demo is just around the corner, and my todo list still contains lots of stuff…


Dilemmas

Sometimes you must make decisions that clearly affect your whole foreseeable future in a significant way. Here comes the really unexpected part, though: making such decisions is hard! :)


Improving UV mapping skills

Hey, UV mapping the second human-like figure took me much less time than the first one! This one took about 1 hour, again in Blender (and again, marking seam edges, LSCM-unwrapping them and just arranging the resulting pieces rocks).

That said, making the low-poly mesh from the high-poly one took much longer than UV mapping; something like 4 hrs. A good artist would probably make a new mesh from scratch in less time, but hey, I’m not an artist :)


Ambient occlusion takes ages to compute!

Really. Three hours for a model at 1024x1024 and 5x supersampling!

Now I’m using ATI’s NormalMapper to compute normal/AO maps (previously I was using nVidia’s Melody, but switched for no obvious reason). The good thing about NormalMapper is that it comes with source code; I’ve already sped up AO computation by about 20% by capping octree traversal distances (that took less than an hour). I suspect with some thought it could be optimized even more.
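The distance cap works because occluders far away from a surface point barely darken it, so rays can stop early instead of walking the whole octree. A minimal CPU sketch of the idea (my own illustration, not NormalMapper’s code - I use sphere occluders instead of an octree of triangles, and assume the surface normal is +Z):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical occluder: a sphere. The real tool traverses an octree of
// high-poly triangles; the distance-capping idea is the same.
struct Sphere { Vec3 c; float r; };

// Ray-sphere test, but any hit farther than maxDist along the ray is
// ignored -- this is the "cap traversal distance" optimization.
static bool rayHitsSphere(const Vec3& o, const Vec3& d, const Sphere& s, float maxDist)
{
    Vec3 oc = { s.c.x - o.x, s.c.y - o.y, s.c.z - o.z };
    float t = oc.x*d.x + oc.y*d.y + oc.z*d.z; // closest approach along ray
    if (t < 0.0f || t - s.r > maxDist)        // behind us, or beyond the cap
        return false;
    float d2 = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - t*t;
    return d2 <= s.r * s.r;
}

// Rejection-sample a uniform direction in the upper (z >= 0) hemisphere.
static Vec3 randomHemisphereDir()
{
    for (;;)
    {
        float x = rand() / (float)RAND_MAX * 2.0f - 1.0f;
        float y = rand() / (float)RAND_MAX * 2.0f - 1.0f;
        float z = rand() / (float)RAND_MAX;
        float len2 = x*x + y*y + z*z;
        if (len2 > 1e-4f && len2 <= 1.0f)
        {
            float inv = 1.0f / std::sqrt(len2);
            return Vec3{ x*inv, y*inv, z*inv };
        }
    }
}

// AO = fraction of hemisphere rays that reach the "sky" without hitting
// an occluder within maxDist.
static float ambientOcclusion(const Vec3& p, const std::vector<Sphere>& occluders,
                              float maxDist, int samples)
{
    int unoccluded = 0;
    for (int i = 0; i < samples; ++i)
    {
        Vec3 dir = randomHemisphereDir();
        bool blocked = false;
        for (size_t j = 0; j < occluders.size(); ++j)
            if (rayHitsSphere(p, dir, occluders[j], maxDist))
                { blocked = true; break; }
        if (!blocked)
            ++unoccluded;
    }
    return unoccluded / (float)samples;
}
```

Shrinking `maxDist` trades a bit of large-scale darkening for a big cut in traversal work, which is why the 20% speedup came so cheaply.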

Previously I was using a hacked solution - compute the normal map with either tool (that doesn’t take long), then use my custom small tool that does low-order GPU PRT simulation on the low-poly normal-mapped model with D3DX. Take the first term of the results, scale it, and there you have ambient occlusion. I thought it produced good results, but the truth is that ‘real’ AO maps look somewhat better, especially for small ornaments that aren’t captured in the low-poly geometry.
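Why does the first PRT term work as AO at all? The DC spherical-harmonic coefficient of the (cosine-weighted) visibility function is just the AO value times a constant, so scaling it recovers AO exactly. A tiny Monte Carlo sketch of that identity (my own illustration with a made-up visibility function, not the D3DX simulator):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

const float PI  = 3.14159265f;
const float Y00 = 0.5f / std::sqrt(PI); // DC SH basis function, constant over the sphere

// Hypothetical visibility: sky is blocked below ~60 degrees elevation.
static float visibility(float cosTheta) { return cosTheta > 0.5f ? 1.0f : 0.0f; }

// Monte Carlo integrate over the hemisphere (normal = +Z), uniform sampling:
//   c0 = integral of V * cosTheta * Y00  (the DC PRT transfer coefficient)
//   AO = (1/pi) * integral of V * cosTheta
void integrate(int samples, float* outC0, float* outAO)
{
    float sum = 0.0f;
    for (int i = 0; i < samples; ++i)
    {
        // z uniform in [0,1] gives uniform hemisphere sampling; the
        // integrand here depends only on z, so azimuth is irrelevant.
        float z = rand() / (float)RAND_MAX;
        sum += visibility(z) * z;
    }
    float integral = 2.0f * PI * sum / samples; // pdf = 1/(2*pi)
    *outC0 = Y00 * integral;
    *outAO = integral / PI;
}
```

So AO = c0 / (pi * Y00); the “scale it” step in the hack is exactly that constant. What the hack can’t see is fine detail that only exists in the normal map, since the PRT simulation shadows against the low-poly geometry.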

The good thing about this hacked approach is that it takes ~10 seconds for a model (compared to 3 hrs). It’s great as a quick preview, and the differences between hacked-AO and real-AO aren’t that visible once you add textures and conventional lighting.

I’m thinking about doing GPU-based AO simulation on the high-poly model, with a quick-n-dirty UV parametrization; then just fetching the resulting texture in the normal map computation tool (afaik, Melody can do that natively; for NormalMapper it would be easy to add, I think). With the recent DX9 SDK such a tool should not take more than 200-300 lines of code (D3DX has both UVAtlas and GPU PRT simulation now). On the other hand, I know the nVidia guys are preparing something similar :)

Update: added image - on the left is hacked-n-fast AO, on the right is real-AO. Ornaments inside aren’t present in low-poly model (only in normal map). Differences are less visible when the model is textured and other stuff is added.