Precalculated 2D fracture

I’m working on our ImagineCup2005 realtime rendering demo now, and one thing we’re planning to do is fracture/explode the walls of a room (in realtime, of course). I’ve been thinking about how to implement all this, and together with Paulius Liekis we came up with a half-precomputed, half-cheated solution.

Our ‘walls’ are perfectly flat, so the whole fracture process is 2D; only the pieces that fly/fall out turn into a 3D simulation. In the demo, some things will hit the walls, and the fracture must start there.

The first cheat we thought of was to have some precomputed ‘fracture patterns’ (bunches of connected lines in 2D). Choose one pattern, ‘stamp’ it onto the wall and there you go. Now, the problem is that the pattern has to be clipped against the existing patterns, the falling-out pieces and the remaining wall have to be triangulated, etc. I don’t think it’s a slow process (i.e. it should be suitable for realtime), but it’s pretty tedious to implement.

The next idea was: why not precompute a fracture pattern for the whole wall, and make it pretty detailed? When something hits the wall, you take some elements out of it and let them fly/fall. Now, the fracture pattern is always fixed for the whole wall, so this isn’t an entirely ‘correct’ fracture, but I think it’s ok for our needs. I coded up a lame ‘fracture pattern generator’ (tree-like: nodes either branch or not, branches go off at nearly random angles and terminate when they hit an existing branch), and the patterns do look pretty cool.
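A minimal sketch of such a generator, with all names and constants made up (the segment lengths, angle ranges and branching probability are arbitrary): grow segments recursively, perturb the angle a little at each step, occasionally spawn a side branch, and terminate a branch as soon as it would cross an existing one.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec2 { float x, y; };
struct Segment { Vec2 a, b; };

// True if segments p0-p1 and p2-p3 properly cross each other.
static bool segmentsIntersect(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3)
{
    auto cross = [](Vec2 o, Vec2 a, Vec2 b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    };
    float d1 = cross(p2, p3, p0);
    float d2 = cross(p2, p3, p1);
    float d3 = cross(p0, p1, p2);
    float d4 = cross(p0, p1, p3);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

static float frand(float lo, float hi)
{
    return lo + (hi - lo) * (std::rand() / (float)RAND_MAX);
}

// Grow one branch: step in roughly the same direction with a random
// perturbation; terminate on hitting an existing branch or max depth.
static void growBranch(std::vector<Segment>& pattern, Vec2 from, float angle, int depth)
{
    if (depth <= 0)
        return;
    float len = frand(0.05f, 0.15f);
    Vec2 to = { from.x + std::cos(angle) * len, from.y + std::sin(angle) * len };
    // stop as soon as the new segment would cross an existing one
    for (const Segment& s : pattern)
        if (segmentsIntersect(from, to, s.a, s.b))
            return;
    pattern.push_back({ from, to });
    // continue roughly straight; sometimes spawn a side branch too
    growBranch(pattern, to, angle + frand(-0.4f, 0.4f), depth - 1);
    if (std::rand() % 3 == 0)
        growBranch(pattern, to, angle + frand(0.8f, 1.6f), depth - 1);
}
```

Because neighbouring segments share an endpoint, the proper-intersection test doesn’t falsely terminate a branch against its own parent.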

The only problem appeared when I tried calculating how many fractured pieces our walls would contain. I get half a million or so for the whole room; that’s certainly a bit too much.

One idea to cope with this: have a (sort of) quadtree for the wall, where each cell ‘combines’ the pieces it contains entirely into one ‘super piece’ (what a term!). Some of the internal nodes vanish, hence the super-piece contains fewer triangles, and it gets better as we walk up the quadtree. Now, when a wall is hit somewhere, only a small portion of it ‘fractures out’, so most of the wall can still be displayed as super-pieces, and it gets detailed only around the fractured area.
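In code, the super-piece idea could look roughly like this (a sketch under my own assumptions; `QuadNode`, `collectDrawList` and the mesh handles are all hypothetical): each node stores a precomputed merged mesh for everything fully inside its cell, and at draw time we descend only into cells touched by the hit.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct QuadNode {
    float minX, minY, maxX, maxY;  // cell bounds on the wall plane
    int superPieceMesh;            // handle of the precomputed merged mesh
    std::vector<int> pieceIds;     // individual pieces inside this cell
    QuadNode* child[4] = {};       // null in leaves

    // Does a hit at (hx,hy) with radius r touch this cell?
    bool overlapsHit(float hx, float hy, float r) const {
        float cx = std::max(minX, std::min(hx, maxX));
        float cy = std::max(minY, std::min(hy, maxY));
        float dx = hx - cx, dy = hy - cy;
        return dx * dx + dy * dy <= r * r;
    }
};

// Gather what to draw: untouched cells go in as one super-piece mesh,
// touched leaves fall back to individual pieces.
void collectDrawList(const QuadNode* n, float hx, float hy, float r,
                     std::vector<int>& superMeshes, std::vector<int>& pieces)
{
    if (!n->overlapsHit(hx, hy, r)) {
        superMeshes.push_back(n->superPieceMesh);
        return;
    }
    if (!n->child[0]) {
        for (int id : n->pieceIds)
            pieces.push_back(id);  // (fractured-out ones filtered elsewhere)
        return;
    }
    for (int i = 0; i < 4; ++i)
        collectDrawList(n->child[i], hx, hy, r, superMeshes, pieces);
}
```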

So, in the end almost no computation is performed for the fracture. The fracture pattern and the super-piece hierarchy are precomputed once, and at runtime we just use them. Of course, we still need to simulate the physics of the flying/falling pieces, but that’s another story.

At one point we’ll have most of the walls ‘exploded’ at once; I think for that case we’ll just use larger fractured elements, and everything will be ok :)


What makes me an awful team member

If I ever see code that I think could be written in a ‘much better’ way, I have a really big temptation to rewrite/refactor it. Sometimes this can be good, e.g. when I spot buggy code. Sometimes it depends, as I often spot code that is (or I think is) ‘sub-optimal’.

The worst situation is when the code is buggy, but some other part of the codebase depends on that code being buggy. If you fix one, and not the other, it’s bad.

Working remotely on a big unknown codebase (I’m a contractor for one game in development) makes this worse. I must resist the temptation to alter the code, no matter how bad it looks…


ShaderX3

Today I received my ‘contributor copy’ of ShaderX3. It’s pretty sad that the authors themselves receive the book only now, when it was released in November. Well, maybe that’s because for some reason my shipping address contained the city ‘Kannas’ instead of ‘Kaunas’, and an obsolete postal index (we’ve had a ‘refactoring’ of postal indices here recently :)). Anyway.

Like most similar books, half of this one is old, well-known or pretty basic stuff. At first glance, Generating Shaders from HLSL Fragments by Shawn Hargreaves is really good; Dean Calver’s stuff (Accessing and Modifying Topology on the GPU and Deferred Lighting on PS3.0 with HDR) also looks very cool. Probably these alone are worth the book, much like Kozlov’s article on PSMs in GPU Gems was. My own articles - oh well; one (Shaderey…) is really useless; the other (Fake Soft Shadows…) is maybe ‘interesting’, but of unknown practical purpose :)

Ok, back to reading…


Linear programming, redefined

I’d like to redefine the term ‘linear programming’. No, it’s not about optimization problems; instead it’s about programming style. You know you’re dealing with linear programming when:

  • You see a function that’s 6 pages long. It’s been programmed linearly, literally. A similar thing: a several-page try-catch block, in C++, that catches everything and just prints “error occurred” to the log.

  • You see 6 functions that are each pretty long, and the differences between them amount to a couple of lines.

  • In a big project, you find 10 long functions that are exactly the same, doing exactly the same thing, but defined in 10 different places/modules.

Another name for it could be ‘copy-paste programming’, except for the 1st point, where everything is coded linearly.
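A tiny made-up illustration of the second symptom, and the obvious fix (all the names are invented): two near-identical functions whose only difference is one line, versus passing the varying part in as a parameter.

```cpp
#include <cassert>
#include <string>

// The 'linear' version: one copy per report type, differing in one word...
std::string errorReport(int count)
{
    return "== report ==\nerrors: " + std::to_string(count) + "\n== end ==\n";
}
std::string warningReport(int count)
{
    return "== report ==\nwarnings: " + std::to_string(count) + "\n== end ==\n";
}

// ...versus factoring out the one line that actually differs.
std::string report(const std::string& what, int count)
{
    return "== report ==\n" + what + ": " + std::to_string(count) + "\n== end ==\n";
}
```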

I tend to find lots of linear programming at work.


The video cards are damn fast

I was working on our next demo the other day. Boy, the video cards are damn fast nowadays!

We have a high-poly model for the main character (~200k tris); for the demo we use a low-poly one (~6500 tris) and a normalmap. Now, I’ve put 128 lights scattered on the hemisphere above him, each using a shadow buffer. I have 4 shadow buffers: I render into them from four lights, then render the character, fetching shadows from the four shadowmaps at once. The result is almost-realtime ambient occlusion for the animated character, and it runs at ~40FPS on my GeForce 6800GT!
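A sketch of the light setup and batching arithmetic (the actual shadow-map and accumulation render calls are omitted; the spiral distribution is my own assumption, any reasonably uniform hemisphere scattering would do): 128 lights on the upper hemisphere, processed four at a time, so 32 accumulation passes over the character per frame.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Distribute n directions roughly uniformly over the upper hemisphere
// using a spiral (sunflower) pattern.
std::vector<Vec3> hemisphereLights(int n)
{
    std::vector<Vec3> dirs;
    const float golden = 2.399963f;          // golden angle in radians
    for (int i = 0; i < n; ++i) {
        float z = (i + 0.5f) / n;            // cos(theta) in (0,1): above the character
        float r = std::sqrt(1.0f - z * z);
        float phi = golden * i;
        dirs.push_back({ r * std::cos(phi), r * std::sin(phi), z });
    }
    return dirs;
}

// Each batch renders 4 shadow maps, then one pass over the character
// that fetches all 4 and adds 4/128 of full intensity to the result.
const int kLights = 128;
const int kShadowBuffers = 4;
const int kBatches = kLights / kShadowBuffers;  // 32 passes per frame
```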

This is of course pretty useless; we don’t need realtime AO in the demo. But it has been nice :)