Doom in Blender VSE

You know how in Blender Video Sequence Editor (VSE) you can create Color strips, and then their color is displayed in the timeline?

You can create many of them, and when sufficiently zoomed out, the strip headings disappear since there’s not enough space for the label:

So if you created, say, 80 columns and 60 rows of color strips…

…and kept on changing their colors constantly… you could run Doom inside the Blender VSE timeline.

And so that’s what I did. The idea was sparked after seeing someone make Doom run in Houdini COPs.

Result

Here’s the result:

And the file/code on github: github.com/aras-p/blender-vse-doom

It is a modal Blender operator that loads the Doom WAD file, creates a VSE timeline full of color strips (80 columns, 60 rows), listens to keyboard input for player control, renders a Doom frame, and updates the VSE color strip colors to match the rendered result. The Escape key finishes the operator.
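The overall shape of that is the standard Blender modal-operator pattern. Here is a rough sketch of how such an operator could be structured – all names below (`create_color_strip_grid`, `render_doom_frame`, `handle_player_input`) are hypothetical illustrations, not the actual code from the repo:

```python
import bpy


class VSEDoomOperator(bpy.types.Operator):
    """Sketch: drive a grid of VSE color strips from rendered Doom frames."""
    bl_idname = "sequencer.doom"
    bl_label = "Doom in VSE"

    def invoke(self, context, event):
        # Create the 80x60 grid of color strips once, up front.
        self.strips = create_color_strip_grid(context, cols=80, rows=60)
        # A timer event drives frame updates while the operator runs.
        self._timer = context.window_manager.event_timer_add(
            1.0 / 35.0, window=context.window)
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def modal(self, context, event):
        if event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'FINISHED'}
        if event.type == 'TIMER':
            frame = render_doom_frame()  # 80x60 pixel colors
            for strip, color in zip(self.strips, frame):
                strip.color = color      # this is the slow part, see below
        else:
            handle_player_input(event)   # map keys to Doom controls
        return {'RUNNING_MODAL'}
```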

All the Doom-specific heavy lifting is in render.py, written by Mark Dufour; it is completely unrelated to Blender, just a tiny pure-Python Doom loader/renderer. I took it from “Minimal DOOM WAD renderer” and made two small edits to avoid the division-by-zero exceptions I was getting.

Performance

This runs pretty slow (~3fps) in current Blender (4.1 .. 4.4) 😢

I noticed that it was slow while I was “running it”, but when stopped, navigating the VSE timeline with all the strips still there was buttery smooth. And so, being the idiot that I am, I went “rah rah, Doom rendering is done in pure Python, of course it is slow!”

Yes, Python is slow, and yes, the minimal Doom renderer (in exactly 666 lines of code – nice!) is not written in “performant Python”. But it turns out… the performance problems are not there. Another case for “never guess, always look at what is going on”.

The pure-Python Doom renderer part takes 7 milliseconds to render an 80×60 “frame”. Could it be faster? Probably. But… it takes 300 milliseconds to update the colors of all the VSE strips.
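The update itself is nothing fancy: index math plus one attribute assignment per strip. A tiny sketch of that mapping (my own illustration, not code from the repo), assuming a row-major top-down framebuffer and strips stored bottom row first, since VSE channels count upward:

```python
def pixel_to_strip_index(x, y, cols=80, rows=60):
    """Map a framebuffer pixel (x right, y down) to an index into a flat
    list of color strips laid out bottom row first: VSE channel 1 is the
    lowest, so the image has to be flipped vertically."""
    return (rows - 1 - y) * cols + x


def to_strip_color(r, g, b):
    """VSE color strips take RGB floats in 0..1, not 0..255 bytes."""
    return (r / 255.0, g / 255.0, b / 255.0)
```

Setting `strip.color` 4800 times per frame is where the 300 milliseconds go.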

Note that in Blender 4.0 or earlier it runs even slower, because redrawing the VSE timeline with 4800 strips takes about 100 milliseconds; that is no longer slow (1-2ms) in later versions due to what I did a year ago.

Why does it take 300 milliseconds to update the strip colors? For that, of course, I brought up Superluminal, and it tells me the problem is cache invalidation:

Luckily, cache invalidation is one of the easiest things in computer science, right? 🧌

Anyway, this looks like another case of accidental quadratic complexity. For each strip that gets a new color set on it, there’s code that:

1. invalidates any cached results for that strip (ok),
2. tries to find whether this strip belongs to any meta-strips, to invalidate those too – which scans all the strips, and
3. tries to find which strips intersect the strip’s horizontal range (i.e. are “composited above it”) and invalidates partial results of those – which again scans all the strips.
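To see why this blows up, here’s a toy simulation (my own illustration, nothing from Blender’s actual code): when invalidating one strip scans every strip, touching all of them scans n² strips per frame; with a precomputed strip-to-related-strips map, the same work is linear.

```python
def naive_invalidate(n_strips):
    """Each color change scans every strip looking for meta-strips and
    overlapping strips to invalidate: n changes * n scans = n^2 visits."""
    visits = 0
    for _changed in range(n_strips):
        for _scanned in range(n_strips):  # "look at everything"
            visits += 1
    return visits


def mapped_invalidate(n_strips, related):
    """With a precomputed strip -> related-strips map, each change only
    visits its actual dependents (none, for a flat grid of color strips)."""
    visits = 0
    for changed in range(n_strips):
        visits += len(related.get(changed, ()))
    return visits


n = 80 * 60  # 4800 strips
print(naive_invalidate(n))       # -> 23040000 strip visits per frame
print(mapped_invalidate(n, {}))  # -> 0
```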

Step 2 above can be easily addressed, I think, as the codebase already maintains data structures for finding which strips are part of which meta-strips, without resorting to “look at everything”.

Step 3 is slightly harder in the current code. However, half a year ago during the VSE workshop we talked about how the whole caching system within VSE is maybe too complicated for no good reason.

Now that I think about it, most or all of that extra cost could be removed if Someone™️ rewrote the VSE cache along the lines of what we discussed at the workshop.

Hmm. Maybe I have some work to do. And then the VSE timeline could be properly doomed.