Iceland Vacation Report

tl;dr: Just spent a week in Iceland and it was awesome!

Some folks have asked for impressions of my Iceland vacation or for some advice, so here goes. Caveats: this was my first (and so far only) trip there, we went with kids, we came up with the itinerary ourselves (with some advice from friends), etc.

Our trip was one week; myself, my wife and two kids (10 and 4yo). If you’re going alone, or for a honeymoon, or with a group of friends, the experience might be very different. Without small kids, I’d go for longer than a week, and try to wander further away from main roads.

Planning

Summary of what we wanted:

  • Go in summer so it’s fairly warm.
  • Stay away from people ;) Well, at least a bit.
  • No serious hiking or climbing; just rent a car and go places.

Asking friends and reading the internets (wikitravel, wikipedia, lonely planet, random blogs), I came up with a list of “places I’d like to go”. Used Google Maps Engine Lite to lay out the plan, and Google Maps to estimate driving times.

The plan was mostly to explore the northern part of Iceland, staying in Akureyri for 3 nights, with the last two nights in Reykjavik.

We booked everything in advance. Larger places (Akureyri and Reykjavik) through airbnb – so far all my experiences with airbnb have been very positive, and it’s much nicer to stay in an actual apartment than in some generic hotel/guesthouse. Towns in Iceland are extremely small though – Akureyri, the largest town outside the Reykjavik area (and the country’s second largest overall), has only about 18,000 people. Which means airbnb is only really practical in Reykjavik & Akureyri. For the other nights we booked small cottages & guesthouses elsewhere (through tripadvisor, booking.com etc.).

Driving

Rented a car in advance as well. For our first Iceland trip we decided to go for “casual driving”. Car rental is expensive – in our case, we paid as much for a simple Renault Megane as we paid for all the housing. Rent a local GPS; neither Apple nor Google maps have very good road coverage, and cell connectivity might be shaky in more remote places.

Paved roads (the “ring road”, route 1, and most of the two-digit roads) are good quality but not wide. Larger gravel roads are okay. Smaller gravel roads are narrow and rocky – and we didn’t even go to the more mountainous places. A big chunk of the interior is only accessible by 4x4 vehicles, which we decided to skip this time.

Notes! When a sign says “blindhæð”, it means exactly that – the road goes over the top of a hill and you can’t see cars approaching from the other side. Gas stations cluster around towns, and you can easily drive 100 km without a single station in between. Some clouds literally sit on the ground, and visibility while driving through them is really, really bad – a couple dozen meters. Sheep are often found on the gravel roads. There are a lot of bridges that are only wide enough for one car. Driving off-road is illegal, to preserve the vegetation (hey, it takes several thousand years for even moss to start growing on lava fields).

Generally, driving conditions are okay (in summer and in good weather, at least ;)), there’s little traffic, and other drivers are very considerate. When two cars have to pass each other on a narrow road, one of them often carefully stops several hundred meters away to let the other through. For me, the hardest thing was simply that driving 4-5 hours each day is tiring (hey, my usual daily dose is 30 minutes! and I don’t like driving to begin with). That, and driving through the clouds – your eyes are used to scanning the road at least several hundred meters ahead, but you can’t quite do that in a cloud.

Next time I’m going there, I want to get a 4x4 and go to more remote places. The beauty and non-Earthiness of the landscapes are just too stunning.

Next up: travel log with pictures. SPOILER ALERT!

Day 1: Þingvellir and Deildartunguhver

Landed in Keflavik airport past midnight, got our car and slept over in some guesthouse in Keflavik itself.

Þingvellir park with its rift valley – somewhat too many people for my taste ;). Took smaller gravel roads up north. Surprise find – a lake with a mirror-flat surface; I didn’t even notice the lake at first. Sandkluftavatn is the name.

Deildartunguhver hot springs. Fairly impressive to see boiling water coming out of the earth, just like that.

Also, the smell! This is a common theme – Iceland has abundant hot water that’s used for heating & stuff, but most of it has that hard-boiled-egg sulfur smell. They somehow do not mention the smell in, for example, the Blue Lagoon advertising material ;)

Pathfinding in the GPS led us through some scary road where 7 km lasted forever, mostly in 1st gear, trying to avoid damaging the car’s underside or rolling off a hill. A jeep would have been useful. A fence sign that could be interpreted either as “you’ll be shot for going there” or “no shooting here” provided some nice ambiguity! That was the only scary driving experience I had. Moral here: if you’re entering a road and wondering “is my car really good for this?”, turn around now. The road will not get better!

Rest of the day, highway up to Hvammstangi, slept over in small, simple & nice cottages. “Double story bed, yay!!!” – kids.

Day 2: to Akureyri

The plan was “just get from Hvammstangi to Akureyri”. Took a little detour to Skagafjördur.

Settled down in Akureyri, which we used as our “home base” for 3 nights. Really lovely town! Just small enough to be, well, small; and just large enough to have decent places to eat ;) Kids loved the swimming pool. Due to lots of natural hot water, swimming pools are everywhere in Iceland, and they are extremely cheap.

Day 3: Ásbyrgi, Dettifoss, Mývatn

Just found out now that our trip almost went along the “Diamond Circle” route. Akureyri -> Husavik -> Ásbyrgi -> Dettifoss -> Mývatn.

From Husavik people usually go on whale watching tours, but we only stopped for cupcakes.

Ásbyrgi canyon is impressive; hard to imagine all that being caused by water.

From the internets I imagined Ásbyrgi to be a cube of rock in the middle of nowhere; most of the photos show it like this. It’s not a cube; that’s just one end of a long wall.

Dettifoss is big, but I don’t have photos to do it justice. We went on the east side which is more gravel driving, but supposedly better view.

On our way back, accidental find - Hverarönd which gets you wondering “are we still on Earth?” - a bunch of fumaroles and mudpots.

Next up, Mývatn nature baths, which folks say is a less touristy version of the Blue Lagoon (we haven’t been to that one). Less crowded = good in my book, even if the Mývatn ones are still quite crowded. Water from 36 to 45°C (97 to 113°F), sulfur smell, oh my!

Drive back to Akureyri and watch sunlight scattering in a distant rain cloud.

Day 4: Godafoss, Dimmuborgir, Viti

Same area around Mývatn. Godafoss waterfall:

Dimmuborgir, which I wanted to check out if only because of Dimmu Borgir. It’s okay. Not metal though ;)

Cloud rolling over a mountain:

There’s also a Hverfjall crater right next to Dimmuborgir, but we decided not to climb it with kids. Next time?

Víti crater near Krafla, and some fumaroles right next to it.

Thermal power plants there look like some alien constructions, with pipes spanning vast distances. Here, Krafla power station:

Day 5: to Reykjavik

Long drive from Akureyri to Reykjavik. Unplanned find: Grábrók crater, right next to the highway and part of a group of three craters.

Nice fBm noise generator for the terrain you’ve got there, Iceland:

Arrive in Reykjavik, check out downtown. It’s full of colors!

Day 6: Geysir, Gullfoss

Geysir, the geyser that gave them all their name, is mostly dormant now. However, Strokkur right next to it goes off every 3-5 minutes. There are a lot of people there, and I was initially wary of that (“them tourists ruin everything!”), but geysers are indeed impressive.

During one of the eruptions we were standing a bit further away to get a better view. Either the wind blew stronger, or the eruption was higher, or both – but the water landed right on all of us. Good thing it was not hot. Achievement unlocked: got soaked by the geyser!

Gullfoss:

And finally, friendly sheep joining us for our lunch stop:

Next time?

This time we mostly saw the north, plus some major attractions around Reykjavik. We did not see any glaciers up close, nor anything in the south or the middle of the country. I guess that’s left for the next time(s). Update: the “next time” happened in 2018!

Most of the photos above were shot by my wife Aistė. I’ll just end the post with this picture. BAA!


Reviewing ALL THE CODE

I like to review ALL THE THINGS that are happening in our codebase. Currently we have about 70 programmers, mostly committing to a single Mercurial repository (into a ton of different branches), producing about 120 commits per day. I used to review all of that using RhodeCode’s “journal” page, but Lucas taught me a much better way. So here it is.

Quick description of our current setup

We use Mercurial for source control, with largefiles extension for versioning big binary files.

Branches (“named branches”, i.e. not “bookmarks”) are used for branching. Joel’s hg init tutorial talks about using physically separate repositories to emulate branching, but don’t listen to that – that way lies madness. Mercurial’s branches work perfectly fine and are a much superior workflow (we used to do “separate repos as branches” in the past, back when we used Kiln – not recommended).

We use RhodeCode as a web interface to Mercurial, and to manage repositories, user permissions etc. It’s also used to do “pull requests” and for commenting on the commits.

1. Pull everything

Each day, pull all the branches into your local repository clone. Just hg pull (as opposed to the normal workflow, where you pull only your current branch with hg pull -b .).

Now you have the history of everything on your own machine.

2. Review in SourceTree

Use SourceTree’s Log view and there you have the commits. Look at each and every one of them.

Next, set up a “custom action” in SourceTree to go to a commit in RhodeCode. So whenever I see a commit that I want to comment on, it’s just a right click away:
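A custom action along these lines should do the trick on a Mac (hypothetical server and repository names – adjust the URL to whatever your RhodeCode changeset pages look like; if I remember the substitution variable correctly, SourceTree replaces $SHA with the selected commit’s hash):

Script to run: open
Parameters: http://our-rhodecode-server/our-repo/changeset/$SHA

That simply opens the commit’s RhodeCode page in the default browser.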

SourceTree is awesome by the way (and it’s now both on Windows and Mac)!

3. Comment in RhodeCode

Add comments, approve/reject the commit etc.:

And well, that’s about it!

Clarifications

Why not use RhodeCode’s Journal page?

I used to do that for a long time, until I realized I was wasting my time. The journal is okay for seeing that “some activity is happening”, but not terribly useful for getting any real information:

I can see the commit SHAs, awesome! To see even the commit messages I have to hover over each of them and wait a second for the message to load via some AJAX. To see the actual commit, I have to open a new tab. At 100+ commits per day, that’s a massive waste of browser tabs!

Why not use Kiln?

We used to use Kiln indeed. Everything seemed nice and rosy until we hit massive scalability problems (team size grew, build farm size grew etc.). We had problems like build farm agents stalling the checkout for half an hour, just waiting for Kiln to respond (Kiln itself is the only gateway to the underlying Mercurial repository, so even the build farm had to go through it).

After many, many months of trying to find solutions to the scalability problems, we just gave up. No amount of configuration / platform / hardware tweaking seemed to help. That was Kiln 2.5 or so; they might have improved since then. But, once bitten, twice shy.

Kiln still has the best code review UI I’ve ever seen though. If only it scaled to our size…

Seriously, you review everything?

Not really. In the areas where I’d have no clue what’s going on anyway (audio, networking, build infrastructure, …), I just glance at the commit messages. Plus, all the code (or most of it?) is reviewed by other people as well; usually folks who have some clue.

I tried tracking review time last week, and it looks like I’m spending about an hour each day reviewing code like this. Is that too low or too high? I don’t know.

There’s a rumor going on that my office is nothing but a giant wall of monitors for watching all the code. That is not true. Really. Don’t look at the wall to your left.

How many issues do you find this way?

3-5 minor issues each day. By far the most common one: accidentally committing some leftover debugging code or totally unrelated files. More serious issues come up every few days, and a “stop! this is, like, totally wrong” maybe once a week.

Another side effect of reviewing everything, or at least reading commit messages: I can tell who just started doing what and preemptively prevent others from starting the same thing. Or relate a newly introduced problem (since these slip through code reviews anyway) to something that I remember was changed recently.


Mobile Hardware Stats (and more)

Short summary: Unity’s hardware stats page now has a “mobile” section. Which is exactly what it says, hardware statistics of people playing Unity games on iOS & Android. Go to stats.unity3d.com and enjoy.

Some interesting bits:

Operating systems

iOS uptake is crazy high: 98% of the market has an iOS version that’s not much older than one year (iOS 5.1 was released in March 2012). You’d be quite okay targeting just 5.1 and up!

Android uptake is… slightly different. 25% of the market is still on Android 2.3, which is almost two and a half years old (December 2010). Note that for all practical purposes Android 3.x does not exist ;)

Windows XP in the Web Player is making a comeback at 48% of the market. Most likely explained by “Asia”, see geography below.

  • Windows Vista could be soon dropped, almost no one is using it anymore. XP… not dropping that just yet :(
  • 64 bit Windows is still not the norm.

Geography

Android is big in United States (18%), China (13%), Korea (12%), Japan (6%), Russia (4%), Taiwan (4%) – mostly Asia.

iOS is big in United States (30%), United Kingdom (10%), China (7%), Russia (4%), Canada (4%), Germany (4%) – mostly “western world”.

Looking at Web Player, China is 28% while US is only 12%!

GPU

GPU makers on Android: Qualcomm 37%, ARM 32%, Imagination 22%, NVIDIA 6%.

  • You wouldn’t guess NVIDIA is in the distant 4th place, would you?
  • ARM share is almost entirely Mali 400. Strangely enough, almost no latest generation (Mali T6xx) devices.
  • OpenGL ES 3.0 capable devices are 4% right now, almost exclusively pulled forward by Qualcomm Adreno 320.
  • On iOS, Imagination is 100% of course…

No big changes on the PC:

  • Intel slowly rising, NVIDIA & AMD flat, others that used to exist (S3 & SIS) don’t exist anymore.
  • GPU capabilities increasing, though shader model 5.0 uptake seems slower than SM4.0 was.
  • Due to rise of Windows XP, “can actually use DX10+” is decreasing :(

Devices

On Android, Samsung is king with 55% of the market. No wonder it takes the majority of Android profits, I guess. The rest is split between umpteen vendors (Sony, LG, HTC, Amazon etc.).

Most popular devices are various Galaxy models. Of the non-Samsung ones: Kindle Fire (4.3%), Nexus 7 (1.5%), and then it goes into “WAT? I guess Asia” territory with the Xiaomi MI-One (1.2%) and so on.

On iOS, Apple has a 100% share (shocking, right?). There’s no clear leader in device model: iPhone 4S (18%), iPhone 5 (16%), iPad 2 (16%), iPhone 4 (14%), iPod Touch 4 (10%).

Interesting that the first iPad can pretty much be ignored now (1.5%), whereas the iPad 2 is still more popular than any of the later iPad models.

CPU

Single core CPUs are about 27% on both Android & iOS. The rest on iOS is all dual-core CPUs, whereas almost a quarter of Androids have four cores!

ARMv6 can be quite safely ignored. Good.

On the PC, the “lots and lots of cores!” future did not happen – the majority are dual core, and 4-core CPU growth has seemingly stopped at 23% (though again, maybe that’s explained by the rise of Asia?).

FAQ

How big is this data set exactly?

Millions and millions. We track the data at quarterly granularity; in the last quarter, mobile was about 200 million devices (yes, really!), whereas the web player was 36 million machines.

Why no “All” section in mobile pages, with both Android & iOS?

We added hardware stats tracking on Android earlier, so there are more Unity games made with it out there. It would be a totally unfair “market share” comparison – right now, 250 million Android devices and “only” 4 million iOS devices are represented in the stats. As more developers move to recent Unity versions, the numbers will level out and then we’ll add an “All” section.

Nice charts, what did you use?

Flot. It is nice! I added a “track by area” option to it.

How often is stats.unity3d.com page updated?

Roughly once a month.


"Parallel for" in Apple's GCD

I was checking out OpenSubdiv and noticed that on a Mac it’s not exactly “massively parallel”. Neither of the OpenGL backends works (the transform feedback one requires GL 4.2 and the compute shader one requires GL 4.3, but Macs right now can only do GL 3.2), the OpenCL backend is for some reason much slower than the CPU one (OS X 10.7, GeForce GT 330M), I don’t have CUDA installed so didn’t check that one, and OpenMP isn’t exactly supported by Apple’s compilers (yet?). Which leaves OpenSubdiv doing simple single threaded CPU subdivision.

This isn’t webscale multicorescale! Something must be done!

Apple platforms might not support OpenMP, but they do have something called Grand Central Dispatch (GCD). Which is supposedly a fancy technology to make multicore programming very easy – here’s the original GCD unveiling. Seeing how easy it is, I decided to try it out.

As a baseline, the single threaded “CPU” subdivision kernel takes 33 milliseconds to compute the 4th subdivision level of a “Car” model:

OpenMP dispatcher in OpenSubdiv

Subdivision in OpenSubdiv is computed by running several loops over the data: a loop to compute new edge positions, new face positions, new vertex positions etc. Fairly standard stuff. Each loop iteration is completely independent of the others, for example:

void OsdCpuComputeEdge(/*...*/ int start, int end) {
    for (int i = start; i < end; i++) {
        // compute i-th edge, completely independent of all other edges
    }
}

So of course the OpenMP version just trivially says “hey, this loop is parallel!”:

void OsdOmpComputeEdge(/*...*/ int start, int end) {
    #pragma omp parallel for // <-- only this line is different!
    for (int i = start; i < end; i++) {
        // compute i-th edge
    }
}

And then an OpenMP-aware compiler and runtime will decide how best to run this loop over the available CPU cores. For example, it might split the loop into as many subsets as there are CPU cores, run these subsets (“jobs”) on its worker threads for these cores, and wait until all of them are done. Or it might split it into more jobs, so that if the job lengths end up being different, it will still have some jobs to process on the other cores. This is all up to the OpenMP runtime to decide, but generally for large, completely parallel loops it does a pretty good job.
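Just to make that concrete, the “one chunk per core” strategy amounts to roughly this if done by hand (a sketch using C++11 std::thread, nothing from OpenSubdiv; ComputeEdgeRange stands in for the loop body above):

#include <algorithm>
#include <thread>
#include <vector>

void ComputeEdgeRange(int start, int end); // stand-in for the loop body above

void ParallelComputeEdge(int start, int end) {
    const int numThreads = std::max(1u, std::thread::hardware_concurrency());
    const int count = end - start;
    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t) {
        // give each worker thread one contiguous chunk of the iteration range
        const int chunkStart = start + count * t / numThreads;
        const int chunkEnd = start + count * (t + 1) / numThreads;
        if (chunkStart < chunkEnd)
            workers.push_back(std::thread(ComputeEdgeRange, chunkStart, chunkEnd));
    }
    for (size_t i = 0; i < workers.size(); ++i)
        workers[i].join(); // wait for all chunks, like the implicit barrier at the end of an OpenMP loop
}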

Except, well, OpenMP doesn’t work with the current Xcode 4.5 compiler (clang).

Initial parallel loop using GCD

The GCD documentation suggests using dispatch_apply to submit a number of jobs at once; see the “Performing Loop Operations Concurrently” section. This is easy to do:

void OsdGcdComputeEdge(/*...*/ int start, int end, dispatch_queue_t gcdq) {
    // replace for loop with:
    dispatch_apply(end-start, gcdq, ^(size_t blockIdx){
        int i = start + blockIdx;
        // compute i-th edge
    });
}

See full commit here. That was easy. And slower than single threaded: 47ms with GCD, compared to 33ms single threaded. Not good.

OpenMP looks at the whole loop and hopefully partitions it into a sensible number of subsets for parallel execution, whereas GCD’s dispatch_apply submits each iteration of the loop to be executed in parallel. This “submit stuff to be executed on my worker threads” is naturally not a free operation and incurs some overhead. In our case each iteration of the loop is fairly simple – it pretty much does a weighted average of some vertices. The dispatch overhead here is probably higher than the actual work we’re trying to do!

Better parallel loop using GCD

Of course the solution here is to batch up the work items. Imagine that the loop processes, for example, 16 items (vertices, edges, …), then goes on to the next 16, and so on. These “packets of 16 items” would be what we dispatch to GCD. At the end of the loop we might need to handle the remaining items, if the number of iterations was not a multiple of 16. In fact, this is exactly what the GCD documentation suggests in the “Improving on Loop Code” section.

All OpenSubdiv CPU kernels take “start” and “end” parameters that are essentially indices into an array, telling them where to do the processing. So from our GCD blocks we can just call the regular CPU functions (see the full commit):

const int GCD_WORK_STRIDE = 16;

void OsdGcdComputeEdge(/*...*/ int start, int end, dispatch_queue_t gcdq) {
    // submit the work to GCD in batches of GCD_WORK_STRIDE items
    const int workSize = end-start;
    dispatch_apply(workSize/GCD_WORK_STRIDE, gcdq, ^(size_t blockIdx){
        const int start_i = start + blockIdx*GCD_WORK_STRIDE;
        const int end_i = start_i + GCD_WORK_STRIDE;
        OsdCpuComputeEdge(/*...*/, start_i, end_i);
    });
    // handle the trailing block that's smaller than our batch size
    const int start_e = end - workSize%GCD_WORK_STRIDE;
    const int end_e = end;
    if (start_e < end_e)
        OsdCpuComputeEdge(/*...*/, start_e, end_e);
}

This makes the 4th subdivision level of the car model compute in 15 ms:

So that’s twice as fast as the single threaded implementation. Is that good enough or not? My machine is a dual core (4 thread) one, so this is within my ballpark of expectations. Maybe it could go higher, but for that I’d need to do some profiling.

But you know what? Take a look at the other numbers – 62 milliseconds are spent on “CPU Draw”, so clearly that now takes way more time than the actual subdivision. Fixing that will have to wait for another time, but suffice it to say that reading data from GPU vertex buffers back into system memory each frame might not be a recipe for efficiency.

There’s at least one place in the above “GCD loop pattern” (hi Christer!) that might be improved: dispatch_apply waits until all submitted jobs are done, but to compute the trailing block we don’t need to wait for the other ones. The trailing block could be incorporated into the dispatch_apply loop, with a better computation of the end_i variable. Some other day!
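For completeness, a sketch of what that folded-in version might look like (untested, with the same /*...*/ placeholders as above; the block count is rounded up and end_i is clamped to end):

void OsdGcdComputeEdge(/*...*/ int start, int end, dispatch_queue_t gcdq) {
    const int workSize = end - start;
    // round the number of batches up so the last, partial one is included
    const int numBlocks = (workSize + GCD_WORK_STRIDE - 1) / GCD_WORK_STRIDE;
    dispatch_apply(numBlocks, gcdq, ^(size_t blockIdx){
        const int start_i = start + (int)blockIdx * GCD_WORK_STRIDE;
        // clamp the last batch so it doesn't run past the end
        const int end_i = start_i + GCD_WORK_STRIDE < end ? start_i + GCD_WORK_STRIDE : end;
        OsdCpuComputeEdge(/*...*/, start_i, end_i);
    });
}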


Adventures in 3D Printing

I shamelessly stole the whole idea from Robert Cupisz and made some 3D printed earrings. TL;DR: raymarching, marching cubes, MeshLab.

Now for the longer version…

Step 1: pick a Quaternion Julia fractal

As always, Iñigo Quilez’ work is the definitive resource. There’s a ready-made GLSL shader for raymarching this fractal on ShaderToy (named “Quaternion”), however the current state of WebGL doesn’t allow loops with a dynamic number of iterations, so it does not quite work in the browser. The shader is good otherwise!

Step 2: realtime tweaking with raymarching

With some massaging I’ve brought the shader into Unity.

Here, some experimentation with the parameters of the fractal (I picked 7.45 for the “time value”), as well as extending the distance function with a little torus for the earring hook, etc.
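For illustration, “extending” a distance field like that is just the standard union-is-minimum trick. A CPU-side sketch (made-up torus radii and placement, and sdJulia standing in for the fractal’s distance estimator):

#include <cmath>

// Signed distance to a torus lying in the XZ plane, centered at the origin
// (R = major radius, r = tube radius).
float sdTorus(float x, float y, float z, float R, float r) {
    float q = std::sqrt(x * x + z * z) - R;
    return std::sqrt(q * q + y * y) - r;
}

float sdJulia(float x, float y, float z); // the quaternion Julia distance estimator

// Combined field: the fractal plus a small torus for the earring hook.
float sdEarring(float x, float y, float z) {
    float dFractal = sdJulia(x, y, z);
    float dHook = sdTorus(x, y - 1.1f, z, 0.15f, 0.04f); // hypothetical placement & size
    return std::fmin(dFractal, dHook); // union of two distance fields = min
}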

Keep in mind that while a fractal might look nice, it might not print well because of too-thin walls. All materials have a minimum “wall thickness” – for example, silver printed at Shapeways has a minimum thickness of 0.6-0.8 mm. So I had to make the shape somewhat less interesting.

This leaves us with a signed distance field function (in the form of a GPU shader). It needs to be turned into an actual 3D model.

Step 3: marching cubes

Welcome, old friend Marching Cubes! I couldn’t find anything out of the box that would do “here’s my distance field function, do marching cubes on it”, so I wrote some quick-n-dirty code myself. I started with Paul Bourke’s classic code and made it print everything into an .OBJ file.
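In case it helps anyone, the driver loop for that is conceptually very simple. A sketch (not my actual code; distanceField and polygoniseCell are hypothetical stand-ins for a CPU port of the shader’s distance function and for a Bourke-style “polygonise one cell” routine):

#include <cstdio>

// Hypothetical helpers, just for illustration:
// distanceField()  - CPU port of the shader's signed distance function
// polygoniseCell() - Bourke-style polygonisation of one grid cell, appending
//                    "v"/"f" lines for the generated triangles to the .OBJ file
float distanceField(float x, float y, float z);
void polygoniseCell(const float corner[8], float x0, float y0, float z0,
                    float cellSize, FILE* objFile);

void buildMesh(FILE* objFile) {
    const int N = 256;       // grid resolution
    const float size = 2.0f; // world-space extent of the sampled volume
    const float cell = size / N;

    for (int z = 0; z < N; ++z)
    for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x) {
        // world-space position of this cell's minimum corner
        const float x0 = -0.5f * size + x * cell;
        const float y0 = -0.5f * size + y * cell;
        const float z0 = -0.5f * size + z * cell;

        // sample the distance field at the cell's 8 corners
        float corner[8];
        for (int c = 0; c < 8; ++c)
            corner[c] = distanceField(x0 + (c & 1) * cell,
                                      y0 + ((c >> 1) & 1) * cell,
                                      z0 + ((c >> 2) & 1) * cell);

        // emit triangles wherever the zero isosurface crosses this cell
        polygoniseCell(corner, x0, y0, z0, cell, objFile);
    }
}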

Here’s a non-final version of the distance field, gone through marching cubes, and brought back into Unity:

At this point I realized that the output would be quite noisy and some sort of “smoothing” would have to be done. I made a quick attempt at doing something in 3dsmax, but it is really no good at dealing with more than a million vertices at a time. Just doing a vertex weld on a million-vertex model was taking two hours (?!).

Step 4: filtering in MeshLab

Some googling led to MeshLab, which is all kinds of awesome. And open source (which means the UI is not the most polished, but hey, it works).

Here’s my final model, as produced by marching cubes, loaded in MeshLab:

It’s still quite noisy, has several thin features and possibly sharp edges. Here’s what I did in MeshLab:

  • Remove duplicate vertices
  • Filters -> Remeshing, Simplification and Reconstruction -> Surface Reconstruction: Poisson. Entered 8 as octree depth, left others at default (solver divide: 6, sample per node: 1, surface offsetting: 1).
  • Scale the model to be about 26mm in length. Scale tool, measure geometric properties filter, freeze matrix.

Did I say MeshLab is awesome? It is.

Step 5: print it!

Export the smoothed model from MeshLab, upload the file to a 3D printing service and… done! I used Shapeways, but there’s also i.materialise and others.

Here is the real actual printed thing!

I’ve been doing computer graphics since, well, the last millennium. And this is probably the first time this “graphics work” has directly ended up in a real, actual thing. Feels nice ;)