Thursday, 17 December 2009


We code our games and put in asserts to catch when things go wrong. We put the game to the test, then when it comes to release, we turn off asserts and go about optimising and submitting the game/app to the testers/users.

So, when you train to ride a motorcycle, you should wear a helmet, a jacket, and special boots, because you don't know how to ride. But once you can ride, you might as well go around in flip flops and shorts.

Why don't these two things match up?

In console games, we've got to make the game work, almost 100%, or else. However, during the 99% of our time spent building it before final release day, we should leave asserts on even in the release code. I'd go so far as to say that as games can be patched on consoles now, it would be good to have a crash actually let the user submit a bug report. Just like Windows apps keep on trying to do, but instead of actively doing it, maybe we should build it into the game so it silently reports bugs if it can. A bit like "document recovery" if you like.


The pre-processor provides some predefined macros: __FILE__, __LINE__, and __COUNTER__ (the last is a compiler extension rather than standard, but MSVC, GCC, and Clang all support it). The first two give you locality information; __COUNTER__ just increments each time it is expanded, so its value is only meaningful relative to its other uses.

If you read my earlier post on the nastiness of the ## token-pasting operator in the pre-processor, then you'll be able to guess what this is going to be used for.

multiple uniquifying identifier function(issimo)

#define APPENDER(x) PASTER1(x,__COUNTER__)
#define PASTER1(x,y) PASTER2(x,y)
#define PASTER2(x,y) x##y

Okay, now you have a system for generating identifiers that are unique within that file. Go team.

Tuesday, 8 December 2009

Perfect Code

If you write a basic code class, such as a stack, a list, or a queue, you can get the code to be clean, readable, and fast. You can optimise it until it is very small and fast while keeping it readable too.

You can perfect the code. You have the capacity to actually make something that cannot be improved. This is because there is a finite number of ways to do it and the finite number is not too large for human time scales.

Now, in your mind, imagine doing that for a full size development project. Can you see the towering complexity of doing it perfectly in your mind yet? Okay, well, that's what's called clean room development. It takes considerably more time than normal development, about 100-200% more, but returns almost perfect complex systems. Almost perfect, because at some point a human will have made a mistake or an assumption. There is no escaping errors when the possible set of solutions is so large that even counting it would take a multitude of universe lifetimes.

So, should we adopt the clean room development model? Can we adopt it? I don't think we can, or even should, because one of the main features of a clean room project is that the full project definition is known about from day 1. That's something we never have.

Now, consider bridge building. Would you submit to clean room development on a bridge, where lives are at stake should the bridge collapse? Would you spend a little more time making sure that everything was perfect and there were generous over-tolerances in all the materials used? Yes should be your answer. And do you think that real world mechanical and structural engineers use the clean room development model?

They don't. They get a half finished proposal written on a napkin, are told an arbitrary budget and time frame, and get started. Just like us. The only benefit they have is that their product can't get exponentially more complex after the initial design.

If the only difference between physical engineering and software engineering is linear versus exponential complexity, that alone is enough to explain why clean room development is so successful in programming. It stems the complexity bleed we get in computer software. It stops the applications or games from getting out of hand. It makes the job more linear.

So, now I ask again, should we use the clean room development model?

I think we need something like it, if not it. We need something to stop the constant complexity increase. What can that be? Can we rein in our apps and games to be made of "materials" that don't grow in complexity? Can we do this with a well engineered architecture model that allows modules to be used without adding any more than a linear increase in complexity?

Behaviour oriented development might be a solution to this for games, where much of the complexity comes from relatively independent subsystems connecting only where it's strictly necessary. Behaviour oriented development also allows for a simpler upgrade path if the game is developed iteratively with napkin design changes.