Friday, 24 September 2010

Invent a new spline

For some reason or another, I had to invent a spline, and not a normal four control point one, but a three control point variant. It had to go through all three points, so a three point Bezier was out (a quadratic Bezier only interpolates its end points). I decided to constrain the spline so that the tangent would match the end point difference, like Catmull-Rom splines do.

So, here is my new spline.

P(t) = P0*(2t^2 - 3t + 1) + P1*(1 - (2t-1)^2) + P2*(2t^2 - t)

P(0) == P0
P(0.5) == P1
P(1) == P2

At t=0.5 the tangent is equal to (P2-P0)

This should make for a reasonably useful curve.
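The basis above can be checked with a minimal 1D sketch (apply it per component for 2D/3D points; `evalSpline` is just a name I've picked for illustration):

```cpp
#include <cassert>

// Quadratic interpolation through three points P0, P1, P2,
// hit at t = 0, t = 0.5 and t = 1 respectively.
float evalSpline(float p0, float p1, float p2, float t)
{
    float b0 = 2.0f * t * t - 3.0f * t + 1.0f;           // 1 at t=0, 0 at t=0.5 and t=1
    float b1 = 1.0f - (2.0f * t - 1.0f) * (2.0f * t - 1.0f); // 1 at t=0.5, 0 at the ends
    float b2 = 2.0f * t * t - t;                          // 1 at t=1, 0 elsewhere
    return p0 * b0 + p1 * b1 + p2 * b2;
}
```

The three basis functions sum to one for all t, so the curve stays inside the affine span of the control points.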


Tuesday, 24 August 2010

How fast is your debug build?

Why is your debug build slow? Lots of asserts? Awkward templates that only compile out to sensible stuff in release?

It's only a theory right now, but I think that if your game is running like a dog in debug, maybe there's something wrong with the number of branches you're doing. Not just in debug, but in general.
Asserts are the usual cause of slowdown (they pollute the cache, add branches, and waste memory), but they are valuable as warnings when things are going wrong. They provide great protection, but they aren't necessary if you make sure that the condition that would normally trigger them cannot arise in the first place.

One of the most common uses I've seen for an assert is checking for NULL. This can be avoided by making your queries return null objects (dummy objects) instead, or by making the query inherently null-proof (use a queued-up todo list rather than a fetch-then-check for work to do).
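A minimal sketch of the null-object idea, with made-up names (`Texture`, `findTexture`): an unknown lookup yields a shared dummy instead of NULL, so callers never need the check or the assert.

```cpp
#include <cassert>
#include <string>

// Hypothetical resource lookup that never returns NULL: unknown
// names yield a shared dummy object, so callers skip the null check.
struct Texture {
    std::string name;
    bool isDummy;
};

static Texture g_dummyTexture = { "dummy", true };
static Texture g_grass = { "grass", false };

Texture& findTexture(const std::string& name)
{
    if (name == "grass")
        return g_grass;
    return g_dummyTexture; // null object: safe to use, draws nothing useful
}
```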

Another slowdown can come from templates that aren't being fully expanded or optimised in debug code. This is a lot simpler to fix: stop using template meta-programming. Your compile time will go down too. Template meta-programming often stands in for moving wholesale to scripted development; consider the problem your meta-programming is solving in the light of a scripted environment.

On the PS2, we never saw that big a difference between the final build and the debug build. This was mostly down to the fact that the PS2 was spending all its time streaming data from one place to another (no nulls in the middle of a stream), and processing it with data-driven transforms (no meta-programming for us, just plain coded transforms driven by data demand). The only code that ever got much faster was the C++ class style math library (which was hilarious to watch as it went from being slower than hand coded vector unit code by a factor of four to being faster than the hand coded vector unit code by a factor of two.)

Tuesday, 10 August 2010

Hidden Branches

How often do we use if's and function calls without thinking about the cost? Quite often I'd bet. But in addition to the branch operations that are explicit, there are some that are hidden, or implicit. Consider the humble logical and operation &&

if( A==1 && B==2 && C==3 )
// I needed to branch at least once to get here, possibly two times, sometimes three.

consider the alternative

resultA = A-1;
resultB = B-2;
resultC = C-3;

if( !( resultA|resultB|resultC ) )
// I needed to branch once to get here

This is better because it's more consistent: there is no short-circuiting going on. Short-circuiting paid off when instruction count was the dominant cost; on modern pipelined CPUs the extra branches often cost more than the arithmetic they skip. Short-circuiting still matters when the arguments in the chain depend on previous items in the chain:

if( pThing && pThing->GetStuff() && pThing->GetStuff()->IsOkay() )

But this coding style is slow and should be used sparingly.
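The two styles can be put side by side in a compilable sketch (function names are mine). Both compute the same answer; the combined form ORs the three residuals first, so only one test is needed at the end.

```cpp
#include <cassert>

// Short-circuit version: up to three conditional branches.
bool allMatchBranchy(int a, int b, int c)
{
    return a == 1 && b == 2 && c == 3;
}

// Combined version: the three results are ORed together first,
// so only a single test is needed at the end. The OR of the
// residuals is zero if and only if all three are zero.
bool allMatchCombined(int a, int b, int c)
{
    int ra = a - 1;
    int rb = b - 2;
    int rc = c - 3;
    return (ra | rb | rc) == 0;
}
```

Note the trade: the combined version always evaluates all three operands, which is exactly why it must not be used when later operands are only valid if earlier ones succeed.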

Tuesday, 3 August 2010

Mother of invention

Developing your game through a data and transform centered design philosophy leads to short cuts around things that have been ubiquitous in games development for some years.

Managers: Managers for tweakable variables, for resource loading, for inventories, bad guys, weapons and debris.
Setup functions: Setup functions or scripts for menu systems and user interfaces, for scene trees, for levels, for rendering subsystems, and shaders, and player input configuration.
Ticks: Ticks for entities, for pre render phases, for pre physics phases, for post frame memory defragging, physics, rendering, sound and asynchronous file IO.

These are artifacts of the coding style that C++ in games has given us. They, like their design patterns cousins, offer us an insight into what's wrong with the language as much as they help us get stuff done.

Managers vanish when making an item is a simple case of adding a row to a table, deleting the item is simply removing the row, and getting a reference to an item is just holding its primary key in another table. This explains why databases never had managers for their tables. They just had the tables. Everyone knew how to access the tables, and helpful people wrote helpers to access them in terms of stored procedures (which are like macros for databases) so that changing the access pattern was decoupled from the use of the data.
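A tiny sketch of the table-instead-of-manager idea, with invented names (`WeaponRow`, `addWeapon`): the "manager" is just a map from primary key to row, and other systems hold only the key.

```cpp
#include <cassert>
#include <map>
#include <string>

// A "manager" reduced to a plain table: rows keyed by a primary key.
// Another table refers to a weapon only by holding its WeaponId.
typedef int WeaponId;

struct WeaponRow { std::string name; int damage; };

std::map<WeaponId, WeaponRow> g_weapons; // the table IS the manager

WeaponId addWeapon(const std::string& name, int damage)
{
    static WeaponId next = 1;
    WeaponRow row = { name, damage };
    g_weapons[next] = row;       // create = insert a row
    return next++;               // the caller keeps only the key
}
```

Deletion is `g_weapons.erase(id)`; lookup is `g_weapons.find(id)`. No Create/Destroy/Get boilerplate class around it.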

Setup functions seem to vanish as what was setup now becomes the main loop, or the set of transforms for a particular way of doing something. In SQL, you define the tables at the beginning, and after that you don't add new ones unless something major changes. In games development that's akin to having globals for all your tables, with the main loop statically coupling transforms to process the data on a frame by frame basis. The code is the design, and therefore doesn't need to change unless something significant about the game changes. After the rubbish has boiled off, you naturally end up with the real data driving the game (the scripts, the level data) but none of the unnecessarily dynamic schema-changing code that plagues games development built by people with good intentions.

All the various ticks vanish as the very idea of a tick becomes quite obsolete in the face of transforming data over the designated frame or other interval. Tick functions and schedulers are for systems that don't know what they're ticking. If your design is in the code, you don't need a tick, you just need to perform the sequence of transforms to produce your frame output or update your network or game logic.

In conclusion, big helpful managers, setup functions and scheduling tick systems are all symptoms of object oriented development in C++ in games. They're not inherently good or bad, but they do point to a problem, because they are not necessary.

Monday, 2 August 2010

Entity, tick thy self!

When developing in OO C++, you tend to think "entities, bunches of them, then tick them once a frame"...
However, I have come across TickAll(), Think(), Advance(), Update(), PreUpdate(), PreRender(), Step() and many other functions in my history of games development with C++. Not all the tick functions were even OOD, but the ideas behind why they were ticking generic entities were definitely coming from some OOD way of thinking.

Many engines have a form of scheduling their entities to do things in some specific order, so data is ready just in time for the next stage of its processing. Think about how your game engine ticks movement before doing collisions, then after collisions ticks responses to collisions... so you're already serially ticking sub-parts of your entities. The funny thing is that you're normally ticking the entities that aren't involved in physics along with the ones that are, just in some post-physics or pre-rendering tick because you can't think where else to put them. More often than not it's a function that used to be called MainTick(), but is now called PostMovePreRender() because other ticks needed to be added to sync up data in stages over the years the engine's been evolving.

Simple example:
1. Do movement on all the moving things.
2. Do collision on all the colliding things.
3. Do collision response on all the collided things.
4. Do a general tick based on general state.

The object oriented approach is normally to MoveTick all objects in the moving things list (if you have the foresight to actually have an awake list), then Collide in the collision system (that caches and spatially coordinates stuff its own way), then CollisionResponse all objects that emerged from that tick, then GeneralTick all objects in the system that aren't hibernating. This stalls the things that don't interact from updating until their interactive brethren have finished poking each other.

It works, but it's a series of calls. So when you get around to re-writing your code as data oriented, remember that you were already calling a series of functions even when doing OO, so don't look down on a long list of function calls as if it's something horrible you'd never have done in the days of object oriented. You did before, you will again.

Splitting out the ticks like this is a habit of necessity. Coding, for games, like many successful games themselves, is a set of simple rules and a very large number of exceptions. Thinking you can get away with one Tick method is like saying that "if you hit, you deal damage"... except that's not always true.

It should also be considered as another reason why everyone should move away from entities containing all their data. Entities containing data for all circumstances is bad, simply because if you put in everyone's exceptions, your data size goes up unnecessarily. Also, if your entities share data (simple stuff like their position) then an update function has to be called before or after some other tick. That's a stall somewhere. If instead you assume prev-frame data is good enough, then you can expose a read-only reference for all the things using the entity's position, while still updating the physics position multiple times during collision and response, before committing it ready for the next frame.
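The prev-frame idea is just double buffering, which a few lines can illustrate (field names are mine): readers see last frame's committed value while physics mutates the working copy as often as it likes.

```cpp
#include <cassert>

// Double-buffered position: posRead is what everyone else reads this
// frame; posWrite is the physics scratch value, updated many times
// during collision and response, then committed once at frame end.
struct Entity {
    float posRead;
    float posWrite;
};

void commit(Entity& e) { e.posRead = e.posWrite; }
```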

A series of logical steps at a high level isn't bad, in fact, having your main thread a series of discrete steps through the logical transforms gives you massive benefits when it comes to debugging, performance analysis, and tuning. Consider this: if you start out by knowing how everything fits together you can reason about the whole system better and make much larger changes without fear of damage, and a general idea of what to fix/change.

Just because it seems easy, doesn't make it wrong.

Monday, 7 June 2010

Moving away from OO

One of the pitfalls that OO coders fall into when doing data oriented development seems to be trying to access the outside world while stream processing. One thing that's come up numerous times on talks about components before has been getting the interoperability between components right. That was a problem when components were objects, but if you're thinking data oriented those issues go away.

In OO, if part A calls in part B, it can happen like this:

void partA::Tick()
{
    /* some body */
    if( this->SomeCondition() )
        /* ... reach over and poke partB directly, right now ... */
    /* more body */
}

I've seen this in numerous places, from rendering systems to AI code.

in data oriented development, you can do this:

InputOutput( partA* ), Output( partBProcessRequest* ) -> Tick()
InputOutput( partB* ), Input( partBProcessRequest* ) -> Process()

Keeping the partB process requests this way, it's possible to use them to preload the cache with the right partBs so that processing doesn't stall, and you avoid the extra I-cache misses of bouncing between the scope of partA's tick and partB's processing.

Friday, 14 May 2010

Bit masks for asking questions

Suppose you have some state questions that give answers, such as "taller than six foot", "have brown hair", "weighs under 100kg", "is married"

these questions can be asked of a large population, and if you represent the answers on a bit field, you get a simple to scan list of ints for answers to questions.
e.g. b1101 is a brown haired heavy tall person that is married, and b0000 is a typical coder with non-brown hair ;)

if you want to find all the people with brown hair that weigh under 100kg, then you can just use a simple mask: b0110, and check for equality.

answer is the bitfield for the answers for one element / person
question is the question bit field

if( (answer&question)==question )
  // is what I am looking for.

this is okay for situations where you want all positive answers, but what about negative ones? how do you find the single people under six foot?

if( (answer&question) == 0 )
  // is what I'm looking for.

this is okay, but what about combining the two? say you want to know how many people are unmarried brown-hairs?

if( (answer&interest) == question )
  // is what I'm looking for.

what's interest? it's the bit field of question bits I'm interested in. So, for the last question, unmarried-brown-haired: Interest b0101, Question b0100.

this effectively gives you tri state checking on binary bit fields of information.
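The whole scheme fits in one line of C (note the parentheses: == binds tighter than & in C and C++, so the masks must be bracketed):

```cpp
#include <cassert>

// Tri-state match: 'interest' selects which bits we care about,
// 'question' gives the value each cared-about bit must have.
// Bits outside 'interest' are don't-cares.
bool matches(unsigned answer, unsigned interest, unsigned question)
{
    return (answer & interest) == question;
}
```

Using the post's bit order (tall, brown, under-100kg, married from high bit to low), "unmarried brown-hairs" is interest b0101 with question b0100: the brown bit must be set and the married bit must be clear.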

One more example:


put in "preCalcSkinningList" I110 Q110 // don't skin calc for unchanging meshes
put in "doLightingList" I101 Q001 // don't pre-light skinned meshes
put in "renderStaticList" I110 Q100 // render non-animated skins as statics
put in "renderStaticList" I100 Q000 // everything else is static
put in "renderSkinnedListUnlit" I111 Q110 // and skinned rendering too
put in "renderSkinnedListLit" I111 Q111

Monday, 22 March 2010

Ronseal Rule

Now here is something you should think about every time you write a function. Does the function do what it says on the tin?

It's an important thing to think about. Some code is like this:

int CountThings()
{
    // iterate through things
    // while there, update their cache stuff
    // and if one needs deleting, do it now
    // return the count
}

That's pretty bad, but in a way, it's what a lot of systems do. Okay, so there are some optimisations to be made. Now, how about this one?

Thing *GetThing()
{
    return mThingPtr;
}

That's not Ronseal either. And it's prevalent in many systems I've worked on (including my own, oops).
No, I mean it. It's not Ronseal.
No, really. Have a look at what it's doing. It's not actually returning a Thing, it's returning a pointer to a thing...
You think I'm being picky? Well, most of the time I'd say you're right, but, what if I told you I could refactor it into two different functions?

Thing& GetThing()
{
    return *mThingPtr;
}

bool ThingExists()
{
    return mThingPtr != NULL;
}
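A call site then reads as two honest questions instead of one pointer check. A self-contained sketch (the `Holder` wrapper and `readValueOrDefault` are mine, standing in for whatever class owns `mThingPtr`):

```cpp
#include <cassert>
#include <cstddef>

struct Thing { int value; };

struct Holder {
    Thing* mThingPtr;
    bool ThingExists() const { return mThingPtr != NULL; }
    Thing& GetThing() { return *mThingPtr; }  // Ronseal: returns a Thing
};

int readValueOrDefault(Holder& h)
{
    if (!h.ThingExists())
        return -1;
    return h.GetThing().value; // safe: existence already established
}
```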

Did you have an "ah-ha" moment there?

Okay, now think about this from the point of view of all those function calls you do to get objects, then check for NULL on return... You've had to infer two pieces of information from one call return value.
Now, go and fix your code.

Sunday, 21 March 2010

Defensive programming is offensive programming

Some people advocate defensive programming, thinking it's better that a system carries on working, logs the fault, and continues on merrily. This is okay for any programming where performance isn't of the utmost importance, and where you don't mind shipping your software riddled with bugs that have all been caged. What it's not good for is any software that needs to be really safe, or really fast.

Why does it slow stuff down? The first reason is that it's usually code that checks return values from get functions for null, or that tries to handle illegal or irregular arguments.

The if-not-null pattern is bad because it is an inherent indirect access (to fetch the value) followed by a probably-predicted branch (the null test). Constantly getting pointers to things and checking them for null is just going to thrash your branch predictor and memory to death. It's offensive to the cache, and offensive to in-order processors in general.

What to do instead? Use asserts. Assume things are not null and carry on regardless. Make your game break when things are actually going wrong. What is wrong with finding out it's all broken a year before you release rather than a day after?
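A typical game-style assert, sketched here with a made-up macro name (`GAME_ASSERT`): loud in development builds, compiled away entirely in release so it costs nothing when it ships.

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

#ifdef NDEBUG
#define GAME_ASSERT(cond) ((void)0)   // release: no branch, no string, no cost
#else
#define GAME_ASSERT(cond) \
    do { if (!(cond)) { std::printf("assert failed: %s\n", #cond); std::abort(); } } while (0)
#endif

int divide(int a, int b)
{
    GAME_ASSERT(b != 0);  // assume non-zero and carry on regardless
    return a / b;
}
```

The point is the failure mode: in development the game breaks loudly at the moment things go wrong, instead of limping on with the fault caged.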

Friday, 19 March 2010

How I do the washing up.

Here is how I do the washing up.

  • Take one dirty thing from the pile of dirty things
  • if it looks as if it needs food scraping off:
  • I grab my scraping thing, walk to the bin, scrape off the crud, return to the sink, put down the scraper.
  • if it still looks dirty
  • I fill the bowl to the necessary level to wash up the item, put on gloves, wash it up, put it on the drainer, empty the bowl, take off gloves.
  • if it is now wet
  • I grab a drying towel, dry the item, put the towel back down
  • then as it must be clean by this point, I put it away where it belongs, then return to the sink ready to start all over again.
No, hang on, that's not how I wash up, that's how I code with virtuals. Doh. Silly me.

Thursday, 18 March 2010

A quote and a rethink

"Rule of Modularity: The only way to write complex software that won’t fall on its face is to build it out of simple modules connected by well-defined interfaces, so that most problems are local and you can have some hope of fixing or optimizing a part without breaking the whole." - The Art of Unix Programming

Now, looking at what's been going on with data-oriented development I see that there are some words that though at the time pertinent, actually cause an inflexibility of interpretation. An inflexibility that will allow many to point and laugh at the data-oriented crowd. The problem is with the natural interpretation of modules and interfaces.

Modules and interfaces were just a way of saying break down the complex stuff into smaller easier to manage stuff, but the meaning has been lost as we have solidified what a module and an interface means. Let's go back and think about this again. Breaking a problem of many entities down was solved a long time ago by database engineers, they invented many tools, and even created a generic but powerful interface to manipulate their data. This could be called modularising the idea of persistence. SQL was the interface.

We can do the same: ignore the words "modules and interfaces" and instead concentrate on the idea: separated and distinct techniques for processing objects.
If we allow ourselves to define objects as the streams or arrays of data, then all we need to do is write a lot of processes that operate on them. Each process does something, usually somewhere between simple and complex, as we don't want to waste data throughput on tediously simple problems (which is what OO normally advocates), and we don't want too much data per row (as that will cause cache issues). This is why the data-oriented approach works really well with the old Unix programming quote, but only if you try to distil the essence of the quote, not just use the words blindly.

So, a refactored quote.

"Rule of Modularity: The only way to write complex software that won’t fall on its face is to build it out of simple data definitions connected by well-defined transforms, so that most problems are locally defined and you can have some hope of fixing or optimizing a single link without breaking the whole chain." - The Art of Unix Programming (revisited for modern hardware architecture - me)

Wednesday, 17 March 2010

Right Shift for the win

I have learned something new today.

Something that had been a "don't know, won't assume" has finally solidified into a fact: the right shift operator maintains the highest bit on signed types.
I didn't know whether this was true, or whether it was somehow costly, and I remembered cases where it wasn't true; there is a difference between an arithmetic right shift (signed) and a plain logical right shift (unsigned). Now, the important thing here is: if you can propagate the sign bit, you can produce a mask.

i.e., any negative number right shifted by 31 (while identified as a signed int of 32 bits), is -1, or, the all inclusive mask. Any positive number, right shifted by 31 is 0, or, the all exclusive mask.

Now, apply that logic to branches and you get a good general branchless technique for value manipulation. Remember, the most significant bit is only maintained when you use signed types; don't go using unsigned ints.
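For instance, a branchless abs built from the sign mask. One hedge: strictly, right-shifting a negative signed int is implementation-defined in C and C++, but the common compilers all emit an arithmetic shift, which is what this trick relies on (it also assumes 32-bit int).

```cpp
#include <cassert>

// Branchless abs using the sign mask.
int branchlessAbs(int x)
{
    int mask = x >> 31;          // 0 for non-negative, -1 (all ones) for negative
    return (x + mask) ^ mask;    // two's-complement negate when mask is -1, no-op otherwise
}
```

The same mask can select between two values (`(a & mask) | (b & ~mask)`), clamp, or zero things out, all without a branch.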

Have fun!

Tuesday, 23 February 2010

STL and project development speed

Hidden Costs:

I have been a fan of trying to figure out what makes development pace faster, quickening the development of any project and quickening the development of all future ones too. I tended to look for solutions that made development less about getting it done and more about reducing the amount of work to get it done. This was probably a rebellion against tendencies in games to just get it done.

Recently, looking at how my brain works in relation to really good high level languages such as Python and C#, I can see why I made some huge mistakes with my first true foray into the world of STL. The idea of things being objects in high level languages makes the idea of containers equally simple to understand. But STL is not written in a high level language; it's written for a low level language. There is a big problem with this for my way of thinking about problems in high level languages: I think of containers as being able to be passed around as easily as any other type. This led to quite a large inefficiency in my code. I was returning containers as answers to queries.

This is how you do it in high level languages because of the nice object oriented approach to containers and arguments and everything, but in C++ this is a hugely damaging thing that I overlooked simply because it's sensible in other languages. I should have known better, I think I even did, but tried it anyway just to be sure.

The problem I'm having now is: how many junior games coders who have come from learning C# or Java are going to be aware of these kinds of performance problems? How many non-C++ coders are going to be aware of the cost of copying a container? Also, how many won't know of an alternative to using the built-in sort? I've had to write a better (problem-domain-specific) sort for my container, as the STL sort is missing a trick or ten thousand; it's a lot slower than I was expecting for such a small amount of data.

What's actually dangerous about STL?

There are ideas that are easy to write out in STL, and there are ones that are hard to write out. Some of the ideas in STL are great, but things like functors are insane. I want to write less code, not more. Especially not more if it turns out it's actually slower to write it all nice and STL conformant too.
STL does have a lot of nice ideas, but they should be in my coder's idea toolbox, not actual coherent and coupled templated functions and classes that implement those ideas. There are too many situations where STL provides a mechanism by which the problem can be solved, but can't be solved simply, which adds lots of typing without much benefit. Also, when you start using STL, there is a tendency to not roll your own solutions, which can cause you to forget that things could be better.
However, the most dangerous thing about STL from my personal experience is the compilation time problem. This helper library really, really shouldn't come at such a price.

Monday, 22 February 2010


Some of you may already be aware of the quote:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." - Brian Kernighan

I came across this recently at work when I worked on a particularly evil algorithm for visibility checking. The algorithm as a whole is bound by three states, have-recursed, incoming-state, current-state. This, coupled with the fact that the algorithm changes based on each of these states means that the function that handles it looks really large as it has to have six separate code paths. The fact is, it is actually a really big problem space, not as large a problem space as when I wrote my own version of Judy Tables, but big enough. It's the combination of problem space complexity and the verbosity of each of the techniques for finding answers to questions that arise due to the current problem space configuration that means this function bloats into quite a scary pile of ifs and math functions.

I will be attempting to make it readable soon, as a co-worker has pointed out "it doesn't make sense", and that is going to make it impossible to maintain later on. The main thing I have to worry about is that the algorithm seems "okay" to me, but I think half of that is that the maths and the logic flow do make sense to me. But for how long? I'm worried that I might not be able to tell when the code is simple enough. First attempt is going to be trying to function-off the questions and states as much as possible.

Has anyone else had any experience of this? How do you tell when your code is good enough for public consumption?

Friday, 5 February 2010

Database of Flow

I was talking with my friend last night on the way back from London about my editor and how I implemented undos and redos. My initial answer was "use the memento pattern", but I'm not so sure now, as the technique I used extended that idea a little and touched the "command" pattern too.
What I've done is simply base class all model operations with a command class and made all commands push themselves onto a done stack. Undo commits the undo method of the top of the undo stack into the model, then pushes that item onto a redo stack, and pops the undo stack.

Redo just reverses the operation and commits the redo method on the model instead. The command is its own memento.
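A minimal sketch of the command-as-its-own-memento mechanism. For brevity this uses one concrete command type rather than the base class the post describes, and all the names (`Model`, `AddCommand`, `History`) are mine:

```cpp
#include <cassert>
#include <vector>

struct Model { int value; };

// Each command knows how to apply and reverse itself against the model.
struct AddCommand {
    int amount;
    void redo(Model& m) const { m.value += amount; }
    void undo(Model& m) const { m.value -= amount; }
};

struct History {
    std::vector<AddCommand> done, undone;

    void execute(Model& m, const AddCommand& c) {
        c.redo(m);
        done.push_back(c);
        undone.clear();            // a fresh edit invalidates the redo stack
    }
    void undo(Model& m) {
        if (done.empty()) return;
        done.back().undo(m);       // commit the undo onto the model
        undone.push_back(done.back());
        done.pop_back();
    }
    void redo(Model& m) {
        if (undone.empty()) return;
        undone.back().redo(m);
        done.push_back(undone.back());
        undone.pop_back();
    }
};
```

Because the `done` stack is a plain sequence of commands, serialising it to disk (or a database) gives you exactly the replayable session the post goes on to describe.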

This way of working was important for me because I wanted (eventually) to be able to save commands to disk as I was working. This was to be my uncrashable editor. Or at least very easily debuggable editor.

Having recently played with SQLite, I'm thinking of using a database like this to store my commands as well as the data. It's well known to be safe from crashes and general corruption problems, so it should be the basis of a very safe editor system.

If a crash happens, I should be able to just load up the last session, and replay the commands issued up to the point of the crash, which should allow me to debug even final release code by having repeatability.

But the thing that came to me last night was more useful than just that. Being able to replay commands is useful in at least two other ways. Firstly, we have a database of the commands that are used to do jobs. We can replay the actions of a user (even the undos and redos if that's part of the saved structure), and learn about what actions could be done quicker if there were macros or combination actions that did more than what's currently available.

Think of it like sample profiling, find out what the user is spending most of their time on and optimise the operation so it takes less time, or can be automated.

The other thing, by adding other metrics to the commands such as when they were done/called, we can find out which users are most productive, and learn techniques from them. We can learn how to shortcut (minimal actions or time to do a task), and learn about how they get into flow (time between commands). Although this would probably feel like an invasion of privacy to some people in a professional environment, I'm actually a bit fired up at the idea of "time and motion" being done inside the tool you're using. There's nothing wrong with making your job easier!

Tuesday, 19 January 2010

Designing things very carefully.

The Complex System:

We all know that software architecture is important in large projects. We've all envisioned systems to help us maintain larger and larger complex problems with less user or developer input. Scripting systems, shader building tools, data-driven editors, and many other cool things that would take a long time to write properly, but when they're done they're a great benefit to everyone that uses them.

Except half finished, they're worse than useless.

I work in games, and games are highly complex systems of interactions. Games would take forever to write if they were written correctly with all the great and clever architecture they should have, so producers push for a minimum feature set, veto the clever architecture and demand a simpler approach to the solution.

Later on in the project, usually about half way through to maybe three quarters, the project starts to slow down due to growing complexity and difficulty separating sections of code from each other. When production asks why it is, often the coders will say that it's because the way it's written is wrong, it needed more abstraction or better architecture, or just more time spent on the design stage, and less time on just getting features in.

The bad programmers say "oh, noes, we should have designed it betterer from the beginning. We told you that we should have written all the code that was only marginally likely to be necessary, but would have been fun to write and figure out."

However, the good coders would note that only a small proportion of the code that they would have written is now necessary. They will note the time saved by writing code that does the job rather than over-architected super-code that could do anything, and breathe a sigh of relief that all they have to do is one month of overtime getting the final features in place, rather than a year of overtime getting the "correct way to do it" code working at all.

In fact, when I've been on over-engineered projects, the one thing that has come to bite us more than anything else has been the inability to refactor the code because it is too complex or too clever for a single human to understand.

Complexity When Necessary:

There's a big difference between sloppy coding and coding to requirements, but it seems that the longer I spend in software engineering, the more I find people thinking that they are the same thing. I realise that a lot of people architect away because they think reuse is good and encapsulation and data hiding this that and the other, but we don't actually reuse a lot of our code because it's usually game specific, genre specific, platform specific. The stuff that isn't is usually our "we brewed our own because we didn't trust 3rd party software", or just generally us reinventing the wheel for small sections of module specific code. If you're lucky, you'll get a series of games that just need new art; that's real code reuse, never needing to do more than change some constants and recompile for a new game. If you think you can write reusable code, then why haven't you released it as a library (whether for sale, or for free)?

Tell me that it's really important to write a fully data driven engine to power your next game, and I'll remind you that your consoles and PCs already have a powerful and fast, data-driven, flexible, ready-for-anything fully-supported easily-debuggable and profilable, and generally very well documented system. The system is C/C++. Why don't you write some game in it?

Monday, 11 January 2010

Deleting Code

I've learnt that keeping old stuff around is handy, but dangerous.

I've been a hoarder for years, but over the last few months I've come to appreciate reducing code even more than I'd thought possible. I've never been one for liking big bloaty code, but in the end, our company engine at Broadsword got a bit large. Working on a new engine has given me the freedom to destroy. It's been fun and educational. You spend less time maintaining and more time making only the right stuff work right. Join this with my new approach to problems (code it first and fast, then refactor optimise refactor), my game has come along at a startling pace.

Now, any time I want to do something I've done before I'll try to copy in from the old engine (code reuse), but if it's novel, it gets given a bit of time, not much though. Time is the only asset I have available to me, so I always take the very shortest route to the solution, and at the moment I haven't hit any horrible problems because of it. In fact, after only 58 hours of coding (and design and photoshoppery) I have a game demo up on my development blog. I'm both pleased and surprised at how quick I can code (and art and design) when I don't think about the bigger picture all the time. It might be that I've been doing games so long I don't do stupid things any more, but it also might be that "future thinking" is pace destroying. I'm not sure which, but I think that my experience has helped reduce the amount of time necessary to make things happen.

Does that mean we should all train up in large companies then go out and make small applications forever more?

Tuesday, 5 January 2010

Classes in H files

While working on my homebrew game, I noticed a thing I'd never really considered before. I've been thinking about compile times and link times recently and this one clicked as a fresh thought.

When you include your classes in header files, you automatically add complexity in the project through the header includes that are necessary to compile the class, and the link time complexity of the methods of that class being linked if used.

We know that the linker doesn't try to link methods that aren't called, as we've probably quite often seen half-done classes with some methods left unimplemented. So what's wrong with adding class methods? It's quite simple: all the methods that are implemented in a class are subject to being possibly linked at some point, and therefore become one more item in the linker's list of suspects.

That's for every class you write.

There's no simple way of stopping the linker from giving access to class methods. There is a simple solution if you use global functions. You can use static global functions for your implementation details.

This thinking led me to go halfway on my game with my change from putting classes in headers to putting them as worker objects in the source files, and then operating on them with global functions where possible. I think there might be a way to remove the class symbols from the objects, but until I find it, at least I'm ready, and thinking ahead for large-scale development. The class in question was a game level class that contains the stuff for managing the game level. The overall game needs to know very little about the class, just that it's running, and that it can be managed (transitioned) and saved (serialised). This simple interface is quite different from the internal workings, and therefore the header would have had to include all sorts of unnecessary information. I think this is proper data-hiding and encapsulation, rather than the C++ textbook version.
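A sketch of the shape this ends up in, with invented names (`GameLevel`, `addScore`): the header exposes only an opaque type and free functions, while the class definition and its helpers live in the .cpp with internal linkage, invisible to the linker outside that translation unit.

```cpp
#include <cassert>

// --- what the header would expose: an opaque handle and free functions ---
struct GameLevel;

// --- everything below here would live in the .cpp ---
struct GameLevel { int score; };

namespace { // internal linkage: implementation detail, not a linker symbol for other TUs
    int clampScore(int s) { return s < 0 ? 0 : s; }
}

static GameLevel g_level = { 0 };

GameLevel* currentLevel() { return &g_level; }
void addScore(GameLevel* lvl, int points) { lvl->score = clampScore(lvl->score + points); }
int getScore(const GameLevel* lvl) { return lvl->score; }
```

Callers can hold a `GameLevel*` and call the free functions without ever seeing the class layout, so the header pulls in nothing.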

I'll keep tabs on this and see how it goes.