Thursday, 17 December 2009

Asserts

We code our games and put in asserts to catch when things go wrong. We put the game to the test, then when it comes to release, we turn off asserts and go about optimising and submitting the game/app to the testers/users.

So: when you train to ride a motorcycle, you wear a helmet, a jacket and special boots, because you don't know how to ride. But once you can ride, you might as well go around in flip-flops and shorts.

Why don't these two things match up?

In console games, we've got to make the game work, almost 100%, or else. However, for the 99% of the time we spend building it before final release day, we should leave asserts on, even in release code. I'd go so far as to say that, as console games can be patched now, it would be good for a crash to actually let the user submit a bug report. Just like Windows apps keep trying to do, but instead of asking the user, maybe we should build it into the game so it silently reports bugs when it can. A bit like "document recovery", if you like.

__COUNTER__

The pre-processor provides some predefined macros: __FILE__, __LINE__, and __COUNTER__ (the last being a common compiler extension rather than standard C++). They provide you with some locality information; __COUNTER__, however, is only relative to itself, incrementing each time it's expanded.

If you read my earlier post on the nastiness of the ## operator in the pre-processor, then you'll be able to guess what this is going to be used for.

multiple uniquifying identifier function(issimo)

#define APPENDER(x) PASTER1(x,__COUNTER__)
#define PASTER1(x,y) PASTER2(x,y)
#define PASTER2(x,y) x##y

Okay, now you have a unique-within-that-file identifier system. Go team.

Tuesday, 8 December 2009

Perfect Code

If you write a basic code class, such as a stack, a list or a queue, you can get the code to be clean, readable and fast. You can optimise it and make it very small and quick while keeping it readable too.

You can perfect the code. You have the capacity to actually make something that cannot be improved. This is because there is a finite number of ways to do it and the finite number is not too large for human time scales.

Now, in your mind, imagine doing that for a full-size development project. Can you see the towering complexity of doing it perfectly yet? Okay, well, that's what's called clean-room development. It takes moderately more time than normal development, about 100-200% more, but returns almost perfect complex systems. Almost perfect because, at some point, a human will have made a mistake or an assumption. There is no escaping errors when the possible set of solutions is so large that even counting it would take a multitude of universe lifetimes.

So, should we adopt the clean room development model? Can we adopt it? I don't think we can, or even should, because one of the main features of a clean room project is that the full project definition is known about from day 1. That's something we never have.

Now, consider bridge building. Would you submit to clean-room development on a bridge, where lives are at stake should it collapse? Would you spend a little more time making sure that everything was perfect and that there were gross over-tolerances in all the materials used? Yes should be your answer. And do you think that real-world structural engineers use the clean-room development model?

They don't. They get a half-finished proposal written on a napkin, are told a random budget and time frame, and get started. Just like us. The only benefit they have is that their product can't get exponentially more complex after the initial design.

If the only difference between physical engineering and software engineering is linear versus exponential complexity, then that alone is enough to explain why clean-room development is so successful in programming: it stems the complexity bleed we get in computer software. It stops applications and games from getting out of hand. It makes the job more linear.

So, now I ask again, should we use the clean room development model?

I think we need something like it, if not it. We need something to stop the constant increase in complexity. What can that be? Can we rein in our apps and games so they're made of "materials" that don't grow in complexity? Can we do this with a well-engineered architecture model that allows modules to be used while adding no more than a linear increase in complexity?

Behaviour-oriented development might be a solution to this for games, where much of the complexity comes from relatively independent subsystems connecting only where it's really necessary. Behaviour-oriented development also allows a simpler upgrade path when the game is developed iteratively, with napkin-design changes along the way.

Monday, 30 November 2009

Const in arguments.

The const keyword is used in arguments both to make the code more readable and to make it faster. You normally use a const& to show that you want access to a large object but don't want it copied onto the stack (that would mean nasty, big copy-constructor work). However, some people think that const means that something won't be changed...

Okay: const declares that the code about to use the object through its const form won't change it. It's not a declaration that the value won't change.

Consider:

void Foo( int &out, const int &in1, const int &in2 )
{
    out = 0;
    if( in1 > 0 )
    {
        out += in1;
    }
    out += in2;
}

This code initialises the output variable and then adds in1 if it's positive, and in2 regardless.

Does it do what's expected in the following code?

{
    int a = 1;
    int b = 2;
    int c = 5;

    Foo( a, b, c );
    // a now equals 7
}

yes, it does. Now how about this?

{
    int a = 2;
    int b = 5;

    Foo( a, a, b );
    // a now equals 5!?
}

Also, what is a if you call Foo(a,a,a) ? ZERO!

Remember, const doesn't safeguard you against change from the outside. It protects you against change from the things you call, not from the things that call you. Even then, it isn't a complete safeguard, as it's perfectly valid code to const_cast<>() it away.

Have a look at any 3D maths library's implementation of matrix * matrix. Unless it does all its work in registers, it will generally check whether the output matrix is the same pointer as either input matrix, and assert if it is.

Monday, 12 October 2009

Lost Souls of memory management strike forth from the grave

One of the horrible things about memory managers that note down your file and line is the need to differentiate them in some way if they're going to be "if-safe".

By if-safe, I mean safe to use mid-argument. Any clever memory manager takes line and file so it can tell you where your allocations were made. The hierarchical one that I wrote for the platform was okay, but its nature led to needing to invent the "Again" method. That is, I had to because I didn't know about the double indirection of macro token expansion. Of course, I was a fool to think that macros were ever safe, but I didn't realise quite how ghastly they could be.

Would you believe that the code below:

#define ThingOnLine( X ) Concat( int t ## X, __LINE__ )
#define Concat( X, Y ) Concat2( X, Y )
#define Concat2( X, Y ) X ## Y

compiles fine and does what you think it should, but the code below:

#define ThingOnLine( X ) Concat( int t ## X, __LINE__ )
#define Concat( X, Y ) X ## Y

doesn't.

That's right.

Try it.

There's no substitute for looking at an error log and going WTF.

Simply put, __LINE__ doesn't get expanded straight away. The rule is that macro arguments which sit directly next to a ## operator are pasted as-is, without being macro-expanded first. The extra level of indirection makes __LINE__ an ordinary argument of the outer macro, so it gets fully expanded to its number before being handed on to the macro that does the pasting.

Friday, 9 October 2009

CastAssert

I'm going to start adding runtime checks to my casts. Silly?

I'm going to make debug builds check every cast from an int to a char, or from a signed to an unsigned, to see if any data was lost. I bet this ten-minute fix will save me a couple of hours from now on.

Wednesday, 7 October 2009

Inheritance

Your long-lost aunt left you loads of cash, but when her solicitor called to hand over her Rubels, it turned out you'd only overridden the Sterling deposit capability.

I just found a website called the C++ FAQ LITE, and section 23, part 3 has an interesting take on how to solve the problem of overloaded overrides.

class Person
{
public:
    virtual void Deposit( Sterling amount ) { PutUnderBed( ToCash( amount ) ); }
    virtual void Deposit( Rubels amount ) { PutUnderBed( ToCash( amount ) ); }
    virtual void Deposit( Euros amount ) { PutUnderBed( ToCash( amount ) ); }
};

class Me : public Person
{
public:
    // this overrides the deposit mechanism in Person so that it pays straight into my bank account
    virtual void Deposit( Sterling amount ) { PutInBank( amount ); }
};

In this code, the Sterling version doesn't just override its own overload; it hides all of the base class's Deposit overloads in Me. When our auntie pops her clogs and leaves us Rubels, we can't accept, because we've not implemented banking our Rubels.

Handled Differently:


class Person
{
public:
    void Deposit( Sterling amount ) { DepositSterling( amount ); }
    void Deposit( Rubels amount ) { DepositRubels( amount ); }
    void Deposit( Euros amount ) { DepositEuros( amount ); }

protected:
    virtual void DepositSterling( Sterling amount ) { PutUnderBed( ToCash( amount ) ); }
    virtual void DepositRubels( Rubels amount ) { PutUnderBed( ToCash( amount ) ); }
    virtual void DepositEuros( Euros amount ) { PutUnderBed( ToCash( amount ) ); }
};

class Me : public Person
{
protected:
    // this overrides the deposit mechanism in Person so that it pays straight into my bank account
    virtual void DepositSterling( Sterling amount ) { PutInBank( amount ); }
};

Now, Sterling goes into my bank account, but Rubels still go under the bed, and I can be thankful for having a dead auntie.

This idea of letting the base class keep the overload set and reinterpret functionality, while leaving defaults in place, is a nice way of securing your code against accidental conversions to the wrong type (imagine ignoring the warning that an int has been converted to a float, and much later finding out that it called the wrong function instead of doing what you thought it should).

Thursday, 3 September 2009

Imaginary Numbers

can't see em.

okay, so you might know that sqrt(-1) == i, but how?

wrong! the sqrt of any number can be one of two numbers! haha caught you already.

sqrt( -1 ) == i OR -i

oh. okay.

so, given that, what's the sqrt of i, or -i?

well the simplest way of looking at the problem is actually using the 2D plane of the complex number space...

at -1, the square (1) is 180 degrees away.
at i (or -i), the square (-1) is 90 degrees away.
the same can be said of the square root of i or -i: the result is no longer a purely real or purely imaginary number, but a complex number, the unit vector 45 degrees around the circle between 1 and i (or between 1 and -i).

sqrt( i ) == cos(pi/4) + sin(pi/4) * i (or its negative)
sqrt( -i ) == cos(pi/4) - sin(pi/4) * i (or its negative)

i used cos and sin, even though they give the same value at pi/4, because the technique works for any other root of -1.

(-1) ^ (1/rootPower) == cos( pi / rootPower ) + sin( pi / rootPower ) * i

some of you may or may not know that x ^ (1/2) == sqrt( x )... well, you all know now. cube roots are ^(1/3), and beyond that it's fun, but not as easily visualised by the human mind.

Friday, 14 August 2009

Faster Compile Pussycat

The problem we often face in games development is debug cycles: how long it takes between getting into the game and getting back to the problem. There are two issues at stake here: one is how quickly you can get back to where you were once the game code has recompiled; the other is how long it takes to recompile.

I'm going to talk about the latter.

Header file changes. They cost you a rebuild-most if it's a relatively important header, a rebuild-all if it's a base file, and at least a rebuild-few if it's a relatively unused one. If it's only going to cause one file to rebuild, then why is it a header?!?

If you want faster compile times overall, use precompiled headers. Use them to reduce the number of times your compiler has to load up the whole windows.h tree of includes.
Of course, even though this will make your rebuild alls take about 25% of the time they used to take, it will mean that any header change will cause a rebuild all.

Oops.

So, take off precompiled headers?

Well, you can, but you'll get the headache-inducing compilation times back for actual rebuild-alls... except there is an alternative. If you think about how long it takes to compile a simple piece of code, think also about how long it took to link the CRT...

No, I don't mean use libs liberally; that can be cumbersome, tedious to debug, and generally a pain in the what-not. No, just think like a library. A library ends up as one linkable unit built from multiple CPP files, right? Well, you can do that mid-project if you like.

for example:

-- MyFile --

#include "stdafx.h" // for the pch

#include "MyModule.cpp"
#include "MyHelpers.cpp"
#include "MyMaths.cpp"
#include "MyGUI.cpp"
#include "MyDataBase.cpp"
#include "MyLogic.cpp"

-- End --

okay, what does this give us?

it gives us a cpp file that compiles quickly (because it's only including one set of headers), and it compiles to one object that can be linked like any other single CPP object in your project.

so, at the expense of looking after namespace collisions between the included CPPs, we get a vast reduction in recompilation times, approximately the same as using PCHs, while still only doing a rebuild-most when we change a header file.

WIN-WIN... except some people will think it's ugly and bad engineering. Oh well.

Friday, 12 June 2009

Lists of data.

I've got to list out a load of stock items, make an enum entry for each of them, and then add information about each stock type to a set of global arrays. The chance of me forgetting one of the arrays, or getting a position in one of them wrong, is so high that I have to use a system that ensures simpler updates and more secure ordering of the data.

Along comes a technique I learnt a long time ago. Someone called it the X-macro system; well, I still call mine XMACRO, but that's just habit.

have a look at these three files to see how it works.

---- Stock.hxx ----

//XMACRO( xEnum, xName, xStockLow, xStockHigh, xStockPriceLow, xStockPriceNormal, xStockPriceHigh )
XMACRO( ST_FUEL, "Fuel", 20, 200, 3, 5, 8 )
XMACRO( ST_FOOD, "Food", 40, 200, 3, 5, 10 )
XMACRO( ST_WATER, "Water", 40, 200, 3, 5, 10 )
XMACRO( ST_STUFF, "Stuff", 50, 200, 10, 20, 60 )

---- StockTypes.h ----

#ifndef _STOCK_TYPES_H_
#define _STOCK_TYPES_H_

#define XMACRO( xEnum, xName, xStockLow, xStockHigh, xStockPriceLow, xStockPriceNormal, xStockPriceHigh ) xEnum,
enum STOCK_TYPE
{
#include "Stock.hxx"
NUM_STOCK_TYPES
};
#undef XMACRO

extern int gStockLow[ NUM_STOCK_TYPES ];
extern int gStockHigh[ NUM_STOCK_TYPES ];
extern int gStockNormalPrice[ NUM_STOCK_TYPES ];
extern int gStockLowPrice[ NUM_STOCK_TYPES ];
extern int gStockHighPrice[ NUM_STOCK_TYPES ];

#endif // _STOCK_TYPES_H_

---- StockTypes.cpp ----

#include "StockTypes.h"

#define XMACRO( xEnum, xName, xStockLow, xStockHigh, xStockPriceLow, xStockPriceNormal, xStockPriceHigh ) xStockLow,
int gStockLow[ NUM_STOCK_TYPES ] =
{
#include "Stock.hxx"
};
#undef XMACRO

#define XMACRO( xEnum, xName, xStockLow, xStockHigh, xStockPriceLow, xStockPriceNormal, xStockPriceHigh ) xStockHigh,
int gStockHigh[ NUM_STOCK_TYPES ] =
{
#include "Stock.hxx"
};
#undef XMACRO

.......



What we have is a macro being defined, then used to generate lines of information by including a file made solely of those macro calls.
This technique lets you add and remove items very easily: all you have to do is modify one file to add, remove or adjust a line of data.

Tuesday, 2 June 2009

Smoothness


x^2·(3 - 2·x)

this is a simple formula for a smooth curve from 0 to 1, given an input from 0 to 1. The curve has a first derivative of zero at both the start and the end.

Friday, 20 March 2009

Public Global Operator Functions

I've just added this function to mathlib:

inline Vec4 operator*( float scale, const Vec4 &vec )
{
    return vec * scale;
}

Can you recognise the need for this function?

I wrote it because I wanted to scale a vector by a float, but didn't want to have the float trailing the vector. Simple, really. What's even simpler is that this function doesn't need any special access privileges, and it's inlined, so it costs nothing at link time. If we wrote more free functions like this, we'd have cleaner code and lower link times.

Friday, 13 March 2009

The problem with artists

There's always been a communication problem while working with games artists, but the situation can be remedied as long as you always remember one thing.

Artists believe in magic.

Monday, 2 March 2009

floats and ints

you know how ints work; signed ints are not much harder to properly understand; but floats make some people whimper.

for starters, a float is not magic, it's just a number that indexes a non-smooth curve through the number space. The non-smooth curve has data points set according to the following rules:

if the sign bit is true, the number is negative.
you use the exponent to make a 2^x number, which is then multiplied by the mantissa.
you use the mantissa data to make a mantissa between 1.0 and just under 2.0.

so, for a 1.0f, the data is (0)(0)(1.0)
that is (no sign)( * 2^0 == 1 )( * 1.0 )
sign bits can only be true or false. effective exponents can be anything between -126 and 127 (the all-ones exponent is reserved for infinities and NaNs, and the all-zeros one for zero and denormals). the mantissa can be any value from 1.0 to just under 2.0 (using 23 bits of the float)

using this information, you can now see how easy it can be to double or halve a float (just add or subtract 1 in the exponent field), and you can see how making it negative is purely a 1-bit twiddle.

other fun facts about floats because of this:
  • After masking off the sign bit, floats can be compared for greater than and less than just like ints (at that point you're comparing magnitudes).
  • Because the mantissa is 23 bits, you can use the last few bits for other information without losing too much clarity (this is how we store whether to render a triangle or skip it on the PS2).
  • Almost any bit pattern represents a valid float, apart from when the exponent field is all ones (those are the infinities and NaNs).
hex and int versions of common floats:

0.5 : 0x3F000000
1 : 0x3F800000
2 : 0x40000000
-1 : 0xBF800000
0 : 0x00000000
256 : 0x43800000
-0 : 0x80000000

learn them and you can spot data going wrong in debuggers more easily.

Thursday, 26 February 2009

#define( ignoreThisArgument ) 0

# commands are great for breaking things in novel ways, so they should be treated like gotos: avoided where you can, and used sparingly and sensibly where you can't.

things you may not know about the # commands:
  • They operate on text and text alone. They have no idea what a symbol is.
  • The #include command copy-pastes the entire file inline into the buffer for compilation (which means you can #include anything you want in odd ways to create cool effects.)
  • The #if/#ifdef/#ifndef/#endif/#else blocks actually delete code from the source file buffer. Anything cut from the code this way is never seen by the compiler.
  • If you #define a macro, the macro body is pasted inline every time you use it, which can lead to a lot of code being generated from a very small amount of source.
  • Because #define literally replaces text, unused arguments can be filled with trash that wouldn't compile in an inline function.
so, think of the pre-processor as a text macro system for a completely different language and you'll be pretty close to what it really is.