Another lesson in using vectors to get around in the world. I've got a tank, and I want to get it to move to another position on the map and also point it in the direction of the enemy. To do this, I need a three phase controller: something that turns the tank until it's pointing in the direction of the target position, something that will make the tank move forward until it's got to where it's going, then another to aim the tank at the target baddie.

First, using the right vector of the tank's transform (tank.right), I can dot it with the difference vector between the target position and the tank's current position.

directionDot = dot( target.pos - tank.pos, tank.right )

The value of this dot product will tell us which direction to turn. If the value is positive, turn right, else turn left. We can repeat this until the dot product changes sign, and at that point we can pretend we turned exactly the right amount, but instead just make a new transform matrix that points exactly at the target.
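As a sketch, with hypothetical minimal Vec3 helpers standing in for your engine's math library, the turn decision looks like this:

```cpp
// Hypothetical minimal types; your engine's math library will differ.
struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Positive result: target is on the tank's right, so turn right.
// Negative result: turn left. Zero: already facing the target.
float TurnSign(Vec3 tankPos, Vec3 tankRight, Vec3 targetPos)
{
    return dot(targetPos - tankPos, tankRight);
}
```

The sign is the only part we use; the magnitude just says how far to the side the target is.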

Making a look-at matrix can be as simple as normalising the difference vector and crossing it with up to create the side vector.

tank.forward = normalised( target.pos - tank.pos )

tank.right = cross( tank.up, tank.forward )
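A sketch of that look-at construction, assuming the right = cross(up, forward) handedness used above and some hypothetical minimal vector helpers:

```cpp
#include <cmath>

// Hypothetical minimal types; handedness follows the text's
// convention of right = cross(up, forward).
struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalised(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Basis { Vec3 right, up, forward; };

// Point the tank exactly at the target, keeping its existing up vector.
Basis LookAt(Vec3 tankPos, Vec3 up, Vec3 targetPos)
{
    Vec3 forward = normalised(targetPos - tankPos);
    Vec3 right = cross(up, forward);
    return { right, up, forward };
}
```

Note this assumes the tank stays level; a tank on a slope would need its up vector taken from the terrain first.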

The tank now needs to move forward. We can use the forward vector to do this: just add the product of the tank's speed and the forward vector onto the position.

tank.pos += tank.forward * tank.speed

Do this until the size of the difference vector (target.pos - tank.pos) is no longer decreasing; once it is increasing again, just snap the tank's position to the target.

tank.pos = target.pos

and then it's back to aiming the tank.

aimDot = dot( baddie.pos-tank.pos, tank.right )

if( aimDot > 0 ) rotateRight else rotateLeft

Again, once the dot product changes sign, just create a look-at matrix at the tank's position.

For more advanced simulators, you might want to be less stringent about when to move from one phase to the next.

e.g. Turn until the turning dot product is very small, then begin to move forward at the same time, but only if the distance is over a specified threshold. Once within a certain distance of the target, begin turning to face the baddie.

None of these operations require sines or cosines, because you can pre-create all your rotation matrices for the simple solution, and for the more advanced solution, you're probably using a physics engine to make the tank move.

## Friday, 28 November 2008

## Monday, 17 November 2008

### Ask, but not all at once.

Asking questions can get you answers, but every time you interrupt a fellow coder, it can break their flow. Some programmers don't skip a beat, others are made angry; you'll have to figure out which ones are which. Oh, and they can change depending on what they're working on, how interested they are, and how long they've been doing it.

So, in case of an unanswered question, first try Google, then try http://stackoverflow.com/

If you still don't have an answer, or it's game specific, formulate an e-mail. Sometimes, writing the e-mail can focus you long enough to answer the question yourself. This is called rubber ducking: the process of accessing the details of a problem by trying to talk it out with an unresponsive entity. It's a great tool for solving simple, sometimes complex, but not presently obvious problems.

Personally, I'm big on flow. So are Joel Spolsky, Tom DeMarco, and Steve McConnell. This is why once a question is asked of me, I tend to go off long and hard into the wherefores and whys, because I've already lost any flow I had, so there's no benefit in being terse or concise. I find that I'd prefer to teach something useful if I can, because hopefully it will reduce the number of future flow breakers.

Flow is what makes my day. I can have a day full of pain, but if I get into flow, it makes it all okay. Flow is when you're no longer working a job, but doing your thing. Flow is a musician playing music that makes you feel. Flow is an artist unable to describe how he's painting what he's painting. Flow is using your highly tuned skills to concentrate on too many different aspects of the same thing to think about stupid stuff like phones, e-mails, food, or going to the toilet. Flow is the magical extended moment of pure creativity usually only experienced by the highly skilled artist. It's what makes my job worth doing.

## Friday, 24 October 2008

### Where does my bullet come from?

First in a series of "how do I use vectors in my game"

One problem we come across is how to instance a new entity that is meant to be created at a position relative to some pre-existing entity. A good example is the rocket coming out of your rocket launcher. You want to know where to start the rocket, what direction to fire it, and what initial velocity to give it.

Firstly, you know that you want it to go in whatever direction the launcher was facing, so you can get the player's direction and use that, but what about where it starts out...

Well, if you imagine your launcher to be positioned relative to yourself, then you have a local position for that launcher: an xyz position relative to your centre. If you just add the local position of the launcher to your own position, what will happen?

launcher (x,y,z,1) + you (x,y,z,1) = (x,y,z,2) (uh oh, that w of 2 is not right)

So, you need to convert the launcher's local position to a world-relative position.

Firstly, we need to know what being sideways in player space means in the world, so we take the player's Right-Vector (or x row of their transform matrix), and multiply that by the launcher's local space x. This gives us a vector that tells us what the sideways component of the launcher means in world space.

Secondly, we need to know how the vertical position of the launcher is represented in world space, so we take the player's up vector (usually the same as the launcher for most FPS games as it's also the axis of rotation for left-right looking). We take the up-vector and multiply that by the launcher's local space y position. This gives us a vector telling us how the launcher's y position is represented in world space.

Thirdly, we want to know about how far backward or forward the launcher is, so we take the local space z value and multiply that by the direction vector of the player (the z row of the transform matrix, or the "Forward" vector) and get a world space representation of the local space z distance.

Lastly, we need the player's position added on to those three created vectors to build our final world space position for the launcher.

so that's launcher (L) and player (P)

world L is:

L.position.x * P.right +
L.position.y * P.up +
L.position.z * P.forward +
1 * P.position

and we already have a 1 in L.position.w because L.position is a position, so we can say

World L = L.position * P
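Here's that sum as a sketch, with hypothetical Transform/Vec3 types standing in for whatever your engine provides:

```cpp
// Hypothetical minimal types; your math library will have its own.
struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// The rows of the player's transform, named as in the text.
struct Transform { Vec3 right, up, forward, position; };

// World-space position of a point given in the player's local space:
// each local component scales the matching world-space axis row,
// then the player's position translates the result.
Vec3 LocalToWorld(const Transform& P, Vec3 local)
{
    return P.right * local.x + P.up * local.y + P.forward * local.z
           + P.position;
}
```

This is exactly what multiplying a w=1 position by the 4x4 matrix does, written out row by row.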

In order to not have the rocket explode straight away because it's very close to the player when it is launched, we might decide to pretend it has existed for a little while, then place it some distance along its trajectory. To do this we need to add the player's direction onto the calculated world position vector. Let's see if we can do this.

WorldL = (x,y,z,1), player.forward = (x,y,z,0)

Adding these together is valid, as 1 + 0 = 1 (meaning the result is still a position).

We can add on a scalar-multiplied version of the player's forward vector to give us any point along the line.

WorldL = L.position * P + P.forward * speed * time

We can use this because speed is distance over time, and we want distance, so we cancel out the time by multiplying:

"d/t*t = d"

Now we have our initially offset launched rocket!
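The whole offset calculation is tiny in code; a sketch with hypothetical minimal types:

```cpp
// Hypothetical minimal types, as elsewhere in this post.
struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Pretend the rocket has already flown for `time` seconds at `speed`
// units per second: distance = speed * time, pushed along the facing.
Vec3 InitialRocketPos(Vec3 worldLauncher, Vec3 playerForward,
                      float speed, float time)
{
    return worldLauncher + playerForward * (speed * time);
}
```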

## Thursday, 23 October 2008

### You are not a good designer.

Big design up front is a way of being sure that you're going to be wrong. You can keep at it day after day and eventually your design will be so big that the probability that it's all right is nearing zero.

We have a solution to this problem, it's called approximation and evolutionary prototyping.

Orgel's second rule, "evolution is cleverer than you", applies to the science of software creation just as it does to advancing biological forms. If you work on a piece of software, you're better off refactoring it than starting again. It's quite simple really: in the old, not-good-enough code is a bunch of stuff that works. Every time you try to do it from scratch, you forget things and in turn reduce the complexity (and capacity) of the software. Sometimes the reduction in complexity is good, and refactoring could have achieved it too, but sometimes it's disastrous (which refactoring avoids).

Consider these two alternatives when you next start writing that great new piece of code that is going to be just like X but better.

Labels:
best practice,
debugging,
management,
organisation,
techniques

## Thursday, 2 October 2008

### Directions in 3D space are 2D

In something of a response to one of the whinges about my previous post (that all vectors are a direction and a magnitude), I hereby declare that all directions in 3D space are in fact 2D, and therefore do not count as 3D vectors.

Eh?

Okay, imagine any vector for a direction in 3D space. You can map it to a sphere (remember, directions are unit vectors, so there's no 0,0,0, nor any vector of any length other than 1), and as you know, places on this world can be pinpointed on a 2D map.

... yes? more?

If you don't need anything other than 2 dimensions to fully describe a vector of values, then that's how many dimensions it's got. Any production of more values from the lower-order vector is just a function of them.
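To make that concrete, here's a sketch of the two-way mapping between a unit direction and two angles; the axis naming (y up, z forward) is my assumption:

```cpp
#include <cmath>

// A unit direction carries only two degrees of freedom, so it round-trips
// through two angles: azimuth around up, elevation above the horizon.
struct Vec3 { float x, y, z; };

void DirectionToAngles(Vec3 d, float& azimuth, float& elevation)
{
    azimuth = std::atan2(d.x, d.z);   // heading around the up axis
    elevation = std::asin(d.y);       // tilt above/below the horizon
}

Vec3 AnglesToDirection(float azimuth, float elevation)
{
    float c = std::cos(elevation);
    return { c * std::sin(azimuth),
             std::sin(elevation),
             c * std::cos(azimuth) };
}
```

That's latitude and longitude in disguise, which is the whole point of the sphere analogy.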

Consider this: if you were told that forward acceleration was a vector, you could say that's not true, it's in fact a 1-dimensional vector (a scalar), because any acceleration that isn't forward doesn't classify as a forward acceleration, therefore you only need one value to describe the acceleration.

3D -> 1D

Care for more?

How about the horizon? Point to somewhere on the horizon... you can give me a single value back, and I can know where you were pointing. That's right: up and down, left and right, and how far away the thing you were pointing at can be reduced down to one number.

## Wednesday, 1 October 2008

### matrices and vectors and positions

A 3 dimensional vector has three components: x,y,z

It specifies a direction and a magnitude.

You can use it to describe a velocity, a direction (if it's unit), an acceleration, momentum, displacement between two positions, and angular velocity (using magnitude and direction), but you can never use it as a position.

A 3 dimensional position has three components: x,y,z

It specifies where in space a point exists.

You can use it for the position of a vertex, the centre of a sphere/circle/shape, the target of a camera, but you can never use it as a vector.

Why so strict?

A vector is usually defined by having its w value set to 0, and a position equally by a w of 1.

This means that to do any mathematics with these two constructs, you must follow some simple rules that can be found out by trying out the maths on the four element vector:

1. a position can never be added to another position

2. the difference between two positions is a vector (displacement)

3. the sum of a position and a vector is a position

4. the sum of two vectors is a vector

5. you cannot dot product a position with anything

6. you cannot multiply a position by a scalar

okay, so vectors point, and positions place...

And a matrix is made of three vectors telling you which way each of the matrix's axes point, and a position to tell you where in space it is.

Don't believe me?

a matrix is really

x2x, x2y, x2z, 0
y2x, y2y, y2z, 0
z2x, z2y, z2z, 0
w2x, w2y, w2z, 1

(each "a2b" entry maps the a component of the input onto the b component of the output; the axis rows carry a 0 in the w column, and the position row carries a 1)

which means any vectors (with w of 0) will be transformed by just the rotation part...

and any positions (with a w of 1) will be transformed, then have the position added...

which is right if you think about it for a moment.

so... time to typesafe your math library yet and save cycles while still behaving like a full matrix transform library?
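As a sketch of what that type safety might look like (the names here are hypothetical, not from any particular library), each of the rules above becomes an overload, and the forbidden operations simply don't exist:

```cpp
// Separate types for the w == 0 and w == 1 conventions from the text.
struct Vector3 { float x, y, z; };   // w == 0, implicitly
struct Point3  { float x, y, z; };   // w == 1, implicitly

// Rule 4: the sum of two vectors is a vector.
Vector3 operator+(Vector3 a, Vector3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
// Rule 3: the sum of a position and a vector is a position.
Point3 operator+(Point3 p, Vector3 v) { return {p.x + v.x, p.y + v.y, p.z + v.z}; }
// Rule 2: the difference between two positions is a vector (displacement).
Vector3 operator-(Point3 a, Point3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
// Rule 5: dot product is defined for vectors only.
float dot(Vector3 a, Vector3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
// Rule 6: scaling is defined for vectors only.
Vector3 operator*(Vector3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Rule 1 needs no code at all: Point3 + Point3 simply fails to compile.
```

The compile errors become the proof that your maths is dimensionally sane, and the missing w component is cycles saved.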

## Thursday, 4 September 2008

### Documentation

For code reuse, it's really important that a programmer is able to return to some code and use it without fear of strange repercussions. Documentation is a necessity in our job, yet we are all lazy and none of us want to document our code. I'm terrible for not even commenting most of the time, but what I use is something that I think makes returning to my code bearable, and something that I can always follow. There are certain areas where I think I should add comments too, but usually I think that code that has long identifier names and makes each step in the algorithm as obvious as possible is better than comments that can go out of date. Instead of commenting in an *.h file about how to use a library, I write some use cases. It's almost like a really watered-down test-driven development.

Benefits of relying on readable code: it doesn't go out of date, takes less time to write/document, and helps you avoid stupid mistakes a lot of the time.

Hindrances: no overviews, some people demand comments, and it can be non-obvious even if you think it is.

Labels:
best practice,
debugging,
pitfalls,
principles,
process

## Tuesday, 12 August 2008

### Using Scope for Fun and Profit

When C++ came along with its constructors and destructors, it brought with it the need for a better understanding of scoping. We suddenly had to remember that the "int" in your for loop was only there for the duration of the looping. We realised that local variables could have destructors and could cause damage by going out of scope just as we had finished building them.

There are plenty of newbie errors to be had from the bad usage of scoping. There are however a few nice tricks that are enabled through scoping.

I wrote a profiler in an hour using scoped instances that profiled by accumulating the time between their creation and their destruction. This led to a safe profiling system that automatically collected data without intruding on much of the code base.
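Something along those lines can be sketched with a scoped timer; the bucket storage and naming below are my own placeholders, not the original profiler:

```cpp
#include <chrono>
#include <map>
#include <string>

// Accumulated seconds per named scope. A map keyed by string is the
// simplest possible storage; a real profiler would use something cheaper.
static std::map<std::string, double> g_profile;

// Accumulates the time between its construction and its destruction
// into a named bucket; placing one in a scope does all the bookkeeping.
class ScopedTimer {
public:
    explicit ScopedTimer(std::string name)
        : name_(std::move(name)), start_(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto end = std::chrono::steady_clock::now();
        g_profile[name_] += std::chrono::duration<double>(end - start_).count();
    }
private:
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};

void ExpensiveWork() {
    ScopedTimer timer("ExpensiveWork");  // destructor records on every exit
    volatile long sum = 0;
    for (long i = 0; i < 1000000; ++i) sum += i;
}
```

Because the destructor fires on every exit path, early returns and exceptions are timed correctly for free, which is where the "safe" comes from.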

Using a scoped handle is useful for ensuring that you don't hold onto something too long such as a file handle. Scoping a resource handle is very useful as it reduces the chance of leaving things dangling. The basic idea of smart pointers is that they are scoped handles to memory.

It may seem like a simple idea, but use scoping for working memory when you can too, it reduces the chance of unused and misunderstood variables.

An example might be a scoped string buffer that you use for formatting stuff before using the result. This can save you time and effort because you might notice a memory overrun earlier: if the buffer's scope has been reduced, then what it affects will be more immediately noticeable.

Labels:
best practice,
c++,
patterns,
process,
techniques,
tricks

## Tuesday, 29 July 2008

### Management is hard.

Can you manage more than ten people? They are all doing different tasks, each one reporting back to you once a day about their progress including any reports on difficulties. These difficulties need to be disseminated out to the appropriate ears so that problems can be swiftly dealt with. Add the job of managing resources between these people, requests for assets, requests for features, and you get a hell of a tangled web. Then add in illness, holiday, differing skill sets, rate of work, and you get an incredible amount of information. Can you still manage ten people?

If you think you can, then you're probably wrong.

So, how do large organisations work? Why don't they fail miserably? The answer normally lies in using a hierarchy so that there is no more than four or five people being managed at once, but that means that staff are lost to management that might be able to do other jobs better.

Is there an alternative? Your own comments first before I reveal my opinions.

### Process

It's important to realise that you have a process, even if it includes no written rules. Everyone has their way of doing things. What most people don't see is that process is one of the more important areas of self improvement. If you have a constantly evolving, improving process that you follow, then you are less likely to commit the same bugs twice and more able to remember what to do next.

Process includes how you code, what tools you use, how you approach tasks and how you manage your time. There are innumerable areas in which any individual can improve, and the same goes for companies too.

Every time you see something go wrong, have a think about what could have been put in place to stop it ever happening: this is the safeguard. Also think about other things that could have happened because of the lack of this safeguard. This will let you hunt down other bugs/errors quicker than just waiting for them to happen or be found, and will allow you to move onto bigger things quicker.

## Tuesday, 8 July 2008

### Templates primer

Before C++, many coders thought that even though they were dangerous, #define macros were valuable. They were right in that macros allowed code to be more "meta", but their price was high. With the advent of templates, however, there is the opportunity for the same level of hard-coded efficiency without the same dangerous lack of safety mechanisms.

Templates are compile-time macros just like #define, but they are type safe, and they fit C++ constructs better as they allow for overloading. Using a template can give you some compile-time asserts too, such as making sure that two arguments have the same type (like when you want to do a swap). All this is done at compile time, and not with any hand waving either: it's just the C++ compiler unrolling the code for you then compiling it as usual, just like a #define macro. The difference really is that #define macros don't know about C, C++, Pascal, or Java, so they can cause arbitrary damage.

So, if you're thinking that a piece of code might be better templated, but are worried about any overheads, don't be.
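For example, here's a sketch of the swap mentioned above, with the same-type check done at compile time (static_assert is the C++11 spelling; a sizeof-based trick does the same job pre-C++11):

```cpp
#include <type_traits>

// A type-safe swap. Taking two template parameters and asserting they
// match turns a silent mixed-type bug into a readable compile error.
template <typename A, typename B>
void Swap(A& a, B& b)
{
    static_assert(std::is_same<A, B>::value,
                  "Swap needs two arguments of the same type");
    A tmp = a;
    a = b;
    b = tmp;
}
```

The unrolled result is exactly what you'd have written by hand for each type; the template costs nothing at runtime.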

### Virtuals, primer

Without fully understanding virtual tables, you can make some nasty gaffes. Without understanding the basics, you can get lost in some errors.

Firstly, the tables have to exist somewhere. They are real things: actual tables of function pointers. They have to go somewhere in memory, so most C++ compiler/linkers put them in the object file where the first non-inline virtual function of the most base class is defined. In Geko we made the destructor that function; the simple Shutdown function was left inline.

Virtual tables are pointed to by the invisible first pointer in a class. You see it in the debugger, but not in the code. This pointer points to the array of function pointers that apply to the instance's class. Any virtual call is made by finding the function's array index (at compile time), indexing into the table the vptr points at (at runtime), and calling that function instead of a statically bound one.

Multiple inheriters can have more than one vptr, which makes it even more complicated, and different compiler linkers handle it in different ways.
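You can see that the vptr is a real thing paying for real memory with a couple of lines; exact sizes and layout are compiler-specific, so only the relative growth and the dispatch behaviour are dependable:

```cpp
// Two structs identical except for virtual functions: the second grows
// by (at least) one hidden vptr. Exact sizes vary per compiler/platform.
struct Plain      { int a; void Get() {} };
struct WithVtable { int a; virtual void Get() {} virtual ~WithVtable() {} };

// And the vptr is what makes a call through a base pointer land on the
// derived function at runtime.
struct Base    { virtual ~Base() {} virtual int Id() { return 0; } };
struct Derived : Base { int Id() override { return 1; } };
```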

Other things to remember:

There is no reason to ask for virtual functions to be inlined. A call through the vtable can't be inlined, so marking them inline just makes them look faster to humans. Virtual functions are slower than standard function calls, so use them for what they are good at, and promote them to non-virtual or inline whenever you see an opportunity.

You can write templated virtual code and you can write virtualised templates, but generally, you should get good at both before writing them together.

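To make the mechanism concrete, here is a hand-rolled approximation of what the compiler generates. This is only a sketch: real vtable layouts are ABI- and compiler-specific, and all the names here are invented.

```cpp
#include <cassert>

// A hand-rolled approximation of compiler-generated virtual dispatch.
struct Shape;
typedef int (*AreaFn)(const Shape*);

struct VTable { AreaFn area; };   // the table of function pointers

struct Shape {
    const VTable* vptr;           // the "invisible first pointer"
    int w, h;
};

int SquareArea(const Shape* s) { return s->w * s->w; }
int RectArea(const Shape* s)   { return s->w * s->h; }

const VTable kSquareVT = { &SquareArea };
const VTable kRectVT   = { &RectArea };

// A "virtual call": look up the function through the object's vptr
// instead of binding to one function statically.
int Area(const Shape* s) { return s->vptr->area(s); }
```

Two objects of the "same" static type dispatch to different functions purely through the table their vptr points at.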
### Unsorting.

Something I learnt from Ian Gledhill:

If you have a list that you want to shuffle, you can do it very efficiently by using the following technique.

First, define N as the number of items in the list.

1. set n = N

2. pick a random number r between zero (inclusive) and n (exclusive)

3. swap element r and element n-1

4. reduce n by one

5. if n is greater than 1, return to step 2

consider the following list:

1,2,3,4

n = 4, r = rand(0,4) = 2

1,2,4,3

n = 3, r = rand(0,3) = 0

4,2,1,3

n = 2, r = rand(0,2) = 1

4,2,1,3

n = 1 (finish)

This is about as optimal a shuffle as is possible, and better still, it's an in-place operation.
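For reference, this is the Fisher-Yates (Knuth) shuffle. A sketch in C++, with std::mt19937 standing in for whatever random source you prefer:

```cpp
#include <random>
#include <vector>

// In-place shuffle: walk n down from N, swapping a randomly chosen
// not-yet-fixed element into slot n-1 at each step.
void Shuffle(std::vector<int>& items, std::mt19937& rng)
{
    for (std::size_t n = items.size(); n > 1; --n)
    {
        std::uniform_int_distribution<std::size_t> dist(0, n - 1); // r in [0, n)
        std::size_t r = dist(rng);
        std::swap(items[r], items[n - 1]);
    }
}
```

Each element ends up in any slot with equal probability, and no extra storage is needed.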

Labels:
algorithms,
discrete math,
techniques,
tricks

## Tuesday, 1 July 2008

### Hash functions.

Hashes reduce the amount of information in an object down to some limited form. They take all the bytes that make up the original and combine them in all sorts of ways to generate a hash value. A good hash function will change approximately half the bits of the hash value whenever one bit of the original byte stream changes. There are some very good hash functions out there, and rolling your own isn't that difficult to do well.

Using hash functions and values can be a little more demanding, though, as you always have to remember that a hash code can tell you, for certain, that two objects are different, but it can never guarantee that they are the same. Also, because of the birthday problem (explained in a moment), the number of objects going into a hash that can come out unique is quite small relative to the number of possible hash values. Even a perfect hash function (one where flipping a single input bit flips each output bit with 50% probability) cannot get around the fact that a hash is only ever a proof of non-similarity. The reason is simple: a hash is a reduced amount of information.
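As an example of rolling your own, here is a sketch of the well-known FNV-1a hash (32-bit); the two magic numbers are the published FNV offset basis and prime.

```cpp
#include <cstdint>
#include <cstddef>

// FNV-1a, 32-bit: a small, decent general-purpose byte-stream hash.
// Every input byte is folded into the whole running hash value.
std::uint32_t Fnv1a(const unsigned char* data, std::size_t len)
{
    std::uint32_t hash = 2166136261u;      // FNV offset basis
    for (std::size_t i = 0; i < len; ++i)
    {
        hash ^= data[i];                   // mix in one byte
        hash *= 16777619u;                 // FNV prime
    }
    return hash;
}
```

Despite being a handful of lines, it has quite good avalanche behaviour for everyday use (hash tables, quick difference checks).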

The Birthday problem:

How likely do you think it is that someone at your workplace has the same birthday as you?

That is quite a simple calculation: take the number of people in the office, minus one for yourself, then raise 364/365 to the power of the number of other workers to get the probability that you don't share a birthday.

20 workers = 95% chance you won't share.

100 workers = 76% chance you won't share.

But what about the probability that someone in the workplace shares a birthday with someone else?

Well, if you take yourself out of the picture, there's still the rest of the office for some random dude to check against.

20 workers = 95% for you, 95.2% for the dude...

What if we consider every possible pair of people in the office? That's 190 possible birthday comparisons for a room of 20 people...

20 workers = 59.4% chance of no shared birthday!

oh dear...

100 workers = 0.00012% chance of not sharing a birthday...

okay, so that's fairly mad. A better view of the combinatoric reasoning for the sudden growth can be found here: http://en.wikipedia.org/wiki/Birthday_paradox

so... for a hash code that has 4 billion (2^32) possible values, we can expect to generate quite a few hashes, but we hit a wall at about 77,000 inputs: at that point, the probability of a collision is just over 50%, and no matter how many times you adjust the seed, once you get to about 200,000 inputs it becomes almost impossible to find a non-colliding set of hashes. Four billion values can't support 200,000 unique hashes, just because of some silly factorial law.
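You can check that wall yourself. The probability that k values drawn uniformly from a space of N possibilities are all distinct is approximately exp(-k(k-1)/2N), so a sketch of the check might be:

```cpp
#include <cmath>

// Approximate probability that k values hashed into a space of N
// possibilities are all distinct (the birthday-problem bound).
double NoCollisionProbability(double k, double N)
{
    return std::exp(-k * (k - 1.0) / (2.0 * N));
}
```

Plugging in N = 2^32 gives roughly a coin flip at 77,000 hashes and under 1% odds of staying collision-free by 200,000.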

The same numbers, run over a range of office sizes:

| office count | combinations | probability of unshared birthday |
|---|---|---|
| 5 | 10 | 0.972938 |
| 10 | 45 | 0.88386 |
| 15 | 105 | 0.749712 |
| 20 | 190 | 0.593771 |
| 25 | 300 | 0.439092 |
| 30 | 435 | 0.303184 |
| 35 | 595 | 0.195465 |
| 40 | 780 | 0.117664 |
| 45 | 990 | 0.066135 |
| 50 | 1225 | 0.034709 |
| 55 | 1485 | 0.017008 |
| 60 | 1770 | 0.007782 |
| 65 | 2080 | 0.003324 |
| 70 | 2415 | 0.001326 |

Labels:
debugging,
discrete math,
numbers,
patterns,
pitfalls

## Friday, 27 June 2008

### My old code is nasty... but still worth keeping.

Don't rewrite it, refactor it. It's very important to assume that any ugly code is ugly for a reason; figure out what that reason is. Sometimes it's because the coder thought there wasn't enough time to do it right, sometimes it's because the code has nasty-looking bug fixes in it, and sometimes it's because the coder actually wanted it to look bad.

http://www.joelonsoftware.com/articles/fog0000000069.html

The important lesson to learn from this is that it's never too late to use all of the old stuff. Find out what was good about it and drag it in.

### Debugging insight

How do you debug your code? If you don't at least do the following, you may be costing yourself a lot of time and effort.

Finding:

1. find a way to reproduce the bug cheaply.

2. reduce the number of possible causes (refine the bug's surroundings / divide and conquer)

3. understand the bug (why it causes these symptoms and not others)

Once you think you have it:

1. make the fix and prove that it fixed it.

2. consider side effects

3. adjust your thinking / proof yourself against similar future errors

### Compression, fundamental

To compress data, you don't just go about fitting all the numbers you have into the smallest spaces they fit. Compressing things is usually about understanding what information you have.

Data does not equal information.

consider the data:

int gMyArray[] = { 15,15, 0,0,0,0,0,0,0,0, 1205,30000, 4,4,4,4,4,4, 65000,65001 };

sizeof( gMyArray ) == 80; // (20 elements, 4 bytes each)

If you want to compress that sequence, you could store it as a list of unsigned shorts. That would save you half the data space (40 bytes), but it's nowhere near as good as you can do. Naively run-length encoding as pairs of ints, first the value, then the number of copies, gives a final size of 56 bytes, which is worse than the plain shorts, but once we switch to a short for the data value and a char for the copy count, we drop down to 21 bytes. That's almost half the size of the first attempt at compressing.
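A sketch of the short-plus-char run-length encoding described above, assuming no value exceeds 65535 and no run exceeds 255:

```cpp
#include <vector>
#include <cstdint>

// One run: 2 bytes of value + 1 byte of count = 3 bytes of payload.
struct Run { std::uint16_t value; std::uint8_t count; };

// Run-length encode; assumes values fit in a short and runs fit in a char.
std::vector<Run> RleEncode(const std::vector<int>& data)
{
    std::vector<Run> runs;
    for (std::size_t i = 0; i < data.size(); )
    {
        std::size_t j = i;
        while (j < data.size() && data[j] == data[i] && j - i < 255)
            ++j;                                   // extend the current run
        Run r = { static_cast<std::uint16_t>(data[i]),
                  static_cast<std::uint8_t>(j - i) };
        runs.push_back(r);
        i = j;
    }
    return runs;
}
```

Encoding gMyArray this way yields 7 runs, i.e. 21 bytes of payload (note sizeof(Run) may be 4 due to padding, so you would serialise the fields, not the struct).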

Watch out for what information you have in your data. Sometimes you have a lot of data and very little information.

Consider recording button presses for a game: if you store the whole array of button values every time there is a change, you will use up quite a bit of memory quite quickly, but if you just store the time until a button changes state, along with the button ID, then you make a saving. The more buttons, the bigger the saving. Now consider a keyboard; no-one stores a whole keyboard map per key up or down.

## Tuesday, 24 June 2008

### Game Patterns 1

There don't seem to be many game-specific design patterns, but one that we figured out while working on the oxygen series is applicable to almost every game ever created. We call it the tiers pattern.

Every game has tiers; the first one is the game. In the game tier there exists data about the game itself: constants such as the levels to load, the defaults for character values, and which characters you can have. There are variables too, such as screen positioning and volume settings, and then there is the game instance itself that loads up when you run the game.

Tier 2 is the session; this starts once you have commenced a game start / game load. Tier 2 contains variables for the number of lives, the current score, the inventory items, your upgrades, powerups: all the things that you would consider to be play-session related.

Tier 3 is the instance tier; this is the in-game dynamic instance that interacts with the World. There is one per entity in the world, they can all be different, and they are initialised from the game tier and the session tier. Players and NPCs are initialised from the session and game tiers.

How each tier interacts with the others is down to the type of game, but separating your code into these layers helps in almost every game.

If you consider most adventure games, these tiers make enormous sense; what's not obvious is that they apply to games like tetris too. The tetris game tier doesn't contain much information other than the grid width and height and the descriptions of the shapes that can occur. The session is the game mode, current level and current score, and the instance tier is the player control device and the spawner that first creates a shape from the set of shapes defined for the game, then passes the shape to the player to control it within the context of the World (the grid and the shapes already laid down). The World is initialised from the Game and the Game Session, and instances the player controller and the shape generator. The Game knew about the World object, and when the session was created, so was the World.
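A minimal sketch of the three tiers for the tetris example; all the names and numbers here are invented for illustration:

```cpp
// Tier 1: the game. Constants and settings that exist before any play starts.
struct Game {
    int gridWidth, gridHeight;       // constants for this game
    float volume;                    // a pre-session variable
    Game() : gridWidth(10), gridHeight(20), volume(1.0f) {}
};

// Tier 2: the session. Created on "new game" / "load game".
struct Session {
    int mode;
    int currentLevel;
    int score;
    Session() : mode(0), currentLevel(1), score(0) {}
};

// Tier 3: an instance. In-game entity, initialised from the tiers above.
struct PlayerController {
    float fallSpeed;
    PlayerController(const Game& g, const Session& s)
        : fallSpeed(0.5f + 0.1f * s.currentLevel)   // tuned from session state
    {
        (void)g;   // a real controller would also read game-tier constants
    }
};
```

The direction of initialisation only ever flows downwards: game configures session, game and session configure instances.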

## Monday, 23 June 2008

### Quaternions - intro

Instead of using matrices, we can use quaternions to describe orientations. It's quite important to be aware that a quaternion just represents a mirroring along an axis, and therefore cannot represent scale or translation components. That's why you see transforms decomposed into translation/rotation/scale (TRS), with a quaternion for the rotation, rather than just a quaternion.

Some people think that quaternions represent rotations. They don't. They represent mirroring. The mirroring can be used to effect a rotation, but it's a side effect, not a natural capacity.

The s,i,j,k values tell you in what way the plane of mirroring has been manipulated; the values in i,j,k tell you how much the mirror rotates about each of the x,y,z axes. This is a simultaneous rotation, though, and is difficult to picture mentally. If you imagine a 2D version of the quaternion, you can see that with no rotation, pre-mirroring then mirroring with the same mirror gets you back to where you started. If you rotate 180 degrees, the same thing happens, because the mirror is still at effectively the same angle as before. If, however, you rotate by 90 degrees, you get a 180-degree rotation back, because you are mirroring in two different axes (which is effectively the same as a 180-degree rotation).

For general use, however, you can assume that a quaternion is a rotation value. One very useful benefit of quaternions is that you can interpolate between two of them without losing the fact that the transform is a rotation. When you linearly interpolate a matrix, you are just interpolating basis vectors and an offset; when you interpolate quaternions, you are effectively interpolating a rotation. This is used in animation, both hierarchical (for characters) and camera (for swinging sweeps), and can also be used to define the rotation of entities in the world (so they can be manipulated more freely and easily).
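As a sketch of the interpolation benefit, here is normalised linear interpolation (nlerp) between two orientation quaternions. Slerp gives constant angular velocity; nlerp is the cheap approximation that still yields a pure rotation after renormalising. The type and names are invented for this example.

```cpp
#include <cmath>

struct Quat { float s, i, j, k; };

// Normalised linear interpolation between two unit quaternions.
// Renormalising keeps the result a unit quaternion, i.e. still a rotation.
Quat Nlerp(Quat a, Quat b, float t)
{
    // take the short way round: flip b if the quaternions point apart
    float dot  = a.s*b.s + a.i*b.i + a.j*b.j + a.k*b.k;
    float sign = (dot < 0.0f) ? -1.0f : 1.0f;
    Quat q = { a.s + (sign*b.s - a.s)*t,
               a.i + (sign*b.i - a.i)*t,
               a.j + (sign*b.j - a.j)*t,
               a.k + (sign*b.k - a.k)*t };
    float len = std::sqrt(q.s*q.s + q.i*q.i + q.j*q.j + q.k*q.k);
    q.s /= len; q.i /= len; q.j /= len; q.k /= len;
    return q;
}
```

Halfway between identity and a 90-degree rotation about z, this produces exactly the 45-degree rotation, which a plain matrix lerp would not.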

## Friday, 20 June 2008

### Z buffer and Alpha

Things close to the camera will obscure things further away; the Z buffer makes sure this happens. Without the Z buffer, we would see everything ordered by the order in which it was rendered (which is called the painter's algorithm, if you're interested). This is not nice, because you'd then have to sort all your polygons by z and render them back-most first.

Sometimes, things that have alpha are rendered with z-buffer writes turned on. This is a problem if the alpha is low (transparent) and there is more of the world still to be rendered.

From this, you can see the back side of the world through the transparent arms and legs of the chair. This is because the Z buffer was written to early in the render and caught the destination colour (the clear colour in some cases; the chair material in others, where previous polygons rendered before the glass sections). Weirdly, in this circumstance, it looks like the back of the chair (the bit with material) has been rendered with Z writes turned off.

Using the z buffer right can get you some nice effects for free. The health bar in SnowQueen was done using a z-writing pure-alpha texture, then rendering a full red bar at a slight angle and bringing it forward to show progressively more of the bar (sliding along the opposite angle to the adjustment angle), and only after that rendering the bar overlay that hid any edges.
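The core of the behaviour, sketched in software: the depth test and the depth write are separate switches, and the write switch is exactly the one that causes the alpha problem above. Names are invented for illustration.

```cpp
// One pixel's worth of depth-tested writing, software style.
// Real hardware exposes the same two independent switches.
struct Pixel { float depth; unsigned colour; };

void PlotPixel(Pixel& p, float z, unsigned colour,
               bool depthTest, bool depthWrite)
{
    if (depthTest && z >= p.depth)   // further than what's there: rejected
        return;
    p.colour = colour;               // passed: write the colour
    if (depthWrite)
        p.depth = z;                 // optionally claim the depth too
}
```

Transparent geometry drawn with depthWrite false still tests against the scene, but doesn't block the opaque geometry that comes after it.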

### Do you know why globals are bad?

Some people just spout that globals are bad. They don't know why they say it. They say it the same way they say that gotos are bad.

my reasons for not enjoying globals:

* Globals are inherently dangerous because they imply temporal coupling.

* Globals are accessible everywhere and therefore restrict encapsulation

Temporal coupling is bad because you have to keep remembering in what order to set variables or call functions, otherwise things go wrong.

Encapsulation is good because it reduces the number of access paths and therefore possible damage routes.

Just these two things make a massive difference.
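A tiny illustration of the temporal-coupling point (all names invented): the global version only works if callers remember the call order, while the encapsulated version makes a wrong order impossible to express.

```cpp
// Global version: SetScale must be called before Scale, and nothing in
// the type system says so. Get the order wrong and you silently compute
// with a stale value.
static int gScale = 1;
void SetScale(int s) { gScale = s; }
int  Scale(int v)    { return v * gScale; }

// Encapsulated version: you cannot call scale() without constructing a
// Scaler first, so the ordering is enforced by the language.
class Scaler {
public:
    explicit Scaler(int s) : mScale(s) {}
    int scale(int v) const { return v * mScale; }
private:
    int mScale;
};
```

Both compute the same thing; only one of them lets a distant caller break the other's assumptions.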

## Thursday, 19 June 2008

### Finding out if its getting closer, or running away.

Hostiles heading towards you are more threatening than ones heading away from you. To figure this out, take the dot product of their velocity vector with the normalised position difference.

( Me.pos - Them.pos ).Normalised() * Them.vel

The larger the positive value, the more threatening it is. If it's negative, then it's not really a threat at all.

Unless... it's changing direction...

Find out whether its intention is you by taking its acceleration:

( Me.pos - Them.pos ).Normalised() * Them.acc

If that's positive, you should probably start running. But if it's negative, then maybe it's frightened of you and you can stand your ground, or even advance, because it's trying to back off.
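The two checks above, sketched with a minimal 2D vector type; the Normalised() and operator* names mirror the post's pseudocode and are otherwise invented.

```cpp
#include <cmath>

struct Vec2 {
    float x, y;
    Vec2 Normalised() const {
        float len = std::sqrt(x*x + y*y);
        Vec2 n = { x / len, y / len };
        return n;
    }
};

// dot product, written as operator* to match the post's pseudocode
float operator*(const Vec2& a, const Vec2& b) { return a.x*b.x + a.y*b.y; }

// positive: they are closing on us; negative: they are moving away
float ApproachThreat(Vec2 mePos, Vec2 themPos, Vec2 themVel)
{
    Vec2 toMe = { mePos.x - themPos.x, mePos.y - themPos.y };
    return toMe.Normalised() * themVel;
}
```

The same function works for the acceleration check: just pass Them.acc instead of Them.vel.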

## Monday, 16 June 2008

### Bit masks and shifting

bit masking is fun.. but sometimes dangerous if you aren't careful.

to set the Nth bit of a variable:

var |= (1 << N)

to unset the Nth bit of a variable:

var &= ~(1 << N)

you can make an N bit mask by taking the Nth bit and subtracting 1

for b111, use (1<<3)-1

for b11111111, use (1<<8)-1

for MASK_LENGTH one-bits, use (1<<MASK_LENGTH)-1

you can then shift this mask up to wherever you need to mask off some values. If you have certain areas of a variable designated as sub ints, then you can set them thus:

setup:

MASK_START

MASK_LENGTH

MASK = ((1<<MASK_LENGTH)-1) << MASK_START

setting sub int:

var &= ~MASK

var |= newValue<<MASK_START

getting the value back out again is quite simple:

value = ( var & MASK ) >> MASK_START

have fun, but be careful.
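The setup above as compilable code. Note the extra mask applied to newValue: the post's version trusts the caller, and an oversized value would smear into neighbouring fields.

```cpp
// Pack a small field into bits [MASK_START, MASK_START+MASK_LENGTH)
// of a wider word. Field position/size values are just an example.
const unsigned MASK_START  = 4;
const unsigned MASK_LENGTH = 3;
const unsigned MASK = ((1u << MASK_LENGTH) - 1u) << MASK_START;

unsigned SetField(unsigned var, unsigned newValue)
{
    var &= ~MASK;                                    // clear the field
    var |= (newValue & ((1u << MASK_LENGTH) - 1u))   // mask the value too!
           << MASK_START;
    return var;
}

unsigned GetField(unsigned var)
{
    return (var & MASK) >> MASK_START;               // pull the value back out
}
```

The read side is exactly the post's one-liner; only the write side gains the safety mask.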

### Keywords you should forget

In C++ there are some really naughty keywords that you should ignore

auto, register, export.

auto and register are bad because you are probably not as clever as the compiler for doing micro scale optimisations. Compilers are really good at their job, and don't need to be told to make a variable exist mostly in a register for speed, nor do they need to be told that a local variable needs to be thrown away at the end of the function scope.

export, however, is hideous, and if you ever come across it, run away. export allows the body of a template to exist outside its declaration. This means the template is no longer a fast, typesafe macro system, but a generics system, which is a bad thing for speed. Avoid it.

## Sunday, 15 June 2008

### Primes

You should know already, but in case you don't: prime numbers are positive whole numbers that are divisible only by themselves and one. Any other whole number above one is composite, made up of the product of a series of primes. There are only primes and composites in the whole-number series (written as N with a double-lined middle bar, ℕ).

If it's not a prime, it's composite.

31 is prime. 32 is not (2x2x2x2x2). 33 is not (11x3). 34 is not (17x2).

One interesting thing to do with numbers' factors is finding their greatest common divisor (GCD) and lowest common multiple (LCM): a kind of hand-in-hand pair of calculations.

the GCD( 32, 34 ) is 2 as 2 is the largest number that can divide both 32 and 34 without leaving a fraction.

the GCD( 12, 18 ) is 6. You can see this by checking the factors of 12 (2,2,3), and the factors of 18 (2,3,3) and seeing that the common factors (2,3) produce 6.

the LCM( 12,18 ) is 36. This is the smallest number that both values can be multiplied up to using only whole numbers: use 12x3 or 18x2 to get 36.

notice the similarity of the solution?

LCM( x,y ) = x * y / GCD( x,y )

Or, another way of thinking about it: the GCD includes only the common factors, and the LCM includes all the factors, but the common factors only once (dividing by the GCD effectively removes the duplicated common factors).
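You don't need the factorisations to compute these. Euclid's algorithm finds the GCD directly, and the LCM then follows from the identity above:

```cpp
// Euclid's algorithm: gcd(x, y) = gcd(y, x mod y), until y hits zero.
unsigned Gcd(unsigned x, unsigned y)
{
    while (y != 0) {
        unsigned t = x % y;
        x = y;
        y = t;
    }
    return x;
}

// LCM(x,y) = x * y / GCD(x,y); divide first to keep intermediates small.
unsigned Lcm(unsigned x, unsigned y)
{
    return (x / Gcd(x, y)) * y;
}
```

This reproduces all the worked examples: GCD(12,18) = 6, GCD(32,34) = 2, LCM(12,18) = 36, and for the co-prime pair GCD(8,9) = 1 with LCM(8,9) = 72.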

Some pairs of numbers have a GCD(x,y) of 1. These are co-prime. Neither number needs to be prime; for example, consider 8 and 9: the only factor they share is 1, and their least common multiple is 72.

This property can be used in creating pseudorandom sequences, as mentioned in an earlier post. The important thing to remember is that the two values must share no factors. Some sequences don't have very good coverage though; look at the example below.

start at 1

1*8 mod 9 is 8

8*8 mod 9 is 1

start at 2

2*8 mod 9 is 7

7*8 mod 9 is 2

start at 3

3*8 mod 9 is 6

6*8 mod 9 is 3

start at 4

4*8 mod 9 is 5

5*8 mod 9 is 4

so, you can make a sequence, but be careful how it loops around, and be sure it has a good long period by checking it by hand.

### Mutable. Oh for goodness sake.

So, you have const, which declares things as being const, or at least immutable by you. An example is that a class API will often have functions declared as "this const":

void GetValue() const;

The trailing const refers to the this in the method (this is why you can't have const trailing global procedures or static methods).

Once in the method body, you'll find that you can access the values but you can't modify them (they've all become const).

mutable lets you declare member variables as being not const even when the this is const. This sounds silly because you've just consted the class, and now you're unconsting members of the const this.

A really good example of why this can be valid and useful is a cache.

For example, in your special Library class, you have a method "int GetBookIDByName() const;". It gets the ID of a book by looking through all the books and returning the ID of the one whose name matches. This is an expensive search, so, because the coder knows the calling code is going to ask for the same book a few times, he stores the last book ID in a mutable member and checks that first.

mutable int mLastBookID;

This allows him to quickly check whether the cached book matches, saving the run through the whole library, while still allowing the library to remain const, because the caller isn't allowed to modify the library itself.
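A sketch of the library example (all names invented): the cache member is mutable, so the const lookup is allowed to update it.

```cpp
#include <string>
#include <vector>

// Illustrative only: a const query that caches its last result.
class Library {
public:
    Library() : mLastBookID(-1) {}

    void AddBook(const std::string& name) { mBooks.push_back(name); }

    int GetBookIDByName(const std::string& name) const
    {
        // cheap check first: did we just look this one up?
        if (mLastBookID >= 0 &&
            mBooks[static_cast<std::size_t>(mLastBookID)] == name)
            return mLastBookID;

        // the expensive search through the whole library
        for (std::size_t i = 0; i < mBooks.size(); ++i) {
            if (mBooks[i] == name) {
                mLastBookID = static_cast<int>(i);   // legal: it's mutable
                return mLastBookID;
            }
        }
        return -1;                                    // not found
    }

private:
    std::vector<std::string> mBooks;
    mutable int mLastBookID;   // cache survives across const calls
};
```

From the caller's point of view the object never changes, which is exactly the promise const is meant to express; the cache is an invisible implementation detail.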

## Friday, 13 June 2008

### Morphing and Tweening

When we animate characters, there are two main ways of animating them. One is to get the meshes of each pose and animate by morphing between the different meshes. I think this is how quake2 did it. Another method is hierarchical bones. This is the normal way it is done now, and the hierarchy's transforms are used to affect a mesh via some skinning data (which verts to affect and by how much).

Animating that hierarchy is usually called playing an animation. If you play two anims at the same time and interpolate between them, it's called tweening.

We use morphing tech now for facial control because it allows finer control without the costs you pay when the number of verts per bone gets really low.
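Morphing itself is just a per-vertex interpolation between two poses of the same mesh; a rough sketch (names made up):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Blend every vertex of pose A toward the matching vertex of pose B by t (0..1).
std::vector<Vec3> morphMesh(const std::vector<Vec3>& a,
                            const std::vector<Vec3>& b, float t)
{
    std::vector<Vec3> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        out[i].x = a[i].x + (b[i].x - a[i].x) * t;
        out[i].y = a[i].y + (b[i].y - a[i].y) * t;
        out[i].z = a[i].z + (b[i].z - a[i].z) * t;
    }
    return out;
}
```

Tweening between two bone animations is the same lerp, just applied to the hierarchy's transforms instead of raw vertices.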

## Tuesday, 10 June 2008

### Power of 2 textures.

The artists are told time and time again that they can only have power of two textures. Do you know why?

Simple bit shifting will get you any of the valid texture sizes:

1<<n

{ 1,2,4,8, 16,32,64,128, 256,512,1024,2048, 4096,8192,16384,32768, 65536,... }

What's so special about these values? Their ability to be simply masked for wrapping and tessellation. How is this possible?

Take any of the numbers on the list: its binary representation has one and only one bit set (because it is the 0th bit (1) shifted up by n). Because of this, if you take one away from it, you get a number with all the bits below that one set. e.g.

8 = b1000 and 7 = b0111

256 = b100000000 255 = b11111111

This is useful because if you use these bits as a mask, you are effectively doing a safe modulus on the incoming value (as mentioned in a prior post).

If the textures are all powers of two, then the texture lookup engine need only apply a mask to find the texel address for a wrapping texture.

Consider a large square polygon with UVs { 0,0,4,4 }. These are floats, and 4 represents 4 × the pixel width/height of the original image. At some point on the polygon the UV is going to be outside the range of a simple unit texture lookup. If we look at one random part, say { 2.5, 0.5 }, we can see that the vertical (V) has a safe-looking value, but the horizontal (U) is at 2.5 units. The base image in our demo is 32x32 pixels. When the rasteriser comes across this value, it doesn't look up the texture coordinate at { 80,16 }; it first takes the pixel width and height, makes masks, and ANDs them onto any texture lookup. "80 & 31 == 16", so it actually looks up the texture colour at { 16,16 }.
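The wrap itself is one line; a sketch (function name is made up, and the size must genuinely be a power of two):

```cpp
#include <cstdint>

// Wrap a texel coordinate for a power-of-two texture: the mask (size - 1)
// gives the same result as coord % size, but with no divide.
uint32_t wrapTexel(uint32_t coord, uint32_t size /* power of two */)
{
    return coord & (size - 1);
}
```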

### Signed and Unsigned

I'm a signed int supporter because I don't like lots of types cluttering up my code, making warnings where there needn't be any, but I just came across one of those reasons to use an unsigned type.

I'm squishing data down small and my indexes run from 0-54000. That fits in an unsigned short, but not a signed short.

It's the only time I'd advocate using anything other than an int/float/char/bool. I don't like optimizing this way, but I think it's correct for this.
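The ranges involved can be checked at compile time (numbers from above):

```cpp
#include <cstdint>
#include <limits>

// 54000 fits in a uint16_t (0..65535) but overflows an int16_t (max 32767),
// so a packed index buffer in that range wants the unsigned type.
static_assert(54000 <= std::numeric_limits<uint16_t>::max(),
              "fits in an unsigned short");
static_assert(54000 > std::numeric_limits<int16_t>::max(),
              "does not fit in a signed short");
```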

### Vector Length

There are many ways to calculate the length of a vector; some are based on the idea that the length function can use powers other than 2 to do its calculations.

for 2:

sqrt( x^2 + y^2 + z^2 ... ) == ( x^2 + y^2 + z^2 ... ) ^ 1/2

this is the standard length.

for 1:

( |x| + |y| + |z| ... )

this is called the Manhattan distance.

for infinity:

( |x|^inf + |y|^inf + |z|^inf ... ) ^ 1/inf

this is the maximum absolute value in the vector.

There is also the quick-length technique: sort the abs vector so the largest value is first, then dot it with { 1, 1/2, 1/4, ... } e.g.

{ 2,4,1 } -> { 4,2,1 } -> 4 + 2/2 + 1/4 = 5.25

This isn't a bad approximation in most uses. I think I remember someone saying that it's about 6% out at worst, but that sounds quite conservative to me (here the true length is sqrt(21) ≈ 4.58, so the estimate is about 15% over).
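A sketch of the quick-length trick for a 3-component vector (function name is made up):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

// Quick length: sort |x|,|y|,|z| largest-first and dot with { 1, 1/2, 1/4 }.
// No square root, just a sort of three values and some adds.
float quickLength(float x, float y, float z)
{
    float v[3] = { std::fabs(x), std::fabs(y), std::fabs(z) };
    std::sort(v, v + 3, std::greater<float>());
    return v[0] + v[1] * 0.5f + v[2] * 0.25f;
}
```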

### Always compare the squares of the lengths.

An old trick but a good one. If you're trying to find out if a vector is over or under a certain length, then you don't need to do a whole GetLength on it; you can instead use GetSquaredLength, which doesn't include the sqrt function call. All you need to do then is compare it against the squared comparison value to find the same answer as you would have done with the normal length function.
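A sketch of the comparison (hypothetical helper name):

```cpp
// Is the vector (x,y,z) longer than `threshold`? Compare squares: no sqrt.
bool isLongerThan(float x, float y, float z, float threshold)
{
    float squaredLength = x * x + y * y + z * z;
    return squaredLength > threshold * threshold;
}
```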

But the title of this post should be "simple things you should know about vectors" because that's not the only trick I'm going to mention.

If you want to know if two vectors are facing away from each other, or if they are facing the same direction, then all you need to know is the sign of their dot product. If it's positive, then they are facing the same direction, if it's negative, they are facing opposite directions. if it's zero, then either one or both of the vectors is zero length, or the two vectors are facing 90 degrees to each other.

If you want to find out the angle for how far up or down a vector is looking, and you know it's a unit vector (that is a vector of length 1), just use the vertical component of the vector in an arcsin. For small values, the arcsin will be virtually the same as the ingoing vertical component, so you might find that the angle is simply "vec.y". Using a simple power series you will get really close without having to do any arcsin at all for reasonable values of the vertical component.

angle = x + x^3/6 + 3x^5/40

this is the arcsin expansion, and it will be accurate almost all the way to 45 degrees. Add 15x^7/336 and you buy a little more range, though near 90 degrees the series converges too slowly to be worth using.
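For reference, the arcsin expansion is x + x³/6 + 3x⁵/40 + 15x⁷/336 + …; a sketch of the three-term version (function name is made up):

```cpp
// Three-term arcsin series, factored for few multiplies (Horner form).
// Good for moderate |x|; degrades as |x| approaches 1 (i.e. near 90 degrees).
float approxAsin(float x)
{
    float x2 = x * x;
    return x * (1.0f + x2 * (1.0f / 6.0f + x2 * (3.0f / 40.0f)));
}
```

At x = 0.5 (a 30 degree elevation) the result is about 0.5232 against a true arcsin of about 0.5236.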

## Monday, 9 June 2008

### Default argument values in classes.

They can be dangerous if you're overriding due to inheritance. If you inherit a class and override a function that has a default value, you should not use a default value. If you do, someone may come along and change the base class default, or worse, you might think it's okay to have a different value on your function.

For you to understand why this is a problem, you must understand that default values are resolved at compile time (from the static type of the call), but virtual function calls are resolved at runtime. For example, if you call through the base class without an argument, the inheriting class's override runs with the default value from the base class, not its own default argument. Funny huh?

Worry about it. If you need a default argument, try splitting the function into overloads instead.
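A minimal demonstration of the trap (class names are illustrative):

```cpp
#include <string>

// The default argument binds at compile time to the static type of the call,
// while virtual dispatch happens at runtime.
struct Base {
    virtual ~Base() {}
    virtual std::string Speak(std::string word = "base") { return "Base:" + word; }
};

struct Derived : Base {
    std::string Speak(std::string word = "derived") override { return "Derived:" + word; }
};

std::string callThroughBase()
{
    Derived d;
    Base& b = d;
    // Runs Derived::Speak, but with Base's default argument: "Derived:base".
    return b.Speak();
}
```

Neither class author gets what they probably expected, which is exactly why overloads are the safer spelling.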

### How many bits?

Sometimes you need to know how many bits there are in a binary representation of a number.

0 = 0

1 = 1

2 = 1

3 = 2

4 = 1

5 = 2

127 = 7

etc...

Anyway, there is a quicker way than just counting them in a loop.

first of all, break the problem down. To find the number of bits in a 32 bit int, find the sum of the bit counts in two 16 bit ints.

to find the number of bits in a 16 bit int, find the sum of the bit counts in two 8 bit ints.

to find the number of bits in an 8 bit int, find the sum of the bit counts in two 4 bit ints.

to find the number of bits in a 4 bit int, find the sum of the bit counts in two 2 bit ints.

to find the number of bits in a 2 bit int, find the sum of the bit counts in two 1 bit ints.

to find the number of bits in a 1 bit int, just use its value.

oh, is that all... how exactly is that faster than just counting the bits? Well, you can count all the ones at the same time and sum them into 16 2-bit numbers.

0x55555555 == b01010101010101010101010101010101

0xAAAAAAAA == b10101010101010101010101010101010

count = ( value & 0x55555555 ) + ( ( value & 0xAAAAAAAA ) >> 1 )

This gives us the sums of all the pairs of 1 bit ints.

now we do the same with the pairs of 2 bits

count = ( count & 0x33333333 ) + ( ( count & 0xCCCCCCCC ) >> 2 )

then onward

count = ( count & 0x0F0F0F0F ) + ( ( count & 0xF0F0F0F0 ) >> 4 )

count = ( count & 0x00FF00FF ) + ( ( count & 0xFF00FF00 ) >> 8 )

count = ( count & 0x0000FFFF ) + ( ( count & 0xFFFF0000 ) >> 16 )

and count now contains the bit count of the whole 32 bit initial value.

counting the bits in a 64 bit value:

count = ( value & 0x5555555555555555 ) + ( ( value & 0xAAAAAAAAAAAAAAAA ) >> 1 )

count = ( count & 0x3333333333333333 ) + ( ( count & 0xCCCCCCCCCCCCCCCC ) >> 2 )

count = ( count & 0x0F0F0F0F0F0F0F0F ) + ( ( count & 0xF0F0F0F0F0F0F0F0 ) >> 4 )

count = ( count & 0x00FF00FF00FF00FF ) + ( ( count & 0xFF00FF00FF00FF00 ) >> 8 )

count = ( count & 0x0000FFFF0000FFFF ) + ( ( count & 0xFFFF0000FFFF0000 ) >> 16 )

count = ( count & 0x00000000FFFFFFFF ) + ( ( count & 0xFFFFFFFF00000000 ) >> 32 )

Ooh, wasn't that easy?
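Rolled into a function, the 32-bit version looks like this (shifting first and then masking is equivalent to masking and then shifting for these mask pairs):

```cpp
#include <cstdint>

// Parallel bit count: each step sums adjacent groups of twice the width
// of the previous step (1-bit pairs, then 2-bit pairs, and so on).
uint32_t popcount32(uint32_t v)
{
    v = (v & 0x55555555u) + ((v >> 1)  & 0x55555555u);
    v = (v & 0x33333333u) + ((v >> 2)  & 0x33333333u);
    v = (v & 0x0F0F0F0Fu) + ((v >> 4)  & 0x0F0F0F0Fu);
    v = (v & 0x00FF00FFu) + ((v >> 8)  & 0x00FF00FFu);
    v = (v & 0x0000FFFFu) + ((v >> 16) & 0x0000FFFFu);
    return v;
}
```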

### Optimise early

Woo, gotcha. But seriously, don't fanny around in endless debug cycles that take an hour a shot. Where you can, make sure that your debug iterations are fast so that you don't lose your train of thought about the debugging. This can mean that you will want to optimise your code or data before it works. As long as it is a simple-to-understand optimisation, you're probably better off with it than without it.

I'm sure some people will argue with this, but my point is this: If you spend ages waiting to get to the moment that you can debug, you might only get to it 10-20 times a day, but if you optimise the code a little, especially if it's an approach optimisation, you might get to look at the same bug 100-200 times.

My caveat then is don't ever make silly little optimisations that get you back 10-20%, attempt very large ones that you know should make an order of magnitude difference.

### Stitching, Skinning, Texturing

Texturing is about making a blank, lit, grey model look cool due to the artists' 2D efforts. This has nothing to do with skinning. We were all told a long time ago about skins for characters and applications that were just textures to replace the old ones, but skinning is not that. In games, skinning is about taking a static mesh (a static posed character with his arms outstretched for some reason seems to be the norm), and bending it using weighted sections that we call bones.

The vertices of that mesh are affected by sets of bones by different amounts. The weights of each bone and the bone used are stored next to the vertices so that come render time, the renderer knows which bones to use to transform the vertex, and how much of each bone transformed vert to add into the final output vertex value.

If you only use one bone per vertex, and always have a weight of 100% for that bone, then you end up doing stitching, this is where sections of the mesh are moved solidly, and some polygons stretch to cover the gaps at the joints. This method worked well on the ps1, and is used for some ps2 titles, and using blended joint sections, this method is good for the Wii also.

So, skinning is about bending a mesh to make it look like a soft animated body, stitching is about using solid sections with some polys stretching to cover the gaps, and texturing is about using the 2D stuff. Hopefully we won't have to go over this again :)
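A rough sketch of the per-vertex blend at the heart of skinning (bone transforms reduced to translations for brevity; all names made up; weights assumed to sum to 1):

```cpp
#include <cstddef>

struct Vec3 { float x, y, z; };

// Linear-blend skinning for one vertex: transform it by each influencing
// bone, then sum the results scaled by the bone weights.
Vec3 skinVertex(const Vec3& v, const Vec3* boneOffsets,
                const int* boneIndices, const float* weights, std::size_t count)
{
    Vec3 out = { 0.0f, 0.0f, 0.0f };
    for (std::size_t i = 0; i < count; ++i) {
        const Vec3& b = boneOffsets[boneIndices[i]];
        // "transform" by the bone (a translation only, in this sketch)
        out.x += (v.x + b.x) * weights[i];
        out.y += (v.y + b.y) * weights[i];
        out.z += (v.z + b.z) * weights[i];
    }
    return out;
}
```

Stitching is this with `count == 1` and a weight of 1.0 everywhere.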

## Saturday, 7 June 2008

### Better than debugging

When you finally get why a bug is what it is, sometimes it's because you didn't understand how something worked. Debugging helps track down what's not working in your code, but also helps track down what's lacking in your knowledge, and what's wrong with your process.

When you come across something that doesn't work the way you expect it, figure out why you didn't expect it and it may increase your knowledge.

When you find that your bug comes from some oversight or laziness, then you need to figure out if you can adjust your process so that it makes it harder to happen again.

Debugging leads to clean code, not just code that exists now, but code that you will write in the future too.

## Friday, 6 June 2008

### Floats...

They are approximations. Never forget that they are approximations. Sometimes they are perfectly accurate, but only for values that you hardly ever find used at run time.

Ordered in a sane way, they are made up of 1 bit of sign, 8 bits of exponent and 23 bits of mantissa. You can think of this as sign * (1 + mantissa) * 2^(exponent - 127).

this means that the value 1.0f is actually "+1 * (1 + 0.0) * 2^(127-127)" (which works out as 0x3F800000)

if we have -1.0f, then the only bit that changes is the sign bit (the hex value gains 0x80000000 and becomes 0xBF800000)

Being able to recognize these values can be useful during debugging, especially if your pointers have these values, as it's probably a data overrun of some sort.
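A sketch of how to peek at the bits (memcpy keeps it portable and avoids aliasing problems):

```cpp
#include <cstdint>
#include <cstring>

// Return the raw bit pattern of a float.
uint32_t floatBits(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return bits;
}
```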

### Vectors: dot product.

The dot product is a really handy single return value operation that is applied to two vectors at the same time. The result of the dot product is (in technical terms) the product of the length of both vectors and the cosine of the angle between them.

How can this seemingly awkward combination of three different values be useful?

Well firstly, unless we're being careless, we will take the dot of vectors that are already in a usable form for the task at hand. For example, if we want to know the angle difference between two vectors, we would normalise the input vectors (or divide the dot result by the lengths of the two vectors, take your pick, the maths all works out the same), then we can do an acos on the return value to find out the angle of difference (although you shouldn't really use angles in professional code ;)

Secondly, sometimes the combination of values is more useful than the separated triplet.

Dot product is really useful in lighting. Doing a dot of the surface normal with the direction to the incoming light source gets us an angle-attenuated value (just like you want for diffuse mapping).

When we talk about dot3 bump mapping, we're talking about doing a per pixel dot product of the light vector and the normal in the bump map.

Specular is a little more complicated, but it's mostly using the dot of a reflection vector, created from the (surface-to-light) light vector L and the normal N as R = N*2*(L.N) - L, which is then dotted with the normalised vert-to-camera vector to give an angle-attenuated value for the specular.
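A sketch of the diffuse dot and the reflection vector (names made up; all vectors assumed unit length, with L pointing from the surface to the light):

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Diffuse term: N.L, clamped so surfaces facing away from the light go black.
float diffuse(const Vec3& normal, const Vec3& toLight)
{
    float d = dot(normal, toLight);
    return d > 0.0f ? d : 0.0f;
}

// Reflection of the to-light vector about the normal: R = 2(N.L)N - L.
Vec3 reflectAboutNormal(const Vec3& n, const Vec3& l)
{
    float k = 2.0f * dot(n, l);
    return { n.x * k - l.x, n.y * k - l.y, n.z * k - l.z };
}
```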

### Vectors are cool, angles are not.

When you are learning maths at school, you think that trigonometry will be the best thing ever to help you code games. You get taught about sines and cosines, tangents and pi. These things seem like the way to go.

Once you've spent a little time working in a games company though you'll notice that no-one actually uses them for much. Why? Vectors can do almost anything an angle can do, and when they do, they cost a lot less.

One of the things I still see people doing on occasion is figuring out what angle something is at so that they can fire off an angle-oriented event. An example of this would be an entity trying to align itself to face another entity. Vectors can do this really well; there is no reason to resort to nasty gimbal-locking Euler angles. If you take the vector that is the difference of the position of the target and the position of the thing that is trying to reorient, you can normalise that vector and find its difference from your forward vector to quickly calculate an error vector, or multiply it by the inverse model matrix to get a localised direction vector. This new vector can then be used in a number of different ways to do the work.

P(thingy), T(target)

L = normalise( T - P )

error = L - P.forward

local direction = L * P.inv

the error vector is a global space vector, useful for modifying an entity on a global scale.

the local direction vector is useful for working out what an entity has to do to itself to affect its orientation. This is really useful in flight sims and space games where there is no real "up", and gimbal lock would kill you.
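The error-vector half of the above can be sketched as follows (struct and function names made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Global-space error vector: normalise(target - pos) minus the forward vector.
// A zero result means we are facing the target exactly; the components tell
// us which way, and how hard, to steer.
Vec3 errorVector(const Vec3& pos, const Vec3& target, const Vec3& forward)
{
    Vec3 L = { target.x - pos.x, target.y - pos.y, target.z - pos.z };
    float len = std::sqrt(L.x*L.x + L.y*L.y + L.z*L.z);
    return { L.x/len - forward.x, L.y/len - forward.y, L.z/len - forward.z };
}
```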

Happy hacking.

### real modulus and fake

There are a lot of different meanings to the word modulus with respect to computers and maths. The mathematical definition is virtually worthless in C, as the C modulus operator is a lie: it is not modulus, it is remainder.

True modulus can be explained as the remainder of a divide, but it's best to think of it as a small space in which numbers go round the corner back to the bottom every time they get to the end.

The small spaces are named Nx, where x is the modulus.

for example, 4 mod 5 is 4 because the field N5 consists of { 0,1,2,3,4 }.

6 mod 5 is not 6, however, because it doesn't fit in the list of available numbers. When a number doesn't fit in the list, you take the x value away until it fits. So 6 becomes 1, which fits, and therefore 6 mod 5 is 1.

Even very very very large numbers can be modded thus.

126785913649525 mod 2 is 1.

126785913649525 mod 10 is 5.

interesting though is the case of negative numbers. 6 mod 5 is 1 because we took away 5 to get 1. -4 mod 5 is also not in the field, even though it looks like it might be because we see a 4. This is a lie. Computers will return the value -4 as a valid result of a mod operation, but modulus does not have negative numbers because it is based in N (not Z for integers).

The correct way of arriving at a proper modulus of N5 is to add until you fit. In this case, -4 becomes 1, just like 6. You can even see simply that this is true by taking 6-4 = 2 as an exemplary proof that (6)N5 + (-4)N5 == (2)N5.

computer modulus is a remainder.

BUT:

bit masking gives us a limited set of real modulus capability.

if you want to mod (2^x), then you're in luck.

9 mod 8 is 1.

-7 mod 8 is 1.

9 % 8 = 1

-7 % 8 = -7

but, using the bitwise AND (&), and the mask of the modulus we're after (2^x - 1):

9 & (8-1) = 1

-7 & (8-1) = 1
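A sketch of the difference (assuming two's-complement ints, as on every mainstream platform):

```cpp
// True modulus for a power-of-two space: the mask (powerOfTwo - 1) wraps
// negative inputs back into the field, where C's % would hand back a
// negative remainder.
int maskMod(int value, int powerOfTwo)
{
    return value & (powerOfTwo - 1);
}
```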

this is useful in other ways too.

multiplication in fields is strange and enjoyable, especially if the multiplier is co-prime with the number space.

e.g.

1N5 = 1

1*3 N5 = 3

3*3 N5 = 4

4*3 N5 = 2

2*3 N5 = 1

so... multiplication takes you around a number space in a predictable, but odd pattern...

this is the basis of many pseudo random number generators.

this is also the basis of some sweet hacks...

example

x = 0x40000000;

loop:

x *= 3; // note implicit modulus due to the size of the int ( same as mod 0x100000000; strictly this wants an unsigned type, as signed overflow is undefined in C )

if( x > 0 ) // will be false if the highest bit is set

{

// is true every other time

}

happy hacking.
