Friday 28 December 2012

Self-Drive Engage

Lately I've been thinking a lot about self-driving cars.

You see, the whole point of a self-driving vehicle is that the occupants of the vehicle are absolved from all the responsibility and all of the joy of operating the motor vehicle.

In such a scenario, which is currently playing out in both California and Nevada, the part that I've been thinking about the most is: Who should be responsible for paying the speeding tickets?

It raises a number of thorny questions, not the least of which is the difference between driving safely and driving legally.  I hope that we can assume that the car will be authorized to drive safely first, and legally second. (Please let me know in the comments below if this is not the case!)

It also calls into question the goal of the speeding ticket program in general.  If the goal is genuinely to limit the kinetic energy of the vehicle (KE = mass × velocity² / 2), then let's forget about speeding, and instead record this computed kinetic energy quantity in a continuous manner, along with the GPS co-ordinates, and at the end of the month compare it with the local authority's database of kinetic energy limits.
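As a sketch of what that continuous log might look like (every name and figure here is hypothetical, not any real telematics API):

```python
# Hypothetical continuous kinetic-energy log, sampled once per second.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """KE = mass * velocity^2 / 2, in joules."""
    return mass_kg * velocity_ms ** 2 / 2

def record_sample(log, timestamp, lat, lon, mass_kg, velocity_ms):
    # Store (time, GPS co-ordinates, computed kinetic energy) for the
    # end-of-month comparison against the authority's limit database.
    log.append((timestamp, lat, lon, kinetic_energy(mass_kg, velocity_ms)))

log = []
record_sample(log, timestamp=0, lat=37.77, lon=-122.42,
              mass_kg=1500, velocity_ms=27.0)  # ~100 km/h in a 1.5 t car
```

At month's end, each log entry would be matched by location against the authority's database of limits.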

Behavior Modification

In gaming terms, a (speeding) fine is a way of modifying behavior by producing a sharp negative feedback at random intervals.  This is among the most effective ways we know of to reduce an undesired player behavior.

Unfortunately this technique simply does not work against computer software.  The only people qualified to change the software are the developers, and it requires active participation on the vehicle owner's part to update the software on a regular basis.

Insurance

I hereby propose an insurance-based licensing scheme for self-drive vehicles.  I propose that in order for a vehicle to (legally) use a self-drive mechanism, the owner of the vehicle must purchase insurance from an organization that is both state licensed and independently audited.  Eligibility for any given insurance policy will be based on the make and model of the vehicle, plus the software package, version and database of the self-drive mechanism.  At the end of the month, all of the occasions when vehicles on the same policy have exceeded the posted kinetic energy limit are summed up, and it's the insurance policy fund which pays out to the state, with no additional per-vehicle expense to the owner.
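A sketch of that month-end settlement, with made-up policy names and joule figures (only the pooling rule itself comes from the proposal above):

```python
from collections import defaultdict

# Each record: (policy_id, recorded kinetic energy, posted limit), in joules.
records = [
    ("policy-A", 600_000, 500_000),  # exceeded the limit by 100,000 J
    ("policy-A", 450_000, 500_000),  # within the limit
    ("policy-B", 550_000, 500_000),  # exceeded the limit by 50,000 J
]

# Sum each policy's exceedances; the policy fund pays the state,
# with no additional per-vehicle charge to the owner.
owed = defaultdict(float)
for policy_id, ke, limit in records:
    if ke > limit:
        owed[policy_id] += ke - limit
```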

This creates a market for insurance policies.  You can purchase cheaper insurance by buying more conservative software, or pay more in insurance but arrive at your destination sooner with more aggressive software.  As technology and software changes and improves, so too will the market for your self-drive insurance match the current conditions in your state.

And if the price of the insurance is too high for your particular vehicle (e.g. it's too old, or too unsafe, or you're currently out-of-state), you can always opt-out and disable the self-drive feature of your vehicle.

Incentives


This proposal creates the right incentives: the software developer must use the best software engineering techniques; the vehicle owner must keep their vehicle updated with the latest software; the insurance socializes the speeding costs amongst all vehicle owners of the same class; and the market ensures an efficient allocation of policies and choice of software programs across all the vehicles in the state's fleet.

The one piece of the puzzle that's missing is the state.  Suppose that a kinetic energy limit on a particular stretch of road is changed, but the software developers aren't notified in a timely manner.  In this case, the state itself has been negligent, and it's the state itself which should be fined for putting motorists at risk.  In the same way that the state must adequately signpost the speed limit, so it should be its responsibility to notify the state-licensed self-drive software developers.

Speeding?

Of course, I've used speeding as an example of unsafe vehicle behavior, but this regulatory framework extends in a natural way to all vehicle behaviors - stop signs, following distances, red light rules, yielding to buses on residential roads.  Even accident compensation, emission standards, and fuel usage.

The only exceptions I can see are when a vehicle is attempting to drive safely rather than legally.  Without getting all Carl Sagan here, it seems that we could use the black-box data to evaluate all collisions (few) and near-misses (many) to improve the software and improve safety over time.

Failure To Yield

Interestingly, the large majority of vehicle collisions are caused by one simple mechanism, "Failure To Yield".   That's what stop signs and traffic lights and turning circles are all about. A self-drive vehicle, equipped with appropriate sensors, has no reason to stop at stop signs, nor yield at yield signs (if it can negotiate with another self-drive vehicle to yield instead), other than to avoid startling other human drivers.

Reality?

Will it happen?  An insurance-based self-drive licensing scheme? I don't know.  If anyone knows of the actual proposed self-drive licensing situation, please post it in the comments below!



Saturday 15 December 2012

The Infinite Blogpost

There's an Indie project floating around at the moment, that's being touted as infinite.

For some reason, it really bugs me when people take a perfectly good word, a word like "infinite", and then apply it incorrectly.

You see, a desktop computer is finite.

Suppose your desktop computer is a Commodore VIC-20, with a whopping 3.5 kilobytes of memory.  Then there are only 256^3,583 different states that your desktop computer can be in.

Sure, that's a lot of states, but it's certainly not infinite. You could, at least in principle, enumerate them all.  And you'd find that there are exactly 256^3,583 of them.  That's the very definition of finite.
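You don't have to take that count on faith. Python's arbitrary-precision integers will happily compute it (assuming the 3,583 usable bytes quoted above):

```python
# One factor of 256 (possible values per byte) for each byte of memory.
states = 256 ** 3_583

# An enormous number, but a perfectly concrete, finite integer.
print(len(str(states)))  # how many decimal digits the state count has
```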

Finite software runs on finite computers


Let's take a closer look at those finite states on the VIC-20.  We know that the computer is finite, but maybe there is some magical technique by which we could write a computer program with an infinite amount of state?

Unfortunately, no, we cannot.  The pigeonhole principle forbids it.

Fast forward to the Modern Era


Oh? Your computer has more memory than a VIC-20? 4 Gigabytes perhaps?

Well that's still just 256^4,294,967,296 states.  It's still not infinite.

Oh, you have a 3TB hard drive as well?

Okay, so now you have access to an additional 256^3,298,534,883,328 different states.

That's a lot of storage.  These numbers are large, but they're all still finite.

The problem is that infinity is just so mindbogglingly larger than any number you could possibly store on your hard drive.

You'd need a technology shift to be able to store infinite state.

Bandwidth, over time, is Infinite

So hopefully I've managed to convince you that your computer, and by extension, the software running on it, is finite. Regardless of what that hardware is.

But now consider, the curious case of your internet connection.

If you're like me, you have a bandwidth cap of 4GB per month.  Then it's true that, for any particular month, your bandwidth is finite.

But consider your 4GB bandwidth extending over time.

I can send 8GB in 2 months.  Or 40GB in 10 months. Or 400GB in 100 months.

Here's the curious thing: if we assume that time is infinite (a big assumption, granted), then for any amount of state, we can calculate how many months it would take to send that state on your internet connection - divide it by 256^4,294,967,296 once per month, and count the months until you're down to a single state.


Let me repeat that, given any amount of state, we could send that state in a finite amount of time, over your internet connection.
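Measured in bytes rather than state counts, that calculation is a one-liner. The 3TB figure below is just a worked example reusing the drive size from earlier:

```python
# At a 4GB/month cap, any finite amount of data transfers in finite time.
BYTES_PER_MONTH = 4 * 1024 ** 3  # 4,294,967,296 bytes

def months_to_send(total_bytes: int) -> int:
    # Ceiling division: a partial month still costs a whole month.
    return -(-total_bytes // BYTES_PER_MONTH)

print(months_to_send(3 * 1024 ** 4))  # a full 3TB drive: 768 months
```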

And that is what is meant by "Bandwidth, over time, is infinite".

Sunday 9 December 2012

Don't Repeat Yourself

When making Indie games, there's a mantra that bears repeating over, again and again:

“Don't Repeat Yourself”

It's actually a corruption of a much deeper truth. If software development excites you, I urge you to read all the gory details about that deeper truth over on wikipedia. (If you want to go read it now, I'll still be here when you get back.)

When making Indie Games, however, “Don't Repeat Yourself” means something different. It's rooted in the notion that (calendar) time is the most precious resource. It's the free variable with the highest opportunity cost. Every extra hour something takes to do is an hour you can't spend working on your next goal.

So what happens if you have to repeat a previous step? Replace an image that's no longer working? Change a sound effect? Rebuild a level? What happens when you have to repeat yourself?

For every repeat, the time you wisely invested into the previous version of that asset is effectively wasted.

You would have been better off using that initial time reading game development blogs. Or meditating. Or playing Dino Switch II.

So what does "Don't Repeat Yourself" really mean? It means that as an Indie, (almost) every piece of content you put time into, needs to ship at some point in the future. It means that each time you touch an asset, you should treat it as the last time you'll touch it before it ships.

Your asset pipeline (as an Indie) needs to go :

Concept -> Placeholder -> Shippable Asset


Cappuccino


Contrast that with the apocryphal AAA game producer: “Make three cappuccinos... then bring me the best one!”

Of course, what he really means is “Have three baristas separately make three different cappuccinos... then discard the two which aren't as good.”

So here's a Pop-Quiz, are those two discarded cappuccinos wasted?

Some would say “Yes”, referring to the ingredients and skill which went into the preparation of content which will never be consumed.

Others would say “No”, because the producer couldn't know ahead of time which barista would prepare the best coffee. Three times the amount of resources have gone into the production, in exchange for an improvement in quality and a substantially reduced chance (risk) of getting a bad coffee.

Don't Repeat Yourself


As an Indie, where (calendar) time is the most precious resource, “Don't Repeat Yourself” means shipping every piece of content you produce. If you somehow find yourself with 3 cappuccinos, then you're going to be drinking them all!

But does that also mean you need to ship the pieces which didn't work out? Not at all. Because the overarching process goes like this:

  • Without repeating yourself, systematically lift every asset in the game up to shippable quality.
    (Upon completion, your game as a whole ought to be shippable)
  • Do a polish pass where you replace only the assets which are (a) quick to improve, and (b) have a big impact on quality
  • Ship it


And that's indie game development...

Saturday 3 November 2012

Launch: Dino Switch II

The MissingBytes Blog is very proud to announce the launch of Dino Switch II, available today for iPhone, iPod touch and iPad!


It's a free download from the Apple App Store, so what are you waiting for?
Download it today!


Release

These days, everybody seems to be asking me, "When is your game going to be done?" or even more bluntly, "When is it going to be finished?"

In complete sincerity, I don't actually mind the question.  It's perfectly natural for people to be curious.  But there's a deep assumption underlying that question that I really struggle with.  It's the idea that a game might ever be finished.


You see, back in the "packaged goods era", we used to make video games (i.e. software) that would be burnt into silicon ROM chips, or etched onto optical discs.  This media would then be wrapped in alternating layers of plastic and cardboard, and ultimately stickered and sold at an outrageous mark-up at a magical place called the "point of sale".

Back in the old days, it made sense.  We talked about finishing games, sometimes even had to rush them, to get them on time to that magical faraway land - the "point of sale", where they could be released.

  I like to live in more modern times.

These days, when a game is ready, it is "Launched."  Much like a ship on its maiden voyage on the open ocean, a game is launched onto the open internet, into unfamiliar territory.  There will be changes.  There will be problems.  Maybe even icebergs.  When we come across these unknowns, we are no longer surprised. We take them in our stride and carry on.

More important than that, and just like the ship on its maiden voyage, there will be an ongoing dialogue between the new captains and the old owners.  And that dialogue will evolve and change over time.

Launch

The biggest difference between Releasing a game and Launching a game is that somebody still cares.  Long after that initial purchase is made.

In this brave new online world, as long as you're still playing my game, as its creator, I still have an obligation to listen to your opinions.

So when someone asks me, "When is your game going to be finished?" what I'm hearing in my head is, "When are you going to be finished with your game?" i.e. "When are you going to stop caring about your players?"

And the answer to that is "Never."


Monday 24 September 2012

Beta

In the world of stuff, there's things that you know, and things that you don't know.  It can sometimes feel like all of us are simply questing to try and know more and more stuff.

But there's another way of looking at the world of stuff.  We can also split stuff into the things that other people know, and also things that other people don't know.  It can sometimes be confusing thinking about what other people know.  Fortunately we live in a social world so we should all be good at it.

In any case, Mathematicians, like me, sometimes like to use Venn Diagrams to describe these kinds of relations:


We can then build fun subcategories, like "Secrets", which contains all the stuff that we know, but that no one else does.

Or "The unknown" - that's all the stuff that no one knows about.  Yet.

But there's another interesting subcategory that I wanted to talk about today.  I'm sure there's a great German word for this category, but I don't think it has a name in English. It's that category of stuff that other people know (about us), but that we don't know about ourselves.  It's all the stuff like:

  • What does the back of my head look like?
  • What was everyone laughing about just before I walked in to the room?
  • Why do I always struggle to open doors?
And the all-consuming:
  • Does this pair of jeans make my anorak look out of proportion?
Even in principle, this stuff is unknowable to us. Why? Because even if we were to ask a trusted friend, there's a complex web of social contracts which ensures the information they relay to us will be 100% believable, yet have no basis whatsoever in reality.

Beta Testing


When we're making video games, there's a number of tools we can use to improve the quality of the game prior to launch.  One of the most useful is Beta Testing.  Of course, we do all manner of testing when making video games, but Beta Testing is the one which can really make or break a game.

In a true Beta-Test, all of the intended features are actually present in the game in a playable state.  It might even be one of the first builds where this is true.  The game is then played by people who are outside of the development team.

In a (true) Beta-Test, it's the development team looking for feedback about all the stuff that other people know.


So if you're ever asked to Beta Test some software, what does that look like?


Well first things first, remember the social contract?  That's not what a Beta Test is about.  The dev team isn't looking for praise or apologies, they want solid, genuine feedback about the features in the game. Be that an emotional response ("it felt great when..."), or something actionable ("I couldn't reach the exit because...")

Naturally they're looking for confirmation about the stuff they (think they) already know, so be sure to include some high level comments about the stuff that seems obvious to you, the stuff that "...everybody knows...."

More valuable to the dev team is going to be feedback about stuff they don't know.  Try and tell them about the bugs that only you encountered.  Try and share anecdotes that are relevant, but that will be unknown to the dev team.  Try and give them access to the areas they can't otherwise see.


The Beta Blog Post


So all of this is stuff that I know, and after reading through it, now it's stuff that you know too.  I happen to think this is important stuff that everyone should know, but I may be in the minority on that one.

However, there is a chance that this blog post contains serious defects.  For example, I know that this post still has lots of room for improvements.

For that reason, I'm officially declaring this blog posting to be a "Beta Blog Post".  What that means is, I think it contains most of the good ideas, but there might be stuff about this topic that I don't know (yet) or haven't covered adequately.

Why not help me out and use the comments section below to give me some feedback, maybe an emotional response, or even something actionable?   ;)

Wednesday 12 September 2012

The Big Picture

"It's just behind that one over there!"
I always have this kind of mental picture in my head about all the things that need to happen before I can launch a game.

It's filled with lots of tiny little things :

  • Putting in a 'squish' sound when you press a button on the main menu. (5 minutes)
  • Re-export a piece of artwork with different quality settings. (10 minutes)
  • Building out the website with a FAQ section. (15 minutes)

For each one of those little user stories, I'm thinking : "Oh, I can do that, that's easy."

There's also a whole bunch of medium sized things :


  • Make the draw routines faster so it doesn't skip quite so much. (1 hour)
  • Add a 'Load' screen to, umm, cover the loading. (1.5 hours)
  • Make the 'Undo' button work properly. (2 hours)

I guess if I had to, I could cut those user stories, but in the back of my head I'm thinking, "Oh, I know how to do that too, it's not that hard..."


Of course, there's a whole bunch of difficult things to do too, but even those I'm not too worried about.  I know in principle that they're solvable, and I've solved similar tricky problems in the past.

The problem comes when I try and find all the things that need to happen before I can launch a game, and then add up how much time it is going to take to get all of those things done.  It's a huge chunk of time. It can be a little daunting.

The Plan

My original plan was to have an app out on the App Store by now.  So I guess that didn't happen :)

But I hope you'll allow me the indulgence of quoting former US president, Dwight D. Eisenhower:
         "Plans are worthless, but planning is everything."


The easy thing to do right now would be to just keep chipping away at the original plan.  I am making good progress after all, even if there has been considerable discovered work.  It definitely feels like I'm getting closer to something.


"... That one over there looks easier to get to..."
But maybe there's a smarter way through here.  Maybe I can find a smaller game to make.  One that I can launch sooner.  Something to at least establish a presence, and then use that as a base to launch bigger games.

...hmmmm...

What would you do in this situation? Any advice? What do you think I should do?

    ... watch this space ...



Sunday 26 August 2012

Officespace

What makes a great work environment?

I've had a lot of time to think about this one over the years, mostly when working in cramped, overcrowded, smelly, noisy, unsafe or otherwise distasteful locales.

I always used to think it would be in a context of setting up my own team in a new office.  Now that I've gone Indie, that's still true, but it's a team of one.

Space

The most critical element in the work environment is space.  You need enough space to have all your working equipment easily accessible.  I like to have everything already plugged in and ready to go, but switched off at the wall.  And don't forget to move around and stay active: I've set up a typing station, a coffee station, a thinking station, a reading station, a print/scan/copy station, an audio recording station, etc., each already prepared for a particular type of activity.

If you're a fan of John Cleese, you'll know that low ceilings encourage closed thinking modes, and high ceilings facilitate open thinking modes.  Factor that in the next time you book a meeting room for a brainstorming session (high ceiling) or triage session (low ceiling).

Time

How many times have you heard, "It's done when it's done!"

For many creative tasks, you can estimate how long something will take, but not very accurately.  I find it useful to work in blocks of around 2 hours. I pick a task I think will take an hour to do, and work on it until it's done.  If there are still 2 hours left before the next planned disruption, I'll pick another ~1 hour task, and work on that.

It's crucial to finish every day with a win.  Ernest Hemingway says it much better than I ever could:

"The best way is always to stop when you are going good and when you know what will happen next. If you do that every day … you will never be stuck."

Some employers believe in billing for hours worked, and still others believe in crunch.  If you're in a creative industry and you're being micro-managed like that, and want to have a career rather than a burn-out,  I strongly recommend considering your options.

Location

The day doesn't stop when you walk out of your office.  You still need to get home (or to the gym, or to the pool, etc)  How much is that commute costing you in terms of money, time and stress?  Is the parking lot safe? Can you even find a park when you need to? Can you commute when it's raining/snowing without getting wet?

Working from home is great, but be sure to separate work from home with a doorway or even a staircase.

If you're forced to reuse equipment, then create separate user accounts, facebook accounts, email accounts etc, and be sure to log in and out every time you transition from home to work and vice versa.

If you're a real digital nomad, why not set up separate home and work environments on different USB sticks using portable apps?

Ergonomics

There's a lot of literature about setting up your chair, desk, keyboard, monitor for comfort and usability.  It really does make a difference.  If you're still a skeptic (like I was up until recently), just try it for a week.  If you don't see an improvement, you can always go back to your old bad habits.

...And don't be shy, help out a neighbor.  If you see a colleague struggling with RSI, or squeezed in to the wrong sized chair, bring it up, maybe it's something you can help with.

Sound

Headphones are not enough.  You need a reasonably quiet, connected space.  We humans are social animals, so complete silence can be too isolating.  We need to feel connected with our colleagues, and with the wider environment too, but not distracted from the task at hand.

Oh, and a pet-peeve of mine, a place to take private phone calls.  I hate reciting credit card numbers in a hallway frequented by programmers with eidetic memories.

What else?

What makes your work environment great?  How could it be even better?


Friday 3 August 2012

Status: Blocked

I'm blocked.

It's not writer's block. It's a different kind of blocked, one that's considerably harder to unblock.

Efficiency and Effectiveness


When measuring (or estimating) the output of a team, it's sometimes convenient to look at two different axes:

  • Efficiency - How much work/effort/resources are expended to produce a given amount of output. 
  • Effectiveness - How much output is produced in a given amount of calendar time.

Our natural instinct is to try and optimize efficiency, trying to reduce the cost.  Or alternatively, increasing output for the same amount of fixed cost.

I'm not sure this is the best strategy when it comes to video games. The market moves so fast, I think it makes more sense to push for more effectiveness - maximizing the output per unit of calendar time, while staying within our cost constraints.

Diurnal Cycles


I find that at different times of the day, I'm better at certain types of tasks. Sometimes more analytical, other times more integrative. Sometimes more strategic. Some times of the day are better suited to striking new ground, and others are better for polishing and evaluating existing work.

With this in mind, I keep a number of different lists, each based around a type of work. For example, when I have all the microphones and speakers setup to do audio work, I want to get as much done as possible (efficiency), but I also want to make progress on my current goals (effectiveness).

Keeping these in balance is essential, and I allocate up to 20% of my time just in planning and co-ordination to ensure that I'm working on the right things at the right time.


Transition


But the problem I'm facing right now is a shortage of time. I'm currently in the process of moving home from one continent to another, so I'm averaging maybe 10-30 minutes per day for productive work. Compounding this, most of my equipment is in transit and won't be available until late August.

Sharpening the Tools


So what do you work on, when you can't make progress on your goal?

You sharpen your tools – get the latest versions of your software, defrag the hard drive, verify your backup procedures are working. All the things you won't have time to do later.

I know a lot of my friends out there in corporate land are currently caught between changing requirements, and now is your opportunity to do the same thing too. Sure you could sit around and play video games all day, but maybe now is the time to finally learn Python? Or figure out rigging? Or fix the ergonomics on your monitor, keyboard and chair?


How do you remain effective, even when you can't be efficient?

Saturday 21 July 2012

Finding The Fun

When making a game, one of the things I like to do early, and often, is 'finding the fun'.  It's the process where you take a long hard look at your game to answer one simple question:

"What makes it fun?"

What is fun? How do we know when we've found it? I struggle with these kinds of questions because I don't play games the same way gamers do.  I'm a deconstructivist.  I tear games apart trying to understand how they were made, what their authors' motivations were, what choices and alternatives they shied away from.  That's fun for me, but I don't think that's much fun for you.

So instead, I hand a prototype of my game over to a gamer, and I watch them play. Mostly I'm looking for facial expressions - joy, wonder. But also their comments. What do they keep coming back to? If I'm trying to 'find the fun', I pretty much ignore all negative feedback, and just focus on the good.

One question I ask a lot is, "How could I make that part better?"  When I ask that question, what I'm really asking is, "What part of that do you want more of?"  If I hear multiple people want more of one thing, then I'll go and brainstorm about how to add more of that thing, using the actual gamers' suggestions as a starting point for the brainstorm.

A Quick Note On Design Styles

If you've been reading this blog for a while, you'll know I'm a pretty hard-core programmer that's been fortunate enough to work with a great many super talented designers.  I've found they tend to have different styles and approaches.  Some, like me, are obsessed with finding the fun.  Others are much more interested in telling a story, or bringing the player along an emotional path.  Still others want to make elaborate systems for the player to explore, requiring a delicate zen moment on the player's behalf to catch glimpses of its beauty.

And many others besides.

It's convenient for me to split designers into two camps - those who try to find the fun earlier, and those who are confident with their tools and process, and know the fun will come later.

Take a character designer for example, often in the second camp.  Someone who is building up the backstory for a character, figuring out their traits and abilities in-game.  Where they got that scar, who their childhood friends were, and how all of that will affect their interactions with the player.  In isolation, these choices might seem meaningless and trivial, but when woven into a broader narrative, this minutiae can become intensely compelling and enjoyable.

Now, I'm not saying either camp is more or less effective. There's definitely room for both.  I'm just saying I find it more comfortable to follow along with the first camp, as I find reassurance in the measurability of their processes.

Which is important, because as a programmer, when I'm designing, I constantly have to ask myself, "What would [censored] choose in this situation?"

One advantage of working closely with so many designers is the freedom to design in the style of those great designers.  I'm not stuck in any one particular school or with any baggage.

And now that I'm Indie, when I don't know the answer, I can just call them up on the phone and ask.

Before the Prototype

The approach above works fine when you have a prototype.  But what do you do before that?

Cardboard is great for this.

Scribble on cardboard, chop it up into pieces and move it around a page.

Make flowcharts, roll dice.  Get some trusted friends over and say "Hey, let's pretend we're playing a video game right now."  And you go through the exact same (watching) process outlined above, but without the actual prototype.

Before the Cardboard

Well that may be fine if you know what game you want to make, but what do you do before that?
Here's where you have to use the power of imagination.

 That's where I'm at right now.

I'm imagining what it would be like for my trusted friends to be playing a cardboard cutout of a prototype of a final game.  And then asking my imaginary designer buddies in my head :

"What makes it fun?"


Saturday 14 July 2012

Community

I think technology was always about bringing people together.  You can look at all the greatest inventions we humans have, things like mobile phones and automobiles, and I think you'll find their most sweeping enduring legacy will be in social terms - how they changed the way we as individuals interact with our friends, peers, families etc.

I think that used to be true of video games too.  The (enduring) impact of a video game is cultural.  You could frame it as, "How does [the game] let different players interact, come together, be playful, etc. in a social sense."

The Master and the Student

Take the arcade classic, Super Street Fighter II.  It has this simple mechanic where 1 credit buys 2 players a best-of-three match.

The master camps out on the machine - he plays for free the whole day.  One by one, the students come up to feed the machine a quarter.

Round 1: The master refuses to attack.  He ducks, dodges, feints and runs.  But without attacking, ultimately the score will be Master: 0, Student: 1.

Round 2: The master attacks, but only using one technique.  Punches, or fireballs.  Whatever is the student's weakness.  Of course, Master: 1, Student: 1.

Round 3: It's on!  In the deciding round they have a real fight, with the master either exploiting or avoiding the student's weakness depending on his temperament.

What's fascinating to me about this, is the community of players that it builds up around the game.  There's this implicit trust that builds up around the master/student relationship that carries forward when the student becomes skilled enough to provide a challenge to the master.

Players can have a dialog about the game, even when they're not actually playing.  If you're on the bus, you can still have a social moment with another SFII player.

The Metrics

These days, we all love our metrics. MAU - Monthly Active Users.  How many people (in numbers) are playing your game?

I think sometimes we get so blinded by the ability to measure what our players are doing, we forget to look to the quality of the interactions between our players, and (to some degree) to the qualities of the players themselves.

Take a closer look at those SFII masters.  They played that way because that's how they learned to play.  It was cultural.  But they were also self-selected because they were the players who wanted that kind of experience.  They're the kind of players who foster new players and (through their actions) create a strong community.

You can't write a metric to capture those kinds of qualities.  (And even if you could, what would you use it for? It could only tell you about the community you had in the past, not about the one you're going to have in the future.)

A recipe?

Bringing it full circle, I believe that video games are still about bringing people together.  The games that do that well are the kinds of games that I want to make, and the kinds of games that I'm trying to make.

Of course I want a large community around each of my games, as measured by MAU.

But before that, and more importantly than that, I want a strong community around my games, as measured by good-vibes and actually talking with real people.

Now it's your turn to help me help you : How do you build the right community around your game?

Let me know in the comments below, or drop me an email and let's talk!



Thursday 12 July 2012

I Make Video Games


For me, it all started back in 1985 with the Commodore Vic-20.  I'd while away the hours typing in games from the magazines and storing them on magnetic tape.


Fast-forward to 1995 and the Amiga.  Some buddies and I launched Super Skidmarks to outstanding critical acclaim.



I love the process of making video games.  It's a series of puzzles.  Solving each puzzle unlocks even more puzzles.  As you get deeper and deeper, the puzzles get more and more intricate, and it becomes harder and harder to distinguish the best solution amongst all the correct solutions.  Always the fascination remains.


I love making games for gamers. I love passing the gamepad over to a gamer - passing the gamepad over to you - to see how you'll react.  There's this one moment that I really love in game development.  It's that moment when I try to probe you for feedback on my game, but you're so engrossed in the gameplay, you're physically unable to stop playing long enough to engage in meaningful conversation.


In 2005, I launched Black & White 2 with Lionhead Studios on the PC.  The game was a technical masterpiece and wildly ambitious.


Over the last few years, I've worked on many, many, many, many unreleased projects.  Those are the projects during which you grow the most.


I've been incredibly fortunate to work with, and learn from, so many amazingly talented people.  From programmers and artists, from QA and production.  Gifted musicians and mocap performers.  Everyone.  Thank you so much!  It's from you I learned everything.



Most recently I've been fortunate enough to work on the Mass Effect franchise with BioWare and on the Rainbow 6 franchise with Ubisoft.


Also the surprise hit at this year's E3, Watch Dogs.


 
But when I sit back and reflect, it feels like I've been working on increasingly smaller and smaller pieces (with ever increasing detail) of increasingly larger and larger games.  I'm always truly excited to be a part of a AAA blockbuster... but I miss that visceral connection with the gamer that comes with smaller teams and shorter development cycles.


It's taken me a while to realize, but the thing I love the most about video games, the reason I got into all of this in the first place, is when your delicate, fragile little game (or big game!) that you've put so much effort into finally makes it out to the gamers - to you.  Well... that's why I Make Video Games.


And while I never stopped making mini-games (and playful spaces) along the way, I've been prevented from finishing almost all of them because of contractual obligations.

That's why, as of today, I've returned to Indie Game Development.  To make video games in their entirety.  To make every little piece, from top to bottom, everything custom crafted with gamers in mind.  To make the best games for gamers.


To make video games, for you.

Saturday 7 July 2012

The Compressed Reconstituted Potato Container Dilemma

  There's a particular brand of compressed reconstituted salty potato snack that comes packaged in a distinctive cylindrical container.  It appears to enjoy a certain popularity amongst the kids these days.

  I mention it, because it neatly divides the human population into two mutually exclusive groups.

  • Those whose hands are small enough to reach inside the container to retrieve a salty snack.  
  • Those who are unable to reach inside the container to retrieve a salty snack.


Small hands, Group A.


When there are only a few salty snacks present in the bottom of the container, those with small hands can reach deep down inside to obtain one more.

Delicious!

Large hands, Group B.


By contrast, when there are only a few salty snacks remaining, members of Group B must steady the container with one hand whilst simultaneously securing the container's mouth with the other.  They then gently tip the container so as not to risk undue spillage.


The Paradox

I can't reach!

  Consider what happens when two friends of the same group attempt to share when there are only a few salty snacks remaining in the bottom of the container.  Friends belonging to Group A will proffer the container at a slight angle, allowing their compatriot the opportunity to reach inside and obtain a satisfyingly tasty morsel.

  Likewise, friends belonging to Group B will pass over the entire container, to allow their compatriots the use of both hands to regulate flow control, minimizing spillage.

  For sharing involving purely Group-A, or purely Group-B, no conflict will arise, so let us not consider such cases any further.

  The paradox occurs when a member of Group A and a member of Group B attempt to share a container. 

In this instance, the container will be alternately tilted or passed, causing dissatisfaction, confusion, and lamentation for both participants.

The Dilemma


The paradox is readily observed and thereby quickly resolved in the case of the almost-empty container.

But consider the case of a freshly opened cylindrical container of compressed reconstituted salty potato snacks.

Here we have a much more subtle problem of etiquette :
  • Group A members have the expectation that the container will be offered at an angle.
  • Group B members have the expectation that the container will be passed to them.
When the container is full, both the member from Group A and the member from Group B are able to obtain snacks using either technique.  And yet each will be subtly offended by the other (being alternately greedy or lazy with each exchange), with neither knowing why the other is indulging in such bizarre counter-intuitive behavior in such a trivial matter.

  Indeed, even in principle, it is not possible for such a mixed group to discover the cause of the dissatisfaction until some fraction of the salty snacks has been consumed and the first observable behavioral differences arise - by which point the damage has already been done.

The Designer


I see this happening all the time in Game Design. A designer will come up with a system based on their personal approach to gaming.  They might be a Group A, or a Group B.  It doesn't matter.  They'll take their design to focus testing, and these preliminary results will show, unanimously and unambiguously, that the feature is working as designed.  But I know differently.  I can tell when I'm watching the focus test in progress.  Half of the respondents will have this one particular twitch that I've come to recognize signifies something isn't quite working the way they expect, but they don't have the language to report the dissonance.  And the other half will be fine.

Occasionally, when the two groups are of vastly different sizes, and the designer is a member of the minority, I see it on the faces of all of the respondents.  But the feature invariably makes its way into the game.  And why not?  After all, it's been focus-tested and there were no problems reported.  What more can a designer do?

(As an aside, if you ever try to dissuade a self-righteous designer based on a few observed facial tics, you'll learn just how quickly you can lose credibility... even when your initial observation is subsequently confirmed independently.)


The Perils of Focus Testing

Has this happened to you? A feature focus tested without problems, but gets slammed in the marketplace?  Why not tell me more about it in the comments below.


Sunday 17 June 2012

The Power Iteration

In the middle of the last post, we needed to find the largest eigenvector of a symmetric 3x3 matrix.  I used the Power Iteration to find the largest eigenvector, mostly because it's such a forgiving algorithm:  Even a trivial implementation will still get a correct result.



In this post, I'm going to talk about how to actually write the Power Iteration, along with a few caveats along the way.

Eigenvectors and Eigenvalues


Okay, okay, okay, so what actually is an Eigenvector/Eigenvalue?

Let's take an even further step back: what is a matrix?  In mathematics, we think of a matrix as representing a Linear Transformation.  Intuitively, we might think of it like this :  A matrix takes an incoming vector, and forms an output vector, where each output term is just a linear combination of the input terms.

[a, b,           (3,             (3*a + 5*b,
 c, d]    *       5)      =      3*c + 5*d) 
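If you'd like to see that in action, here's a quick sanity check in Python (numpy is my addition here, and the concrete numbers 1, 2, 3, 4 are just stand-ins for a, b, c, d):

```python
import numpy as np

# A matrix takes an incoming vector to a linear combination of its entries.
m = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([3.0, 5.0])

out = m @ v
print(out)  # [3*1 + 5*2, 3*3 + 5*4] = [13, 29]
```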


Let's consider a really simple matrix, called Scale79:

[7,  0,
 0,  9]

You can see Scale79 takes a vector like (1,0) and returns (7,0).  We say that (1,0) is an eigenvector of Scale79, with an Eigenvalue (λ) of 7.  Notice that (2,0) is also an eigenvector, as is (3,0), (4,0), (-4,0), and also (7,0) and (49,0).

Scale79 also takes the vector (0,1) to (0,9).  We say that (0,1) is an eigenvector of Scale79, associated with the eigenvalue (λ) 9.

Our example matrix, Scale79, has two eigenvalues, 7 and 9.  Its eigenvectors associated with the eigenvalue 7 look like (1,0), and its eigenvectors associated with the eigenvalue 9, look like (0,1)
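Here's the same thing in numpy, if you'd like to play along at home (numpy being my addition, not part of the original example):

```python
import numpy as np

# Scale79 from the text: eigenvalues 7 and 9.
scale79 = np.array([[7.0, 0.0],
                    [0.0, 9.0]])

# (2, 0) is an eigenvector with eigenvalue 7: the matrix just scales it.
v = np.array([2.0, 0.0])
print(scale79 @ v)  # [14, 0] == 7 * v

# (0, 1) is an eigenvector with eigenvalue 9.
w = np.array([0.0, 1.0])
print(scale79 @ w)  # [0, 9] == 9 * w
```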

Do all (square) matrices have eigenvalues?  Well, yes and no.  It's true that all n-by-n square matrices have n eigenvalues, but only if we allow ourselves the luxury of eigenvalues taking on complex values.  Complex valued eigenvalues can sometimes be subtle to deal with, especially when using floating point math, so let's restrict ourselves to real numbers.

In the real-number-only case, it turns out that every symmetric n-by-n matrix is guaranteed to have n real eigenvalues (counting multiplicity).  Symmetry is sufficient, though not necessary - a non-symmetric matrix can still happen to have all-real eigenvalues.

This beautiful result, and so much more (including some technical details regarding multiplicity and repeated eigenvalues) is well beyond the scope of this blog, but I urge you to read about Jordan Normal Form for an introduction to this fascinating topic if you're interested to know more.

Let's look at Scale79 a little closer.  We know it has two eigenvalues, 7 and 9, and it scales eigenvectors like (1,0) and (0,1) by those eigenvalues.
 
Scale79 * (x,0)ᵀ = 7 * (x,0)ᵀ

Scale79 * (0,y)ᵀ = 9 * (0,y)ᵀ




In some sense, we can replace a costly operation (matrix times vector) by a much simpler operation (scalar times vector), but only for some vectors!

Eigenvectors are like a sort of super matrix compression technology, but it only works some of the time!

Singular Value Decomposition

Before we go much further, let's get something out of the way.  If ever you actually want to compute eigenvalues and eigenvectors for a matrix, you just need to use the library function called "Singular Value Decomposition" or S.V.D.  Any half-way decent matrix library will provide an implementation which will factorize your matrix into a length-preserving matrix (U), a matrix of (generalized) eigenvalues for scaling (S), and another length-preserving matrix (V).

M = U . S . V    (via S.V.D.)

Using this decomposition, it's trivial to pick off the largest eigenvalue (it's almost always the first one!), and its associated eigenvector (it's the first row of the V matrix)

For example, in GNU octave :
octave> scale79 = [7,0; 0,9]; [u,s,v] = svd(scale79)
u =
   0   1
   1  -0

s =
   9   0         <---- Largest Eigenvalue (λ = 9)
   0   7

v =
   0   1         <---- Eigenvector associated with 9
   1   0

Update: The SV Decomposition is only equivalent to the Eigen Decomposition if the eigenvalues are positive - best to use your library's eigen solver instead...
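In numpy terms, that eigen solver could look something like np.linalg.eigh, which is specialized for symmetric matrices (numpy is my tooling choice here, not the blog's) :

```python
import numpy as np

scale79 = np.array([[7.0, 0.0],
                    [0.0, 9.0]])

# eigh is numpy's solver for symmetric (Hermitian) matrices;
# it returns eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(scale79)
print(eigenvalues)            # [7. 9.]

largest = eigenvalues[-1]     # the largest eigenvalue, 9
v = eigenvectors[:, -1]       # its associated eigenvector, (0, 1) up to sign
print(largest, v)
```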

The Power Iteration


The s.v.d. is great if we want all of the eigenvalues and eigenvectors.  But sometimes the matrix M is too big to fit in memory, or perhaps we don't have handy access to a reliable s.v.d. implementation (e.g. on a GPU or an embedded controller).

The Power Iteration works by doing the simplest possible thing - it mindlessly multiplies the matrix by a vector (any vector), repeatedly, again and again, over and over until the vector becomes very close to an actual eigenvector:

M*v
M*(M*v)
M*(M*(M*v))
M*(M*(M*(M*v)))
M*(M*(M*(M*(M*v))))

Or, in Eigenvalue terms

 λ*v
λ*(λ*v)
λ*(λ*(λ*v))
λ*(λ*(λ*(λ*v)))
λ*(λ*(λ*(λ*(λ*v)))) 


After many iterations, the vector v will tend to become closer and closer to an eigenvector associated with the largest eigenvalue.  How quickly?  Well after 5 iterations, we have λ⁵*v.  And that's where the power method gets its name - we are raising the matrix (and hence its eigenvalues) to a really large power, say, 10000 or more, and then seeing what remains.

In theory, we can form λ¹⁰⁰⁰⁰*v, and it will be just as good an eigenvector as any other.  In practice, we need to deal with floating point overflow.  For this reason we normalize the vector after each iteration to prevent overflow.

In code, that looks like this :

    Vector3 v(1, 1, 1);
    for(int i=0; i<10000; i++){
        v = (M * v).GetUnit();
    }
    return v;

Convergence

How fast will we find the eigenvector?  In the limit, it actually depends on the ratio between the second largest eigenvalue and the largest eigenvalue. For the matrix scale79, this ratio is 7/9 or 0.7777...

This is because the components associated with the smallest eigenvalues will quickly disappear into the round-off error, and in the end, it will just be the two largest eigenvectors battling for supremacy.

In code, that looks like this :

    Vector3 v(1, 1, 1);
    Vector3 lastV = v;
    for(int i=0; i<10000; i++){
        v = (M * v).GetUnit();
        if(DistanceSquared(v, lastV) < 1e-16f){
            break;
        }
        lastV = v;
    }
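For reference, here's a sketch of the same loop in Python - numpy is my substitution for the Vector3/Matrix33 classes:

```python
import numpy as np

def find_eigenvector_largest_magnitude(m, iterations=10000, tol=1e-16):
    # Mindlessly multiply, normalizing each time to prevent overflow,
    # and stop once the vector settles down.
    v = np.ones(m.shape[0])
    v /= np.linalg.norm(v)
    last_v = v
    for _ in range(iterations):
        v = m @ v
        v /= np.linalg.norm(v)
        if np.sum((v - last_v) ** 2) < tol:
            break
        last_v = v
    return v

scale79 = np.array([[7.0, 0.0],
                    [0.0, 9.0]])
v = find_eigenvector_largest_magnitude(scale79)
print(v)  # converges towards (0, 1), the eigenvector for eigenvalue 9
```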


If you've been playing along at home, you'll notice I've been talking about "the largest eigenvalue".  In fact, this algorithm converges to the eigenvalue with the largest magnitude. e.g. if the two eigenvalues were -7 and -9, it will converge to an eigenvector associated with -9.

So the full name for the above code snippet is really:

FindEigenvectorAssociatedWithEigenvalueWithLargestMagnitudeAssumingSquareSymmetricMatrix(m);

Ouch! That's quite a mouthful!  Please don't let the terms throw you off.  All of this stuff is built on simple concepts that have deep roots in physics, chemistry, economics, etc etc.  Because these are still relatively newish concepts, historically speaking, it makes sense that they don't (yet) map well to traditional natural languages such as English.  Trust me, it's totally worth putting in the time to get an intuitive feel for these concepts because they come up again and again in so many different contexts.

Ask questions, break it down, ignore the clumsy verbiage and focus on the concepts, and it'll all make sense.


Preconditioning

Preconditioning is a technique we use when we want to make math problems easier to solve.  We say, "rather than solve this difficult problem, let's solve this easier problem that has the same answer."

In our case, if iterating M*v will eventually converge to an eigenvector, then (M*M)*v will converge twice as fast to the same eigenvector (but to a different eigenvalue - the square of the original one!)

 
    float scale = FindLargestEntry(m);
    Matrix33 matrixConditioned = m / scale;

    matrixConditioned = matrixConditioned * matrixConditioned;
    matrixConditioned = matrixConditioned * matrixConditioned;
    matrixConditioned = matrixConditioned * matrixConditioned;

We have to be careful that the matrix multiply itself doesn't overflow!  By dividing through by the largest entry, we have good confidence we can raise the matrix to the 8th power and still keep the structure of the matrix.
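In Python, that conditioning step might look like this (numpy assumed, and `precondition` is just my name for it):

```python
import numpy as np

def precondition(m, squarings=3):
    # Divide by the largest entry so repeated squaring can't overflow,
    # then square three times: the result is (m/scale)^8.
    scale = np.max(np.abs(m))
    mc = m / scale
    for _ in range(squarings):
        mc = mc @ mc
    return mc

scale79 = np.array([[7.0, 0.0],
                    [0.0, 9.0]])
mc = precondition(scale79)
print(mc)  # ~ diag(0.134, 1.0): same eigenvectors, eigenvalues raised to the 8th
```

Because the eigenvalue ratio is now (7/9)⁸ instead of 7/9, the power iteration on mc converges roughly eight times faster.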

At this point you might be tempted to skip the normalization step, and divide through by the determinant (or some multiplier) instead.  Be careful - these two techniques, while having the same effect, are actually solving two different problems.  If you try and solve both at once you may find you solve neither.  In particular, if your matrix has a particularly fiendish condition called "mixing of the modes", it can wreak all sorts of havoc with your computation.

 

Putting it All Together

So now we have all the pieces for our Power Iteration.  The primary motivation for this algorithm is ease-of-coding, so keep it simple - if you find yourself putting time into making this algorithm better, you're almost always better off using a library implementation of SVD instead.

And remember, this function only works well for symmetric matrices!



// From http://missingbytes.blogspot.com/2012/06/power-iteration.html
// note: This function will perform badly if the largest eigenvalue is complex
Vector3 FindEigenVectorAssociatedWithLargestEigenValue(const Matrix33 &m){
    //pre-condition
    float scale=FindLargestEntry(m);
    Matrix33 mc=m*(1.0f/scale);
    mc=mc*mc;
    mc=mc*mc;
    mc=mc*mc;
    Vector3 v(1,1,1);
    Vector3 lastV=v;
    for(int i=0;i<100;i++){
        v=(mc*v).GetUnit();
        if(DistanceSquared(v,lastV)<1e-16f){
            break;
        }
        lastV=v;
    }
    return v;
}




Saturday 9 June 2012

Fitting a plane to a point cloud

A buddy of mine is trying to find the plane that best describes a cloud of points, and, naturally, my very first thought is, "Wow... that would make an awesome blogpost!"


The Problem

Given a collection of points in 3D space, we're trying to find the plane that is the closest to those points.


minimize(  sum ( distance²(point, plane),  point),   plane)


If you're anything like me, you're probably wondering why we sum the squares of the distances.

The reasoning is a little backwards, but basically it's so that we can solve it by using the method of Linear Least Squares.  It turns our approximation problem into a so-called "Quadratic-Form", which we know from theory we can solve exactly and efficiently using linear algebra.

There are other ways we could have phrased the problem.  One of my favorites involves using Robust Statistics to ignore outliers. Another involves using radial basis functions to try and trap the plane within a region - this works great when we're not sure what type of function best approximates the points.  Yet another approach involves perturbing and clustering the points to re-weight the problem.

Something I find fascinating is that most of those methods eventually require a linear least squares solver.

[Update 2015-11: I have a suspicion that finding the 'maximum likelihood estimator' for this particular problem doesn't require the use of LLS - can any kind reader confirm or deny?]

So let's start by forming the linear least squares approximation, and then later we can see if we still need to extend it.  (Of course, if you'd like to know more, why not leave a comment in the comments section down below?)


Linear Least Squares


Let's start by defining our plane in implicit form:

C = Center of Plane
N = Plane's normal

(X - C).N = 0

Then for an arbitrary point P, we can write it in 'plane' co-ordinates like so :

P = C + μN + pN⊥

Here μ is the distance from the point to the plane, N⊥ is a 2-by-3 matrix spanning the directions perpendicular to the plane's normal, and p is a 2-vector of co-ordinates within the plane.

We are trying to minimize the following :

E = sum( μ², points)

With a little math, we can show that

C = sum ( P, points ) / count ( points )

With a lot more math, we can show that N is the eigenvector associated with the smallest eigenvalue of the following matrix :

M = sum [ (Px-Cx).(Px-Cx), (Px-Cx).(Py-Cy), (Px-Cx).(Pz-Cz),
          (Py-Cy).(Px-Cx), (Py-Cy).(Py-Cy), (Py-Cy).(Pz-Cz),
          (Pz-Cz).(Px-Cx), (Pz-Cz).(Py-Cy), (Pz-Cz).(Pz-Cz) ]
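If you squint, M is just Dᵀ.D, where the rows of D are the centered points.  A quick numpy sketch (numpy and the sample points are my inventions, for illustration only):

```python
import numpy as np

# A handful of roughly-planar points (made up for illustration).
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.1],
                   [0.0, 1.0, -0.1],
                   [1.0, 1.0, 0.0]])

c = points.mean(axis=0)  # C = sum(P) / count
d = points - c           # the centered points, stacked as rows
m = d.T @ d              # the 3x3 matrix of summed products above
print(m)                 # symmetric by construction
```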


Finding the Center


Let's find C first - that's the Center of the plane. In the mathematical theory, it doesn't really make sense to talk about the center of an infinite plane - any point on the plane, paired with any multiple of the plane's normal, describes the same plane. Indeed, it's tantalizingly seductive to describe a plane by its normal, N, and its distance to the origin, C.N.

But for those of us fortunate enough to be trapped inside a computer, we must face the realities of floating point numbers and round off error. For this reason, I always try to represent a plane using a Center and a Normal, until I've exhausted all other available optimizations.

def FindCenter(points):
    total = Vector3(0, 0, 0)
    for p in points:
        total += p
    return total / len(points)

Finding the Normal


Now let's find the normal.  You'll see this type of computation fairly often when dealing with quadratic forms : lots of square terms on the diagonal, and lots of cross terms off the diagonal.

As it's a symmetric matrix, we only need to compute half of the off-diagonal terms :  X.{X,Y,Z}    Y.{Y,Z}     Z.{Z}

def FindNormal(points):
    sumxx = sumxy = sumxz = 0
    sumyy = sumyz = 0
    sumzz = 0
    center = FindCenter(points)
    for p in points:
        dx = p.X - center.X
        dy = p.Y - center.Y
        dz = p.Z - center.Z
        sumxx += dx*dx
        sumxy += dx*dy
        sumxz += dx*dz
        sumyy += dy*dy
        sumyz += dy*dz
        sumzz += dz*dz
    symmetricM = Matrix33(
        sumxx, sumxy, sumxz,
        sumxy, sumyy, sumyz,
        sumxz, sumyz, sumzz)
    return FindSmallestMumbleMumbleVector(symmetricM)



Note that we've had to make two passes through the collection of points - once to find the center, and a second pass to form the matrix.  In theory, we can compute both at the same time using a single pass through the points.  In practice, the round-off error will obliterate any precision in our result.  Still, it's useful to know it's possible if you're stuck on a geometry shader and only have one chance to see the incoming vertices.
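For the curious, the single-pass version relies on the identity sum((P-C).(P-C)ᵀ) = sum(P.Pᵀ) - n*C.Cᵀ.  Here's a numpy sketch (numpy and the sample points are my inventions) - and remember, the round-off warning is real, so the subtraction can cancel catastrophically for points far from the origin:

```python
import numpy as np

points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.1],
                   [0.0, 1.0, -0.1],
                   [1.0, 1.0, 0.0]])
n = len(points)

# One pass: accumulate sum(P) and sum(P.P^T) together...
sum_p = points.sum(axis=0)
sum_ppt = sum(np.outer(p, p) for p in points)

# ...then recover the centered matrix: sum((P-C)(P-C)^T) = sum(P P^T) - n C C^T
c = sum_p / n
m_one_pass = sum_ppt - n * np.outer(c, c)

# Two passes, as in the text:
d = points - c
m_two_pass = d.T @ d
print(np.max(np.abs(m_one_pass - m_two_pass)))  # tiny, for well-scaled data
```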


The smallest Eigenvalue


So now we just need to find an eigenvector associated with the smallest eigenvalue of a matrix.

If you recall, for a (square) matrix M, an eigenvalue, λ, and an eigenvector, v, are given by:

Mv = λv

That is, the matrix M, operating on the vector v, simply scales the vector by λ.

Our 3x3 symmetric matrix will have 1, 2 or 3 (real) eigenvalues (see appendix!), and we need to find the smallest one. But watch this:

M⁻¹v = λ⁻¹v

We just turned our smallest eigenvalue of M into the largest eigenvalue of M⁻¹!

In code :

def FindEigenvectorAssociatedWithSmallestEigenvalue(m):
   det = m.GetDeterminant()
   if det == 0:
       return m.GetNullSpace()
   mInverse = m.GetInverse()
   return FindEigenvectorAssociatedWithLargestEigenvalue(mInverse)
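We can sanity-check that inversion trick with numpy (my tooling choice, not the blog's):

```python
import numpy as np

scale79 = np.array([[7.0, 0.0],
                    [0.0, 9.0]])

# Eigenvalues of the inverse are the reciprocals of the originals,
# so the smallest eigenvalue of M (7) becomes the largest of M^-1 (1/7).
vals = np.linalg.eigvalsh(scale79)
inv_vals = np.linalg.eigvalsh(np.linalg.inv(scale79))
print(vals)      # [7. 9.]
print(inv_vals)  # [1/9, 1/7], ascending - the order flips
```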

The largest Eigenvalue


Computing eigenvalues exactly can be a little tricky to code correctly, but luckily forming an approximation to the largest eigenvalue is really easy:

def FindEigenvectorAssociatedWithLargestEigenvalue(m):
   v = Vector(1, 1, 1)
   for _ in xrange(10000):
       v = (m * v).GetNormalized()
   return v


Success!


So there we have it, linear least squares in two passes!  All that's left to do is write the code!


Appendix - Linear Least Squares plane for a Point Cloud in C++

The following 3 functions for linear least squares are hereby licensed under CC0.

// From http://missingbytes.blogspot.com/2012/06/fitting-plane-to-point-cloud.html
float FindLargestEntry(const Matrix33 &m){
    float result=0.0f;
    for(int i=0;i<3;i++){
        for(int j=0;j<3;j++){
            float entry=fabs(m.GetElement(i,j));
            result=std::max(entry,result);
        }
    }
    return result;
}

// From http://missingbytes.blogspot.com/2012/06/fitting-plane-to-point-cloud.html
// note: This function will perform badly if the largest eigenvalue is complex
Vector3 FindEigenVectorAssociatedWithLargestEigenValue(const Matrix33 &m){
    //pre-condition
    float scale=FindLargestEntry(m);
    Matrix33 mc=m*(1.0f/scale);
    mc=mc*mc;
    mc=mc*mc;
    mc=mc*mc;
    Vector3 v(1,1,1);
    Vector3 lastV=v;
    for(int i=0;i<100;i++){
        v=(mc*v).GetUnit();
        if(DistanceSquared(v,lastV)<1e-16f){
            break;
        }
        lastV=v;
    }
    return v;
}
 

// From http://missingbytes.blogspot.com/2012/06/fitting-plane-to-point-cloud.html 
void FindLLSQPlane(Vector3 *points,int count,Vector3 *destCenter,Vector3 *destNormal){
    assert(count>0);
    Vector3 sum(0,0,0);
    for(int i=0;i<count;i++){
        sum+=points[i];
    }
    Vector3 center=sum*(1.0f/count);
    if(destCenter){
        *destCenter=center;
    }
    if(!destNormal){
        return;
    }
    float sumXX=0.0f,sumXY=0.0f,sumXZ=0.0f;
    float sumYY=0.0f,sumYZ=0.0f;
    float sumZZ=0.0f;
    for(int i=0;i<count;i++){
        float diffX=points[i].X-center.X;
        float diffY=points[i].Y-center.Y;
        float diffZ=points[i].Z-center.Z;
        sumXX+=diffX*diffX;
        sumXY+=diffX*diffY;
        sumXZ+=diffX*diffZ;
        sumYY+=diffY*diffY;
        sumYZ+=diffY*diffZ;
        sumZZ+=diffZ*diffZ;
    }
    Matrix33 m(sumXX,sumXY,sumXZ,\
        sumXY,sumYY,sumYZ,\
        sumXZ,sumYZ,sumZZ);

    float det=m.GetDeterminant();
    if(det==0.0f){
        m.GetNullSpace(destNormal,NULL,NULL);
        return;
    }
    Matrix33 mInverse=m.GetInverse();
    *destNormal=FindEigenVectorAssociatedWithLargestEigenValue(mInverse);
}



Appendix - Citation Needed


Our 3x3 symmetric matrix is Hermitian, therefore all its eigenvalues are real and its Jordan Normal form looks like this:

[ λ1,  0,   0,
   0,  λ2,   0,
   0,   0,  λ3 ]

Assume w.l.o.g. that λ1 >= λ2 >= λ3.

If λ1 == λ2 == λ3, then we have only one eigenvalue.

If λ1 > λ2 > λ3, then we have three eigenvalues.

The only remaining cases (λ1 == λ2 > λ3, and λ1 > λ2 == λ3) both have two eigenvalues.