Friday, 14 April 2017

Basic Income, Better Living Through Video Games.

If we take it as given that we'll eventually live in a society with a UBI (all eligible citizens receive an Unconditional Basic Income, enough to cover their food, clothing and shelter), then the most pressing question is: How should we roll it out?

Years of making video games suggest two quick answers:

The easy way is by lottery. Suppose Gary is a winner in the monthly UBI Lottery! Congrats Gary! Gary no longer has to deal with our mess of confusing taxation and welfare regulations. He wins a much simplified UBI and a flat tax. Of course, any change can be scary and difficult, so Gary also has the option to just stick with the old system if he wants.

More interesting is the notion of a Dual Currency. It's a little bit like enrolling in the food stamp program, where Gary is issued tokens that can be exchanged for food items at a 1:1 ratio. In a food stamp program, those tokens would normally expire after a set period of time.

Food stamps are really old. Like, 1930's America old. We live in a digital world, so let's make those tokens work more like an energy mechanic in Candy Crush or League of Legends. Those tokens now accrue *continuously* rather than appearing all at once on a Thursday. We'll cap Gary's balance at a maximum of one month's worth of tokens, and any balance over two weeks' worth will have a penalty applied.
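For the curious, here's roughly how that continuous accrual could look in code. This is a hypothetical sketch only: the function name `accrue`, the daily penalty rule and all the rates are my own illustrative choices, not a policy proposal.

```python
from datetime import timedelta

# Hypothetical sketch of the continuous-accrual mechanic described above.
# Balances are measured in "months' worth" of tokens, so the cap is 1.0
# and the two-week penalty threshold is 0.5. All rates are illustrative.
CAP = 1.0                # never hold more than one month's worth
THRESHOLD = 0.5          # penalty kicks in above two weeks' worth
PENALTY_RATE = 0.1       # fraction of the excess lost per day

def accrue(balance: float, elapsed: timedelta) -> float:
    """Advance a token balance by `elapsed` wall-clock time."""
    days = elapsed.total_seconds() / 86400
    balance += days / 30                    # continuous trickle, not a Thursday lump
    excess = max(0.0, balance - THRESHOLD)
    # Hoarded tokens decay, but never below the threshold itself.
    balance -= min(excess, excess * PENALTY_RATE * days)
    return min(balance, CAP)

# Starting from zero, half a month of accrual lands exactly at the threshold:
print(accrue(0.0, timedelta(days=15)))   # → 0.5
```

The key design point is that there's never a "payday" to queue up for: the balance is a function of elapsed time, so Gary can spend whenever he likes.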

Finally, pricing. Staples like bread, milk and laundry detergent will have a heavily discounted price when purchased using tokens. Healthy options like fruit and vegetables too. Fast food and chocolate might have premium pricing attached. Let's make it easier for Gary to make good decisions.

Friday, 11 November 2016

Brexit, Elections, and population in 2016

Define: “effective political unit”

If politics is the name we give to a group of people making decisions that affect all the members of that group, then we can use “effective political unit” (EPU) as a catch-all name to reference that group.

Your household is an EPU. Your local sports team is an EPU. Your neighborhood and your city are both EPUs, as is your country, and each of your online communities.

We can get a rough feel for the relative size of an EPU by adding the search term "population" and hitting the "I'm feeling lucky" button on Google:

EPU                  Size (million people)
London (England)     8
London (Ontario)     0.5
You & Me together    0.000002
North America (*)    580
OECD (*)             560
Eve Online           0.4
New Zealand          4

(*) The OECD includes all of North America, so as with any "I'm feeling lucky" google search, the error bars are large.

A natural question to ask: "Given each Effective Political Unit is a group of people making decisions, what size of EPU is the most successful?" It's hard to pick an exact number, but like many trends associated with people, it's increasing over time, and the rate of increase is increasing:

EPU                   Size (million people)   Year
Toba Catastrophe      0.07                    70,000 BCE
Nomadic tribe         0.001                   prehistory
Ancient Greece        5                       400 BCE
Ptolemaic Egypt       7                       300 BCE
Han dynasty           57                      2 CE
Ancient Rome (peak)   60                      160 CE
Mayan city            0.1                     700 CE
Walmart Employees     2                       2015 CE


A vote for a protectionist like #Trump favors smaller (USA, 320) over #Clinton's larger (World, 7500).

A #brexit vote favors smaller (UK, 65) over #remain's larger (EU, 500).

A #califrexit (California, 35) is even smaller still.

Which brings us back to the core question of this blogpost: What size of EPU is the most successful?

Historically, every EPU has had a maximum size; once it extends past that point, it is doomed to collapse. At the same time, history is filled with EPUs that were too small, and were out-competed by slightly larger EPUs which were more effective.

It's a classic value judgement.

As social animals, we weigh the perceived risks and benefits between larger EPUs and smaller EPUs, and make a call, then find a post-hoc rationalization for our decision.

What I find fascinating is the schism between younger voters and older voters. If you look into the various exit polls around the world, a clear trend starts to emerge: Older voters seem to be favoring the 10MM-50MM range, while younger voters seem to be consistently voting in support of larger and larger EPUs.

What does it all mean? At the risk of rampant speculation, do younger voters have more confidence in technology to enable larger and larger EPUs? Do older voters have more hands-on experience with large EPUs getting out of control and collapsing? I really have nothing to back up either of those statements, but it sure is fun to make sweeping generalizations :D

Let me know your thoughts in the comments down below!

Sunday, 9 October 2016

Cheapest 3D Printer

My latest obsession is trying to build a 3D printer for as cheap as possible.

Partly it's because I believe 3D printing is a disruptive technology. The lower the cost of making a 3D printer, the more people will have access to the technology, and the sooner the disruption will take place.

And partly, it's because I'm just really really cheap.

Low Cost

What does low cost really mean? One obvious way is to look at the price of something if we were to buy it new in a shop. If we only source new parts and new materials, we're going to have a difficult time creating something truly low cost.

My strategy is different. I'm going to try and get as many of the source materials as possible for "zero dollars."

Consider old car tyres. Any time you can recycle the rubber from an old car tyre into a seesaw or a swing, or into building materials or to protect a wharf, then the cost of that rubber is effectively "zero dollars."

That's why the core design elements of my 3D printer are going to be fishing line and lego. Two very cheap substances if you source them the right way.

Fishing Line

Nylon fishing line is an amazing substance. It's strong. Durable. Inexpensive. It's readily available everywhere around the globe. And if you need small quantities, you can often obtain it for "zero dollars". You probably already have some.


Lego

Lego is an amazing substance. It's available everywhere. It's manufactured to extremely high tolerances. It's consistent across time and place. It comes in a variety of colors. It's durable.

While lego might not be cheap, you can often *borrow* lego for "zero dollars" by using the magic words "I'm trying to make a 3D printer out of lego."

Once your print run is complete, you can simply disassemble the lego and return it to its previous state.

Calibration Problem

When I look at the designs for existing 3D printers, one of the biggest design considerations seems to be finding out where the extrusion point is in relation to the "bed". Existing designs carefully measure the motion of the motors, try really hard to make the frame rigid, and then have lots of complicated software to try and calculate where exactly the filament is being deposited.

Ack, too difficult.

Why go through all the calculation, when you can measure directly?

My plan is to use the camera on an Android tablet to see where the bed is, and, at the same time, to see where the print head is. If it needs to move to the left, well, the tablet will keep the motors spinning until it lines up. Too far to the right? No problem, spin the motors the other way until it matches. Checkmate, calibration problem!
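That feedback loop fits in a few lines. This is a hypothetical sketch, with stand-in functions `camera_head_x()` (where the camera currently sees the head) and `spin_motor()` (nudge the motors one step) in place of real hardware I/O:

```python
# Hypothetical closed-loop control sketch: rather than dead-reckoning
# motor steps, keep observing the head and spinning the motors until
# the camera says it lines up with the target.

def seek(target_x, camera_head_x, spin_motor, tolerance=0.5):
    """Drive the head until the camera sees it within `tolerance` of target_x."""
    while True:
        error = target_x - camera_head_x()
        if abs(error) <= tolerance:
            return camera_head_x()            # lined up: stop the motors
        spin_motor(+1 if error > 0 else -1)   # nudge toward the target

# Toy "hardware" stand-in: each nudge moves the head one unit along the axis.
position = [0.0]
final = seek(10.0, lambda: position[0],
             lambda step: position.__setitem__(0, position[0] + step))
print(final)   # → 10.0
```

Notice there's no model of the motors at all: any backlash, slippage or frame flex just shows up as more nudges.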


Oh, and remember our lego? We know exactly how large a block is in the real world, so we can measure off distance in our 2D camera space by placing a known lego calibration object made with a few different known colors.
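A sketch of that calibration trick, assuming the standard lego stud pitch of 8 mm (the helper names are mine, and a real camera would also need lens-distortion correction):

```python
# Hypothetical sketch: a known lego calibration brick gives us a
# millimetres-per-pixel scale for the (flat) scene the camera sees.
STUD_MM = 8.0   # standard lego stud pitch

def mm_per_pixel(brick_studs, brick_pixels):
    """Scale from a calibration brick `brick_studs` long spanning `brick_pixels` in the image."""
    return (brick_studs * STUD_MM) / brick_pixels

scale = mm_per_pixel(4, 160.0)   # say a 4-stud brick spans 160 px on camera
print(500.0 * scale)             # then a 500 px gap is 100.0 mm in the real world
```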

This way it doesn't matter if our fishing line stretches during the course of the print, or our lego gets bumped halfway through, or the ambient temperature changes and makes the layers a tiny bit thinner... no problem, the camera on the Android tablet sees all.

And how much does it cost for an Android tablet? "zero dollars." You just have to use the magic words: "Can I borrow your Android tablet to make a 3D printer?"

Next Steps

I've already started on version 1 of the prototype. Watch this space.

Saturday, 1 October 2016

ELI5: What are the differences between the C programming languages: C, C++, C#, and Objective C?

"Hello World" in C
C, C++, C# and Objective-C are all programming languages. They're all special ways of writing where a programmer can ask a computer to solve problems for the programmer.

Don't be fooled by the letter “C” in their names, the 4 languages are actually quite different.

C is the oldest of the 4. It was one of the first really popular programming languages because it was good at solving the types of problems that programmers had way back in the 1970's and 80's. Things like “portability”, “memory management” and “correctness”.

C is quite a simple language, which means you need to do a lot of writing to ask the computer to do complicated things.

C++ is actually C with lots and lots of extra stuff added in. Its name is a pun: in C, the expression C++ means something like “add one to C”, i.e. “one better than C”. And yeah, there are other computer languages with pun names like “D” and “F#” too. Because C++ is a lot more powerful than C, you don't need to write quite so much stuff to get the computer to do complicated things.

Objective-C is also C but with different stuff added into it. Both Objective-C and C++ try and help programmers solve tricky problems using something called “Object Oriented Programming” (OOP). That's where the “Objective” part in Objective-C comes from. OOP was really good at solving the kinds of problems we had back in the 1990's.

OOP is so successful because it helps teams of programmers work together and co-ordinate. Any time you have a large group of programmers working together, especially on the very largest software projects, you'll find that they're using some version of OOP to help them all co-operate.

Because both C++ and Objective-C have a shared history in C, if you wanted, you could take a C program and pretend that it's C++ or Objective-C, and most of the time that might even work!

What really happens though, is that because the languages are so different, it changes the way that programmers think about their problems. This means that C programs, C++ programs and Objective-C programs all end up looking quite different from each other, even when programmers are trying to solve the same problem. ( See also: Sapir–Whorf hypothesis. )

Which brings us to C#. C# isn't really a C language at all. There's actually another programming language called Java that used to be really popular around the year 2000 because it helped with the OOP problem much better than anything else. A company called Microsoft wanted to make something that was kind of like Java, but kind of different too. So they created C# to work a lot like Java, but changed things up a little bit so that it looks kinda like C if you squint.

Well here we are in the 2010's, and the kinds of problems programmers are facing have changed again. It turns out that using OOP can sometimes combine with other problems to make them more complicated: problems like “threading” and “latency” in graphics, or the special problems that come up with Artificial Intelligence, for example.

While we have newer languages like “Cg”, “R” or “Python” that try and address some of these newer problems straight on, it turns out the simplicity of C allows individual programmers to focus more clearly on the problems that are important to them. That's why C is still popular today, even though it's the oldest of the 4.

TL;DR: C is really simple. C++ and Objective-C are kind of similar because they're both C with extra stuff for “Object Oriented Programming” (OOP). C# is the oddball because it isn't really a C language at all, it's more like Microsoft's version of “Java”.

Source: Am programmer.

Thursday, 26 November 2015

The Cap Theorem and Quantum Gravity

Apologies in advance: this post is extremely technical in multiple fields, woefully incomplete, and not nearly as humorous as it ought to be. Dragons be here. I'm incredibly sorry.

CAP Theorem for distributed systems

Brewer's CAP Theorem tells us that every distributed computer system must sacrifice at least one of these properties:
  • C: Consistency
  • A: Availability
  • P: Partition Tolerance.
Astonishingly, if we view the universe as a distributed system, then Quantum Field Theory appears to have (analogues of) each of the three properties from the CAP theorem. But at what cost? Paradoxes. So many paradoxes. The double slit experiment, the twin paradox, the EPR “spooky action at a distance” paradox. Many many more. What if QFT adds to the CAP theorem a fourth property we could sacrifice:
  • C: Consistency
  • A: Availability
  • P: Partition Tolerance
  • T: Time Never Flows Backwards (!!!)

One Million Boulders

Let's look at that last one, Time Never Flows Backwards. Suppose, inside a computer, we're trying to simulate one million boulders rolling down a mountain side. At every time step, we need to generate all the potential collisions between those million boulders, and then process them in the order in which the collisions occur. You're familiar with Newton's cradle? Every one of those collisions can change the magnitude, and order, of any subsequent collision. And worse, round-off error when dicing the time steps means that a collision over *here* can affect a collision over *there*.

(All the gory details can be found here.)

Starting to sound a little bit like quantum dynamics right?

So how do we solve it efficiently? By briefly reversing the arrow of time. We find all the collisions between those boulders in a given timestep, then, optimistically, we solve each boulder independently (“in parallel”) based on its known potential collisions, as if the order of collisions didn't matter. Then we do a “Fix-Up” phase where we wind the time step backwards and correct any of the boulders where (A) the collision order was incorrect, and (B) the energy of the correction is above a certain tolerance. (In practice the tolerance is very small; it only serves to prevent certain pathological worst-cases.)
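Here's a toy 1-D version of that optimistic-solve-plus-fix-up loop (emphatically not the real boulder simulator): phase one advances every boulder independently as if collisions didn't matter, then the fix-up phase corrects any pair that ended up out of order. For equal-mass elastic collisions in one dimension, the correction happens to reduce to re-sorting by position, since colliding equal masses simply exchange trajectories.

```python
# Toy sketch of optimistic-parallel solve + fix-up, for equal-mass
# boulders on a line. Phase 1 is embarrassingly parallel; phase 2
# detects collision-order violations and corrects them.

def step(boulders, dt):
    """boulders: list of (position, velocity) pairs, sorted by position."""
    # Phase 1: optimistic solve, every boulder advanced independently.
    proposed = [(x + v * dt, v) for (x, v) in boulders]
    # Phase 2: fix-up. Any out-of-order pair must have collided during
    # the step; for equal masses, exchanging trajectories is the same
    # as re-sorting the (position, velocity) pairs by position.
    return sorted(proposed)

# Two boulders on a collision course:
state = [(0.0, 1.0), (1.0, -1.0)]
print(step(state, 1.0))   # → [(0.0, -1.0), (1.0, 1.0)] after the elastic exchange
```

The real three-dimensional, unequal-mass case needs the full rewind-and-correct machinery, but the shape of the algorithm is the same: guess in parallel, then repair.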

Starting to sound a *lot* like quantum dynamics...


So imagine the spinfoam. In my mind, I visualize it something like this:

Spinfoam sketch, incomplete

In Quantum Chromodynamics terms, every face you can see is “Colourless” = (Red + Green + Blue == Red + Red + AntiRed + Green + Blue). In this diagram, the past is down. It's the rigid fixed lattice and appears unchangeable. The future is a soup of these faces at the top of the diagram, and the “present” is the coalescing region where the mobile soup phase-transitions into a fixed lattice. Naturally, each edge is the Planck length, or equivalently the Planck time.
You can even see what we'd call a 'particle', maybe an electron or a neutrino, zipping along at close to the speed of light. In the spinfoam, it appears as a disturbance in the otherwise orderly lattice.

  • <technical> In this diagram, the colours satisfy the Pauli exclusion principle. To represent bosons, simply write integer values at every vertex, and require every cycle-over-edges to sum to zero.
  • This 2D diagram with vertices, edges and faces represents {1xspace+1xtime} dimensions. If we axiomatically accept the Holographic Principle, then it might be possible to represent {3xspace+1xtime} dimensions using only vertices, edges, faces and solids.</technical>
Notice too that, at least in the bottom of the diagram (“past”), the laws of physics are symmetric, and invariant under rotations through both time and space. Despite this local invariance, the time dimension can still be identified by its global properties. The arrow of time, entropy etc., really does exist and has physical meaning.


What would happen if we tried to simulate this spinfoam in a computer? Well, most obvious to me is that 'time' in the simulation does not correlate with the amount of computation required to run the simulation. Indeed, the computation required depends primarily on the search activity to coalesce the soup, and it should be easy to find a computation model where that search activity has a cost that matches Einstein's General Theory of Relativity, i.e. the curvature of a region of space is related to the amount of mass in that region, g = G·m/r².


Now let's take that simulation, and instead of running it on one single computer, we instead run it on a distributed computer system. Suddenly, the CAP theorem applies, and our simulation must sacrifice C, or A, or P.... or.... or..... or... T? What if we could sometimes run our simulation backwards just for a moment, the same as we did when "Fixing up" the simulation of those million boulders. When something doesn't fit, just for a little bit, we'd dissolve that fixed lattice of the past and turn it back into the mobile soup of the future, then reform the lattice into a consistent whole.
From inside the simulation, we'd never be able to send information back into the past (That would be a violation!), and yet we'd still get “spooky action at a distance” and all those other paradoxes.
But at what cost? Well, surprisingly, only a performance hit. Again, it should be easy to find a model of distributed computation overhead where this performance hit is in exact agreement with Einstein's *Special* Theory of Relativity. Specifically, it's the Lorentz transform, γ = 1/√(1 − v²/c²).
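As a quick worked value of that transform (my own sanity check, not from the original argument), at v = 0.6c:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
       = \frac{1}{\sqrt{1 - 0.36}}
       = \frac{1}{\sqrt{0.64}}
       = 1.25
```

so a clock moving at 0.6c runs slow by a factor of 1.25.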


Okay, big deep breath. The plot-twist is coming up soon. Brace yourself.

String Theory (Science Fiction)

Almost everything I've written above isn't new or novel. It's just a rehash of various discarded String-Theory ideas from the 90s, but with different names and labels. From an experimental physicist's point of view, String Theory is just not that interesting. In terms of knowing more about the universe we live in, String Theory is pretty much at a dead end. Why? Because it's not *testable*. We can't devise an experiment in the lab to determine if any one of the thousands of competing String Theories makes predictions which match our unique reality. If your theory isn't testable, if there's no way to determine whether your theory approximates our universe better than the alternatives, then that's not "Science" with a capital 'S', it's more like Science Fiction with a whole lot more math.

Plot Twist

So here's the plot-twist: CAP Theorem + QFT is testable.
Here's how: Take that exact same familiar double slit experiment we all faithfully reproduced when we first found out about Quantum Mechanics.

  • Setup-1: Use one slit, fire the wave/particle, measure the diffraction. (Gaussian)
  • Setup-2: Use two slits, fire the wave/particle, measure the diffraction. (Interference pattern)

Now compare Setup-1 with Setup-2. If CAP+QFT is true, then Setup-2 will suffer a tiny extra time-dilation associated with resolving the CAP constraints, and we could toggle between Setup-1 and Setup-2 and measure that tiny difference in time dilation.

How tiny? So tiny that no-one has ever noticed it before. So tiny it would be much, much smaller than the time-dilation associated with the mass of the photon itself. Tiny but, at least in theory, measurable.

What happens if we go into the lab and measure the time dilation difference between Setup-1 and Setup-2, and that difference turns out to be non-zero?

Quantum Gravity and Friends.

So yeah, that's a testable theory of quantum gravity. It neatly explains why gravity is so weak compared with the other forces (aka the Hierarchy problem), and dramatically simplifies the particle zoo.

Furthermore, this theory is fully consistent with the Copenhagen Interpretation, and even builds on it! By contrast, in this formulation the many-worlds alternative appears to have a vanishingly strict interpretation.

Black holes? Yip.. (I'll let you puzzle that one through, it's actually quite cute :) Naked singularities? Nope.

It neatly explains the uncertainty principle. It's truly a quantum theory from the get-go. The randomness is real ("no hidden variables"), it's even required, but it's certainly not arbitrary or capricious.

All those crazy dimensions from String Theory? Oh yeah, the dimensionality is there, but they're no longer spatial in nature, they're more like properties stacked on the spinfoam.

There's even some tantalising hints on the nature of dark matter and dark energy and inflation in the early universe..

Anyways, I've probably said way too much, as always, if you have any questions, queries or opinions, please let me know in the comments section below!

Saturday, 12 September 2015

Keeping our kids safe, with better level design and video games.

Our local bus stop used to have a safety problem. All the school kids would line up, frantic to be first on the bus.

The front kid would stand with their toes hanging over the curb. The next one behind them, peering over their shoulder, and so on and so forth... They would stand that way in pseudo-formation, for agonizing minutes at a time, as the cars zipped past on the morning commute. Finally the enormous school bus would swing in and stop mere centimeters away from the nose of the kid in front.

Just one tiny fumble, or even just one loud boisterous dog, could have spelled tragedy.

I spoke about it with the other Mums and Dads. I know from designing levels in video games that there's an easy fix we use for these kinds of problems. I told them someone could simply paint a yellow “Do Not Cross” line on the ground, and the kids would naturally do the rest, even when the parents weren't around.

For the record, I've never defaced public property, nor would I encourage anyone else to do the same.

Yet some anonymous do-gooder has gone and done just that:

Vigilante safety engineering - a yellow "Do Not Cross" line has been painted at this local school bus stop by an anonymous parent, obviously over the concerns about child safety.

All the kids now line up a safe distance from the road, and the possibility of tragedy at our local bus stop has been dramatically reduced.

Well sure, this act of civil disobedience might not be able to protect the neighbourhood kids from the harmful rays of the sun, mindless advertising, unvaccinated kids or bad language... but at least now the kids at my local bus stop line up further away from the traffic.

If you have concerns about traffic safety at your bus stop, here's one small thing that any anonymous do-gooder can do that will actually make a difference, all thanks to better level design and video games.

Saturday, 28 March 2015

Pixel Scaling

I'm converting the maps from the Amiga version of Super Skidmarks to work on mobile devices for Skidmarks 2015.

Normally when we scale an image, we treat it as a continuous tone image, like a photograph:

One option is to just leave the pixels as is. It gives it a retro feel, but doesn't really capture what computer monitors in the 90s used to look like. We used to call this mode "fatbits":
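For anyone curious, "fatbits" is just nearest-neighbour scaling. A minimal sketch, assuming an image is nothing fancier than a list of rows of pixel values:

```python
# Minimal "fatbits" (nearest-neighbour) sketch: repeat each pixel
# `factor` times in both directions, keeping the hard pixel edges
# that give the retro look.

def fatbits(image, factor):
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))  # copy each widened row
    return out

tiny = [[1, 2],
        [3, 4]]
print(fatbits(tiny, 2))   # → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```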


And yet a third way is to use a pixel-art rescaling algorithm, like xBRZ.

Click through on the images to see full res.

Which do you think I should use? Any other techniques I should consider?