I met an interesting chap last week who is trying to run a fully distributed agile project, with development teams and business owners scattered across the UK, and a bit of sub-continent off-shoring thrown in for good measure. The conversation turned to scaling problems in general, and I wondered if there are some common issues that may have common solutions, such as:

– trying to scale by doing the same things with more people

– trying to scale by doing the same things in different places

– trying to scale by doing different things with the same people

So at the same time I’m thinking about Stephen Wolfram’s work on computational systems and complexity, and wondering whether there is a defined limit on scale beyond which these problems become genuinely complex. Let’s look at a quick example.

Suppose we run a project with ten people following a simple process defined as ABCDEFGHIJ. We like the output, so we scale the number of people to 100 and ask them to follow the same simple process. At what point does the first error creep in, changing the process slightly, subtly, to ABCDEFGHiJ or ABCDEFHGIJ? And when does the pressure from the additional ninety people make it tempting to change the process itself – the first three letters are always the same, so let’s just use xDEFGHIJ instead…
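As a toy sketch of that drift (the error rate and the mutation types are invented for illustration, not measured), imagine each person has a small chance of subtly mutating each step – lowercasing a letter, or swapping two adjacent ones – and count how many versions of the process a team ends up running:

```python
import random

PROCESS = "ABCDEFGHIJ"
ERROR_RATE = 0.01  # assumed per-step chance of a slip - purely illustrative

def run_process(process, error_rate):
    """One person executes the process; each step may be subtly mutated."""
    steps = list(process)
    for i in range(len(steps)):
        if random.random() < error_rate:
            if random.random() < 0.5 or i + 1 == len(steps):
                steps[i] = steps[i].lower()                      # ABCDEFGHiJ-style slip
            else:
                steps[i], steps[i + 1] = steps[i + 1], steps[i]  # ABCDEFHGIJ-style swap
    return "".join(steps)

random.seed(42)
for team_size in (10, 100):
    versions = {run_process(PROCESS, ERROR_RATE) for _ in range(team_size)}
    print(f"{team_size} people -> {len(versions)} distinct versions of the process")
```

Even at a 1% per-step slip rate, the 100-person team typically ends up running several subtly different versions of the “same” process.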

What would happen if we decided to outsource the repeating part of the process, which we’ve managed to shrink down to just x = ABC anyway? It must be cheaper to get this x from elsewhere – maybe a specialist x production company that can produce x’s and X’s in a range of shapes and sizes. So now we’re getting a range of outputs that look like:

xDEFGHIJ / XDEFGHIJ / x DEFGHiJ

and the divergence continues.
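Sticking with the toy model, the combinatorics of that divergence are easy to sketch: if the outsourced supplier returns any of a few x variants, and the in-house tail drifts as before, the distinct end-to-end outputs multiply rather than add (both variant lists below are invented for illustration):

```python
from itertools import product

# Hypothetical variants returned by the outsourced x supplier.
supplier_variants = ["x", "X", "x "]

# Hypothetical in-house variants of the remaining steps, drifting as before.
tail_variants = ["DEFGHIJ", "DEFGHiJ", "DEFHGIJ"]

# Every supplier variant can meet every tail variant, so the
# distinct end-to-end outputs multiply: 3 x 3 = 9, not 3 + 3 = 6.
outputs = {x + tail for x, tail in product(supplier_variants, tail_variants)}
print(len(outputs), "distinct outputs:")
for o in sorted(outputs):
    print(" ", o)
```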

The original ten people are now not involved in the first three steps of the process, and the sheer number of additional team members is starting to introduce random errors simply through the scale of the new team.

Is there not a better approach to scaling: keep everything at the level of scale that you know works, then repeat that structure, rather than trying to increase the size, distribution or complexity of the underlying process?

Put another way, is 10 x 10 better than 100?
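A back-of-envelope sketch (with a made-up per-person error rate of 1%) hints at why it might be: in a single 100-person process one slip corrupts the shared output, whereas across ten independent 10-person teams a slip only costs that team’s output:

```python
# Assumed per-person probability of introducing an error - illustrative only.
p = 0.01

# One 100-person process: the shared output survives only if nobody slips.
p_clean_100 = (1 - p) ** 100

# Ten independent 10-person teams: each output survives or fails on its own.
p_clean_team = (1 - p) ** 10
expected_clean_teams = 10 * p_clean_team

print(f"P(100-person process error-free)    = {p_clean_100:.3f}")   # ~0.366
print(f"P(a 10-person team error-free)      = {p_clean_team:.3f}")  # ~0.904
print(f"Expected clean outputs from 10 x 10 = {expected_clean_teams:.1f} of 10")
```

On those invented numbers, the single big process delivers a clean result roughly a third of the time, while the federated structure delivers about nine clean results out of ten – which is the intuition behind repeating a structure that works rather than inflating it.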
