
Automatic for the People?



By Neil Talbott.

More automation must be better, right? Well, not necessarily. The modern trend is to automate everything. Kettles heat water to the exact temperature you specify, head torches self-adjust based on ambient light and customer service is now the domain of robots.

Automation: faster, more accurate, less risk – and fewer humans.


But there's a catch.


Because the more advanced an automated procedure, the more critical the human involvement. When there are fewer humans involved, those that remain become much more important.


This is the paradox of automation.


It's a familiar scene in the office environment: teams of data processors churn through their work, Excel visible on every screen. Progress is slow and the work is repetitive, but the procedures are long established and outcomes are predictable. Importantly, major errors are few: these employees are intimately familiar with their small sphere of work and are likely to spot any significant departure from the norm. Sooner or later, "transformation" is demanded.

Manual processes are taken over by robotics, the work becomes largely computerised and the pedestrian humans are surplus to requirements. The teams will be reduced in size until only a few supervisors remain. Through automation, the company has accelerated its processing, reduced the risk of manual error and streamlined its workforce – a huge win for efficiency.


Or is it?


There's a common, instinctive assumption amongst business strategists that any use of robots must reduce risk, because it eliminates manual error. Computers don't have fat fingers, don't make typos and don't get fatigued (or come in with a hangover, except maybe after a Windows update).


But the problem is that computers do make mistakes. Not of their own making, at least not usually, but because of the way they've been programmed. And when a mistake is present, it might be a long time before it's spotted, not least because users are rarely minded to check calculations performed by computers. As a result, the risk that was perceived to have been reduced may actually have increased.


By and large, there is no means to test what a robot does. No review, no sanity check, no regression testing and – perhaps most relevantly – no understanding whatsoever on the end-user's behalf of how the automation was implemented.


And therein lies the problem: with lack of understanding comes an inability to spot mistakes. That 'sixth sense' to detect anomalies in data, especially numeric data, develops only with painstaking practice and experience. Automate that process and all you see is the end result – often just a bunch of numbers on a screen. The instinct for recognising patterns is lost, and with it the opportunity for a 'gross error check'. And with no automated testing, the computer isn't checking for you either. A tiny error in the algorithm could lead to disaster down the line – and frequently does. Just ask Fidelity, Fannie Mae or Barclays.
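
To make that concrete, here's a rough sketch in Python of what even a minimal safety net around an automated calculation might look like – a regression test pinned to a figure a human once verified by hand, plus a 'gross error check' on the output range. Every name and number here is invented for illustration; it isn't any particular firm's process, just the kind of check most robotic processes never get.

    # Hypothetical sketch: a regression test and gross error check
    # wrapped around a stand-in automated calculation.

    def automated_interest(principal: float, annual_rate: float, days: int) -> float:
        """Stand-in for whatever calculation the robot performs unattended."""
        return principal * annual_rate * days / 365


    def gross_error_check(result: float, lower: float, upper: float) -> None:
        """Fail loudly if the result falls outside the range a human would expect."""
        if not (lower <= result <= upper):
            raise ValueError(f"Result {result:.2f} is outside [{lower}, {upper}]")


    def test_known_case() -> None:
        """Regression test: pin the output to a value checked by hand once,
        so a silent change to the algorithm is caught the next time this runs."""
        expected = 410.96  # manually verified: 100,000 * 3% * 50 / 365
        actual = automated_interest(100_000, 0.03, 50)
        assert abs(actual - expected) < 0.01, f"expected {expected}, got {actual:.2f}"


    if __name__ == "__main__":
        test_known_case()
        gross_error_check(automated_interest(100_000, 0.03, 50), lower=0, upper=10_000)
        print("Automated calculation passed its sanity checks.")

It isn't much, but a check that a human has signed off on once is a check the machine will repeat every single time it runs.
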


Not convinced? Try these case studies:


  1. In August 2012, Knight Capital deployed an automated application to buy and sell shares on the New York Stock Exchange's new electronic trading platform. Unfortunately, there was a problem with the application: its algorithms bought high and sold low. The result? A £6 million loss every minute. For 45 minutes.

  2. In 2009, Toyota – then arguably the world's leading car manufacturer – identified a serious issue with the accelerator pedal in several top-selling models. The subsequent recall cost over £7.5 billion. The construction of the pedals had been automated to perfection, so perfectly that the same fault was ingrained in each and every one, and the flaw had been missed by the humans in charge.

  3. On May 31st 2009, Air France Flight 447 took off from Rio de Janeiro for a transatlantic crossing. It ended up on the ocean floor, with the loss of 228 lives, after a catastrophic failure on the part of the flight crew. In the words of William Langewiesche, "most of their experience had consisted of sitting in a cockpit seat and watching the machine work". They had forgotten how to fly the aircraft for themselves.


So by all means follow the edict of "the first time, do it; the second, automate it". For sure, look for inefficiencies and repetition in your processes and seek a better way. But remember, every time you automate something, the humans involved become that little bit more critical. As Elbert Hubbard said: "One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man."


If you are going to automate, remember this: leave ordinary men in charge of automated processes at your peril.
