Daily Telegraph: Harddrive

When machines break the law
Let's say the Inland Revenue discovers your system has been committing crimes to make money: who is responsible? Peter Cochrane investigates

OUR civilisation and world of commerce are founded on the processing of atoms - making and shipping things. It is a human-scale world moving at a modest pace, where control and laws are key ingredients of success and sustainability. But we are in the midst of a transformation from a reasonably well-behaved, understood and comfortable world of randomness to one of mind-boggling complexity and chaos. Hierarchy and control now have little to offer, and constitute a threat to those people and systems that adhere to them.

The reason? A slow, paper-based world of irrelevant processes and backward-looking rules is being rapidly overtaken by computers. Geography and international boundaries have long been bypassed by radio, television and telephone, and are now transcended by computer, optical fibre and satellite. This is a fast-moving world dominated by bits, and it is not on a human scale.

There are already mountains of legal and ethical problems presented by the new realms of telemedicine, telecare, tele-education, teleworking, publishing, electronic commerce and simply communicating what and how we wish. As with radio licences for cars in the past, we may have to abandon all efforts at control on a Web that is naturally out of control.

Soon a new mechanism, in the form of artificial intelligence, will introduce a further degree of freedom (or irresponsibility). It is already difficult to detect an electronic crime, define where it was committed, whose laws (if any) were broken, and by whom. Even worse, it will be difficult to decide what was responsible. People may not even be involved.

Suppose I sell you an artificial intelligence system that trades on the stock market and manages your bank account. It makes you progressively richer by trading on short-term margins, and it negotiates the best prices with the utilities and food suppliers. But the program is truly intelligent, and I keep upgrading it for you as part of your purchase agreement.

Over a period of years it gets much smarter and comes to know you and your needs intimately. In response, you encourage it to be more efficient, take more risks and make more money. Now let's say the Inland Revenue remotely logs on to conduct an audit. To our amazement, it discovers the system has been committing crimes to make money. Who is responsible? The hardware, the software, you or me?

If the law has changed and I have not upgraded your software in accordance with the new legislation, I am culpable. But what if I have gone bankrupt in the intervening period? If you have reloaded an older version of the software because it made more money, or if you have tampered with it, are you guilty, and could it be proved?

If, unbeknown to both of us, the software has evolved by mutation, and is communicating with other systems worldwide to achieve a better performance by circumventing the law, is it alone guilty? Or are we still culpable - me as the supplier, and you as the user - for not checking up on the system regularly enough?

This may all seem a remote prospect, but the most radical thinkers are already considering peopleless corporations. In such a world, we will need software auditors, police forces and lawyers.

Peter Cochrane holds the Collier Chair for the Public Understanding of Science & Technology at the University of Bristol. His home page is:
http://cochrane.org.uk

All materials created by Peter Cochrane and presented within this site are copyright © Peter Cochrane - but this is an open resource, and you are invited to make as many downloads as you wish, provided you use them in a reputable manner.