
Driverless Cars – Easy Law, Complex Morality?

Driverless cars – who will be responsible for an accident – driver or manufacturer? Can you choose to injure but avoid liability?

The arrival of driverless cars on our roads is now inevitable. Testing has already begun in this country and around the world. The government confirmed its intention to proceed when announcing its legislative programme in the Queen’s Speech.

There are some fascinating collaborative projects between car manufacturers, town planners and high-tech companies aimed at delivering a safe product in a safe environment. There is even an insurance policy available in the UK to cover the “driver” of a driverless car against the usual liabilities, which puts us a step ahead of the rest of the world.

After initial speculation amongst insurers and lawyers, it now seems to be accepted that, although there will need to be extensive regulation concerning the production and use of driverless cars, the existing law relating to liability for accidents is largely sufficient to deal with claims arising from the use of such vehicles. The thinking is clear: if the driver is in control at the time of the accident, the driver will be responsible if he or she has breached the duty of care owed to fellow road users in the normal way. If the accident occurs when the car is controlled by its on-board systems and those systems fail, the manufacturer will be responsible for the accident under existing laws relating to product liability.

That simplistic approach has at least two obvious flaws. The first is the “state of the art” defence to a product liability claim, a technical argument which will be covered in later articles. The second is the question of who bears responsibility if the vehicle is programmed to deliberately collide with another vehicle, a pedestrian or even property?

Why should a manufacturer pre-programme a vehicle to crash? It is here that we enter the realms of a moral dilemma, one which has been discussed in forums in the USA. Time for a little homespun philosophical introspection!

Your driverless car could, for example, be programmed never to cross an unbroken double white line in the road and thus never to contravene that aspect of the Highway Code. Yet we all know that in the real world such unfailing adherence to the law would lead to chaos. You encounter a lorry parked in the carriageway, unloading goods, in an area where there is a double white line in the middle of the road. As a driver you assess the situation and make a value judgment that you have to cross the white lines in order to go round the lorry and continue your journey. You make a second judgment as to when to do so safely, taking into account the road conditions, the volume of traffic, visibility and the reaction of other road users. Unless a driverless car is to sit unmoving at an obstruction, bringing traffic to a halt, it must be programmed to make the same judgment – when is it safe to ignore a rule of the road, and how should it execute the manoeuvre? Many rules of the road are treated as flexible guidelines; it would be impractical to do otherwise. Only where the consequences of breaching a rule are extreme is it treated as a rule never to be broken.
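By way of illustration only – the function name, criteria and thresholds below are entirely hypothetical and are drawn from no real manufacturer’s system – the shift from an absolute rule to a programmed judgment might be sketched like this:

# Hypothetical sketch: an absolute rule becomes a conditional judgment.
# None of these names, thresholds or criteria come from any real system.

def may_cross_double_white_line(obstruction_stationary: bool,
                                oncoming_gap_seconds: float,
                                visibility_metres: float) -> bool:
    """Decide whether overtaking a stationary obstruction across
    double white lines is acceptable in this illustrative model."""
    if not obstruction_stationary:
        return False  # the rule holds: never cross to pass a moving vehicle
    # The "rule" is relaxed only when the manoeuvre is judged safe:
    enough_gap = oncoming_gap_seconds > 8.0       # arbitrary illustrative threshold
    enough_visibility = visibility_metres > 100.0 # arbitrary illustrative threshold
    return enough_gap and enough_visibility

The point of the sketch is simply that someone, in advance, has to choose those criteria and thresholds – which is precisely the judgment a human driver makes at the roadside.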

Extending the argument, do you programme a driverless car always to stop or take avoiding action rather than collide with a pedestrian? Is that a rule so significant as to be regarded as one never to be breached? What if avoiding running down the pedestrian takes the driverless car into the path of an oncoming vehicle, risking a collision and death or injury to the occupants of both? Does the driverless car calculate that the potential loss of the pedestrian’s life is a better outcome than the potential loss of life of the occupants of two vehicles? When is one life valued more highly than another? Is it merely statistical? Would it make a difference if hitting the pedestrian prevented an oncoming bus from being forced off the road, killing all its passengers? Would it vary the equation if the pedestrian were elderly and the bus full of schoolchildren, or the pedestrian a mother carrying a child and the bus full of terminally ill patients?
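Again purely as an illustration – and emphatically not as a proposal for how such values should ever be set – the kind of outcome-weighing described above could be reduced to a crude expected-harm comparison. Every figure here is invented:

# Deliberately crude, hypothetical illustration of "weighing outcomes".
# All figures are invented; this is not a proposal for real-world weightings.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    people_at_risk: int         # how many people the manoeuvre endangers
    probability_of_harm: float  # estimated likelihood of serious injury

def expected_harm(m: Manoeuvre) -> float:
    """Expected number of people seriously harmed by this manoeuvre."""
    return m.people_at_risk * m.probability_of_harm

options = [
    Manoeuvre("brake and hit pedestrian", people_at_risk=1, probability_of_harm=0.9),
    Manoeuvre("swerve into oncoming lane", people_at_risk=4, probability_of_harm=0.4),
]

# The pre-programmed "choice" is whichever option minimises expected harm.
choice = min(options, key=expected_harm)
print(choice.name)  # prints "brake and hit pedestrian" under these invented figures

However the numbers are chosen, the calculation embeds a valuation of one life against another – and that valuation is made long before the accident, by whoever wrote the code.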

You may well say that we are now into the realms of the nonsensical, but the point is clear. Driverless cars will need to be programmed to make judgments, and those judgments are in essence moral judgments. When a human driver is involved in an accident, that driver makes a snap decision on what to do. He or she may have been calm and weighed up the options, but the chances are that they will merely act instinctively, in a split second, on what they see before them. The decision may be right or wrong. A court may judge their conduct after the event, with the benefit of hindsight, and consider them culpable or blameless, or possibly apportion blame.

If the decision is taken not by the human driver but by the driving system, and is the pre-programmed choice of an IT systems engineer, who then is responsible? If it is the manufacturer, should that decision process be subject to the same scrutiny as the human driver’s? Will the collision then have occurred not as a result of a negligent breach of duty but of a conscious balancing of risks and outcomes? Does the injured party then fail to recover damages on the basis that the driverless car has reacted in such a way that there has been no breach of duty to other road users?

The factual scenario is not new; it is reminiscent of the questions put to law students studying the law of tort. The possibility that liability might be apportioned between pedestrian, driver and manufacturer may not be novel, but the issue of a decision prescribed in advance controlling the behaviour of one of the active participants in the drama – that participant being an autonomous vehicle – is undoubtedly new and may well require more jurisprudential consideration than it has yet received.

Clarke Willmott are leading the way in exploring the legal issues arising from the introduction of driverless cars for the motor industry and the individual.