A report to the European Parliament will lead to a vote by MEPs on robots, and in particular on their legal status.
Experts believe robots will “unleash a new industrial revolution”, and have an impact on almost every aspect of society.
But this new world, which has so many echoes of sci-fi classics, poses some genuine problems: what happens if something goes wrong? What if a human suffers harm at the hands of a robot?
Could a robot cause injury?
The report presented to MEPs says people should be able to use robots “without risk or fear of physical or psychological harm”. However, given that it is also accepted that the artificial intelligence of robots might surpass human intellect over time, how easy will that be to achieve? Even the presence of a recommended “kill switch” will only be effective if the human has the physical and mental capacity to know when and how to use it.
Rules being formulated for if and when robots become self-aware will include:
- Robots should not injure a human being or, through inaction, allow a human being to come to harm
- Robots must obey orders given by human beings except if this would conflict with the first rule
- Robots must protect their own existence so long as this does not conflict with the first two rules
But will robots make judgements differently from humans? Would an apparent lack of emotion lead to decisions that cause harm, if a robot assessed that outcome as the least bad option?
Even putting the ethical issues to one side, where would legal liability arise if you were injured by a self-aware robot? The report suggests that liability should be proportionate to the level of instructions given to the robot and its autonomy – specifically that “the greater a robot’s learning capability or autonomy is, the lower the other parties’ responsibilities should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be”. This could be a potential minefield in assessing who bears responsibility for a robot’s actions – will it be the designer, manufacturer, owner or teacher? No doubt careful regulation will be required, and a whole new area of insurance will open up.
Care Robots – the future of providing support after serious injury?
One area the report will consider is privacy and human dignity. Experts predict that many functions of care – whether nursing care, or social care that enables and protects people – will become tasks at which robots are proficient.
Take the example of someone who has sustained a serious injury following an accident, or who has a serious illness or a disabling condition: how will such people feel about receiving support from a robot? Might it in fact be seen as less of an invasion of privacy than support provided by humans? Might it even be seen as more enabling, creating greater independence, when support comes from a “possession” that acts under the instruction of the person being cared for? Or would a carer’s lack of human empathy and emotional response reduce the quality of the care, and potentially even be psychologically harmful?
With virtual reality systems, ever more instant communication, driverless cars and artificial intelligence in robots, there is no doubt that our world is likely to undergo one of its most rapid periods of change. Inevitably, at this stage, this leads to many more questions than answers. Many benefits will follow, of course – but ensuring safe systems, dealing with legal liability when things do go wrong, and weighing the ethics of allowing robots to carry out care functions previously handled exclusively by humans will all require careful thought.