The promise of artificial intelligence (AI) is that it could remove the possibility of human error. Indeed, it is already showing great potential across a number of disciplines.
A recent study found that the technology can already rival the accuracy of healthcare professionals: an AI system trained to predict breast cancer from X-ray scans performed, on average, 11.5% better than radiologists – even when the humans were given additional information, such as patient and family histories.
The question is, would patients ever be willing to put their faith entirely in AI to perform medical procedures? What’s more, how would society respond if any mistakes were made by machines?
Consider an example that puts this dilemma into perspective. Picture this: a patient must undergo routine surgery and is presented with two options – the operation is either carried out by a human doctor or performed entirely by AI. To inform this decision, the patient is also told that the human doctor has an 89% success rate, while the AI has a 95% success rate.
Rationally, it would make sense to choose the AI based on its higher success rate. However, many people are naturally apprehensive about trusting machines over humans. Many will argue that AI simply can’t replace inherent ‘human’ qualities, such as empathy, which inspire trust and confidence.
Moreover, there is still a risk of something going wrong even with a higher likelihood of success. Would humans expect AI to function faultlessly, and thus be less forgiving if it made a mistake?
Why we should forgive AI
From a philosophical standpoint, humans might be less willing to forgive crucial mistakes made by machines. After all, in our minds they are designed to improve accuracy and efficiency. This poses an interesting dilemma – one we as a society will need to confront as AI becomes integrated into our daily lives.
Let us play devil’s advocate for a moment. It is true that mistakes made by AI can cause unintended consequences. But if they are few and far between, is this better in the long run as we attempt to mitigate human error? After all, how else can we refine the technology if we do not let it learn from its mistakes?
The World Health Organisation has indicated that 1.35 million people die in road traffic accidents every year. Driverless cars have consequently been deemed an important technology for reducing the portion of those deaths caused by human error.
A 2017 study from RAND Corporation found that, in the long term, deploying driverless cars that are just 10% safer than the average human driver will save lives – preventing thousands, if not hundreds of thousands, of casualties. Importantly, this demonstrates that we do not have to wait for autonomous vehicles to be 75%, or even 90%, better than humans for a huge impact to be made.
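A back-of-envelope calculation shows why even a modest safety edge matters. The sketch below is purely illustrative – it is not the RAND model – and the 50% adoption share is a hypothetical assumption; the only figure taken from this article is the WHO estimate of roughly 1.35 million road deaths per year.

```python
# Illustrative estimate (NOT the RAND model): if some share of driving
# shifts to vehicles with a given relative safety gain over humans,
# lives saved scale roughly with both factors.

def lives_saved(annual_deaths, adoption_share, relative_safety_gain):
    """Rough annual lives saved if `adoption_share` of driving moves to
    vehicles that are `relative_safety_gain` safer than human drivers."""
    return annual_deaths * adoption_share * relative_safety_gain

# WHO figure cited above: ~1.35 million road deaths per year worldwide.
# Hypothetical assumption: half of all driving becomes autonomous.
print(round(lives_saved(1_350_000, 0.5, 0.10)))  # → 67500
```

Even under these deliberately simple assumptions, a 10% safety improvement translates into tens of thousands of lives per year – which is the intuition behind not waiting for near-perfect autonomy.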
Despite the compelling evidence, we must work to gain public support for solutions that set out to improve safety standards. As we saw with the tragic pedestrian fatality in 2018 at the hands of an autonomous vehicle, mistakes made by technologies that we consider relatively safe can provoke a major backlash.
While this is an extreme example of an AI failure, it highlights the reality that machines will make mistakes before they are perfected. The machine learning (ML) element of many AI toolsets enables algorithms to constantly learn and improve. Each time new data is input – whether a new X-ray scan or a traffic situation – ML algorithms fine-tune their decision-making and produce better results the next time around. So, while at first self-driving cars might struggle with unexpected circumstances on the road, every encounter is a new opportunity to learn. In time, they will reach new levels of ‘perfection’ and become more capable of interacting with, and understanding, the world.
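The idea that a model improves precisely because it errs can be made concrete with a minimal sketch. The example below is a classic online perceptron on toy data – the function name and data are illustrative, not drawn from any real medical or driving system – and its defining property is that the model only updates its weights when it makes a mistake.

```python
# Minimal sketch of "learning from mistakes": an online perceptron
# adjusts its weights ONLY when it misclassifies an example.
# All names and data here are illustrative toys.

def train_online(examples, lr=0.1, epochs=20):
    """examples: list of (features, label) pairs with label in {-1, +1}."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # a mistake – the only time the model changes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy linearly separable data: each error triggers a correction,
# so accuracy improves over repeated encounters.
data = [([0.0, 0.0], -1), ([1.0, 1.0], 1), ([0.2, 0.1], -1), ([0.9, 0.8], 1)]
w, b = train_online(data)
correct = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1) == y
    for x, y in data
)
print(correct)  # → 4
```

The design point mirrors the argument in the text: if the model were never allowed to be wrong, the update rule would never fire and it could never improve.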
That is why it is important to be able to forgive AI. A machine may be difficult to excuse, but, like humans, it will make mistakes as it learns. Only by letting AI fail can we allow it to truly progress.
New research coming soon…
In a society where humans and AI are beginning to coexist, our relationship with machines is forever being transformed. The objective is to ensure we are compassionate and willing to let AI fulfil its true potential, which will ultimately benefit society.
Here at Fountech.ai, we will soon be releasing some exciting new research which will show just how willing people are to trust and forgive AI. Stay tuned for upcoming announcements…