On 19 February 2020, the European Commission published a White Paper, which set out its vision for fostering a ‘European ecosystem of excellence and trust in AI’.
Whilst well-meaning, the initiative's guidelines and regulations ultimately do little to nurture the growth of Europe's AI sector. On the contrary, the new strategy introduces huge hurdles for tech companies to surmount – and risks stifling innovation in Europe.
As we speak, the US and China are engrossed in an AI race. According to a recent report by the Center for Data Innovation, the former currently leads in four of the six fields measured – talent, research, development and hardware – while China is the global AI leader in adoption and data. The question is, where does Europe fit in?
With the introduction of new regulations, Europe is in danger of eradicating any competitive advantage it might have, despite currently boasting an impressive startup ecosystem.
In light of this, there are two main problems with over-regulation that could prevent Europe from catching up with its competitors.
The curse of red tape
Firstly, over-regulation naturally comes with a whole host of red tape. For instance, the EU has said that all ‘high-risk’ AI applications could be subject to a compulsory assessment before entering the market.
This means that companies, particularly startups, will face prohibitive compliance costs, despite potentially holding the key to innovative new applications of AI. Erecting unnecessary barriers for those working to bring clever new solutions to market could cause massive setbacks in fields from healthcare to cybersecurity. As a timely example, AI could help detect and map the spread of Covid-19; yet its deployment might be delayed until it is approved in accordance with European regulations.
Equally important is the question of how we define 'high-risk'. An application that appears innocent at first might have drastic consequences, while AI in a field like transport, which might seem high-risk, could actually set a new gold standard for safety.
Two key case studies highlight this disparity. The famous case of Amazon's 'sexist' recruitment software, which prioritised male candidates over female ones, initially set no alarm bells ringing. On the other hand, the use of AI during long-haul flights, which might unnerve some passengers, has supported safe air travel for decades. Such examples suggest that over-regulation, and indeed the misclassification of risk levels, could prove problematic for societal progress.
A negative message
Secondly, the introduction of new policies aimed at curtailing AI development sends a dangerous message: namely, that AI is evil and should be tamed.
However, this is simply not the case. As AI develops, it will become part and parcel of daily life, changing our lives for the better. Indeed, most people won't even realise how much they rely on this technology on a regular basis. Drivers, for instance, wouldn't be able to find the quickest way to get from A to B without an AI-powered Sat Nav, while busy parents wouldn't be able to get automatic news updates and reminders from Alexa without this technology.
The overarching point is that we should let AI develop naturally, without excessive curbs. By over-regulating, we risk preventing industries from reaching their full potential.
Despite its good intentions, the suggested new EU framework for AI runs the risk of holding Europe back. Of course, it’s important to ensure that the technology is developed ethically and sustainably. However, we must also make sure that we are able to effectively utilise, and build upon, the benefits it can offer.