Reed Smith Client Alerts

Earlier this month, the EU published a set of guidelines on how those deploying AI-powered solutions should do so in an ethical manner. This follows the publication of the guidelines’ first draft in December 2018 and a consultation process during which the expert group working on the document received over 500 comments. The guidelines propose seven key requirements that AI systems should meet in order to be deemed trustworthy. The document is likely to heavily influence discussions surrounding the EU’s future regulatory landscape for AI.

Author: Sophie Goossens


Why are the AI Ethics Guidelines important?

In its work programme for 2019, the European Commission stated that it wants to be the “effective standard-setter and global reference point on issues such as data protection, big data, artificial intelligence and automation”. The values enshrined in the GDPR are already shaping the global economy and laws of other countries (see, for example, California’s Consumer Privacy Act). It is clear that the European Commission has the same ambitions for its AI-related initiatives. In fact, it has already been reported that Brussels’ “ethics-first approach has already attracted attention from outside Europe, including Australia, Japan, Canada and Singapore”.1

No other technology raises ethical concerns (or even outright fear) and technical challenges quite like artificial intelligence. For example, how should a music recommendation engine react to an individual who is depressed or even suicidal (assuming that the device in question can measure this) and who chooses to continue listening to melancholic music? Would it be acceptable for the machine to refuse to play the desired music, or should it nudge the user to listen to something more upbeat? As humans delegate more and more decisions to machines, there is a serious concern as to what this will mean for human autonomy and well-being.

Take another, more abstract, example: how should an AI solution handle the infamous ‘trolley problem’? This involves a trolley heading down railway tracks towards five people (tied up and unable to move), which can be diverted to a different track where it will kill just one person. If the machine does nothing, five people will die. If it acts, just one dies. There are a number of variations of this problem (for example, replacing the trolley with an autonomous vehicle) but the fundamental questions are the same. These include: (i) how to program AI-powered solutions to uphold ethical values; (ii) who should decide what these values should be, and how; (iii) who should be liable for such an AI agent’s decisions; and (iv) whether a human should be able to step in and exercise a degree of oversight.