The UK’s Information Commissioner’s Office (ICO) published its draft guidance on the artificial intelligence (AI) auditing framework (the Guidance) on 19 February 2020. Unlike other recent ICO publications, this Guidance remains exactly that: guidance, not a statutory code of conduct that must be followed. Given the increased prominence of AI across many sectors, the data protection community has been eagerly awaiting clarity on how to reconcile the principles of data protection law with AI’s competing values and interests when the two appear to clash (the ICO refers to these as ‘trade-offs’). This is the ICO’s first substantial consolidated publication on AI. It focuses mainly on the risks arising from AI’s increased use and the controls that can mitigate them, and it makes a significant number of practical governance suggestions throughout to ensure businesses are accountable for the way in which AI is developed, trained and deployed.
In this article, we summarise the key points set out in the Guidance. The consultation closed on 1 April 2020, but the Guidance nonetheless offers practical pointers that organisations can act on now.
So, what’s it all about?
The Guidance provides practical advice on how to resolve apparent tensions between data protection compliance on the one hand, and the efficiency and accuracy gains of deploying AI on the other. By its nature, AI relies on large datasets, which raises questions around accuracy and data minimisation that some may see as at odds with privacy requirements. The Guidance explains how these issues can be resolved. It focuses mainly on machine learning (ML), but acknowledges that, ML or not, the risks and controls it highlights will be of use to any development or deployment of AI.
Who is the Guidance aimed at?
The Guidance is, of course, aimed at those working in compliance (DPOs, GCs and so on), but, importantly, it has also been designed with technology specialists in mind: developers and IT teams will be able to use it to identify the practical steps they need to take both in their day-to-day interactions with AI and in longer-term project planning.
Hasn’t the ICO already released guidance on AI?
Yes. The Guidance forms part of a longer-standing initiative and sits alongside the ICO’s other documentation on AI. This includes the ICO’s explAIn guidance, produced in collaboration with the Alan Turing Institute, which sets out the key considerations organisations should take into account when explaining AI-delivered or AI-assisted processes, services and decisions to the individuals affected by them. Separately, the ICO recently produced a series of blog posts providing updates on specific AI data protection challenges and on how its approach to AI is developing. The amount of attention devoted to AI is unsurprising given that it is one of the ICO’s top three strategic priorities for 2018-2021.