In the context of the Article 29 Working Party’s Opinion on Anonymisation Techniques (WP216), Kate Brimsted, of Counsel at Reed Smith LLP (with Katalina Chin contributing) discusses why developments in Open Data and Big Data are driving an unprecedented need for reliable anonymisation techniques, whilst at the same time eroding their effectiveness.
It seems to be a truth universally acknowledged that if some data sharing is good, then the sharing of so-called ‘Big Data’ must be even better. Doing so can allow us to identify and act upon trends, plan smarter cities, reduce energy consumption, and enhance disease prevention and public health.
Incredibly fast computer processing speeds mean that data crunching can be accomplished on a scale never before achieved, and this capacity continues to grow in line with Moore’s Law (the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years). Clearly, this process can benefit humanity collectively — but what about individuals’ rights to be ‘left alone’, and to protect their privacy? Some of the information feeding the Big Data engine will be about people, or relate to them in some way. How can we achieve a balance between openness and privacy when it comes to information relating to living, identifiable people (i.e. ‘personal data’), not forgetting that the scope of what constitutes ‘personal data’ is expanding?