GDPR Challenges
From a compliance point of view, AI poses specific problems for data controllers and for us as data subjects. One key issue is the centrality of transparency to data protection. Every controller must be able to set out, clearly and in plain language, how personal data will be used and for what purpose, at the point at which the personal data is gathered. This is sometimes referred to as the ‘no surprises’ rule.
However, if a controller is using AI to make an automated decision and that AI is self-learning, can the controller ever really explain how an individual’s personal data will be processed? This is certainly unlikely before the processing takes place, and it is unclear whether the controller will be able to explain how the AI arrived at its decision afterwards.
Another major issue is the increasing use of AI to profile nearly every aspect of our daily lives. We see this in supermarkets, in banking and in internet usage, because personal data that was once gathered through many separate systems can now be integrated. This raises issues of fairness and proportionality.
Some organisations have taken a creative compliance approach, holding that statistically inferred data is not really personal data, and that any data they use is publicly available anyway. Others have argued that the individual was not profiled; rather, it was their property or the professional role they carry out. The GDPR is quite clear, however: it defines profiling as any automated use of data that can identify a living person, i.e. personal data, to evaluate, analyse or predict aspects about them. It particularly references an individual’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements. In other words, all the really interesting information! The €18 million fine issued against Österreichische Post AG reinforced this point, making it clear that the use of personal or non-personal data, even when public, to assign attributes to data subjects is subject to the GDPR.
Where AI is processing data that includes biometric data, the challenges increase still further. A particular case is the use of AI to create a pattern from an image and match that pattern against different reference databases. Recent examples include the London Metropolitan Police using AI for real-time mass surveillance on the streets of London, despite fears about the proportionality and accuracy of the system. Another is the controversial Clearview AI facial recognition technology, which has scraped an unknown number of photos of millions of individuals from the web. The tool purports to be able to identify almost anyone from these sources. The photos were likely never taken, or made available, with any expectation of such biometric processing, which in some cases took place several decades after they were first published. Another example closer to home is the biometric capability built into the cameras of the new Children’s Hospital.