FOI Man reports on the ICO’s new Code of Practice on anonymisation.
FOI Officers tend to be caught between a rock and a hard place on a pretty much continual basis. If it isn’t navigating between the Scylla of senior management and the Charybdis of requester ire, then it’s trying to balance the often competing demands of the Freedom of Information (FOI) and Data Protection (DP) Acts.
So new guidance from the Information Commissioner on the important subject of anonymisation is very welcome. Though at over 100 pages, some FOI and DP Officers may struggle to find the time to read it between fielding requests and CMP notices. But, ever at your service, I attempt to extract the key points for you here.
The Code notes that the DPA does not require anonymisation to be completely risk free – the role of the Code is to help organisations mitigate the risks involved with anonymisation. Similarly, it points out that – in line with R (on the application of the Department of Health) v Information Commissioner [2011] EWHC 1430 (Admin) – anonymised information ceases to be personal data. So if your data is truly anonymised, section 40 of FOI won’t apply to it, and the sort of large datasets that that nice Mr Maude likes Government departments to publish can be unleashed without concern.
But that’s the catch. We’ve got to be very careful that what we put out there is truly anonymised. The Code summarises the problems with that neatly – firstly, there are a number of ways that an individual could be identified, so just taking a name out may not be enough. And secondly, we have no way of knowing what information you folks out there might already have access to.
There are well documented examples of how individuals have been identified from supposedly anonymised datasets once put together with information available on the internet or with personal knowledge. The ICO point out that organisations aren’t omniscient – they can’t know for sure what is, and what will be, available to people. So what do they say about how FOI and DP Officers should reach the judgment as to whether or not it is safe to disclose an anonymised dataset?
Effectively – and I hate to throw a buzz word at you – it’s a risk assessment. They cite a Tribunal concept of the “motivated intruder”. Basically this is someone who will do anything short of committing a crime to identify individuals where there is some motive, eg the information is newsworthy, of interest to the village gossip, or perhaps politically sensitive. We need to consider whether someone like that could identify people using libraries, archives, the internet or social media. In other words, we’re talking about those people who you see on TV sometimes tracking down people for an inheritance. Or the producers of Who Do You Think You Are. Could they identify individuals from the data?
Of course, this is better than nothing, but it still relies on FOI and DP Officers or their colleagues to have the time to work out whether someone could be identified from all of these sources. If they haven’t got that time, then there is a risk that the Code just leaves us where we started – with authorities reluctant to release information for fear of individuals being identified.
Thankfully the ICO do recognise the difficulty of this with large datasets – the desire for publication of which is pretty much what prompted this Code. They say:
“It will often be acceptable [with larger datasets] to make a more general assessment of the risk of prior knowledge leading to identification, for at least some of the individuals recorded in the information and then make a global decision about the information.”
But it still means that many FOI and DP Officers will be left feeling uncomfortable whenever considering disclosure of anonymised datasets. Have I checked enough sources? What if I’d tried that other search engine? Should I subscribe to that genealogy site to check what someone could find there? It’s difficult to see what else the ICO could have advised, but FOI Officers will take limited comfort from the Code on this point.
There is some useful practical advice in the Code such as the best ways to present personal and spatial data (eg in crime maps). The case studies that form the last half of the publication will be helpful as well.
Overall, the Code is a useful guide to the issue of anonymisation for FOI and DP Officers and anyone working with datasets containing personal data. But it won’t be the last word and it will be interesting to see what comes out of the new UK Anonymisation Network announced yesterday by the Information Commissioner.