Beesley Lecture - ‘Regulating the tech giants in the digital age’
ICO Deputy Commissioner (Operations) James Dipple-Johnstone’s speech at the Institute of Directors in London, 31 October 2018.
Mr Dipple-Johnstone and Tommaso Valletti, the European Commission’s Chief Competition Economist, addressed the subject from the perspectives of data protection and competition respectively.
Original script may differ from delivered version.
Hello, and thank you very much for that introduction.
It is, of course, Hallowe’en night, and the obvious temptation would be to fill this talk with associated puns.
You will no doubt be extremely relieved to hear that this is a temptation I have largely avoided, except to say the following: Big Tech is not something we should be frightened of when it comes to enforcing the law to protect the public.
Our new powers under the General Data Protection Regulation, the new Data Protection Act 2018 and the Security of Network & Information Systems Regulations mean that no organisation, whatever its size or scope, is too big to be effectively regulated. We have ‘fangs’ and will show them if we think we need to.
So now that’s Hallowe’en out of the way, we can move on to look at the laws we enforce, the challenges we face and how we aim to meet those challenges from a data protection perspective. We will look at why we regulate, when and how. And finally I will float some questions about ‘what’ this raises for the future of big tech regulation.
Why we regulate
So, to the ‘why’. Well, Big Data and Big Tech are central to many positive aspects of modern life. As technology evolves, that influence and importance are only likely to grow. They have brought many undoubted benefits, from greater connectivity and convenience to driving new economic growth and prosperity, while also giving us new insights and new ways of looking at the world.
We can instantly connect and share special moments with our loved ones, no matter where in the world they are. We can shop in global marketplaces or do our banking with a few clicks of a screen. We can build and run businesses online. Information, knowledge and learning that would once have filled vast ‘bricks and mortar’ libraries can now be accessed in seconds from the phone in your pocket.
And yet how many of you are really aware of how these organisations are using your personal information? How many of you have actually read the privacy policies or fully understand what you – or in some cases, your friends – signed up for when you created your social media account or downloaded that app?
We know that there are deep public concerns about the business models employed by some of the technology giants and, more generally, about some of the increasingly opaque uses of personal data. There is increasing concern about online crime, cyber-attacks, harmful content and how securely people’s information is being protected, and about the seeming ease with which bad actors can find and use our data against us.
Sales and usage of new technologies such as wearable devices or connected ‘Internet of Things’ devices in homes are booming. Other products and services emerging on the horizon, like connected cars for example, will undoubtedly present new privacy challenges. And all these things rely on personal data.
As technology advances, the use of Artificial Intelligence and algorithms will increase, alongside greater use of predictive analytics. The more complicated and automated the process of data use and targeting becomes, the more difficult it will become for people to give informed consent, to ask for a clear explanation about how their data is used – or even to know when it is being processed in the first place.
Our report on the impacts of this on the democratic process – Democracy Disrupted? – shows just one of the many significant implications of this going unchecked.
The public need to know that their legal rights to privacy – which, after all, derive from global human rights principles – are being protected. They need to know that technology is not racing ahead of their rights. Perfection is unobtainable, but we need a system that recognises the need to balance privacy and freedom of expression. This is hard but not impossible.
But this is not a new challenge. Several colleagues and I, including the Information Commissioner, Elizabeth Denham, were in Brussels last week at the International Conference of Data Protection and Privacy Commissioners – a global gathering of regulators.
There we saw Apple CEO Tim Cook give a fascinating speech in which he said: “It's time to face facts. We will never achieve technology's true potential without the full faith and confidence of the people who use it.”
That very much echoes our position as regulator. Our primary goal, as set out in our Information Rights Strategic Plan, is to increase the public’s trust and confidence in how their personal data is used.
Our latest annual research suggests that the number of people with trust and confidence in how their personal data is used has increased over the last year from one in five to one in three. But while that increase is welcome, we would say it is still not good enough.
It is also noteworthy that respondents trust the public sector more than the private sector.
When we regulate
So we have discussed why we regulate. But when do we decide we need to get involved?
In an ideal world, problems would be identified and dealt with before they even arise. The principle of Privacy by Design and Default, encouraging data protection considerations to be built into new products and services from the very outset, is long established.
In practice, however, this is not always the case. And so we get involved when we identify emerging potential risks. We have an Intelligence and Research Department for this purpose, scanning the horizon for potential risks to our citizens and formulating ways to deal with them.
This has been the case on issues as diverse as connected toys, with their implications for children’s privacy, and the growing use of facial recognition technology in both the commercial and law enforcement spheres. I’m sure many of you will remember the issue back in 2009 of Google’s Street View cameras capturing the images of identifiable individuals. We took a common sense approach and instructed the company to blur the images of people’s faces, while allowing the service to continue.
A more recent example was that of the Royal Free NHS Trust, which agreed to give DeepMind, an AI organisation owned by Google, access to approximately 1.6 million patient records. We found that in doing so, the hospital – as the data controller in this case – was in breach of four principles of the Data Protection Act 1998.
In response to what we found we issued an undertaking requiring the Royal Free to commit to a range of measures to safeguard public data, including enhanced transparency and accountability processes, and a third party audit supervised by the ICO. This allowed innovation to proceed while protecting privacy.
We took a similar view of WhatsApp’s plans to share users’ personal data with its parent company, Facebook. At the ICO, we felt this was a situation where we had to intervene, to stand up for the privacy rights of UK citizens. After extensive investigation and discussions, including with our EU colleagues, we obtained an undertaking from WhatsApp in March stating that it would not share personal data with Facebook until and unless its privacy statements and consents aligned with the GDPR. A company’s business model or market share should not conflict with the individual’s right to privacy or reduce people’s power to choose how their personal data is used.
We are increasingly seeing interest from our economic and market regulatory colleagues in this space, and as DPAs it is our role to work with them to hold big tech accountable.
A forward-looking inclusive approach to working with other regulators, or with governments where regulation is absent, is key to maintaining the free flow of data – both within the EU and across the globe.
And of course, there are also the more day-to-day, reactive elements of regulation – investigating reports from organisations themselves, such as cyber attacks or security breaches, and responding to complaints from members of the public.
These reports and complaints are the life-blood of regulators. They provide the background to regulation and add to the comprehensive picture of risk we seek. We use the feedback from them to inform our decisions on the proactive elements, whether that is guidance, enforcement, compliance monitoring or auditing. This will continue to be part of our model in future.
How we regulate
And so this leads us on to how we regulate. What exactly do we do when we decide we have to step in?
Like many of the organisations represented here in the room, we at the ICO are bound by the Regulators’ Code. This means we will use our powers to enforce the law when it is necessary – and will do so reasonably and fairly; targeting the harm posed. You will no doubt already be aware of some of the enforcement action we have taken recently. And under the GDPR, these enforcement powers have been strengthened significantly.
The GDPR gives us a range of new tools to investigate and to enforce. We can demand algorithmic transparency, we can carry out ‘no notice’ inspections of premises, and we can demand the destruction of data or call a halt to processing where we are not satisfied about legal compliance. These are powerful tools.
The action we can take when we discover a breach ranges from observation and monitoring, through to reprimands, audits, assessments and inspections, information notices and, when needed, formal investigations, civil financial sanctions or criminal prosecutions.
The tech giants are treated the same as any other organisation in this respect. The GDPR and its predecessor are principle-based laws and those principles apply equally.
However, we are also a risk-based regulator. Although our budget and staffing are now significantly bigger than they were even 12 months ago, we can’t investigate and enforce every single case, any more than the police can investigate and prosecute every allegation of criminal activity.
We have to look for where the biggest risk to the rights of the public could come from. And so it’s inevitable that the bigger organisations, which process the biggest volumes of data, will present some of the biggest risks to the legal rights and freedoms of our citizens.
And so how does this work in practice?
In terms of enforcement, we have GDPR cases in the pipeline, but we have not yet issued a fine under the new regime, so let’s instead look at a recent case where we issued our maximum fine under the 1998 Act. Although the law has now been changed, there are consistent factors in play involving principles which remain relevant.
The case involved Equifax, the credit reference agency, which holds significant amounts of personal data. A breach at its US business exposed the information of 15 million UK residents, and Equifax was fined £500,000 as a result.
So how did this case differ from the Royal Free case I mentioned earlier? Why was Equifax given the maximum financial penalty while the Royal Free was only asked to sign an undertaking?
The answer can be found in our Regulatory Action Policy. The overarching principle is that “the more serious, high-impact, intentional, wilful, neglectful or repeated breaches can expect stronger regulatory action”.
The Royal Free Hospital was working with DeepMind to test a new app which could potentially detect serious conditions. Our investigations found shortcomings in the Hospital’s approach to transparency and in safeguarding personal data. But that’s all they were – shortcomings. The Royal Free had also taken steps to improve its practices in response to our work.
And there’s another consideration: since its eventual launch, the app has delivered measurable clinical benefits. We support innovative products, services or concepts. The potential for creative uses of patient data is huge, and we do not want to stifle that.
Compare that to the other case, involving a company with vast resources and huge legal and technical expertise in handling personal data.
Equifax became aware of a critical vulnerability in its systems in March 2017, but it was not rectified effectively. That vulnerability was exploited from May 2017 and was not spotted and closed until the end of July of that year.
The breach exposed systemic inadequacies, a number of which appeared to have been in place for some time. Equifax, a large, well-resourced company which routinely dealt in sensitive data, should have spotted those inadequacies sooner and fixed them.
Given the scale of the breach and the nature of the personal data involved, the incident was likely to cause substantial distress to many people, not least because of concerns about any potential impact on their credit ratings.
It all boils down to the guiding principles of data protection legislation - fairness, transparency and accountability.
But my talk this evening would not be complete without reference to today’s other news in this area, with a call for senior Facebook leaders to give evidence to parliamentarians from the UK and other jurisdictions, including Canada.
Our recent Facebook fine, again at the maximum under the 1998 legislation, was part of our wider investigation into data analytics for political purposes, which has thrust the ICO into the centre of political discourse, both here and in Europe and North America.
It is the biggest data protection investigation ever undertaken anywhere in the world. It is global in scale and, as you can see from this slide, it has encompassed a huge cast of organisations and individuals, both here and abroad.
We will be providing an update on the investigation next week when I and the Commissioner are due to address the DCMS Select Committee but we have already taken a number of steps, including issuing sanctions and starting a criminal prosecution.
But this case is also of importance for the change in approach it represents. We have acted as an agent of disruption, identifying unreasonable risks and using our powers to prevent problems before they become more serious.
Organisations, especially the technology giants, can expect to see more of this approach from the ICO in the future – systemic investigations of a sector, or of an entire ‘ecosystem’ where data protection concerns are raised, to lift the curtain for the public, to provide advice and guidance to companies where necessary, and to sanction where we find bad practices or those out to cause harm.
The commissioner has reconfigured our office to do this; we’ve established multi-disciplinary teams; we have embedded our international links to undertake evidence gathering and enforcement activity overseas; we have strengthened further our digital investigation and forensic recovery resources and we have reinforced our in house legal team with external legal support in the UK and overseas.
But while big fines and big investigations make big news, they are just one aspect of how we encourage compliance. We always prefer collaboration to sanction. Our door is open for advice and engagement.
We will continue to encourage and reward those who engage with us to resolve issues and who can demonstrate strong accountability arrangements and data protection by design.
Collaboration
And away from enforcement, we want to engage and collaborate. We want to listen every bit as much as we talk. And we want to stay relevant.
We reaffirmed our commitment to making technology the backbone of our organisation when we launched our first Technology Strategy in the spring. This has been supported by recruitment in this area, including at senior levels.
Our Grants Programme, which supports independent research into information rights threats and opportunities, is now in its second year, and we are developing our first post-doctoral research programme looking at the regulatory implications of Artificial Intelligence.
We have recently been awarded just over £500,000 from the Department for Business, Energy and Industrial Strategy to establish a Regulators’ Business and Privacy Innovation Hub, aimed at providing other regulators with bespoke data protection advice to help them support innovation and the bringing of new products to market in their own sectors.
I know many of you here tonight are from regulatory bodies, and we hope the hub will be something our organisations find mutually beneficial.
Alongside the hub will be the ICO’s regulatory sandbox. This will be a safe space to explore innovative ways to use personal data. We mentioned the Royal Free case earlier and that is a perfect example of where an organisation could have come to us beforehand with a new form of processing that they wanted to test in a sandbox.
These new developments are part of a wider trend at the ICO towards ever greater collaboration and innovation, domestically and internationally.
Data doesn’t recognise borders, and the big tech outfits we regulate are multinational in scope, so we need to be smarter, more flexible and more responsive in our approach to meet these new challenges.
The GDPR helps facilitate this through the European Data Protection Board and its new cross-border working arrangements and investigative protocols.
We are increasingly working with others. As mentioned earlier, offshore decision-making is a feature of nearly all the tech giants. With the majority of tech firms based in the US, this can pose problems when things go wrong. There is a tendency for overseas HQs to engage only with their own supervisory authority and to shut out concerned regulators in other jurisdictions. When an incident affects UK citizens’ data, the ICO does not intend to sit on the sidelines.
In recent months we have been working with other sector and market regulators, with statutory agencies and Commissions, with law enforcement and with DPA colleagues in Europe and further afield. Our Commissioner was last week appointed chair of the ICDPPC, the global conference for data protection regulators I mentioned earlier.
This will continue to be a feature of our approach to regulation – we are engaged with the wider world and we will continue to be so, wherever we find ourselves post-Brexit. We have an International Strategy which sets out our aims to be a global player – just like many of the organisations we regulate and which we are here to talk about tonight.
Future questions – the what?
So what about the future? Are we up to the task? I would say that our work over the last 12 months shows we are on the right path. But, as ever in this changing landscape, there is more to do.
In May, we oversaw the biggest change to data protection in a generation, bringing our laws up to date. We have also seen the introduction of requirements under the NIS Directive to keep our critical national infrastructure safe online. However, we must always be mindful of the changing range of harms – increasing concerns about harmful content are coming to the fore, and we collectively need to think about how we respond to that.
We also need to have the capacity and capability to act decisively within that legal framework. We have significantly increased our staffing from 400 to around 630 in 24 months, making us the largest data protection authority in Europe in terms of both personnel and budget.
We are investing significantly in our staff – our remit covers everything from the local blacksmith to the largest online bank and everything in between. Our teams need up-to-date skills and knowledge to understand the technology, the issues it raises, the standards that should apply to it and the corrective action that may be needed. It is a constant challenge.
And as we anticipated, under the GDPR we are doing more of everything. We have seen a rise in personal data breach reports from organisations, and complaints from the public relating to data protection issues are also up as more people become aware of their individual rights. We welcome the fact that data protection and people’s individual rights are now very much part of the conversation, both in the UK and worldwide, even if that increases our workload.
And this balance must be found, because the amount of data created worldwide is growing exponentially.
IBM estimates that 90 per cent of all the data that exists in the world today was created in the last two years.
Just think about that for a second. 90 per cent. And rising every year.
That begins to give you some idea of the scale of the challenge we face as a regulator, both now and in the future. But with a new legal framework, greater resources and improved public awareness, it is a challenge we accept without fear.
Original article link: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2018/10/beesley-lecture-regulating-the-tech-giants-in-the-digital-age/