It seems that every time we turn around, there’s new data or Internet legislation on the table that the media describes as “similar to the GDPR” or “GDPR-like.” That makes sense, of course, given that the GDPR is the most important privacy legislation in several generations, but the shorthand can blur the important local variations that characterize each new law. The CCPA is not the GDPR, and neither is Brazil’s General Data Protection Law – each imposes unique requirements and demands a different compliance regime. Yet these laws generally do not go beyond the GDPR in terms of punishments, fines, or regulatory oversight. Put another way, although the spate of recently enacted Internet-related laws have their differences, they all move in roughly the same direction: greater individual control over privacy.
That is, until today, when the UK government announced that it was planning to enact the “toughest internet laws in the world.” In what may be the beginning of a major shift in focus, the UK government issued a White Paper on tackling harmful content online. The proposals, aimed at curbing extremism and the dissemination of harmful material, focus on tech giants like Facebook and Google for apparently failing to control the content they allow to appear. Prime Minister Theresa May said:
For too long, these companies have not done enough to protect users, especially children and young people, from harmful content. That is not good enough and it is time to do things differently. Online companies must start taking responsibility for their platforms and help restore public trust in this technology.
To translate – the British English phrase “that is not good enough and it is time to do things differently” in the United States is rendered as “I’m mad as hell and I’m not gonna take it anymore.”
The New Sheriff in Town
The White Paper makes clear that oversight is a primary concern, and that centralized authority is essential. To that end, there is to be a new regulator, imbued with the authority to punish offenses, which can include the power to:
- Impose fines that will potentially exceed GDPR’s 4% of global annual turnover;
- Disrupt businesses by removing them from search results, app stores, and links in social media posts;
- Impose civil, and potentially even criminal, liability on senior managers of companies; and
- Require ISPs to block access to offending sites within the UK.
Requiring ISPs to block sites is a radical step, one that most Western countries have carefully avoided out of free-speech concerns. Talk among MPs suggests that a simple takedown notice is seen as an insufficient deterrent, and so the threat of blocking is thought necessary. That is not to say that blocking can’t be circumvented. In fact, it would take approximately 30 seconds (depending on your internet connection) to get around the law entirely and continue accessing whatever information you want by downloading an onion router/browsing platform and getting right back online. The ease with which a law can be skirted is not always an argument against the law itself, but here the question is precisely one of effectiveness.
A “Duty of Care”
A new regulator and new punishments always make for major headlines, and will likely dominate discussion of the proposals. But the more important facet of the White Paper, from a lawyer’s perspective, is that it announces a “duty of care.” That phrase carries a great deal of meaning, largely because it marks the beginnings of an entirely new tort regime. You may not be excited about that, but lawyers everywhere are rejoicing.
To explain: in tort law, which we’ve covered here, a plaintiff can recover money damages when a defendant commits a non-criminal wrong against them. It relies upon the principle that if you owe a duty of care to another person and breach that duty by acting negligently (or with intent), you’re liable. “Aha!” you might say. That’s about private actors; this is about the government setting a duty of care. Except that breaching a statutory duty of care (as here) is often enough to make out a viable suit. And while statutory duties are much more complicated in the UK than they are in the US, the principle remains: if Parliament imposes a duty of care, a lawsuit may be lurking.
Or, you know, they could just create a private cause of action.
Whatever the case, there will need to be a definitive rule on how the duty applies, to which companies, and with what scope. That clarity is essential because, without it, anyone falling within the reach of the law (and not just tech giants) will have to comply with an extremely complicated legal regime with uncertain boundaries and questionable enforcement protocols. The risks of operational slowdowns and economic uncertainty rise in proportion to the ambiguity and unpredictability of the enforcement mechanism. Neither of those things is good.
Economic concerns aren’t the only ones at play here. In fact, the far larger consideration is how the proposed rules will affect freedom of expression online. Any regulation of content implicates this issue: before content can be removed for being dangerous, obscene, or threatening, someone will have to decide that it is dangerous, obscene, or threatening. And, yes, it’s always going to be a person, because even when we deploy AI to review materials (as we already do), a human must first establish the parameters the AI will use to identify the content.
So we’re looking at a law that might result in overbroad censorship of content, all in the name of protecting certain categories of speech and rooting out extremism. It will also create a need for screening software or tools to help companies identify when and where putatively extreme materials are posted and, if not block them, immediately remove them. In other words, it’s potentially the Copyright Directive and the Zuckerberg op-ed rolled into one.
Combating online extremism is not merely a policy preference; it’s an imperative. But managing the how of regulation is almost always more important, and more difficult, than identifying where regulation is necessary. Here, the conflicting interests of protecting free expression and curbing dangerous content will require careful thought and meaningful consideration. That means input not only from government figures, but from industry, consumer groups, political advocates, and experts. It’s a conversation we need to have, in the UK, in the US, and more generally. Because without more guidance, a legislative mandate to regulate speech can become a dangerous tool indeed.