The longer you spend in data-oriented businesses, the more you notice a funny thing about the language used to describe data sets and their uses. While, early on, the language sounds a lot like what you’d use to describe currency (“valuable,” “fungible,” “velocity”), eventually it all starts to sound like you’re talking about food (“organic,” “fresh,” “raw”). Maybe it’s a product of a food-obsessed culture (Gordon Ramsay makes $45m a year yelling at people), but food metaphors and food-like thinking have thoroughly permeated our collective understanding of, and language for, data.
This is actually a vast improvement over the idea of “data as currency” or “data as oil” or “data as [insert unhelpful metaphor here],” because food shares two important attributes with data: both are tied to the people who prepared them, and quality is inextricably tied to origin. That last point is worth a deeper analysis, because we sometimes see businesses adopting a cavalier attitude toward how and why data comes into their possession. The thought is that, as long as you have the data in your possession, you can do whatever you want with it. That thought is, at least now, a dangerous miscalculation.
At one point, it would have been reasonable to assume that, regardless of provenance, if a dataset came into your possession, you had relatively free rein to use it as you saw fit. That model of information flow essentially powered the explosion of data-driven businesses in the United States, led (and dominated) in no small part by Google and Facebook. Analysts have demonstrated how these companies transformed a simple search box or a basic message board into trillions of dollars in market value, but much of that success came from a voracious appetite for data, which fueled highly precise ad-targeting tools. More data, from any source, was good, because it allowed machine-learning systems and algorithms to target ads ever more accurately. There’s no doubt that the need for ever-more accurate and current data sources created perverse incentives in that model, and those incentives have been the subject of no shortage of regulatory concern.
That concern has prompted changes in governing law that mirror regulations around food: source identification and border controls. Just as you can’t simply bring a box of vegetables or seeds from one country to another, regulations like the GDPR restrict, and sometimes prohibit, your ability to move data from one jurisdiction to another. Indeed, the GDPR forbids transferring personal data about EU data subjects to any jurisdiction outside the EU unless the European Commission has found that the receiving country ensures an adequate level of protection, or the transfer is covered by other approved safeguards, such as standard contractual clauses. Note that, under the default adequacy route, it doesn’t matter whether the receiving business has adequate safeguards: the receiving country’s legal regime as a whole has to be adequate.
That approach is catching on, globally. Canada, just yesterday, announced that it was going to begin the process of limiting the ability to transfer data outside the country without assurances that a receiving country has similar safeguards. Unlike the GDPR, Canada’s privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA), does not have an express provision limiting cross-border transfers. Indeed, PIPEDA is a fairly straightforward privacy law whose restraints on the collection and use of data are domestic in scope. But, in response to global changes in privacy, the Office of the Privacy Commissioner in Ottawa has undertaken a years-long strengthening of PIPEDA by, effectively, reading new requirements into it and pushing for a meaningful penalty regime.
Why the change? The Privacy Commissioner made clear that this was about giving people what they would expect, even if the law doesn’t explicitly require it:
A company that is disclosing personal information across a border, including for processing, must obtain consent . . . . it is the OPC’s view that individuals would reasonably expect to be notified if their information was to be disclosed outside of Canada and be subject to the legal regime of another country.
There is an important corollary to the idea that data subjects have the right to know where their data goes after they share it: companies have a corresponding duty to know where their data comes from, and how they acquired it. Canada’s proposed rule, like the GDPR, doesn’t impose any burden on the data subject to actively manage their personal data. In other words, these laws require businesses that obtain personal data to proactively identify how they obtained it, and whether it comes from a permissible source. It’s the same premise behind sourcing laws for food: if you can’t prove that you obtained produce from a safe, reliable place, you can’t use or sell it in the market.
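To make the duty concrete, the provenance-tracking idea can be reduced to two questions a business must be able to answer for every record: where did this come from, and may it go where we want to send it? The sketch below is purely illustrative (the jurisdiction names, fields, and allowlist are assumptions, not any legal list), but it shows the shape of the check these laws effectively require.

```python
from dataclasses import dataclass

# Hypothetical allowlist of destinations the exporting regime treats as
# having adequate safeguards. Illustrative only; not a legal determination.
ADEQUATE_JURISDICTIONS = {"EU", "Canada", "Japan"}

@dataclass
class Record:
    subject_id: str
    origin: str            # jurisdiction where the data was collected
    consent_obtained: bool  # did the data subject consent to disclosure?

def may_transfer(record: Record, destination: str) -> bool:
    """A record may move only if its provenance is known, the subject
    consented, and the destination offers adequate safeguards."""
    if not record.origin or not record.consent_obtained:
        return False
    return destination in ADEQUATE_JURISDICTIONS

r = Record("u-123", origin="EU", consent_obtained=True)
print(may_transfer(r, "Canada"))     # permitted under these assumptions
print(may_transfer(r, "Elsewhere"))  # blocked: destination not adequate
```

The design point is that the burden sits entirely on the business: a record with unknown origin or missing consent fails the check by default, mirroring the food-sourcing rule that unprovable provenance means the product never reaches the market.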
You Are What You Eat
The thinking that animates these laws is deceptively simple: managing the cross-border flow of personal data allows for better oversight and, ultimately, more individualized control over how, and when, data is shared. The problem for American companies is that very few have made serious efforts to integrate cross-border restrictions into their privacy plans. For instance, one way to secure the right to transfer data out of the EU is to self-certify under the EU-U.S. Privacy Shield Framework, which is administered by the Department of Commerce and enforced by the FTC. The arrangement between the US and the EU allows the free flow of data to the US for Privacy Shield-certified companies, as long as those businesses meet certain data privacy safeguards. It’s a great way for American companies to safely transfer data out of an EU Member State.
And only about 4,600 companies have joined.
Given the trillion-dollar cross-border trade between the US and the EU, we can be sure that far more than 4,600 US companies routinely deal with data about EU citizens. So what is everyone else doing? Presumably, they either don’t realize that this is a problem (which is a big problem) or they assume that it will all be fine. And when Canada, our $600b+ trading partner, enacts data-transfer restrictions, and Japan, with $200b+ in trade, does the same, will it still be fine?
Ultimately, the strategy has to involve a rethinking of how we obtain our data. Programs like Privacy Shield are extremely helpful (and something we frequently recommend to our clients), but they depend upon an approach to data that, for most companies, is a substantial departure from the norm. That is, before Privacy Shield makes sense for a given company, it must already have a robust privacy program, a commitment to transparency about data uses, and strong data governance and controls.
There’s no escaping the hard work of building those systems. But rather than thinking of them as cost centers, remember that they enable access to huge markets — Canada and Europe are pretty big — and new opportunities. Whether you think of data as currency or oil or food, access to it is essential. Just make sure you know where it comes from.