As Transparent as Mud

Transparency is one of the principles driving recent developments in data privacy and data security.  We’ve spent a lot of time discussing how important it is to be open with consumers and data subjects, to give them a clear idea of how their data is used, and why.  The primary reason for this is to give them a meaningful chance to decide whether they agree to the tradeoff inherent in every data sharing partnership or transaction: if I give you x data, you will give me y benefit at potential cost z.  If I like the relationship between x, y, and z, then we have a deal.
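If you want the shorthand written out, the deal boils down to a one-line decision rule (the notation here is ours, added just to make the tradeoff explicit):

\[ \text{share } x \iff y > p \cdot z \]

where y is the benefit you expect, z is the potential cost, and p is your estimate of the probability that the cost ever materializes.  Most of what follows is an argument that consumers are handed far too little information to estimate p or z at all.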

I know it sounds like math, but still, it makes sense.

[GIF: Michael Scott from The Office, shouting “No.”]
I was told that there would be no math.

The problem, to paraphrase Commissioner Gordon, is when consumers neither get the transparency they deserve nor the transparency they need to make those informed decisions.  Consider the following scenario: you have just parked your car, and, for the very first time, the following alert pops up on your phone —

[Screenshot: a Waze alert asking permission to access your health/motion data so it can remember where you parked.]

“Hey, I always forget where I parked.  Great idea, Waze!” you think, as you quickly tap “OK” and move on.  And, at first glance, you can be forgiven for not seeing a single problem with this offer: I give Waze my health data (x) in exchange for a parking reminder (y) at a potential cost (z) that is seemingly very low because, let’s face it, Waze isn’t a healthcare company, so I don’t really care about them having my health information.  You can even go a step further and get all lawyer-y and say that it was merely a request to use data, and it isn’t as if the app says “We need this information to save your parking location.”  It’s a clean, simple exchange.

[GIF: Bill Lumbergh from Office Space]
“I’m gonna need you to go ahead and rethink that one, mmkay?”

If you take a step back from that initial view, things become a lot more complicated.  Conceptually, location and health data are some of the most sensitive pieces of information your phone holds – they are the data sets archetypically protected by laws like GDPR or HIPAA, and they are also the subject of extensive Fourth Amendment litigation over privacy rights and unreasonable searches.  They are also among the most valuable data sets that exist: knowing where a consumer is located is intimately tied to consumer behavior and, therefore, to predicting their habits and purchasing patterns.  Health data, meanwhile, is a primary driver of a global health industry worth roughly seven trillion dollars, around 10% of the world economy.  For context, this is what that number looks like written out: $7,000,000,000,000.00.  So the data is sensitive, and the data is valuable.  Probably worth re-thinking that x/y/z calculation above, right?

Consider, too, that in order for a disclosure to be meaningful it has to identify the z factor: what are the risks of this data being used?  That’s both an objective and a subjective question.  Intellectually, everyone understands that a company might be hacked and data might be stolen, so that’s not what we mean here.  Instead, what are the risks from the user’s perspective?  For instance, is there a risk that this data will be shared with companies who will target me with ads that I don’t want?  Will it negatively affect my ability to obtain health insurance because I don’t walk enough or because I am constantly parked outside of a Krispy Kreme?  In essence, will this data be used reductively, to identify me only as a data point in someone else’s risk/benefit calculation?

But isn’t there still a chance that Waze will only use the data for the purpose it says?  And that it will honestly limit its use of my data to parking location, and will protect my health data from outsiders?  Or that they might simply aggregate all the health information they receive to create a valuable data point about how far the average person walks from their parking space to the location they’ve driven to?  Absolutely, there is a chance of that and, frankly, if Waze explained that that’s what they’re doing, we would be completely fine with their disclosure.  But they do not say that.  Oh, also: Waze is a subsidiary of Google.

[GIF: “Oh, well, why didn’t you say so?”]
Oh.

Waze, then, is asking for health information from its users without explaining how or why it will be used when it presents the option; in fact, if you review Waze’s privacy policy, it doesn’t mention the collection of health information anywhere.  And despite the fact that Waze is usable in the European Union where, you know, they take data protection kind of seriously, there are no disclosures on any of Waze’s internet properties describing its collection, processing, or sale of personal health data.  (Here’s the policy in French, for example.)  The point here is not to pick on Waze – that screenshot is from one of our phones, and we use the app all the time.  The point is that Waze is by no means an outlier.

In some ways, it seems that after all of the to-do over GDPR and the rise of a supposedly new era of data transparency, things really haven’t changed all that much.  Data intake is as rapacious as ever, and explanations to consumers, even about the most sensitive data types, are as opaque as ever, even after Google’s €50 million fine for exactly this kind of behavior.  The reason is that every company conducts its own x/y/z risk-benefit calculation, and the z (risk) factor is largely driven by the threat of lawsuits or regulatory action.  As time goes on, if regulators don’t see the kind of transparency they expect, the only way for them to change a business’s calculation is to make the regulatory costs much, much higher.  That means enforcement, fines, and penalties.
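To see why the costs have to get much, much higher, run a back-of-the-envelope version of the company-side calculation (the numbers below are illustrative assumptions on our part, not figures from any actual enforcement matter):

\[ \text{collect opaquely} \iff V > p \cdot F \]

where V is the commercial value of the data, p is the perceived probability of getting caught and sanctioned, and F is the expected fine.  If a company pegs p at, say, 1%, then even a €50 million fine has an expected cost of only €500,000, a rounding error next to the value of location and health data at scale.  Regulators can’t easily raise p for every company at once, so the lever available to them is F.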

Transparency is as much about what you don’t say to consumers as what you do say, and regulators are going to expect more than deafening silence.
