Weekly Data Privacy Roundup

We’re starting something new: a weekly collection of stories we saw or found interesting but, for some reason, didn’t have time to address meaningfully (by which we mean throw as many gifs at as possible).  This week is heavy on government action, which is a good indication of the trends we’re seeing globally: regulators are moving beyond simply talking about privacy or pointing out the existence of privacy regulations and have begun investigating and, in some cases, suing.  Which means the time to develop a real privacy program was yesterday.  So, here are three important data privacy stories happening now that you might have missed.

Chris didn’t de-identify personal information before a DPA audit. Don’t be like Chris.

FTC Starts an Investigation into Broadband Providers

We noted earlier this year that the FTC was in the midst of a months-long roadshow soliciting input from consumers and advocates about privacy and technology.  We also suggested that this roadshow would come to an end, and the Commission would begin implementing a new phase in its regulatory approach to privacy, including policing the acceptable scope of data sharing and data collection.  That shift appears to have begun, with the FTC investigating broadband providers to understand what personal data they collect, why they collect it, whether they disclose the collection, and how the data is shared.  This is an important step for the FTC because, instead of simply waiting for a breach or an individual company failure, it is telling an entire sector that its data flows are up for review.

HUD Investigates Facebook Housing Discrimination

In yet another bad week for the Menlo Park company, the Department of Housing and Urban Development has brought a discrimination claim against Facebook.  The gravamen of the complaint is that Facebook’s microtargeted ads discriminated against people of color.  Specifically, the Department announced that:

Facebook combines data it collects about user attributes and behavior with data it obtains about user behavior on other websites and in the non-digital world. Facebook then allegedly uses machine learning and other prediction techniques to classify and group users to project each user’s likely response to a given ad, and in doing so, may recreate groupings defined by their protected class.

That’s a fairly major claim, and one that should worry other digital advertisers.  Adtech tools depend on extremely segmented breakdowns of individuals, which allows for the kind of microtargeting that has become the hallmark of modern advertising.  This lawsuit effectively argues that if the segmenting 1) creates groupings based on protected status (like race) and 2) delivers content in a manner that unfairly limits that protected status group’s options or rights, then there may be grounds for legal action.

Here we run into an important question about implicit biases and their effects in AI/ML.  An algorithm may recognize that a person’s dataset includes a signifier for “African-American” or “Hispanic/Latinx,” but the algorithm doesn’t have any actual consciousness of what that means.  It’s down to the human programmers to create the tools to contextualize protected statuses and how they influence choices.  Do I think Facebook intentionally discriminated here?  Probably not.  Do I think Facebook didn’t account for the effects its profiling processes might have on advertisements about housing?  Almost certainly, as the company all but admitted in its press release in response to HUD’s actions.
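To see how a model can “recreate groupings defined by their protected class” without ever touching the protected attribute, here is a minimal, entirely hypothetical sketch.  The data is synthetic and the feature (a ZIP-code stand-in) is an assumption on our part; the point is only that segmenting on a proxy feature that correlates with a protected class produces ad segments skewed by that class, even though the class was never an input.

```python
# Hypothetical illustration of proxy discrimination in ad segmentation.
# The protected attribute is never used to split users -- only a ZIP-code
# stand-in that happens to correlate with it -- yet the resulting segments
# end up sharply divided along protected-class lines.
import random

random.seed(0)

# Synthetic users: 90% of the protected group lives in zips 0-4,
# 90% of everyone else in zips 5-9.
users = []
for _ in range(1000):
    protected = random.random() < 0.5
    if protected:
        zip_code = random.randrange(0, 5) if random.random() < 0.9 else random.randrange(5, 10)
    else:
        zip_code = random.randrange(5, 10) if random.random() < 0.9 else random.randrange(0, 5)
    users.append((zip_code, protected))

# "Segmentation": split purely on the proxy feature, not the protected one.
segment_targeted = [u for u in users if u[0] < 5]    # shown the housing ad
segment_excluded = [u for u in users if u[0] >= 5]   # never shown the ad

def protected_share(segment):
    """Fraction of a segment belonging to the protected class."""
    return sum(1 for _, p in segment if p) / len(segment)

# The split never saw the protected attribute, but the segments are
# heavily skewed by it anyway.
print(f"protected share, targeted segment: {protected_share(segment_targeted):.0%}")
print(f"protected share, excluded segment: {protected_share(segment_excluded):.0%}")
```

Nothing here requires intent: the skew falls out of the correlation in the data, which is exactly why HUD’s theory reaches unintentional, machine-learned groupings.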

Watch this case, especially to see if it spawns cases against Google and Twitter, for instance, which make use of similar segmentation tools and similar advertisement delivery systems.  If HUD can successfully mount a case, then adtech may be in for (yet another) confusing year.

Governments are Bad at Data Privacy Too.  No, Seriously.

Lest you worry that only private industry has difficulty managing the complicated issues of data privacy, this week offers two important counterpoints.  The first concerns the safety of data in the US, with FEMA disclosing the personal data of more than 2.5 million individuals affected by disasters.  FEMA revealed the personal addresses of roughly 700,000 people, and both personal addresses and banking information for another 1.8 million.  It sometimes seems like FEMA doesn’t understand that its job is only to respond to disasters, not to create new ones.

Heckuva job, FEMA.

But American government entities aren’t the only ones causing ripples this week.  Our good friends at Cookiebot have released a study showing that 25 of the 28 EU governments have tracking technology embedded in their national websites.  You know, the kind of tracking technology they’re supposed to be making more transparent and easier for their citizens to control? Even without the ePrivacy Regulation on the books, the GDPR limits how EU governments may track citizens without express consent.  For instance:

Another category of websites that should not contain any type of ad tech tracker is public health service websites which process sensitive information about their visitors’ health condition, a “special category data that is carefully protected under Article 9 of the GDPR.”

In spite of this, Cookiebot was able to detect ad trackers on 52% of them, with 73% of the Irish health service’s pages featuring trackers, while Germany was on the other end of the spectrum, with trackers on 33% of its pages.

Yikes.  Although this may be cold comfort, it is at least clear that EU governments seem to be having as much difficulty making the necessary changes for GDPR compliance as many businesses.  The problem is that EU governments are far less likely to be fined millions of euros for their failings, even if government fines do sometimes happen.  Regulation for thee, but not for me.
