As we roll into the final two weeks of our countdown, we’re going to take something of a step back and look at issues more broadly. Yesterday, we discussed Google’s AI, and how we’re all going to be living in the Duplex, as it were. I am (clearly) pretty hung up on this, and have spent more time thinking about it this week than anything else. The reason is simple: AI is the easiest way to streamline decisionmaking and facilitate faster transactions with fewer humans in the process. In other words, it will make things move more rapidly, but at the cost of human employment.
Other than empathy, why does that matter? Because when decisions are made without direct human input, the dangers change. An algorithm acting on its own often cannot account for variables that were never predicted or coded into it. And automated decisionmaking can produce seeming anomalies when the AI differentiates between choices on distinctions so minute that the results appear arbitrary or irrational. This is not to say that humans are better, of course. Humans get into all kinds of intentional mischief, too, but of a kind that has become predictable, if not permissible. Frankly, it’s why we have lawyers.

But the difference that matters for our purposes is that the GDPR specifically calls out automated decisionmaking for special attention. In Article 22, the Regulation provides all data subjects the right “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
That is a monumental statement, and one that I’ve saved for discussion this late in our countdown because I think it may prove to be the most important in the entire Regulation. The EU is establishing, at this early date, the right for humans to ensure that decisions that legally or substantially affect them are made by other humans, and not by AI. Think about how forward-looking that provision is: we don’t have much in the way of automated decisionmaking right now, with credit applications being the prime example. But the GDPR seems to anticipate that, in the near future, AI will be making decisions without any direct human involvement at all, and it empowers humans to reject that process.
There are, of course, exceptions, and in some ways they swallow the rule. The right to object does not apply if the decision is a) necessary to enter into or to perform a contract with the controller, b) authorized by EU or Member State law (government functions being the obvious case), or c) based on explicit consent. That covers an enormous amount of territory, from buying an app to applying to West Point. But not every relationship is specifically contract-based, and the GDPR itself is going to push consent further out of the realm of common interactions in e-commerce.
It’s easy to be glib about this, and say that it is mere paranoia to be concerned that AI will be making decisions for us without our participation. But why? In the video of the Google Assistant demo, the Assistant proposed a noon appointment for a haircut, but eventually accepted an earlier time. Yes, that change was almost certainly because the programmer said that a range of times would work, but it is not a major leap from a human saying “here is a range of two hours when I am available” to the AI saying “here are the times in her calendar when she is available.” And don’t get hung up on haircuts: if the AI can schedule appointments with the right inputs now, it can accept an RFP or approve a purchase, too.
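To make that leap concrete, here is a minimal sketch in Python. Every name in it is made up for illustration (Google has published nothing about how Duplex actually decides); the point is only how little code separates the three decisions:

```python
from datetime import time

# Rule 1: the human supplies the constraint directly.
# "Anything between 10:00 and noon works for me."
def pick_time_from_range(offered, start, end):
    """Accept the first offered slot inside a human-stated window."""
    return next((slot for slot in offered if start <= slot <= end), None)

# Rule 2: the agent derives the constraint from the calendar itself.
# The decision loop is identical; only the source of "acceptable" changed.
def pick_time_from_calendar(offered, free_slots):
    """Accept the first offered slot that matches calendar availability."""
    return next((slot for slot in offered if slot in free_slots), None)

# Rule 3: swap time slots for dollar amounts, and the same shape of
# decision approves a purchase (or accepts an RFP) unattended.
def approve_purchase(amount, remaining_budget):
    """Approve any spend that fits within a derived budget."""
    return amount <= remaining_budget

offered = [time(9, 0), time(11, 30)]
print(pick_time_from_range(offered, time(10, 0), time(12, 0)))  # 11:30:00
print(pick_time_from_calendar(offered, {time(9, 0)}))           # 09:00:00
print(approve_purchase(4500, remaining_budget=10000))           # True
```

The shape of the decision never changes; only where the criteria come from does. That is why the haircut demo generalizes so readily, and why Article 22 aims at a decision’s effect on a person rather than at any particular application.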
Article 22 is a built-in defense against automated processing gone awry, which is why it is so important to focus on it now, while AI is still in its infancy, and to factor it into the development of programs. For instance, Google has said that it believes it is important for AI to announce to humans that a machine is calling (despite not doing so in the demo), but it may well be that the GDPR mandates that kind of disclosure anyway: how can you object to automated processing if you don’t know you’re speaking to a user interface and not a human? Article 22 is also a safeguard against the kind of “social credit score” scheme that has been implemented in China: if a government wants to restrict your rights through automated processing, you have the ability to challenge that in an appropriate forum.
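The GDPR prescribes no implementation, of course, but a developer could hard-wire that disclosure-plus-objection safeguard into the call flow itself. A toy sketch, again in Python with entirely hypothetical names:

```python
OBJECTION_PHRASES = ("human", "real person", "robot")

def is_objection(utterance: str) -> bool:
    """Crude keyword check standing in for real intent detection."""
    lowered = utterance.lower()
    return any(phrase in lowered for phrase in OBJECTION_PHRASES)

def handle_call(caller_turns):
    """Yield the agent's side of a scripted call.

    Disclosure comes first, and any objection ends automated handling.
    """
    # Disclose up front: a data subject cannot object to automated
    # processing they do not know is happening.
    yield "Hi, this is an automated assistant calling to book an appointment."

    for turn in caller_turns:
        if is_objection(turn):
            # Honor the objection: hand off to a person instead of
            # letting the machine keep deciding.
            yield "Of course. Transferring you to a person now."
            return
        yield "Noted. Does tomorrow at 10 a.m. work?"

for line in handle_call(["What is this about?", "Can I talk to a human?"]):
    print(line)
```

The design choice worth noticing is that the disclosure and the escalation path are structural, baked into the flow, rather than features bolted on after a regulator complains.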
I’m not trying to argue that Article 22 was written to avoid the emergence of a machine-led dystopia. Honestly, I think the hyperbolic predictions of doom and gloom that you see on the internet are unhelpful, and a distraction. AI has the potential to revolutionize the way we live, do business, and interact with one another. The point here is that automated decisions are going to be a fact of life, and without the kind of hard-wired protections we talked about yesterday, the potential for harm to individuals and companies is very real. And, for those who are leading the AI revolution, Article 22 is a reminder that boundaries, internal or external, are as important as ever.
The question, for all of us, is “what next?” It’s unlikely that we will have an answer between now and May 25. But if Google’s astonishing presentation at I/O this year means anything, it’s that there is a lot to think about before we can meaningfully answer that question.