“OK Google – Is this Legal?”

Machine learning and artificial intelligence are the “it” buzzwords of mid-2018, and even after our short attention spans turn to something else, the concepts behind the words will continue to evolve. Everyone talks about Skynet and the inevitable rise of our robot overlords, and most of the time they’re joking (other than when they show their friends the Boston Dynamics robot “dogs,” which are nightmare fuel).

[GIF: these robots work together to achieve a common goal]
“Oh no, old chap, after you!”

The real issue, though, is not about the form that some of these devices take. It is the brains that animate them, the brains that we create and teach to communicate with us. Not sure what I mean? Watch this:

Two points. First, this is unquestionably amazing. Google has created software so advanced that it convinced the receptionist that she was speaking with another human. Second, this is unquestionably terrifying, because Google has created software so advanced that it convinced the receptionist that she was speaking with another human. I mean, the software is so good that it naturally mimics flawed human speech patterns (“umm” and “mm-hmm”) because engineers know that no one speaks with perfect clarity.

Legally, I see a lot of potential problems. Where was the call placed? New York? Then where was the consent of the receptionist to have her phone call recorded (as this one was, and as all future such calls surely will be, to ensure that the AI is working)? California? Then where was the disclaimer, required by law to avoid the mandatory statutory fine, that the call was being monitored or recorded? Will the vocal signature and personal details of the receptionist be stored? Then what GDPR compliance protocol is in place? It won’t just be Google using this technology. What about calls where a minor answers the phone? Or where a credit card authorization has been made? It goes on and on.

At the very least, AI like Google Assistant will change the way we interact, and the way we speak. Will we ask if someone is a robot, engaging in our own Turing tests? Perhaps we will no longer say “can I speak with a manager,” but instead “can I speak with a human?” And one can only imagine what will happen when there is no human receptionist on the other end – because we already know that when AI speaks with AI, it stops using human-comprehensible language. So how do we implement safeguards then?

The ethical risks of this kind of technology are worrying, and guarding against those risks demands protections coded in at the earliest stages. The legal framework will need to be updated as well. As we have discussed elsewhere, the law governing data security and privacy in this country is a good quarter century out of date.

I think people misapprehend what the regulations would need to be. We aren’t talking about finance, where a well-established layer of rules and conventions already exists. This is far more like the era when humans first began harnessing electricity and established rules like “hey, don’t run a wire into that lake, okay?”

We are at the very beginning of a series of changes that will, hyperbole aside, modify human existence. If we want to make sure that those changes are cabined by rules that protect not just our privacy, but our autonomy, then we have to make those rules clear, now. When children grow and learn how to interact, we teach them to abide by social codes. Machines and algorithms have a code too, literal and figurative. We’d better start thinking now about what that code is going to say.
