Dancing Robots Redux — More Thoughts and Some Responses

The buzz over the dancing* robots hasn’t abated over the last few days, which is an interesting indicator of how many people either loved it or, like me, didn’t. What’s even more interesting is that so many people have taken to writing about what they saw, rather than simply moving on to the next big meme. Good conversation is all about dialogue, rather than monologue, and so responding to some of those points seems fair. Doing so, of course, requires that I dive into the comments section — what could possibly go wrong?

I can do what with what??

A note on responding to things — I’m only going to address some of the arguments, like Sander van Dijk’s thoughts. Ask any lawyer whether it’s easier to respond to a good legal brief from opposing counsel or a bad one, and the answer will be the same: give me the good one any day. It’s counterintuitive to want the better argument from the other side, but there is nothing more frustrating or time-wasting than responding to a brief that doesn’t make sense. When opposing counsel makes a strong argument you can simply address it and, hopefully, convince the court that you’re right. But because you need to answer your opponent’s arguments — no matter how asinine — bad briefs mean that you have to waste precious page-limited space on things like “yes, you do, in fact, have to obey the Supreme Court” or “no, you are not allowed to add words to a statute so that it looks the way you want.” Please note that I have actually had to say both of those things in briefs.

To be fair, the trial was in Philadelphia.

We Interrupt this Programming…

Quite a few people pointed out that I wasn’t accurate about the nature of the programming and engineering feat in the video. They have a point — I should clarify that when I say the intricacy of the movement is a result of programming and engineering, I don’t mean that the movements themselves were expressly scripted for the performance. Although it’s possible that a team of programmers would take the time to code, line by line, every sequence of the routine, ML systems allow for a far more hands-off approach to this kind of robotic activity. In other words, the engineers and programmers set out the broad strokes of what needs to be done as packages to be executed, and the processors and algorithms that drive the movements do the rest. And there’s no denying it: it’s a technical marvel.
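To make that division of labor concrete, here is a toy sketch in Python. Everything in it is invented (these are not Boston Dynamics’ actual APIs or tools); it only illustrates the difference between scripting every pose and handing the broad strokes to a trained controller:

```python
# Hypothetical sketch only: invented names, not Boston Dynamics' API.

class Robot:
    def execute(self, trajectory):
        print(f"executing {len(trajectory)} poses")

class LearnedPolicy:
    """Stands in for the trained controller that turns a named move
    into concrete joint trajectories while keeping the robot balanced."""
    def plan(self, move):
        # In reality, a model maps the move plus sensor state to a
        # sequence of joint targets; here we just fake 30 frames.
        return [{"move": move, "frame": i} for i in range(30)]

# Approach 1: every pose expressly scripted, line by line.
def scripted_routine(robot):
    robot.execute([{"hip": 12.0, "knee": 45.0},
                   {"hip": 15.0, "knee": 40.0}])
    # ...and so on, one hand-coded pose at a time.

# Approach 2: engineers set out the broad strokes as packages to be
# executed; the policy and the processors do the rest.
def learned_routine(robot, policy):
    for move in ["twist", "shuffle", "mashed_potato"]:
        robot.execute(policy.plan(move))

learned_routine(Robot(), LearnedPolicy())
```

Either way, notice what stays constant: a human decided that the routine should happen at all.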

But while I can, and should, be more precise in the language I use here, the “programming v. ML system” point is a distinction without a difference. Irrespective of the means by which the robot moves, the ends for which it moves remain the same: an external source directed that it should do so. Think of the difference this way: even if the robots were capable of carrying out a dance entirely of their own design (a process that would likely combine supervised and unsupervised learning), they still wouldn’t have the means to decide to dance, and that is categorically different from what the humans involved do. As they teach you in the first year of law school, the most important question is always “who decides?”

There’s another important facet to the “AI/ML” response, which is that we’re taking the product of human activity and treating it as if it were a natural object. Humans do this all the time; it’s called reification, and it’s highly problematic because it shields human responsibility from view and, therefore, from criticism or analysis. Consider what happens with AI decisionmaking — humans make choices (what data should be included in a set, how a program should identify associations in the data, and what the outcomes of those associations mean), but when the algorithm produces outcomes that are biased, prejudicial, harmful, or even just unhelpful, the humans disappear from view and it is just “the algorithm” or “the AI” deciding. How can a robot be biased? They’re totally impartial! True — and in that way, railroads are impartial too: if you set the tracks in a certain direction, that’s always the way they’ll go. The point is to interrogate who is laying down the rules, and why, and reifying AI, or ML, or algorithms isn’t going to help us do that.
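Here’s a toy example of how that works in practice. The data, names, and threshold are all invented; the point is only that each step “the algorithm” takes traces back to one of those human choices:

```python
# Toy illustration with invented data: every "algorithmic" output below
# traces back to a human choice.

# Human choice 1: what data goes into the set (here, a skewed history).
historical_hires = [
    {"zip": "19104", "hired": 1},
    {"zip": "19104", "hired": 1},
    {"zip": "19139", "hired": 0},  # under-hired in the past, for human reasons
]

# Human choice 2: how the program identifies associations in the data.
def hire_rate_by_zip(records):
    rates = {}
    for r in records:
        rates.setdefault(r["zip"], []).append(r["hired"])
    return {z: sum(v) / len(v) for z, v in rates.items()}

# Human choice 3: what the outcome of the association means (a threshold).
def algorithm_decides(candidate_zip, records, threshold=0.5):
    rate = hire_rate_by_zip(records).get(candidate_zip, 0.0)
    return "advance" if rate >= threshold else "reject"

# The result looks like "the algorithm" deciding, but the tracks were
# laid by the three choices above.
print(algorithm_decides("19139", historical_hires))  # -> reject
```

Nothing in that code is “biased” on its own; the bias rides in on the data and the rules the humans chose.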

It’s Not You (Robot), It’s Me

That brings us to another criticism — why be so nitpicky about all of this? “Everyone knows the robots aren’t really deciding to dance, so you’re making a lot of fuss over nothing.” Well, maybe. But the vast majority of people I spoke to, whether they liked the video or not, reported the same initial reaction: this is amazing, and so funny.

You may be surprised that that was my first reaction, too. I thought it was funny and really impressive. But the important work of figuring out how to respond to a rapidly changing world doesn’t happen on first impression — we have to consistently challenge how we think and our initial responses, because very often we’re encountering something designed to make a very good first impression. Think about how you respond when someone says they’re going to show you a magic trick. If you’re like me, you become extremely attentive to what they’re doing, trying to identify the sleight of hand. If they can still pull off the trick, it’s all the more impressive.

Sorry, I meant to say “illusion,” not trick.

When you’re thinking critically, then, you’re more likely to identify problems — like whether and how human rights and humanoid robots intersect. For instance, Josh Gellers makes the point that rights are not a zero-sum game, and that we endow non-humans with rights all the time. He’s absolutely right, but, again, that’s a response to a point I don’t make. Really, it’s not about the robots in any way — after all, they didn’t do anything wrong, other than the Mashed Potato — it’s about us and what our approach to robotics, AI, and advanced technological tools says about us.

Prof. Gellers’s note that we grant corporations rights is telling in the same way, inasmuch as the privileges we grant to businesses reveal our values (efficiency, risk-taking, profit) and our ethical blind spots (absolving personal responsibility, exploitativeness, politicizing business). Those priorities are revealing, and they show where we need to focus our critical attention if we want to make sure that we’re not only acting ethically but also creating spaces where both people and ethics can flourish.

There’s also definitely a sense that all this worry about robots and ethics in AI is, well, alarmist. I mean, they really are just robots dancing to Motown; wouldn’t you say that a blog post referencing MacIntyre or deontology is a little much?

“Any of you robots tries to dance with me…and I’ll deactivate you.”

I get it — it’s about fun and enjoyment, and leave it to a lawyer to come in and ruin everyone’s enjoyment of the robot troupe. The fun isn’t the problem — it’s the uncritical response to the video that worries me. That is, for me, the content of the video can be entertaining, but the reasons behind it, and the social systems those reasons both establish and unveil, are what trouble me. “You can’t criticize this because the robots aren’t used for wrongful purposes.” Perhaps, but that’s an argument against a straw man — my critique never got to the question of uses, because that’s a different ethical question (and consequentialism takes care of pretty much everything there: killer robots are unethical because of what they do and what they might do). My argument is more fundamental, even if it’s no fun.

That said, as I’ve made clear many times before, every lawyer joke is well-deserved (What’s the difference between a Boston Dynamics robot and a lawyer? One is a humanoid, programmed embodiment of raw power and the profit incentive and the other runs on batteries).

You like that joke? Let’s call it a half hour.

What’s Really Going On Here?

Returning to Sean’s article: his point that Human-Robot Interaction (HRI) is about understanding and examining how we treat robots is well made, and my original article gave the field short shrift. It’s true that HRI research shows the positive interactions humans do have with robots, and that the researchers who work most closely with robots often have the most caring, empathetic relationships with them. That’s anthropomorphism at work, and it is a sign of the better aspects of human behavior. And scholars like Julie Carpenter and Hiroshi Ishiguro have written extensively on some of these questions.

The issue, though, is not that humans who routinely interact with robots have positive experiences and feelings. I don’t think (and I don’t want to suggest) that we should stop building advanced robots or even humanoid ones. To the extent that Sean’s points are procedural, technological, and functional, I agree with him entirely. But my arguments are sociological and philosophical; I’m less worried about how experts and individuals in controlled circumstances treat and experience robots.

Instead, I’m worried about the structural effects of the introduction of humanoid robots and what it says about us. Remember, Boston Dynamics isn’t building Spot or the bro-bots for general use, so why create a mass-distributed, meme-worthy video about them? Why is it necessary to make the public feel, generally, better about how agile, fast, and responsive humanoid robots are? The values that animate corporations, discussed above, include the profit motive, of course. So why does Boston Dynamics think that the time, money, and effort it put into making this video will be worth it? Because they want to make you feel more comfortable if a humanoid robot starts working at your shop? Because they don’t want you to be generally concerned about what they’re up to? I don’t have the answers to these questions, but, ethically, we have to ask them and not simply assume the video was just for S’s and G’s. If you’re seeing something slick and polished, assume, like Chekhov’s gun, you’re seeing it for a reason.

Who wants to talk about fun stuff like the Overton window?

Ultimately, the most important aspect of all of this is the conversation. Discussing, disputing, and even disagreeing about the ethics of what we’re presented with is an essential part of fighting for the right outcomes, even if we aren’t aligned on what those are. More importantly, it ensures that we don’t unthinkingly build systems and technologies that embed bias, perpetuate wrongs, and undermine human autonomy and wellbeing. Trenchant criticism of AI and ML systems, for instance, notes that the training data and architectures used frequently and systematically ignore some people (racial minorities, women, those with disabilities) or favor others (people who often match the social, cultural, racial, or economic traits of the programmers and coders).

Often, there’s no evidence of intentional bias: the programmers aren’t setting out to do wrong, to oppress, to humiliate, to diminish. Most of them are just trying to do good work — and the same goes for those who design robots or teach them to move. But biases (even unintentional ones) and the broader implications of the work we do are blind spots, and we can’t recognize our blind spots on our own. We need each other to do that, to keep each other honest, and to identify how we can, and must, do better. That will take technologists, engineers, ethicists and, yes, even lawyers to get right. Even when it’s dancing right in front of you, it’s never about the robot: it’s about us.

Pictured: Science and Ethics. Not Pictured: Lawyers


