We hear the word “crisis” a lot these days. A phenomenon of our age is that issues are transformed into crises, and many crises are transformed into existential threats. Think about the way we looked at online harms and screen time for children. Just a year or two ago, the prevailing claim was that too much time on screens would damage children and lead to substantial problems in their development and social well-being. And yet, for the past two years, screen time has been not only permissible; some of the same commentators who worried about a glut of screen time now call for more remote learning for kids. One might cynically say that the difference between an existential threat and something we just happen to disagree with is nothing more than perspective.
The reality, difficult as it may be to accept, is that many of the crises we face are of our own making, and of our own deepening. The detailed research on online harms and screen time for children, as a matter of fact, would have indicated that more screen time doesn’t necessarily translate into harm or bad educational outcomes. The question was always how much we actually knew, and how much of our fear was grounded in research. Ironically enough, that’s what leads us to what we consider to be the real crisis right now: a crisis of reliability and confidence.
Big Data Confidence
Reliability and confidence are two concepts that feed into one another, and that drive our ability to make reasonable decisions. They both hinge upon our access to, and ability to process, data and insights. If we have access to reliable information, we are far more likely to make better decisions, or at least will have the opportunity to make them. We’ve written before about the inability of AI and facial recognition technologies to recognize minorities, driven primarily by the fact that the training data used consisted largely of white faces, without even a representative sample of people of color. In other words, the reliability of the information provided to the system was degraded by the designers’ failure to recognize flaws in the underlying samples. Incomplete or biased data beget unreliable analytics and answers.
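The mechanism described above can be made concrete with a toy simulation. This is only a sketch under invented assumptions (two hypothetical groups whose data look different, a simple threshold classifier, made-up distributions), not a model of any real facial recognition system. A classifier tuned on a sample that is 95% group A performs well on group A and poorly on group B, even though nothing in the fitting procedure is overtly “biased” — the skew in the sample does all the damage.

```python
import random

random.seed(0)

def sample(group, n):
    # Hypothetical data: the score that separates classes sits in a
    # different place for each group (0.7 for "A", 0.4 for "B").
    center = {"A": 0.7, "B": 0.4}[group]
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        x = random.gauss(center if label else center - 0.3, 0.05)
        data.append((x, label))
    return data

def accuracy(data, t):
    # Fraction of examples a simple cutoff classifier gets right
    return sum((x >= t) == label for x, label in data) / len(data)

def fit_threshold(data):
    # Pick the cutoff that maximizes accuracy on the training sample
    candidates = sorted(x for x, _ in data)
    return max(candidates, key=lambda t: accuracy(data, t))

# Training set skewed 95/5 toward group A: the "unrepresentative sample"
train = sample("A", 950) + sample("B", 50)
t = fit_threshold(train)

print(f"group A accuracy: {accuracy(sample('A', 1000), t):.2f}")
print(f"group B accuracy: {accuracy(sample('B', 1000), t):.2f}")
```

Run it and group A scores near perfectly while group B hovers near chance: the system looks reliable on the data it was built from, and quietly fails on the people it never really saw.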
We’ve also talked about how the unreliability or incompleteness of data sets can lead to unintended or strange outcomes. Think about Watson’s response to the Final Jeopardy! clue, or Facebook’s negotiation bots, whose chat devolved into strings of repeated words and broken fragments. There, reliability was absent in the output rather than the input. What I mean by this is that Watson had received reliable information and Facebook’s AI system had observed the training data properly, but the unintended manner in which the data were interpreted and presented led to responses or conclusions that were not reliably understandable or actionable on the part of humans.
Put another way, reliability is all about our ability to trust and recognize the source of reasoning for and conclusions provided by the technologies and systems that we use. Garbage in, garbage out.
This is where confidence comes into play as well. Obviously, if we don’t understand what our tools and systems are telling us, as in the case of Facebook’s negotiation system, we can’t be confident that the decision reached was appropriate, nor can we be confident in using it as a basis for further decision-making or actions of our own. This is the gist of the GDPR’s Article 22, which gives data subjects the right not to be subjected to solely automated decisions that significantly affect them, and to obtain human intervention; the GDPR’s related transparency provisions require that meaningful information about the logic of such decisions be made available to data subjects. The thinking, and we agree with this, is that confidence in a decision is tied inextricably to the explainability of that decision, and also to the ability to interpose a human in the loop. It isn’t that we think humans are infallible; far from it. It’s that we understand how humans make mistakes, and we are better at spotting the mistakes humans make than we are at identifying errors or unreliable outcomes in the byzantine and opaque processes of algorithms and ICTs. In other words, we have to be able to understand process if we want to be confident in outcomes.
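The explainability-plus-human-in-the-loop idea can be sketched in a few lines. Everything here is hypothetical — the names, the scoring cutoff, the confidence rule — and stands in for no particular framework or legal requirement; it simply shows the two properties the paragraph describes: every automated decision carries an explanation a person could be shown, and borderline cases are escalated to a human rather than silently auto-decided.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float
    explanation: str  # what a data subject could be shown

def automated_decision(applicant_score: float) -> Decision:
    # Hypothetical rule: approve at or above a 0.60 cutoff
    outcome = "approve" if applicant_score >= 0.6 else "deny"
    # Crude confidence: how far the score sits from the cutoff
    confidence = min(abs(applicant_score - 0.6) / 0.6, 1.0)
    return Decision(
        outcome=outcome,
        confidence=confidence,
        explanation=f"score {applicant_score:.2f} vs. cutoff 0.60",
    )

def decide(applicant_score: float, review_queue: list) -> Decision:
    decision = automated_decision(applicant_score)
    # Human in the loop: borderline cases are escalated, not auto-decided
    if decision.confidence < 0.2:
        review_queue.append(decision)
        decision.outcome = "pending human review"
    return decision

queue: list = []
print(decide(0.95, queue).outcome)  # clear-cut case stays automated
print(decide(0.58, queue).outcome)  # borderline case is escalated
```

The design point is the pairing: the explanation makes the output auditable, and the escalation rule is the structural hook for human intervention — neither property alone buys much confidence.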
The Crisis, Existentially
Pulling these concepts together, we see the central crisis of our current situation as one of informational reliability and decisional confidence. Misinformation and disinformation, along with the meme-ification of thought, have created an environment where we are forced either to accept at face value that the underlying information behind a statement or a decision was reliable, accurate, complete, integrated, and appropriate, or to reject the data even when they bear all of the traditional indicia of reliability; for instance, they come from the Centers for Disease Control and Prevention, or they appeared in a peer-reviewed, non-partisan study. Apparently, there are some negative effects to the false equivalency we’ve allowed to develop between critically evaluated, expert-produced materials on the effects of quantitative easing and a “money printer go brrr” meme your cousin Nick posted.
The reason this all begins to pile up on itself is that we’re spending so much time arguing about the facts that make up our decision-making models that we never get to the point of making decisions we feel confident about. It’s analysis paralysis: so much time goes into debating whether my expert is right (or, more frequently, why someone else’s expert is a hack or a shill) that we never reach the actual work of critically evaluating our options and coming to a conclusion. This is a classic consequence of misinformation and disinformation, by the way: picayune debates over the underlying facts turn every discussion into an unnecessary, heated argument about the nature of fact. And as we all know, nothing ruins a meeting like epistemology.
This problem really does become existential, in more ways than one. First, it cabins our ability to make life-or-death decisions, which is why we find ourselves facing yet another global spike of COVID with no substantively meaningful plan, despite ample notice that it was on its way. Second, it makes us question how we can ever make a decision again without fear of an endless torrent of baseless arguments about the nature of our facts, our choice models, the decision matrix, and all the other reasons we give for what we do. If every set of facts is questionable, and if every decision is political (which is the real point of these arguments), then any decision is just as valid or worthwhile as the next.
That is…that’s just obviously wrong, though. We all acknowledge that some decisions cannot be valid or substantive, because they lack any credibility whatsoever. And that’s the crucial point. In the reliability and confidence crisis, the solution cannot be to create yet another set of false equivalencies between arguments or datasets and get into yet another debate. Instead, we have to sift through the data sources, decide which are likely to be more accurate, and then commit to acting on them.
In that sense, the solution to our reliability and confidence problem is the least popular option in any scenario: we have to make up our own minds and then take responsibility for our actions. If the data are wrong because we didn’t vet them, then we’re going to be responsible. If the conclusions were wrong because we misinterpreted the data, even though the data were right, then we’re going to be responsible. And if we don’t use any data and make things up as we go, we’re going to be responsible.
It’s not a perfect solution, but it is certainly movement in the right direction. And it comes with attendant benefits: the responsibility model emphasizes the need for the highest-quality data, the best (and often most limited) decision-making tools, and the most reasoned approaches to an outcome. Why? Because we hate having to take responsibility for failure almost as much as we love criticizing someone else’s choices. The best way to counteract the negative aspects of human nature is to put them into conflict with another negative aspect; that’s how we got professional fire departments, federalism, the civil service, and double-blind clinical trials. The downside of having to take responsibility is that we have to, you know, take responsibility. But when faced with the alternative, an endless array of circular arguments, it’s the only decent solution to the crises we face.