Sensors, Monitors, and Bill & Ted

Any decent account of the last 30 years will certainly conclude that the high point of culture was 1989’s Bill and Ted’s Excellent Adventure.  History, philosophy, George Carlin, mylar tracksuits (it was basically the 90s) — it had everything you need.  And, with its long-awaited second sequel coming out this week, I’m sure that if William S. Preston, Esq. and Theodore “Ted” Logan hopped out of their telephone booth today, they’d be thrilled to see how well things have turned out.

You don’t want to know what his next seven words were.

Okay maybe not.

One thing I love about movies that predict the future is how much they totally miss the basic components of society that we take for granted — though getting those right would take a lot of the fun out of them. B&T today wouldn’t have much difficulty halting the historical hijinks at the mall if they’d just given Lincoln and Freud cellphones and told them to meet at the Orange Julius by 3:30. But that’s how it is with the things that become ubiquitous — they’re nearly impossible to identify in advance, but by the time they’re everywhere it’s almost as though they disappear, because we don’t even think about them. I mean, seriously, when was the last time you thought about the fact that you carry around a 1-pound machine that can call Belgium, stream Batman reruns, and order bagels?

Phones, man.

I Sense Something…

It’s the same for the sensors that absolutely pervade our lives without us really even knowing it. They occupy more of our world than ever before, do more work than ever before, and, precisely because they’ve grown so small and so common, are less obtrusive and noticeable than ever before. They unlock our phones, verify our identity, take our temperature, record us speeding, notice our facial expressions, and, sometimes, predict what we’re likely to do.

Although sensors and monitoring tools were already an important component of the Internet of Things (or, more accurately, “Things Connected to the Internet”), 2020 has jumpstarted their expansion into mainstream, widespread usage and into the public perception of how things should be.  Obviously, the pandemic has made us newly aware of how important our own biosignals are, sharpened our desire to track them, and given new value to wearable devices.  It’s no surprise that health monitors have seen a surge in purchases, something likely to continue as we move into cold and flu season.  Self-monitoring for health isn’t something tied just to Covid, though — as devices offer increasingly bespoke datastreams to users, wearables appeal to broader audiences, particularly those who want to track their performance or metrics over time.

Health monitoring is something that, relatively speaking, has a long pedigree.  Wearables have been on the market for quite some time, and their appeal is that they give users the ability to quantify what had previously been only a feeling (“I felt tired today” becomes “my heart rate really dropped after lunch”) or a rough estimate (“I was slow this morning” becomes “I was at about 56 seconds per lap with great O2 sat”).

The idea is a simple one: empower users with more insights about how they move, feel, and act.  The less simple aspect is just what it takes to process, create, and deliver those insights.  We’ve discussed this before: one single eight-mile run wearing a Garmin Fenix generates data that amounts to a whopping 396 pages’ worth of single-spaced text.  One run.  Now combine that with your daily health monitoring, step counter, productivity tracker, microwave usage app… you’re starting to get an idea of the volume of data generated.  In fact, the estimate is that wearable devices produce 28 petabytes of data a day.  For context, neurologists estimate that the binary data capacity of the human mind is around 2.5 petabytes, total.
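Out of curiosity, the back-of-the-envelope math is easy to run yourself. This is just a sketch: the 3,000-characters-per-page figure is my assumption; the 396 pages, 28 petabytes, and 2.5 petabytes are the numbers quoted above.

```python
# Rough scale of wearable data, using the figures cited in the text.
# Assumption (mine): one single-spaced page is about 3,000 characters,
# stored at 1 byte per character.

PAGE_BYTES = 3_000                      # assumed characters per page
run_bytes = 396 * PAGE_BYTES            # one eight-mile Garmin Fenix run
print(f"One run: ~{run_bytes / 1e6:.1f} MB of text-equivalent data")

PETABYTE = 1e15
daily_wearable_output = 28 * PETABYTE   # estimated global wearable output/day
brain_capacity = 2.5 * PETABYTE         # neurologists' estimate for one mind

hours_per_brain = brain_capacity / daily_wearable_output * 24
print(f"Wearables fill a 'brain's worth' of storage every "
      f"{hours_per_brain:.1f} hours")
```

On those assumptions, the world’s wearables generate a human mind’s entire capacity roughly every two hours.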

So much range.

From The Police to the Police

“Surely it can’t simply be that all of this new data collection is a good thing and there are no consequences, there has to be some kind of hitch here,” you might say. Well, yeah. If you think about it objectively, the nonstop collection of health information about you digitizes and stores every breath you take and every move you make. Congratulations: you’ve turned Sting into a prophet.

It’s not just the mass collection of data by wearable devices or home-based sensors that we have to consider; there’s also the integration of a sensor-led approach to governance, regulation, and policing. The simplest example is the red light camera, catching red light runners and rolling stops. It’s a sensor, deployed in public, used by authorities to identify infractions and mete out punishment (i.e., by sending a ticket). “So what,” you might say, “CCTV has been in use for years; how is that not the same thing?” The answer is that CCTV (at least as it has been used in the past) didn’t call the local precinct to tell them that those pesky kids were loitering at the Circle K again; there was a human in the loop who had to review and act upon the footage.
“Yes Officer, I think they’ve got one of them Nintendos or something.”

The difference, then, is that technologies like red light cameras have circumvented the need for humans, allowing for automated decisions and laying the groundwork for far more extensive monitoring and punishment tools: everything from facial recognition scans as a barrier to entry at certain facilities to the full range of surveillance tools at work in China (and elsewhere). There is some pushback, to be fair, and privacy advocates in the US have had some success using the 5th and 6th Amendments to limit automated decisions based on surveillance tools. But the trend is definitely toward greater automation of policing through the use of sensors and monitors.

Does that mean that all sensors are bad and that we should smash them? Hardly; the Hulk-smash approach is rarely helpful, and it often ignores the real value that sensors and monitoring tools bring when properly deployed. Ethical boundaries are the key, along with disclosure, explanations, meaningful input, and, yes, consent. The difficulty lies in finding the right balance between risk and utility, privacy and safety.

Even more frustratingly, the balance is going to differ from place to place and from time to time — it’s all about context in a given community, which means that the parameters for how sensors are used depend on where you are and why you’re there. There is no one-size-fits-all answer to the question of “how do we use sensors and monitors properly,” so we have to find consensus, which means opening the discussion up to partners, customers, communities, and friends.

It’s complicated work, made all the more complicated by the exploding number of new sensors deployed every day. But that only makes the work more urgently necessary, especially if we want to create the safeguards that we know we need and we obviously lack. Ultimately, we have to establish the rules for who sees, uses, and possesses this vast torrent of sensor-driven data if we want to harness it for the right purposes, and ensure that we define our data, and not the other way around. Timely and challenging work, but a crucial step in making sure that we’re excellent to each other.

See how I worked that in there?
