Machines of Loving Understanding

Brautigan’s poem inspires and terrifies me at the same time. It’s a reminder of how creepy a world full of devices that blur the line between life and objects could be, but there’s also something appealing about connecting more closely to the things we build. Far more insightful people than me have explored these issues, from Mary Shelley to Philip K. Dick, but the aspect that has fascinated me most is how computers understand us.

We live in a world where our machines are wonderful at showing us near-photorealistic scenes in real time, and can even talk to us in convincing voices. Up until recently though, they’ve not been able to make sense of images or audio that are given to them as inputs. We’ve been able to synthesize voices for decades, but speech recognition has only really started working well in the last few years. Computers have been like Crocodile Sales Reps, with enormous mouths and tiny ears, great at talking but terrible at listening. That means they can be immensely frustrating to deal with, since they seem to have no ability to do what we mean. Instead we have to spend a lot of time painstakingly communicating our needs in a form that makes sense to them, even if it is unnatural for us.

This process started with toggling switches on a control panel, then moved on to punch cards, teletypes, CRT terminals, mouse-driven GUIs, touchscreen swipes, and most recently basic voice interfaces. Each of these steps was a big advance, but compared to how we communicate with other people, or even our pets, they still feel clumsy.

What has me most excited about all the recent advances in machine learning is that they’re starting to give computers the ability to understand us in a much deeper and more natural way. The video above shows just a small robot that I built for a few dollars as a technology demonstration, but because it followed my face around, I ended up becoming quite attached. It was exhibiting behavior that we associate with people or animals who like us. Even though I knew it was just code underneath, it was hard not to see it as a character instead of an object. It became a Pencil named Steve.
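
For anyone curious how little code a demo like this needs, here’s a minimal face-following sketch in Python. It’s my own illustration rather than the actual code behind the robot: it uses OpenCV’s bundled Haar cascade to find the largest face in each webcam frame, then nudges a hypothetical pan servo toward it (the set_servo stub stands in for whatever PWM or serial interface your hardware uses).

```python
# Hedged sketch of face following, not the code from the robot in the video.
# Assumes opencv-python is installed and a webcam is available.
import cv2

# Face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
pan_angle = 90.0  # assume a pan servo centered at 90 degrees


def set_servo(angle):
    # Placeholder: on real hardware this would drive a PWM pin or send
    # the angle over serial to a microcontroller.
    print(f"servo -> {angle:.0f}")


while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Track the largest detected face.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        face_center = x + w / 2.0
        frame_center = frame.shape[1] / 2.0
        # Proportional control: turn a little toward the face each frame.
        error = (face_center - frame_center) / frame_center  # -1 .. 1
        pan_angle = max(0.0, min(180.0, pan_angle - error * 5.0))
        set_servo(pan_angle)

cap.release()
```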

Face following is a comparatively simple ability, but it’s enough to build more useful objects like a fan that always points at you, or a laptop screen that locks when nobody is around. As one of the comments says, the fan is a bit creepy. I believe this is because it’s an object that’s exhibiting attributes that we associate with living beings, entering the Uncanny Valley. The googly eyes probably didn’t help. The confounding part is that the property that makes it most creepy is the same thing that makes it helpful.
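
The screen-lock idea is just as compact. Here’s a hedged sketch along the same lines, again my own illustration: it reuses the same face detector and locks the screen after thirty seconds with no face in view. The loginctl command is specific to systemd-based Linux desktops, so you’d swap in the equivalent for your own OS.

```python
# Hedged sketch of a presence-based screen lock, my illustration only.
import subprocess
import time

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
last_seen = time.time()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0:
        last_seen = time.time()
    elif time.time() - last_seen > 30:
        # Platform-specific: works on systemd Linux desktops; use the
        # equivalent lock command on macOS or Windows.
        subprocess.run(["loginctl", "lock-session"])
        last_seen = time.time()
    time.sleep(0.5)  # a couple of checks per second is plenty
```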

We’re going to see more and more of these capabilities making it into everyday objects (at least if I have anything to do with it) so I expect the creepiness and usefulness will keep growing in parallel too. Imagine a robot vacuum that you can talk to naturally and it will respond, that you can shoo away or control with hand gestures, and that follows you around while you’re eating to pick up crumbs you drop. Doesn’t that sound a lot like a dog? All of these behaviors help it do its job better, since it’s understanding us in a more natural way instead of expecting us to learn its language, but they also make it feel a lot more alive. Increased understanding goes hand in hand with creepiness.

This already leads to a lot of unresolved tension in our relationships with voice assistants. 79% of Americans believe these assistants spy on their conversations, but 42% of us still use them! I think this belief is so widespread because it’s hard not to treat something that you can talk to as a pseudo-person, which also makes it hard not to expect that it is listening all the time, even if it doesn’t respond. That feeling will only increase once they take glances, gestures, and even your mood into account.

If I’m right, we’re going to be entering a new age of creepy but useful objects that seem somewhat alive. What should we do about it? The first part might seem obvious, but it rarely happens with any new technology: have a public debate about what we as a community think should be acceptable, right now, while the technology is in the early stages of deployment, not after it’s a done deal. I’m a big fan of representative democracy, with all its flaws, so let’s encourage people outside the tech world to help draw the lines of what’s ethical and reasonable. I’m trying to take a step in that direction by putting our products up on maker sites so that anyone can try them out for themselves, but I’d love to figure out how to do something like a roadshow demonstrating what’s coming in the near future. I guess this blog post is an attempt at that too. If there’s going to be a tradeoff between creepiness and utility, let’s give ordinary people the power to determine what the balance should be.

The second important realization is that the tech industry is beyond the point where we can just say “trust us” and reasonably expect people to believe our claims. We’ve lost too much credibility. Moving forward, we need to build our products in a way that lets third parties check what we’re doing in a meaningful way. As I wrote a few months ago, I know Google isn’t spying on your conversations, but I can’t prove it. I’ve proposed the ML sensors approach we use as a response to that problem, so that someone like Underwriters Laboratories can test our privacy claims on behalf of consumers.

That’s just one idea though; anything that lets people outside the manufacturers verify their claims would be welcome. To go along with that, I’d love to see enforceable laws that require device makers to label what information they collect, like an ingredients list on food packaging. What I don’t know is how to prevent these labels from turning into meaningless Prop 65-style privacy policies, where every company basically says it can do anything with any information and share it with anyone it chooses. Even though GDPR is flawed in many ways, it did force a lot of companies to be more careful with how they handle personal data, internally and externally. I’d love smarter people than me to figure out how we make privacy claims minimal and enforceable, but I believe the foundation has to be designing systems that can be audited.

Whether this new world we’re moving towards becomes more of a utopia or dystopia depends on the choices we make now. Computers that understand us better can help when an elderly person falls, but the exact same technology could send police after a homeless person bedding down in a doorway. Our ML models can spot drowning victims, or criminalize wild swimming. Ubiquitous, cheap, battery-powered voice recognition could make devices accessible to many more people, or supercharge bugging by repressive regimes. Technologists alone shouldn’t have the power to decide the direction we head in; we need everyone’s help to chart the right path and make the hard tradeoffs. We need to make sure that the machines that watch over us truly will be loving.
