Visions For The Future of UI: Her or Minority Report?
Every decade or so a (sci-fi) movie comes out that completely transforms the way we think about the future of technology. For more than a decade, that movie was Minority Report. Steven Spielberg’s 2002 adaptation of the Philip K. Dick short story was set in the year 2054, yet many of its “predictions” of our far-out future have come to life in the 13 years since the movie debuted. Most notably, the gesture-based UI interaction Tom Cruise and his counterparts use throughout the film has moved from research centers to the consumer world of tablets and touchscreens and, finally, into the realm of simply moving your hands through the air.
A little over a year ago, director Spike Jonze brought us Her, a new-age interpretation of romance in the era of artificial intelligence. Instead of pushing the future out 50 years like Spielberg, Jonze sets Her in only the “slight future,” taking on the challenge of imagining our world at a time when artificial intelligence is a commodity, as affordable and widespread as smartphones are today.
If Minority Report was a future of embedded technology where every object can have a screen or a projector (i.e., multiple devices, multi-monitor desktop computing, “smart” billboards and newspapers, etc.), then Her was its antithesis: a world where technology blends into the background and disappears as users engage intuitively with their voice.
In truth, the prediction of disappearing technology isn’t all that new or innovative. In 1995, researchers Mark Weiser and John Seely Brown published an article titled “Designing Calm Technology,” an extension of Weiser’s earlier, better-known concept of “ubiquitous computing.” Ubiquitous computing holds that as technology matures, it naturally recedes into an invisible layer between ourselves and our environment, much as electricity and automobiles have become invisible fixtures of our everyday lives. They just work. Just last year, Google’s Eric Schmidt made headlines by predicting that the internet will ultimately disappear in the near future as our entire world becomes invisibly connected all around us.
The main concern with the evolution of the internet into a ubiquitous technology is that surveillance becomes ubiquitous as well. If skeptics today worry about Big Brother always watching and recording our lives, or about advertisers openly buying our personal data to better market to us, what happens when technology becomes so unobtrusive that we don’t even notice its existence? Does the invasion of our privacy become invisible as well?
Furthermore, if we are entering a world where technology has receded into the background, how will that affect our human relationships? If checking your smartphone during dinner or a meeting is only barely considered rude today, what will the societal norms be when technology is disguised inside our natural surroundings?
Designing the Future of UI
Our challenge then is to understand how humans want to interact with their surroundings, including their connected devices. Do they want to carry a device or do they want to wear a device? Do they want to interact with a screen or do they want to use their voice? What is the need or motivation that drives this interaction in the first place? These are the most critical questions to ask because the future of user interaction is really just a reflection of our behaviors and desires, the traits that make us human.
What we love about both of these movies is their attempt to solve what we consider the ultimate UX challenge: user interface design for artificial intelligence.
Perhaps pitting Minority Report (gestural interaction) against Her (voice interaction) is not the most effective way to approach this challenge. In fact, the two visions of the future of UI are not mutually exclusive, nor are they even opposing options. While Her succeeds in creating a more private, human channel of interaction with AI, it falters by replacing real human relationships with artificial ones. Minority Report, on the other hand, introduced gestural interaction that does not rely on isolated conversations, but its vision is limited in an aesthetic sense: we don’t really want a world covered in glass screens, sensors, and cameras.
Instead, we believe the future of UI is multimodal – in other words, we should be able to choose the interaction, or combination of interactions, that learns from our behavior and improves our experience each time. Our immediate future is certainly better suited to gestural interaction, and we believe this will be the next wave of UI design, while voice input technology needs a little more time in the innovation oven. Apple’s Siri and Google Now have come a long way, but the switching costs are high simply because humans are creatures of habit, and we’re currently in the habit of interacting with screens. As voice input technology improves, it will be interesting to see how humans adopt and incorporate this new “invisible” entity into their everyday lives.