Smart speakers are getting smarter. The Wall Street Journal recently reported that Amazon and Google are experimenting with features for their smart-home devices that more proactively assist people instead of waiting to be called on each time. This development could make voice more inclusive. Let’s take a closer look.
According to The Wall Street Journal, Google’s newest Nest Hub device can automatically use radar to track a person’s sleep patterns every night once the owner sets up the feature. The device uses microphones and sound, light, and temperature sensors to monitor factors that can influence how well a person sleeps, such as coughing or snoring. Ashton Udall, senior product manager at Google, told The Wall Street Journal that the company developed the feature after its research indicated that people often forget to use or charge the wearables typically employed for sleep tracking, or find those devices uncomfortable.
The Wall Street Journal also reported that the Amazon Echo Show 10 automatically moves its display to face the user, even if it is performing a task that doesn’t need user input, like showing a recipe on the screen.
Why the News Matters
Edison Research says that adoption of smart speakers increased during the pandemic, with about 94 million people in the U.S. estimated to own at least one smart speaker in 2021, up from 76 million in 2020.
But owning and using smart speakers are two different things. Amazon and Google are concerned that people may not be using smart speakers to their fullest potential, and they want to make them more user-friendly by reducing the burden on the owner to activate speakers with voice commands. Per The Wall Street Journal, “ . . . while adoption has increased, device owners tend to try fewer new activities over time, researchers said. Proactive user experiences give tech companies another chance to present how useful smart devices can be, said Tom Webster, senior vice president at Edison Research.”
At Moonshot, we believe that making smart speakers more proactive could make them more inclusive, too. Putting the onus on the owner to activate smart speakers creates more of a burden for people with disabilities and for older adults, especially those who experience memory issues. There were 703 million people aged 65 or over worldwide in 2019, a number projected to double to 1.5 billion by 2050. What if smart speakers were to provide healthcare-related prompts, such as reminding someone when it’s time to take a prescribed medication? This kind of feature could be incredibly useful, especially for someone who needs to take multiple medications.
At the same time, designers need to be mindful that it’s going to take some time for people to trust AI-powered devices to help them manage their lives. AI suffers from lingering mistrust. We believe that by designing for emotional trust, builders of AI-based products can help overcome that problem. Designing for emotional trust requires product developers to take into account not just a product’s features but how the product relates to its owner. For instance, a smart speaker that employs a flat, robotic tone might be off-putting and damaging to trust.
What Businesses Should Do
The key to making AI-based products more inclusive is to design them with a diverse, inclusive approach. For instance, we rely on a globally diverse team to develop inclusive, human-centered products as part of our Mindful AI approach (read more about that here). As part of that approach, we use tools such as the Mindful AI Canvas to identify a person’s wants and needs.
The key to designing products with emotional trust is to keep people at the center of design. We rely on an approach known as FUEL to keep the user’s emotional wants and needs at the center of both product design and rollout. FUEL incorporates design thinking techniques such as design sprints to develop product prototypes quickly and cost-effectively through ongoing user feedback.
Contact Moonshot to get started.