Voice-activated virtual assistants are competing to manage your life, and while it appears appealing to have a digital assistant taking care of our needs, are we sure we can trust them to keep our information secure? We take a look at the pros and cons of giving Samsung’s Bixby, Apple’s Siri, Amazon’s Alexa and Google Assistant unparalleled access to our data

Ever since Captain Kirk uttered “Computer” and the USS Enterprise’s onboard AI woke ready to do his bidding, tech giants have been striving to develop practical voice-activated assistants to replace the keyboard. In the last couple of years the technology has come on in leaps and bounds, and interactions with other internet-enabled devices have emerged, enabling us to order groceries, dim the lights or find out the latest football scores with a couple of spoken words.

It’s no surprise then that the world’s biggest personal technology providers are vying for our voice commands to steer business their way. But people have raised concerns that devices in the house that are ‘always listening’ could be spying on them. News stories such as Samsung warning customers about discussing personal information in front of its smart television and Arkansas police demanding that Amazon release recordings from an Echo device that was present at the scene of a murder have helped stir misconceptions about how much our devices are listening in.

Are security concerns justified?

According to Alec Muffett, a freelance blogger, speaker, software engineer and computer/network security consultant who serves on the board of directors for the Open Rights Group, such fears are unfounded and are down to a basic misunderstanding of how the technology works.

“If one treats voice command as a glorified keyboard for putting search terms into Google or Amazon or anything like that, what does it matter that it’s voice as opposed to all this other information which they’re already collecting about you?” he asks.

If you use Google, and especially if you have an Android phone, you can get an insight into how much data is gathered on your activity via the Google Dashboard, for example. Similarly, consumers with Google Assistant enabled can review their voice command activity through the Google My Activity dashboard.

“You can go through your history, and there’s a transcription of what you asked, like ‘What’s the weather like in London today?’ and there’s a playback button next to it where you can hear your voice command. Your device records what you’re saying and uploads it to Google, because that’s part of their engineering and debugging process.”

However, the listening and recording doesn’t start until you say the ‘wake word’ relevant to that platform – ‘Hey Siri’ or ‘OK Google’ for example. The way those commands are identified doesn’t require the constant listening some people fear, as that would be hideously inefficient. Instead it uses a similar pattern-matching trick to the Shazam tune identification app.

“Shazam doesn’t upload an audio clip because that would be really noisy. It analyses the frequency pattern of the sound – there’s some high frequencies here, low frequencies there, a pulsing backbeat – are there any songs with that fingerprint? Shazam can look up a fingerprint faster than it can match segments of audio,” explains Muffett.

So if people are worried that Apple, Amazon or Google are listening to them, it’s only because that is what they’re buying into when they trigger listening by saying the keyword, which is identified through Shazam-type fingerprint matching.
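Muffett’s Shazam analogy can be illustrated with a toy sketch: quantise a clip’s dominant-frequency peaks into coarse buckets, hash the result, and look the hash up in a table. This is an illustrative simplification, not Shazam’s or any vendor’s actual algorithm; the peak values and the tiny ‘database’ below are invented.

```python
import hashlib

def fingerprint(peaks, step=50):
    """Quantise a sequence of dominant-frequency peaks (in Hz) into a
    compact hash. Coarse bucketing makes the fingerprint tolerant of
    small variations between recordings of the same sound."""
    buckets = tuple(p // step for p in peaks)
    return hashlib.sha1(repr(buckets).encode()).hexdigest()[:12]

# A tiny "database" mapping fingerprints to known sounds.
db = {
    fingerprint([440, 880, 440, 220]): "song-A",
    fingerprint([300, 300, 600, 900]): "wake-word",
}

def identify(peaks):
    """Match a clip's peaks against the database in one table lookup."""
    return db.get(fingerprint(peaks), "unknown")
```

The coarse bucketing is what gives the noise tolerance: two slightly different recordings of the same wake word land in the same buckets, so recognition is a constant-time lookup rather than a comparison against raw audio.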

“If people are upset or concerned, is it in an informed way?” asks Muffett. “Otherwise what they’re doing is essentially marching up and down and demanding the new looms at the mill are taken down because it will destroy work in the future; it’s that level of Luddism,” he warns.

Don’t wait for the law

The consumer has a responsibility to understand how his or her data is collected and used, says Scot Ganow, a US attorney at the law firm of Faruki, Ireland, Cox, Rhinehart & Dusing PLL in Dayton, Ohio, who advises corporate clients on privacy and security law. He recently delivered a compelling TEDx Dayton talk on Humanity in Privacy.

“There’s a paradox that we have with privacy,” says Ganow. “We want the convenience, we want the technology, but we also want privacy, and Americans are negotiating this transaction every day. I think the biggest issue is, are they doing it knowingly, are they aware of everything their private data influences?”

Image courtesy of Apple. Featured image courtesy of Amazon

Concerning stories such as Samsung’s snooping TV or the Amazon Echo potentially recording evidence of a crime, Ganow says, “Any time a story like this breaks that tweaks people’s spooky button as to whether companies or the government should be doing this, you always hear the question ‘Surely there’s a law against this?’ Often there aren’t laws for a specific area, and I’d encourage authorities to be slow in making laws about new technology, because laws that are made quickly tend to be not very good law.

“The biggest impact you can have on your privacy is through the choices you make on a day-to-day basis. The law will be too slow, the technology will only be as good as you, and in the market place, let’s be clear, they want your data, and they want more of it.”

Ganow encourages his business clients and companies to build privacy into technology using the approach promoted by the Canadian movement Privacy By Design. This suggests that technology that uses personal data must be built with privacy at the forefront, and ultimately give the user clear choices, making it easier for them to say yes or no.

“Generally speaking the companies that make digital assistants, like Amazon, Google and Apple, build in privacy protection,” he says. “Siri doesn’t record and act on your commands unless you give it the keyword to do it. Part of Apple’s culture is a respect for privacy. We saw that in the US when Apple refused FBI requests to create software that would unlock an iPhone recovered from one of the shooters in the 2015 San Bernardino terrorist attack.”

While companies are doing their part to secure customers’ data, Ganow’s message to people using these devices is to educate themselves on the privacy and security functions of the product before turning it on and connecting it to the network, and to exercise the options provided.

“As with any technology, it tends to blend into the background and people seem to forget that it’s on. There’s a very simple solution: unplug it when you no longer want to use it, when you go to bed at night, or if you have concerns. Make conscious choices as to when you’re going to use it and when you’re not.”

And, as the device itself may not be the weakest point in your data security, Ganow adds a final piece of advice. “As with all devices, make sure you’re implementing a secure wireless network within your house.”

Proactive personal assistants 

Once we’re satisfied that our data is secure and being captured on our terms, can we be sure that the choices digital assistants make on our behalf are right for us? Ariel Ezrachi is the Slaughter and May Professor of Competition Law and a Fellow of Pembroke College, Oxford. Along with Maurice E Stucke, Professor of Law at the University of Tennessee and co-founder of The Konkurrenz Group, he wrote Virtual Competition, a book which examines whether the sophisticated algorithms and data-crunching that make browsing so convenient are also changing the nature of market competition.

He warns that as virtual assistants are increasingly adopted by customers, we risk trading away competition, and handing over more and different personal data, in exchange for convenience.

“Personal digital assistants are alluring,” Ezrachi says. “They can read to our children, order beer and pizza, update us on traffic and news, and stump us with Star Wars trivia. So we likely will trust them.

“Our chosen personal helper will have unparalleled access to our information. Our assistant will become pro-active. Knowing what shows we watch, the stories we read, and the music and food we like, they will anticipate our needs. Using our personal data, including our calendar, texts, e-mails, and geolocation data, our personal assistant may recognise a busier than usual day, and suggest a particular Chinese restaurant. Powered by AI, the helper will become an integral part of our life.

“In doing so, its gatekeeper power increases in controlling the information we receive. One concern is economic, namely its ability to engage in behavioural discrimination and foreclose rival products. But the larger concern is social and political, namely its ability to affect the marketplace of ideas, elections and our democracy.”

The nature of the voice interface itself may also mean we’re missing out.

“The moment you run a traditional query, if you’re unhappy with the results, you have the screen in front of you and it’s easy to navigate through other options,” Ezrachi says. “With voice activation people will rely much more on the first reply we get from the digital helper; it lends itself to a single recommendation or a very short list.”

Will our digital assistants be with us from cradle to grave? 

Like it or not, digital assistants are here to stay, and for the next generation they could become as indispensable and ubiquitous as mobile phones are today.

“Mattel is now selling a baby digital virtual assistant called Aristotle,” says Ezrachi. “It can help purchase diapers, read bedtime stories, soothe infants back to sleep, and teach toddlers foreign words.

“For babies born in 2017, a digital assistant may become their lifelong companion, who will know more about each person than parents, siblings, or individuals themselves.”

For that to be an exciting rather than terrifying prospect requires consumers to educate themselves on the privacy and security functions of their device and how their data is captured and used today, so it can serve them better tomorrow.

Hearables have been touted as the next big thing for wearables for some time, but will they really have a meaningful impact on our lives? We hear from Bragi CMO Jarrod Jordan about how the technology could transform the way we communicate

For the past few decades, computing has advanced at an incredible pace. Within a single generation we’ve gone from cumbersome desktops to devices that are effectively pocket-sized supercomputers, so it comes as no surprise that technology manufacturers and consumers alike are hungry for the next step.

For many this step is known as ‘the fourth platform’: something that takes us beyond our current screen-based lives, and pushes computing and communication seamlessly into the background.

And while we’re not quite there yet, hearables may very well be that something.

“The hearable is actually the precipice of the very beginning of the start of fourth-platform computing, where people can put a device inside of their ear, they can actually become more productive, they can move more places and they can do more things,” explains Jarrod Jordan, CMO of Bragi, speaking at the Wearable Technology Show.

“This isn’t just happening five years from now; we are talking about 18 months from now, it’s starting to become more and more prominent. People are starting to look at this in the same way they did when they were looking at, for example, when the iPhone first came out.”

Bragi is arguably in a very good place for this oncoming breakthrough. The Germany-based company is behind one of the first true hearables, the Dash, which has developed from a Kickstarter success story back in 2014 to a fully fledged product now available in stores. And with an SDK (software development kit) on its way to allow third parties to develop apps for the device, it has all the makings of a truly useful device.

Beyond the tethered smartphone

Wearable technology has long been touted as a game-changing space from which the next generation of computing will come, but so far much of what’s been developed has failed to live up to that claim. Most devices remain entirely reliant on smartphones, making them peripherals to existing devices rather than a technology that truly pushes things forward in its own right.

Images courtesy of Bragi

Which begs the question: what does a true fourth-platform device need to offer?

“A few things need to happen in order for that fourth platform to exist, or to have that device or item exist on the fourth platform,” Jordan explains. “It has to make the user more integrated into real-world scenarios; it has to make the user be more productive and it has to be automated to help them with predictive behaviours – in other words it has to start doing things on behalf of the user without the user having to think about it.”

For some, virtual reality could be that platform, but Jordan argues that it so far fails to achieve these goals.

“As much as I love it as a form of entertainment, the idea that you have an integration with the real world, or that you can become automated or more productive with a device currently over your head [is wrong],” he says. “[VR] actually brings you out of the world and distracts you from what’s happening around you.”

Another option is voice-enabled devices such as the Amazon Echo, which are arguably much closer to being true fourth-platform devices, but fail in that they are typically in fixed locations with little ability to gather data about their users.

“What’s great about this is it does do a lot of the things I just mentioned: you can actually walk in the room and get things ordered, you can have things turn on or turn off etc,” Jordan says. “But there’s a couple of things: it doesn’t actually integrate with you as a human, it doesn’t understand what your body or your biometrics are telling it and it can go with you but it doesn’t travel with you per se.”

The logical step for some, then, is implanted computers. They’re always there, they can gather data and provide unseen feedback and assistance and they don’t need to rely on a smartphone. But they come with a rather significant problem: how many of us are really up for having tech surgically implanted inside us?

“To a lot of people that bothers them; it even bothers me,” says Jordan. “I don’t necessarily want a device inside of me, but I do need a device that can somehow get inside of me, grab different parts of my biometrics and help me become more productive or more active.”

When does a headphone become a hearable?

For Jordan, true fourth-platform devices will combine the best of these nearly-there technologies into something consumers will actually want to use.

“The way I look at it, there are three ways that these things need to come together to make that fourth platform,” he says. “It needs to be embedded yet detachable: I think if it’s inside of you then that’s a problem, I just don’t think adoption of that by the masses is really there.

“It needs to leverage multiple sensors so it’s not only voice, it’s not only eyes, it’s not only touch: it’s taking in several different components of your body and being able to give output from that. It needs to be able to augment your behaviour and predict your behaviour as well.”

Hearables, he argues, are this device, although he is keen to stress that not all technology you can put in your ears is really a true hearable.

“It is not simply a truly wireless in-ear device. Many of them are excellent: wonderful sound, fun to use etc, but they are not a computer,” he explains.

“If you cannot update your device with firmware, just like you get with your iPhone through those OS updates, if you cannot do that with your hearable it is not by definition a hearable. It is a headphone and it may be excellent, it may be fun to use, but not exactly a hearable.

“The second thing is it must be intelligent; the device must be able to pick up what you are doing and give you a feedback loop in order to make you more productive.”

Bragi Dash: in-ear computers

Whether Bragi’s own device, the Dash, fulfils these needs will ultimately be decided by its users, but it does make a compelling case. Because while the Dash looks like just a regular set of wireless earbuds, it is in fact a complete computer in two parts, housed entirely inside the minimal casing.

“We did not go out to build headphones. We actually went out to build in-ear computers; a binary computer, with a left and right side allowing us to make even more datasets and even more predictions,” says Jordan.

“In building the device we were challenged – first of all we had nanotechnology: how do we push all of these things into very, very little space? We put 27 sensors inside of our device, we put infrared, we put accelerometers, a gyroscope, a 32-bit processor and a 4GB hard drive all in a thing the size of a dime that sits inside your ear.”

And that means that Dash can do pretty much all the things you’d expect from conventional wearable technology, without needing to hook up to a phone or plant sensors across your body.

“We have tracking heart rate, respiration, acceleration, temperature, rotation, absolute direction: all of these types of things can be gathered through the device. All of those things can also be gathered and put into datasets,” he says.
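As a sketch of what ‘gathered and put into datasets’ might look like, the snippet below logs hypothetical sensor readings and summarises them. The field names and values are invented for illustration and are not Bragi’s actual data model.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Reading:
    heart_rate: int      # beats per minute (hypothetical sample)
    temperature: float   # degrees Celsius
    respiration: int     # breaths per minute

@dataclass
class SensorLog:
    readings: list = field(default_factory=list)

    def add(self, reading: Reading) -> None:
        self.readings.append(reading)

    def avg_heart_rate(self) -> float:
        # A dataset like this is what predictive features would be built on.
        return mean(r.heart_rate for r in self.readings)

log = SensorLog()
log.add(Reading(heart_rate=62, temperature=36.6, respiration=14))
log.add(Reading(heart_rate=78, temperature=36.8, respiration=16))
```

Accumulating readings over time is what turns raw sensor output into the kind of dataset Jordan describes, from which baselines and predictions can be derived.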

“You have a headset that functions similarly to how you make normal telephone calls. You have an earphone microphone – that means the microphone is actually inside your ear not outside. You have noise cancellation with audio transparency: that means that you can hear the world around you as well as what’s in your device, so you’re actually able to have an augmented situation there. Speech control in the ambient microphone: again, those are things that allow you to sit there and make things more productive.”

Dash also solves the interaction problem – usually in-ear wearables rely on smartphones – with a mixture of gestures and voice commands.

“Right now on our device you can actually nod your head and answer a call. You can say no and reject that call. You can shake your head three times and make music shuffle; you can double tap your face twice and say ’tell me how to get home’ and the device will tell you how to get home,” Jordan explains.
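The gesture commands Jordan describes amount to an event-to-action mapping. A minimal dispatch sketch might look like the following, with all event and action names invented for illustration (the Dash’s real SDK is not shown here):

```python
# Map (gesture, context) pairs to actions. All names are hypothetical.
ACTIONS = {
    ("nod", "incoming_call"): "answer_call",
    ("head_shake", "incoming_call"): "reject_call",
    ("head_shake_x3", "music_playing"): "shuffle_music",
    ("double_tap", "idle"): "start_navigation_home",
}

def dispatch(gesture: str, context: str) -> str:
    """Return the action for a recognised gesture in its context,
    or 'ignored' if the combination has no meaning."""
    return ACTIONS.get((gesture, context), "ignored")
```

Keying on both gesture and context is the interesting design choice: the same head shake can reject a call or shuffle music depending on what the device is currently doing.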

But that’s not all Bragi has planned for the device. The company is already working with IBM Watson and several automotive companies to build on the Dash’s capabilities, and hopes to be able to utilise the data collected to significantly advance how the device can help you in your day to day life.

“We are collecting biometric data: we know your red and white blood cell counts; we know the difference between your scared heart rate, your nervous heart rate and your exercise-induced heart rate,” he says. “We can see the difference between all of those so we can actually look to a world where we can start to build apps on top of those behaviours to allow you to then become more productive based on exactly what’s happening with your body, as well as starting to predict what may happen to you in the future.”

A post-screen world

The true promise of hearables lies in their ability to interact with the increasingly connected world around us, and remove many of the increasingly prevalent screens that are carving through our lives. Computers would remain ever-present, but in a manner that would be less intrusive, and more able to respond to our needs without us needing to tell them to.

“By integrating with you and into the Internet of Things, think about all those gestures, think about all that biofeedback you’re getting, and imagine being able to control the devices around you,” Jordan enthuses. “So you yourself become the trackpad.

“Imagine being able to walk into a room and simply control the devices. Let’s say you walk home and you just lift your arms up and the lights turn on, a double snap and a little Barry White starts playing.

“Your temperature is high, you’re on your way home and all of a sudden that air conditioner at home knows to turn on. You can do things that are very different by having the computer integrated into you.”