Do You Trust Your Digital Assistant? Listening Tech Joins the Privacy Debate

Voice-activated virtual assistants are competing to manage your life, and while it appears appealing to have a digital assistant taking care of our needs, are we sure we can trust them to keep our information secure? We take a look at the pros and cons of giving Samsung’s Bixby, Apple’s Siri, Amazon’s Alexa and Google Assistant unparalleled access to our data

Ever since Captain Kirk uttered “Computer” and the USS Enterprise’s onboard AI woke ready to do his bidding, tech giants have been striving to develop practical voice-activated assistants to replace the keyboard. In the last couple of years the technology has come on leaps and bounds, and integration with other internet-enabled devices now lets us order groceries, dim the lights or find out the latest football scores with a couple of spoken words.

It’s no surprise then that the world’s biggest personal technology providers are vying for our voice commands to steer business their way. But people have raised concerns that devices in the house that are ‘always listening’ could be spying on them. News stories such as Samsung warning customers about discussing personal information in front of its smart television and Arkansas police demanding that Amazon release recordings from an Echo device that was present at the scene of a murder have helped stir misconceptions about how much our devices are listening in.

Are security concerns justified?

According to Alec Muffett, a freelance blogger, speaker, software engineer and computer/network security consultant who serves on the board of directors for the Open Rights Group, such fears are unfounded and are down to a basic misunderstanding of how the technology works.


“If one treats voice command as a glorified keyboard for putting search terms into Google or Amazon or anything like that, what does it matter that it’s voice as opposed to all this other information which they’re already collecting about you?” he asks.

If you use Google, and especially if you have an Android phone, you can get an insight into how much data is gathered on your activity via the Google Dashboard, for example. Similarly, consumers with Google Assistant enabled can review their voice command activity through the Google My Activity dashboard.

“You can go through your history, and there’s a transcription of what you asked, like ‘What’s the weather like in London today?’ and there’s a playback button next to it where you can hear your voice command. Your device records what you’re saying and uploads it to Google, because that’s part of their engineering and debugging process.”

However, the listening and recording doesn’t start until you say the ‘wake word’ for that platform – ‘Hey Siri’ or ‘OK Google’, for example. Identifying those commands doesn’t require the constant cloud-connected listening some people fear, which would be hideously inefficient. Instead the device uses a pattern-matching trick similar to the Shazam tune identification app.

“Shazam doesn’t upload an audio clip because that would be really noisy. It analyses the frequency pattern of the sound – there’s some high frequencies here, low frequencies there, a pulsing backbeat – are there any songs with that fingerprint? Shazam can look up a fingerprint faster than it can match segments of audio,” explains Muffett.

So if people are worried that Apple, Amazon or Google are listening to them, it’s only because that is what they’re buying into when they trigger listening by saying the keyword, which is identified through Shazam-type fingerprint matching.
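To make the fingerprint idea concrete, here is a minimal Python sketch of Shazam-style wake-word matching: each short audio frame is reduced to a coarse per-band frequency signature and compared locally against a stored signature, and only a match would trigger any recording or upload. It is purely illustrative – the band count, threshold and function names are invented, and no vendor’s actual detector works exactly like this – but it shows why matching a fingerprint on the device is far cheaper than streaming or transcribing everything the microphone hears.

```python
# Toy wake-word detector (illustrative only): match a coarse frequency
# fingerprint locally, in the spirit of Shazam, and never record or upload
# audio unless the fingerprint matches. Band count and threshold are invented.
import numpy as np

def fingerprint(frame: np.ndarray, bands: int = 16) -> np.ndarray:
    """Reduce one audio frame to a normalised per-band energy signature."""
    spectrum = np.abs(np.fft.rfft(frame))
    energy = np.array([chunk.mean() for chunk in np.array_split(spectrum, bands)])
    return energy / (energy.sum() + 1e-9)   # normalise so loudness doesn't matter

def matches_wake_word(frame: np.ndarray, stored: np.ndarray, threshold: float = 0.05) -> bool:
    """True if the frame's fingerprint is close enough to the stored one."""
    return bool(np.abs(fingerprint(frame) - stored).mean() < threshold)

def listen(frames, stored_fingerprint):
    """Scan frames locally; only a match would start recording or uploading."""
    for frame in frames:
        if matches_wake_word(frame, stored_fingerprint):
            return True   # here a real assistant would begin capturing the command
    return False
```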

“If people are upset or concerned, is it in an informed way?” asks Muffett. “Otherwise what they’re doing is essentially marching up and down and demanding that the new looms at the mill be taken down because they will destroy work in the future; it’s that level of Luddism,” he warns.

Don’t wait for the law

While the consumer has a responsibility to understand how his or her data is collected and used, the companies building these products have a part to play too. Scot Ganow, a US attorney at the law firm of Faruki, Ireland, Cox, Rhinehart & Dusing PLL in Dayton, Ohio, advises corporate clients on privacy and security law. He recently delivered a compelling TEDx Dayton talk on Humanity in Privacy.

“There’s a paradox that we have with privacy,” says Ganow. “We want the convenience, we want the technology, but we also want privacy, and Americans are negotiating this transaction every day. I think the biggest issue is, are they doing it knowingly, are they aware of everything their private data influences?”


Concerning stories such as Samsung’s snooping TV or the Amazon Echo potentially recording evidence of a crime, Ganow says, “Any time a story like this breaks that tweaks people’s spooky button as to whether companies or the government should be doing this, you always hear the question ‘Surely there’s a law against this?’ Often there aren’t laws for a specific area, and I’d encourage authorities to be slow in making laws about new technology, because laws that are made quickly tend to be not very good law.

“The biggest impact you can have on your privacy is through the choices you make on a day-to-day basis. The law will be too slow, the technology will only be as good as you, and in the market place, let’s be clear, they want your data, and they want more of it.”

Ganow encourages his corporate clients to build privacy into technology using the approach promoted by the Canadian-led Privacy By Design movement. This holds that technology using personal data must be built with privacy at the forefront and must ultimately give the user clear choices, making it easy for them to say yes or no.

“Generally speaking the companies that make digital assistants, like Amazon, Google and Apple, build in privacy protection,” he says. “Siri doesn’t record and act on your commands unless you give it the keyword to do it. Part of Apple’s culture is a respect for privacy. We saw that in the US when Apple refused FBI requests to create software that would unlock an iPhone recovered from one of the shooters in the 2015 San Bernardino terrorist attack.”

While companies are doing their part to secure customers’ data, Ganow’s message to people using these devices is to educate themselves on the privacy and security functions of the product before turning it on and connecting it to the network, and to exercise the options provided.

“As with any technology, it tends to blend into the background and people seem to forget that it’s on. There’s a very simple solution: unplug it when you no longer want to use it, when you go to bed at night, or if you have concerns. Make conscious choices as to when you’re going to use it and when you’re not.”

And, as the device itself may not be the weakest point in your data security, Ganow adds a final piece of advice. “As with all devices, make sure you’re implementing a secure wireless network within your house.”

Proactive personal assistants 

Once we’re satisfied that our data is secure and being captured on our terms, can we be sure that the choices digital assistants make on our behalf are right for us? Ariel Ezrachi is the Slaughter and May Professor of Competition Law and a Fellow of Pembroke College, Oxford. Along with Maurice E Stucke, Professor of Law at the University of Tennessee and co-founder of The Konkurrenz Group, he wrote Virtual Competition, a book which examines whether the sophisticated algorithms and data-crunching that make browsing so convenient are also changing the nature of market competition.

He warns that as customers increasingly adopt virtual assistants, we risk trading away competition, and handing over more and different personal data, in exchange for convenience.

“Personal digital assistants are alluring,” Ezrachi says. “They can read to our children, order beer and pizza, update us on traffic and news, and stump us with Star Wars trivia. So we likely will trust them.


“Our chosen personal helper will have unparalleled access to our information. Our assistant will become pro-active. Knowing what shows we watch, the stories we read, and the music and food we like, they will anticipate our needs. Using our personal data, including our calendar, texts, e-mails, and geolocation data, our personal assistant may recognise a busier than usual day, and suggest a particular Chinese restaurant. Powered by AI, the helper will become an integral part of our life.

“In doing so, its gate-keeper power increases in controlling the information we receive. One concern is economic, namely its ability to engage in behaviour discrimination and foreclose rival products. But the larger concern is social and political, namely its ability to affect the marketplace of ideas, elections and our democracy.”

The nature of the voice interface itself may also mean we’re missing out.

“The moment you run a traditional query, if you’re unhappy with the results, you have the screen in front of you and it’s easy to navigate through other options,” Ezrachi says. “With voice activation people will rely much more on the first reply we get from the digital helper; it lends itself to a single recommendation or a very short list.”

Will our digital assistants be with us from cradle to grave? 

Like it or not, digital assistants are here to stay, and for the next generation they could become as indispensable and ubiquitous as mobile phones are today.

“Mattel is now selling a baby digital virtual assistant called Aristotle,” says Ezrachi. “It can help purchase diapers, read bedtime stories, soothe infants back to sleep, and teach toddlers foreign words.

“For babies born in 2017, a digital assistant may become their lifelong companion, who will know more about each person than parents, siblings, or individuals themselves.”

For that to be an exciting rather than terrifying prospect requires consumers to educate themselves on the privacy and security functions of their device and how their data is captured and used today, so it can serve them better tomorrow.

Soviet report detailing lunar rover Lunokhod-2 released for first time

Russian space agency Roskosmos has released an unprecedented scientific report into the lunar rover Lunokhod-2 for the first time, revealing previously unknown details about the rover and how it was controlled back on Earth.

The report, written entirely in Russian, was originally penned in 1973 following the Lunokhod-2 mission, which was embarked upon in January of the same year. It had remained accessible to only a handful of experts at the space agency prior to its release today, to mark the 45th anniversary of the mission.

Bearing the names of some 55 engineers and scientists, the report details the systems that were used to both remotely control the lunar rover from a base on Earth, and capture images and data about the Moon’s surface and Lunokhod-2’s place on it. This information, and in particular the carefully documented issues and solutions that the report carries, went on to be used in many later unmanned missions to other parts of the solar system.

As a result, it provides a unique insight into this era of space exploration and the technical challenges that scientists faced, such as the low-frame-rate television system that functioned as the ‘eyes’ of the Earth-based rover operators.

A NASA depiction of the Lunokhod mission. Above: an image of the rover, courtesy of NASA, overlaid onto a panorama of the Moon taken by Lunokhod-2, courtesy of Ruslan Kasmin.

One detail that may be of particular interest to space enthusiasts and experts is the operation of a unique system called Seismas, which was tested for the first time in the world during the mission.

Designed to determine the precise location of the rover at any given time, the system involved firing laser pulses from ground-based telescopes at a photodetector onboard the lunar rover. When the laser was detected, the rover emitted a radio signal back to Earth, which provided the rover’s coordinates.
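As a rough illustration of that trigger logic – and only that, since the report’s actual signal processing and coordinate calculations are not described here – the short Python sketch below shows a photodetector threshold firing a radio reply and a ground station turning the round-trip time into a range estimate. The threshold, timings and the range calculation are assumptions made for the example, not details from the report.

```python
# Illustrative sketch of a Seismas-style trigger: the rover replies by radio the
# moment its photodetector sees the laser, and the ground station estimates range
# from the round-trip time. All values here are invented for illustration.
SPEED_OF_LIGHT_KM_S = 299_792.458

def rover_loop(photodetector_readings, threshold=0.8):
    """Yield a radio reply the moment the incoming laser level crosses the threshold."""
    for t, level in photodetector_readings:        # (time in seconds, detector level)
        if level >= threshold:
            yield ("radio_reply", t)

def range_estimate_km(laser_sent_at_s, reply_received_at_s):
    """Turn the laser-out/radio-back round trip into a one-way distance."""
    round_trip = reply_received_at_s - laser_sent_at_s
    return round_trip * SPEED_OF_LIGHT_KM_S / 2

# A laser fired at t=0 s with a reply heard ~2.56 s later gives roughly the
# Earth-Moon distance (~384,000 km).
print(round(range_estimate_km(0.0, 2.56)))
```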

Other details, while technical, also give some insight into the culture of the mission, such as the careful work to eliminate issues in the long-range radio communication system. One issue, for example, was worked on with such thoroughness that it resulted in one of the devices using more resources than it was allocated, a problem that was outlined in the report.

The document also provides insight into on-Earth technological capabilities of the time. While it is mostly typed, certain mathematical symbols have had to be written in by hand, and the report also features a number of diagrams and graphs that have been painstakingly hand-drawn.

A hand-drawn graph from the report, showing temperature changes during one of the monitoring sessions during the mission

Lunokhod-2 was the second of two unmanned lunar rovers to be landed on the Moon by the Soviet Union within the Lunokhod programme, having been delivered via a soft landing by the unmanned Luna 21 spacecraft in January 1973.

In operation between January and June of that year, the robot covered a distance of 39km, meaning it still holds the lunar distance record to this day.

One of only four rovers to be deployed on the lunar surface, Lunokhod-2 was the last rover to visit the Moon until December 2013, when Chinese lunar rover Yutu made its maiden visit.

Robot takes first steps towards building artificial lifeforms

A robot equipped with sophisticated AI has successfully simulated the creation of artificial lifeforms, in a key first step towards the eventual goal of creating true artificial life.

The robot, which was developed by scientists at the University of Glasgow, was able to model the creation of artificial lifeforms using unstable oil-in-water droplets. These droplets effectively played the role of living cells, demonstrating the potential of future research to develop living cells based on building blocks that cannot be found in nature.

Significantly, the robot also successfully predicted their properties before they were created, even though this could not be achieved using conventional physical models.

The robot, which was designed by Glasgow University’s Regius Chair of Chemistry, Professor Lee Cronin, is driven by machine learning and the principles of evolution.

It has been developed to autonomously create oil-in-water droplets with a host of different chemical makeups and then use image recognition to assess their behaviour.

Using this information, the robot was able to engineer droplets to have different properties. Those which were found to be desirable could then be recreated at any time, using a specific digital code.
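That closed loop – make a droplet, score its behaviour, keep the best ‘digital codes’ and vary them – is essentially an evolutionary search. The Python sketch below is a minimal, hypothetical version of such a loop: the recipe format, mutation scheme and stand-in scoring function are invented for illustration and are not the Glasgow team’s actual system, in which image recognition of real droplets provides the score.

```python
# Minimal evolutionary loop (illustrative only): a droplet "recipe" of oil ratios
# is the digital code, a placeholder score stands in for image-recognition
# assessment of real droplets, and the best recipes survive and are mutated.
import random

RECIPE_LENGTH = 4                      # e.g. ratios of four oils in the droplet

def random_recipe():
    r = [random.random() for _ in range(RECIPE_LENGTH)]
    return [x / sum(r) for x in r]     # ratios sum to 1

def assess_behaviour(recipe):
    """Placeholder for scoring droplet movement or division via image recognition."""
    return 1.0 - abs(recipe[0] - 0.5)  # toy objective: favour ~50% of oil A

def mutate(recipe, rate=0.1):
    child = [max(0.0, x + random.uniform(-rate, rate)) for x in recipe]
    return [x / sum(child) for x in child]

population = [random_recipe() for _ in range(20)]
for generation in range(30):
    population.sort(key=assess_behaviour, reverse=True)
    survivors = population[:5]                                   # keep the best recipes
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=assess_behaviour)
print("best digital code (oil ratios):", [round(x, 2) for x in best])
```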

“This work is exciting as it shows that we are able to use machine learning and a novel robotic platform to understand the system in ways that cannot be done using conventional laboratory methods, including the discovery of ‘swarm’ like group behaviour of the droplets, akin to flocking birds,” said Cronin.

“Achieving lifelike behaviours such as this are important in our mission to make new lifeforms, and these droplets may be considered ‘protocells’ – simplified models of living cells.”

One of the oil droplets created by the robot

The research, which is published today in the journal PNAS, is one of several research projects being undertaken by Cronin and his team within the field of artificial lifeforms.

While the overarching goal is moving towards the creation of lifeforms using new and unprecedented building blocks, the research may also have more immediate potential applications.

The team believes that their work could also have applications in several practical areas, including the development of new methods for drug delivery or even innovative materials with functional properties.