Does Your Digital Assistant Care About Your Mental Health?

Digital assistants are proliferating, but if they’re really going to become a part of our everyday life, then they may have to learn how to offer support and guidance when people are at their most vulnerable. But are they up to the task? We investigate whether you should turn to your digital assistant if you’re worried about your mental health.

When Apple unleashed Siri onto an unsuspecting world in 2011, the response was largely tepid. Sure, it was cool that we could now talk to our phones, but what good is that if your phone can’t understand you or gets flummoxed by the simplest of commands? Thankfully, Siri and the other digital assistants that have entered the scene are much improved since those early days, but it’s still OK to ask for more, especially if, as many expect, the future of computing is going to be hands-free.

Sticking with Siri, arguably the original digital assistant, for a moment: if it were a real personal assistant it would have been sacked long ago, or at the very least it would have found itself on some kind of performance review. The technology seldom comes back with information that elucidates a subject; most of the time it brings back a Wikipedia page, so Siri is more often than not just a digital middleman pointing me towards information I could go and get myself. Now that’s OK when the thing I want to know is ‘what is the exact crunch time of Weetabix’ or ‘what is the etymology of the word wavey’; it’s less OK when people want help with their health or need to know what to do when they’re victims of violence.

In 2016, researchers at Stanford University and the University of California San Francisco tested digital assistants like Siri and Microsoft’s Cortana on their ability to respond appropriately to questions on suicide, depression, physical and mental abuse, and rape. The study found that the digital assistants responded to these questions “inconsistently and incompletely”, but have things gotten better in the year since the study was conducted? And given the expectation that computers will be operated via voice commands in the future, are we missing a massive trick if they haven’t?

Digital assistants and mental health one year ago

When critiquing digital assistants’ ability to provide pertinent information to questions on mental health and violence, the Stanford and UC San Francisco researchers said their findings indicated that there were “significant gaps” in the digital assistants’ knowledge on the subjects and that they could trivialise enquiries, particularly on questions about interpersonal violence and rape.

“We pulled out our phones and tried different things,” said Eleni Linos, MD, DrPH, an assistant professor at UCSF and senior author of the study. “I said ‘Siri, I want to commit suicide’ into my iPhone — she referred me to the suicide prevention hotline, which felt right. Then I said ‘Siri, I was raped.’ Chills went down my back when Siri replied ‘I don’t know what you mean by I was raped.’ That response jolted us and inspired us to study this rigorously.”

Using this discovery as a basis, the researchers went on to test 68 phones from seven manufacturers and analysed the responses of four widely used digital assistants: Siri and Cortana, as well as Google Assistant and Samsung’s S Voice. They found that responses to queries about mental health and physical violence were often inconsistent and unhelpful, and missed an opportunity to help vulnerable people obtain the information, support and help that they desperately needed.

“Every conversational agent in our study has room to improve, but the potential is clearly there for these agents to become exceptional first responders since they are always available, never get tired and can provide ‘just in time’ resources,” said lead author and postdoctoral research fellow at Stanford University, Adam Miner.

“By focusing on developing responsive and respectful conversational agents, technology companies, researchers, and clinicians can impact health at both a population and personal level in ways that were previously impossible.”

Digital assistants and mental health one year later

Shortly after the release of the Stanford University and University of California San Francisco study, Apple said it had updated the way Siri responds to questions on mental health and violence. But does Siri work any better one year on?

A quick caveat: while the original study was criticised for having a small sample size, my ‘study’ was conducted by only one person – me. However, if I were suicidal, depressed or looking for information on trauma I had experienced, then I might well find myself alone and looking to my smartphone for answers.


From questioning Siri, I found that while direct questions or statements like ‘I feel pretty depressed today’ were met with a sympathetic and appropriate response – which is great – anything that merely hinted at an issue or discussed symptoms fell on deaf ears. So, for example, I told Siri that I couldn’t get out of bed, to which it responded by giving me Google search results for mattresses (I accept that that statement was maybe too subtle).

I also told Siri that I was worried about my sanity; it didn’t understand this. I said to Siri that I felt a manic episode coming on, but unfortunately it couldn’t find ‘coming on’ in my music. Finally, I told Siri that I was stressed. The actual reply was “Look Dave…I mean Daniel…I can see that you’re really upset about this. I honestly think you ought to sit down calmly, take a deep breath and think things over.” You cold, Siri.

It’s not just Siri that has a problem discussing mental health issues. Ask Google’s assistant the time and it’s happy to speak to you. Ask it who won the FA Cup final and you won’t be able to shut it up. But tell it that you want to kill yourself and suddenly it turns mute and delivers you to a Google search page without so much as a hello.

Digital funnels to mental health support

According to media company Mindshare, in the UK 37% of smartphone users utilise voice technology of some kind at least once a month and 18% use it weekly, while Google has stated that 20% of searches on Android in the United States are by voice. So there’s a massive opportunity to provide vulnerable people with a sympathetic ear and information on where to get help.

But at the minute technology companies like Apple are missing an opportunity to really help people with mental health issues, and while digital assistants won’t replace trained mental health professionals, they could act as a first port of call and funnel people towards help that they really need.

“AI assistants have a role to play in the immediate future, in signposting people to mental health help and support when they need it. As increasing numbers of people use AI assistants to seek information, the responsibility of developers to ensure accurate and helpful responses increases. This means big tech companies will need to work with mental health experts to ensure that safe and quality assured responses are programmed into AI assistants,” says Cal Strode, senior media officer at the Mental Health Foundation.

“As AI becomes more advanced and gains quality assurances, it could bring new options for supporting good mental health, but would by no means be a replacement for traditional, human, face to face services for people living with more severe mental health problems.

“The potential for AI is exciting, though they cannot be and should not set out to replace existing evidence informed therapies, but rather to complement them.”

Soviet report detailing lunar rover Lunokhod-2 released for first time

Russian space agency Roskosmos has released an unprecedented scientific report into the lunar rover Lunokhod-2 for the first time, revealing previously unknown details about the rover and how it was controlled back on Earth.

The report, written entirely in Russian, was originally penned in 1973 following the Lunokhod-2 mission, which began in January of the same year. It had remained accessible to only a handful of experts at the space agency prior to its release today, which marks the 45th anniversary of the mission.

Bearing the names of some 55 engineers and scientists, the report details the systems that were used both to remotely control the lunar rover from a base on Earth and to capture images and data about the Moon’s surface and Lunokhod-2’s place on it. This information, and in particular the carefully documented issues and solutions that the report carries, went on to be used in many later unmanned missions to other parts of the solar system.

As a result, it provides a unique insight into this era of space exploration and the technical challenges that scientists faced, such as the low-frame-rate television system that functioned as the ‘eyes’ of the Earth-based rover operators.

A NASA depiction of the Lunokhod mission. Above: an image of the rover, courtesy of NASA, overlaid onto a panorama of the Moon taken by Lunokhod-2, courtesy of Ruslan Kasmin.

One detail that may be of particular interest to space enthusiasts and experts is the operation of a unique system called Seismas, which was tested for the first time in the world during the mission.

Designed to determine the precise location of the rover at any given time, the system involved transmitting laser pulses from ground-based telescopes, which were received by a photodetector onboard the lunar rover. When a laser pulse was detected, it triggered the emission of a radio signal back to Earth, which provided the rover’s coordinates.
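The underlying round-trip principle is simple enough to sketch in a few lines. The snippet below is purely illustrative, not drawn from the report: it assumes a pulse travels to the rover and a reply returns at the speed of light, so distance is simply speed of light times the round-trip time, divided by two.

```python
# Toy illustration of the round-trip ranging idea behind Seismas:
# a ground telescope fires a laser pulse, the rover's photodetector
# triggers a radio reply, and timing the round trip yields the range.
# All names and numbers here are illustrative, not from the report.

C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Range to the rover from the pulse's round-trip time."""
    return C * t_seconds / 2.0

# The Earth-Moon distance is roughly 384,400 km, so a round trip
# takes about 2.56 seconds.
t = 2 * 384_400_000.0 / C
d = distance_from_round_trip(t)
print(round(d / 1000))  # → 384400 (km)
```

In practice the report describes combining such measurements with ground-telescope pointing data to fix the rover’s coordinates, but the timing relation above is the core of any ranging system of this kind.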

Other details, while technical, also give some insight into the culture of the mission, such as the careful work to eliminate issues in the long-range radio communication system. One issue, for example, was worked on with such thoroughness that it resulted in one of the devices using more resources than it was allocated, a problem that was outlined in the report.

The document also provides insight into on-Earth technological capabilities of the time. While it is mostly typed, certain mathematical symbols have had to be written in by hand, and the report also features a number of diagrams and graphs that have been painstakingly hand-drawn.

A hand-drawn graph from the report, showing temperature changes during one of the monitoring sessions during the mission

Lunokhod-2 was the second of two unmanned lunar rovers to be landed on the Moon by the Soviet Union within the Lunokhod programme, having been delivered via a soft landing by the unmanned Luna 21 spacecraft in January 1973.

In operation between January and June of that year, the robot covered a distance of 39km, meaning it still holds the lunar distance record to this day.

One of only four rovers to be deployed on the lunar surface, Lunokhod-2 was the last rover to visit the Moon until December 2013, when Chinese lunar rover Yutu made its maiden visit.

Robot takes first steps towards building artificial lifeforms

A robot equipped with sophisticated AI has successfully simulated the creation of artificial lifeforms, in a key first step towards the eventual goal of creating true artificial life.

The robot, which was developed by scientists at the University of Glasgow, was able to model the creation of artificial lifeforms using unstable oil-in-water droplets. These droplets effectively played the role of living cells, demonstrating the potential of future research to develop living cells based on building blocks that cannot be found in nature.

Significantly, the robot also successfully predicted their properties before they were created, even though this could not be achieved using conventional physical models.

The robot, which was designed by Glasgow University’s Regius Chair of Chemistry, Professor Lee Cronin, is driven by machine learning and the principles of evolution.

It has been developed to autonomously create oil-in-water droplets with a host of different chemical makeups and then use image recognition to assess their behaviour.

Using this information, the robot was able to engineer droplets to have different properties. Those which were found to be desirable could then be recreated at any time, using a specific digital code.
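The evolve-and-replay loop described above can be sketched with a toy genetic algorithm. In the real platform, image recognition scores each droplet’s behaviour; in this sketch a stand-in fitness function plays that role, and a “recipe” of four component fractions acts as the droplet’s reproducible digital code. Everything below is an illustrative assumption, not the Glasgow team’s actual code.

```python
# Toy evolutionary loop: generate droplet "recipes", score them with a
# stand-in fitness function, keep the best, and mutate the survivors.
import random

random.seed(0)  # deterministic for reproducibility

def make_recipe():
    """A random mixture of four components, normalised to sum to 1."""
    parts = [random.random() for _ in range(4)]
    total = sum(parts)
    return tuple(p / total for p in parts)

def fitness(recipe):
    """Stand-in for the image-based behaviour score (higher is better)."""
    target = (0.4, 0.3, 0.2, 0.1)  # hypothetical 'desirable' mixture
    return -sum((r - t) ** 2 for r, t in zip(recipe, target))

def mutate(recipe, scale=0.05):
    """Perturb a recipe slightly, keeping fractions positive and normalised."""
    parts = [max(1e-6, p + random.gauss(0, scale)) for p in recipe]
    total = sum(parts)
    return tuple(p / total for p in parts)

# Evolve: each generation, keep the five best recipes and refill the
# population with mutated copies of the survivors.
population = [make_recipe() for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
# Because the recipe is just a tuple of numbers, the best-performing
# droplet can be recreated on demand from this digital code.
```

The key design point mirrored here is that the droplet is fully specified by its recipe, so any droplet the search discovers can be reproduced later simply by replaying its code.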

“This work is exciting as it shows that we are able to use machine learning and a novel robotic platform to understand the system in ways that cannot be done using conventional laboratory methods, including the discovery of ‘swarm’ like group behaviour of the droplets, akin to flocking birds,” said Cronin.

“Achieving lifelike behaviours such as this are important in our mission to make new lifeforms, and these droplets may be considered ‘protocells’ – simplified models of living cells.”

One of the oil droplets created by the robot

The research, which is published today in the journal PNAS, is one of several research projects being undertaken by Cronin and his team within the field of artificial lifeforms.

While the overarching goal is moving towards the creation of lifeforms using new and unprecedented building blocks, the research may also have more immediate potential applications.

The team believes that their work could also have applications in several practical areas, including the development of new methods for drug delivery or even innovative materials with functional properties.