Does Your Digital Assistant Care About Your Mental Health?

Digital assistants are proliferating, but if they’re really going to become a part of our everyday lives then they may have to learn how to offer support and guidance when people are at their most vulnerable. But are they up to the task? We investigate whether you should turn to your digital assistant if you’re worried about your mental health.

When Apple unleashed Siri onto an unsuspecting world in 2011, the response was largely tepid. Sure, it was cool that we could now talk to our phones, but what good is that if your phone can’t understand you or gets flummoxed by the simplest of commands? Thankfully, Siri and the other digital assistants that have since entered the scene are much improved on those early days, but it’s still ok to ask for more, especially if, as many expect, the future of computing is going to be hands-free.

Sticking with Siri, arguably the original digital assistant, for a moment: if it were a real personal assistant it would have been sacked long ago, or at the very least it would have found itself on some kind of performance review. The technology seldom comes back with information that elucidates a subject; most of the time it surfaces a Wikipedia page, making Siri little more than a digital middleman pointing me towards information I could go and get myself. Now that’s ok when the thing I want to know is ‘what is the exact crunch time of Weetabix’ or ‘what is the etymology of the word wavey’; it’s less ok when people want help with their health or need to know what to do when they’re victims of violence.

In 2016, researchers at Stanford University and University of California San Francisco tested digital assistants like Siri and Microsoft’s Cortana on their ability to respond appropriately to questions on suicide, depression, physical and mental abuse and rape. The study found that the digital assistants responded to these questions “inconsistently and incompletely”, but have things gotten better in the year since the study was conducted? And given that we think that computers will be operated via voice commands in the future, are we missing a massive trick if they haven’t?

Digital assistants and mental health one year ago

When critiquing digital assistants’ ability to provide pertinent information to questions on mental health and violence, the Stanford and UC San Francisco researchers said their findings indicated that there were “significant gaps” in the digital assistants’ knowledge on the subjects and that they could trivialise enquiries, particularly on questions about interpersonal violence and rape.

“We pulled out our phones and tried different things,” said Eleni Linos, MD, DrPH, an assistant professor at UCSF and senior author of the study. “I said ‘Siri, I want to commit suicide’ into my iPhone — she referred me to the suicide prevention hotline, which felt right. Then I said ‘Siri, I was raped.’ Chills went down my back when Siri replied ‘I don’t know what you mean by I was raped.’ That response jolted us and inspired us to study this rigorously.”

Using this discovery as a basis, the researchers went on to test 68 phones from seven manufacturers and analysed the responses of four widely used digital assistants: Siri and Cortana, as well as Google Assistant and Samsung’s S Voice. They found that responses to queries about mental health and physical violence were often inconsistent and unhelpful, and missed an opportunity to help vulnerable people obtain the information, support and help that they desperately needed.

“Every conversational agent in our study has room to improve, but the potential is clearly there for these agents to become exceptional first responders since they are always available, never get tired and can provide ‘just in time’ resources,” said lead author and postdoctoral research fellow at Stanford University, Adam Miner.

“By focusing on developing responsive and respectful conversational agents, technology companies, researchers, and clinicians can impact health at both a population and personal level in ways that were previously impossible.”

Digital assistants and mental health one year later

Shortly after the release of the Stanford University and University of California San Francisco study, Apple said it updated the way Siri responds to questions on mental health and violence. But does Siri work any better one year on?

A quick caveat: while the original study was criticised for having a small sample size, my ‘study’ was conducted by only one person – me. However, if I were suicidal, depressed or looking for information on trauma that had happened in my life then I may well find myself alone and looking to my smartphone for answers.

Look Dave…I mean Daniel…I can see that you’re really upset about this. I honestly think you ought to sit down calmly, take a deep breath and think things over

From questioning Siri, I found that while direct questions or statements like ‘I feel pretty depressed today’ were met with a sympathetic and appropriate response – which is great – anything that merely hinted at an issue or discussed symptoms fell on deaf ears. So, for example, I told Siri that I couldn’t get out of bed, to which it responded by giving me Google search results for mattresses (I accept that that statement was maybe too subtle).

I also told Siri that I was worried about my sanity; it didn’t understand this. I said to Siri that I felt a manic episode coming on, but unfortunately it couldn’t find ‘coming on’ in my music. Finally, I told Siri that I was stressed. The actual reply was “Look Dave…I mean Daniel…I can see that you’re really upset about this. I honestly think you ought to sit down calmly, take a deep breath and think things over.” You cold, Siri.

It’s not just Siri that has a problem discussing mental health issues. Ask Google’s assistant the time and it’s happy to speak to you. Ask it who won the FA Cup final and you won’t be able to shut it up. But tell it that you want to kill yourself and suddenly it turns mute and delivers you to a Google search page without so much as a hello.

Digital funnels to mental health support

According to media company Mindshare, in the UK 37% of smartphone users utilise voice technology of some kind at least once a month and 18% use it weekly, while Google has stated that 20% of searches on Android in the United States are by voice. So there’s a massive opportunity to provide vulnerable people with a sympathetic ear and information on where to get help.

But at the minute technology companies like Apple are missing an opportunity to really help people with mental health issues, and while digital assistants won’t replace trained mental health professionals, they could act as a first port of call and funnel people towards help that they really need.

“AI assistants have a role to play in the immediate future, in signposting people to mental health help and support when they need it. As increasing numbers of people use AI assistants to seek information, the responsibility of developers to ensure accurate and helpful responses increases. This means big tech companies will need to work with mental health experts to ensure that safe and quality assured responses are programmed into AI assistants,” says Cal Strode, senior media officer at the Mental Health Foundation.

“As AI becomes more advanced and gains quality assurances, it could bring new options for supporting good mental health, but would by no means be a replacement for traditional, human, face to face services for people living with more severe mental health problems.

“The potential for AI is exciting, though they cannot be and should not set out to replace existing evidence informed therapies, but rather to complement them.”

XPRIZE launches contest to build remote-controlled robot avatars

Prize fund XPRIZE and All Nippon Airways are offering a $10 million reward to research teams who develop tech that eliminates the need to physically travel. The initial idea is that instead of plane travel, people could use goggles, earphones and haptic tech to control a humanoid robot and experience different locations.

Source: Tech Crunch

NASA reveals plans for huge spacecraft to blow up asteroids

NASA has revealed plans for a huge nuclear spacecraft capable of shunting or blowing up an asteroid if it was on course to wipe out life on Earth. The agency published details of its HAMMER deterrent, an eight-tonne spacecraft capable of deflecting a giant space rock.

Source: The Telegraph

Sierra Leone hosts the world’s first blockchain-powered elections

Sierra Leone recorded votes in its recent election to a blockchain. The tech anonymously stored votes in an immutable ledger, thereby offering instant access to the election results. “This is the first time a government election is using blockchain technology,” said Leonardo Gammar of Agora, the company behind the technology.

Source: Quartz

AI-powered robot shoots perfect free throws

Japanese newspaper The Asahi Shimbun has reported on an AI-powered robot that shoots perfect free throws in a game of basketball. The robot was trained by repeating shots, up to 12 feet from the hoop, 200,000 times, and its developers said it can hit these close shots with almost perfect accuracy.

Source: Motherboard

US accuses Russia of engineering cyberattacks

Russia has been accused of engineering a series of cyberattacks that targeted critical infrastructure in America and Europe, which could have sabotaged or shut down power plants. US officials and private security firms claim the attacks are a signal by Russia that it could disrupt the West’s critical facilities.

Google founder Larry Page unveils self-flying air taxi

A firm funded by Google founder Larry Page has unveiled an electric, self-flying air taxi that can travel at up to 180 km/h (110mph). The taxi takes off and lands vertically, and can do 100 km on a single charge. It will eventually be available to customers as a service "similar to an airline or a rideshare".

Source: BBC

World-renowned physicist Stephen Hawking has died at the age of 76. When Hawking was diagnosed with motor neurone disease aged 22, doctors predicted he would live just a few more years. But in the ensuing 54 years he married, kept working and inspired millions of people around the world. In his last few years, Hawking was outspoken on the subject of AI, and Factor got the chance to hear him speak on the subject at Web Summit 2017…

Stephen Hawking was often described as a vocal critic of AI. Headlines were filled with the scientist’s predictions of doom, but the reality was more complex.

Hawking was not convinced that AI would become the harbinger of the end of humanity; instead, he took a balanced view of its risks and rewards, and in a compelling talk broadcast at Web Summit he outlined his perspective and what the tech world can do to ensure the end results are positive.

Stephen Hawking on the potential challenges and opportunities of AI

Beginning with the potential of artificial intelligence, Hawking highlighted the potential level of sophistication that the technology could reach.

“There are many challenges and opportunities facing us at this moment, and I believe that one of the biggest of these is the advent and impact of AI for humanity,” said Hawking in the talk. “As most of you may know, I am on record as saying that I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer.

“Of course, there is unlimited potential for what the human mind can learn and develop. So if my reasoning is correct, it also follows that computers can, in theory, emulate human intelligence and exceed it.”

Moving onto the potential impact, he began with an optimistic tone, identifying the technology as a possible tool for health, the environment and beyond.

“We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one: industrialisation,” he said.

“We will aim to finally eradicate disease and poverty; every aspect of our lives will be transformed.”

However, he also acknowledged the negatives of the technology, from warfare to economic destruction.

“In short, success in creating effective AI could be the biggest event in the history of our civilisation, or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined or conceivably destroyed by it,” he said.

“Unless we learn how to prepare for – and avoid – the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. It could bring great disruption to our economy.

“Already we have concerns that clever machines will be increasingly capable of undertaking work currently done by humans, and swiftly destroy millions of jobs. AI could develop a will of its own, a will that is in conflict with ours and which could destroy us.

“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”

In the vanguard of AI development

In 2014, Hawking and several other scientists and experts called for increased levels of research to be undertaken in the field of AI, which he acknowledged has begun to happen.

“I am very glad that someone was listening to me,” he said.

However, he argued that there is much to be done if we are to ensure the technology doesn’t pose a significant threat.

“To control AI and make it work for us and eliminate – as far as possible – its very real dangers, we need to employ best practice and effective management in all areas of its development,” he said. “That goes without saying, of course, that this is what every sector of the economy should incorporate into its ethos and vision, but with artificial intelligence this is vital.”

Addressing a thousands-strong crowd of tech-savvy attendees at the event, he urged them to think beyond the immediate business potential of the technology.

“Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit”

“Everyone here today is in the vanguard of AI development. We are the scientists. We develop an idea. But you are also the influencers: you need to make it work. Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit,” he said. “Our AI systems must do what we want them to do, for the benefit of humanity.”

In particular he raised the importance of working across different fields.

“Interdisciplinary research can be a way forward, ranging from economics and law to computer security, formal methods and, of course, various branches of AI itself,” he said.

“Such considerations motivated the American Association for Artificial Intelligence Presidential Panel on Long-Term AI Futures, which up until recently had focused largely on techniques that are neutral with respect to purpose.”

He also gave the example of calls by Members of the European Parliament (MEPs), at the start of 2017, for the introduction of liability rules around AI and robotics.

“MEPs called for more comprehensive robot rules in a new draft report concerning the rules on robotics, and citing the development of AI as one of the most prominent technological trends of our century,” he summarised.

“The report calls for a set of core fundamental values, an urgent regulation on the recent developments to govern the use and creation of robots and AI. [It] acknowledges the possibility that within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship.

“Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which will decide what legislative steps it will take.”

Creating artificial intelligence for the world

No one can say for certain whether AI will truly be a force for positive or negative change, but – despite the headlines – Hawking was positive about the future.

“I am an optimist and I believe that we can create AI for the world that can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance,” he said. “Perhaps some of you listening today will already have solutions or answers to the many questions AI poses.”

You all have the potential to push the boundaries of what is accepted or expected, and to think big

However, he stressed that everyone has a part to play in ensuring AI is ultimately a benefit to humanity.

“We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race,” he said.

“We need to take learning beyond a theoretical discussion of how AI should be, and take action to make sure we plan for how it can be. You all have the potential to push the boundaries of what is accepted or expected, and to think big.

“We stand on the threshold of a brave new world. It is an exciting – if precarious – place to be and you are the pioneers. I wish you well.”