Edward Snowden on the future of surveillance

He destroyed his life to tell us about inter-governmental mass surveillance, so what does Edward Snowden think the future holds for our data?

Your government, my government, and several other governments around the world have access to our data. Not just the basics, but reams of the stuff, ranging from passive-aggressive work emails to that time you got drunk and bought something hilarious on Amazon.

This may bother you, but then again, it may not. It did, however, bother Edward Snowden, enough to give up his job, his home and the ability to see his family.

“I burned my life to the ground,” he says, speaking as always from his ironically Orwellian position as a talking head on a screen, broadcast from Moscow, Russia, his current place of sanctuary.

In this case, he’s speaking at the London-based FutureFest, held back in March. And although he isn’t there in person, the room is packed to the rafters with people aching to hear what he has to say.

“There’s a fork in the road today, and this is one of the few places in the global political debate where we have a meaningful choice to be made about where we steer this,” he explains.

“If we don’t do anything, if we go along with the status quo, we’re going to have a mass surveillance world.

“And what I mean by that is that we’re not just worried about the UK as a government, we’re worried about every government in the world doing this – even the smallest ones – and additional companies being able to do this, additional criminals being able to do this, and having access to the entirety of the human pool of communications that’s washing back and forth across the Earth.”

Mechanics of surveillance

While the vast majority of people have heard about Edward Snowden and the revelations about the NSA’s “bulk collection” practices, few of us stopped to really investigate the exact details of what was happening.

Luckily, Snowden, as a former Hawaii-based NSA contractor on $122,000 a year, has pretty detailed knowledge of what he more plainly describes as mass surveillance, and particularly its use in different countries.

“When you communicate with a server, it’s very likely not in your country. It’s somewhere else in the world, and as soon as that communication leaves your borders, you lose those protections; it’s sort of a free-for-all. Anyone can intercept it, they can analyse it, they can monitor it, they can store it for increasing and ultimately permanent periods of time,” he explains.

Terrorists are not the key target; these powers don’t usefully thwart them

“The systems are all collected and put into one bucket. It gets filled in Canada; it gets filled in the UK; it gets filled in the US; it gets filled in New Zealand; it gets filled in Australia. But they’re all searchable from the same user tokens.”

“Based on which agency you work for, based on what sort of authorities you’ve been provided on the technical side of it determines which bucket you get to search,” adds Snowden.

“But really they all flow to the same home. This is called a federated query system.

“So you send one search, it goes to all these buckets around the world, and it searches through everybody’s communications – all of your private lives – and it notices anything interesting.”
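To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of the federated-query idea Snowden describes: one search term is fanned out to several regional data stores (“buckets”), filtered by the searcher’s authorities, and the hits are merged into a single result. Every bucket, name and record below is invented for the example; it does not describe any real agency system.

```python
# Purely illustrative sketch of a federated query: one search is fanned out
# to several regional "buckets" and the hits are merged. All names and data
# here are invented for the example; this is not any real system.

BUCKETS = {
    "UK": ["alice: holiday plans", "bob: trade negotiation notes"],
    "US": ["carol: late-night online order", "bob: work email rant"],
    "NZ": ["dave: environmental activism mailing list"],
}

# Which buckets a given analyst's authorities allow them to search.
AUTHORITIES = {
    "analyst_uk": {"UK", "NZ"},
    "analyst_us": {"UK", "US", "NZ"},
}

def federated_query(analyst: str, term: str) -> list:
    """Send one search to every bucket the analyst is authorised for,
    then merge the hits into a single result set."""
    allowed = AUTHORITIES.get(analyst, set())
    hits = []
    for region, records in BUCKETS.items():
        if region not in allowed:
            continue
        hits.extend(f"[{region}] {r}" for r in records if term in r)
    return hits

if __name__ == "__main__":
    # One query, many buckets: everything mentioning "bob" that this
    # analyst's authorities permit them to see.
    for hit in federated_query("analyst_us", "bob"):
        print(hit)
```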

Truth behind terrorism

The main argument for this surveillance has been terrorism, the political excuse of the century that has justified everything from the US’ Patriot Act to the UK’s planned departure from the European Convention on Human Rights. But according to Snowden, the reality is quite different.

“When we look at the full-on mass surveillance watching everyone in the country in the United States, it doesn’t work,” he says.

“It didn’t stop the attacks in Boston, where we knew who these individuals were, it didn’t stop the underwear bomber, whose father had walked into an embassy and warned us about this individual before he walked onto an airplane, and it’s not going to stop the next attacks either because they’re not public safety programmes, they’re spying programmes.”

As an asset for spying, this form of mass surveillance is, according to Snowden, unparalleled in its ability to provide information.

We cannot simply scare people into giving up their rights

“They are extraordinarily valuable in terms of spying. You can pick any individual and learn everything about them,” he explains.

“That’s not necessarily going to help stop terrorist attacks, because Bin Laden, for example, stopped using a cell phone in 1998. Terrorists are not the key target; these powers don’t usefully thwart them.

“But they do help you understand who’s involved in environmental activism, they do help you know who’s involved in trade negotiations that you want an advantage in as a government, they do help you know about the military organisations of foreign countries, and some of these things are valuable, and we do want to retain these capabilities to some extent.”

Providing a choice

Snowden argues that the issue is that governments have been fundamentally dishonest about the way they represent these programmes, and in doing so have denied us the ability to choose whether we consider the information gained to be worth the privacy we have lost.

“We have to have honesty; we cannot simply scare people into giving up their rights on the basis that ‘oh, this protects us from terrorism’,” he says.

“The question that we as a society have to ask is are our collective rights worth a small relative advantage in our ability to spy on other countries and foreign citizens?

“I have my opinion about that, but we all collectively have to come to an opinion about that and we have to bargain forcefully and demand that the government recognise that mass surveillance does not prevent acts of terrorism.”

Power to seize

It may surprise you to learn that Snowden himself is not especially radical in his views about governments’ ability to access their citizens’ data. He does not in any way condone mass surveillance, but he does regard lawful access backed by the digital equivalent of a warrant as acceptable.

“I think it’s reasonable that the government, when it has a warrant from a court, enjoy extraordinary powers,” he says.

“This is no different from having the police able to get a warrant to search your house, kick in your door because they think you’re an arms dealer or something. There needs to be a process involved that needs to be public and needs to be challengeable in court at all times.”

He sees this approach, however, as a long way from the current reality.

“This whole pre-criminal investigation where we watch everybody the whole time, just in case, is really an extraordinary departure from the Western liberal tradition. We are all today being monitored in advance of any criminal suspicion. And I think that’s terrifying, a deeply illiberal concept and something that we should reject.”

“In liberal societies we don’t typically require citizens to rearrange their activities, their lives, the way they go about their business, to make it easy for the police to do their work,” he says.

“When police officers knock on your door with a warrant, they don’t expect you to give them a tour. It’s supposed to be an adversarial process so that these extraordinary powers are used only when there’s no alternative.

“Only when they’re absolutely necessary and only when they are proportionate to the threat faced by these individuals. And that’s what we do by shifting it from mass communications, bulk collections, and put it back on the targeted, individualised basis where they have to show they have a reasonable suspicion that this particular individual is involved in wrongdoing ahead of interception.”

Technical solution

While there are continued efforts to bring an end to mass surveillance through legal means, most recently with the ruling by a New York federal appeals court that the collection of Americans’ metadata and phone records is unlawful, there is scepticism that governments will ever fully stop mass surveillance, given its tremendous spying benefits.

“We’re losing leverage. Governments are increasingly gaining more power and we are increasingly losing our ability to control that power and even to be aware of that power,” he says.

Although he is keen to remind everyone that he is just the “mechanism of disclosure”, Snowden does have some ideas about how we can turn the tables.

“Fundamentally, changes to the fabric of the Internet, our methods of communication, can enforce our rights, they can enforce our liberties, our values, on governments,” he says.

Increasingly all of our elected officials are pulled from the same class

“By leaning on companies, by leaning on infrastructure providers, by leaning on researchers, graduate students, postdocs, even undergrads to look at the challenges of having untrusted Internet, we can restructure that communications fabric in a way that’s encrypted.

“And by encrypted I mean the only people who can read and understand the communications across those wires are the people at the two distant ends. This is called end-to-end encryption, and what we’re doing there is making it much more difficult to perform mass surveillance.”
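As a rough illustration of what end-to-end encryption means in practice, the sketch below uses the open-source PyNaCl library (an assumption made for the example, not a tool Snowden names): each endpoint holds its own private key, only public keys are exchanged, and anything intercepted on the wire is unreadable ciphertext.

```python
# Minimal end-to-end encryption sketch using the PyNaCl library
# (pip install pynacl). Only the two endpoints hold the private keys,
# so anyone intercepting the ciphertext in transit learns nothing useful.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; only public keys are exchanged.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")  # a random nonce is prepended

# Anything observed on the wire is just opaque ciphertext.
print(ciphertext.hex())

# Only Bob, holding his private key, can decrypt at the far end.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```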

Not only does he believe this is the way forward, but suspects that this will be the likely scenario as we seek to resolve this issue.

“I think it is more likely than not that the technical side of the argument will come in, because it’s much easier, I think, to protect communications rather than it is to enforce legislation in every country in the world.”

Long road ahead

However, the future is likely to be fraught with challenges as we seek to put an end to mass surveillance, Snowden warns.

“I think we’re going to see disasters on both sides, I think we’re going to see it exploited callously and relentlessly by governments to purposes that undermine the progress of the public’s interest in favour of the elite’s interests,” he says.

“Increasingly all of our elected officials are pulled from the same class. These aren’t normal people; they’re not like you and me.

“And when we have these people representing everyone in our society, millions of people, the question becomes, are we really going to get policies that reflect the broad social interests, the broad public interests, or more of class interests?”

However, this does not mean we should give up, and simply ignore mass surveillance as we get on with our daily lives.

“We have to at least say that this is happening. We can’t wish it away, we can’t say that it’s something that it’s not. We have to confront the reality of our world, and make the hard decisions about which way we want to move forward,” he advises.

Snowden’s peace

Having, by his own admission, wrecked his life to bring us this information, Snowden might be expected to be deeply concerned about the prospect that mass surveillance will continue.

However, he is surprisingly at peace with the idea.

“It’s very possible that this will be debated by governments, it will be debated by the public, and nothing will change,” he says.

“But that’s alright. I did my part. All I was was a mechanism. So I’m ultimately satisfied that we know a little bit more about how the world really works.”

But as a man currently stranded in Russia, with little prospect of being able to return to his home country, does Snowden worry about how his decision will affect his life going forward? Apparently not.

“Weirdly I don’t think about my future,” he says, with a look of genuine contentment.

“Before any of this happened, I had a much more forward-looking perspective. You think about retirement; you think about vacations; you think about where you’re going next.

“One of the unexpectedly liberating things of becoming this global fugitive is the fact that you don’t worry so much about tomorrow. You think more about today, and unexpectedly, I like that very much.”

XPRIZE launches contest to build remote-controlled robot avatars

XPRIZE and All Nippon Airways are offering a $10 million prize to research teams that develop technology eliminating the need to physically travel. The initial idea is that, instead of flying, people could use goggles, earphones and haptic tech to control a humanoid robot and experience different locations.

Source: TechCrunch

NASA reveals plans for huge spacecraft to blow up asteroids

NASA has revealed plans for a huge nuclear spacecraft capable of shunting or blowing up an asteroid if it were on course to wipe out life on Earth. The agency published details of its Hammer deterrent, an eight-tonne spacecraft capable of deflecting a giant space rock.

Source: The Telegraph

Sierra Leone hosts the world’s first blockchain-powered elections

Sierra Leone recorded votes in its recent election to a blockchain. The technology anonymously stored votes in an immutable ledger, offering instant access to the election results. “This is the first time a government election is using blockchain technology,” said Leonardo Gammar of Agora, the company behind the technology.

Source: Quartz

AI-powered robot shoots perfect free throws

Japanese news agency Asahi Shimbun has reported on an AI-powered robot that shoots perfect free throws in a game of basketball. The robot was trained by repeating shots from up to 12 feet from the hoop 200,000 times, and its developers say it can hit these close shots with almost perfect accuracy.

Source: Motherboard

US accuses Russia of engineering cyberattacks

Russia has been accused of engineering a series of cyberattacks that targeted critical infrastructure in America and Europe, which could have sabotaged or shut down power plants. US officials and private security firms claim the attacks are a signal by Russia that it could disrupt the West’s critical facilities.

Google founder Larry Page unveils self-flying air taxi

A firm funded by Google founder Larry Page has unveiled an electric, self-flying air taxi that can travel at up to 180 km/h (110 mph). The taxi takes off and lands vertically, and can cover 100 km on a single charge. It will eventually be available to customers as a service "similar to an airline or a rideshare".

Source: BBC

World-renowned physicist Stephen Hawking has died at the age of 76. When Hawking was diagnosed with motor neurone disease aged 22, doctors predicted he would live just a few more years. But in the ensuing 54 years he married, kept working and inspired millions of people around the world. In his last few years, Hawking was outspoken on the subject of AI, and Factor got the chance to hear him speak on the subject at Web Summit 2017…

Stephen Hawking was often described as a vocal critic of AI. Headlines were filled with predictions of doom from the scientist, but the reality was more complex.

Hawking was not convinced that AI would become the harbinger of the end of humanity; instead, he took a balanced view of its risks and rewards. In a compelling talk broadcast at Web Summit, he outlined his perspective and what the tech world can do to ensure the end results are positive.

Stephen Hawking on the potential challenges and opportunities of AI

Beginning with the potential of artificial intelligence, Hawking highlighted the potential level of sophistication that the technology could reach.

“There are many challenges and opportunities facing us at this moment, and I believe that one of the biggest of these is the advent and impact of AI for humanity,” said Hawking in the talk. “As most of you may know, I am on record as saying that I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer.

“Of course, there is unlimited potential for what the human mind can learn and develop. So if my reasoning is correct, it also follows that computers can, in theory, emulate human intelligence and exceed it.”

Moving onto the potential impact, he began with an optimistic tone, identifying the technology as a possible tool for health, the environment and beyond.

“We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one: industrialisation,” he said.

“We will aim to finally eradicate disease and poverty; every aspect of our lives will be transformed.”

However, he also acknowledged the negatives of the technology, from warfare to economic destruction.

“In short, success in creating effective AI could be the biggest event in the history of our civilisation, or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined or conceivably destroyed by it,” he said.

“Unless we learn how to prepare for – and avoid – the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. It could bring great disruption to our economy.

“Already we have concerns that clever machines will be increasingly capable of undertaking work currently done by humans, and swiftly destroy millions of jobs. AI could develop a will of its own, a will that is in conflict with ours and which could destroy us.

“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”

In the vanguard of AI development

In 2014, Hawking and several other scientists and experts called for increased levels of research to be undertaken in the field of AI, which he acknowledged has begun to happen.

“I am very glad that someone was listening to me,” he said.

However, he argued that there is much to be done if we are to ensure the technology doesn’t pose a significant threat.

“To control AI and make it work for us and eliminate – as far as possible – its very real dangers, we need to employ best practice and effective management in all areas of its development,” he said. “That goes without saying, of course, that this is what every sector of the economy should incorporate into its ethos and vision, but with artificial intelligence this is vital.”

Addressing a thousands-strong crowd of tech-savvy attendees at the event, he urged them to think beyond the immediate business potential of the technology.

“Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit”

“Everyone here today is in the vanguard of AI development. We are the scientists. We develop an idea. But you are also the influencers: you need to make it work. Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit,” he said. “Our AI systems must do what we want them to do, for the benefit of humanity.”

In particular he raised the importance of working across different fields.

“Interdisciplinary research can be a way forward, ranging from economics and law to computer security, formal methods and, of course, various branches of AI itself,” he said.

“Such considerations motivated the American Association for Artificial Intelligence Presidential Panel on Long-Term AI Futures, which up until recently had focused largely on techniques that are neutral with respect to purpose.”

He also gave the example of calls at the start of 2017 by Members of the European Parliament (MEPs) for the introduction of liability rules around AI and robotics.

“MEPs called for more comprehensive robot rules in a new draft report concerning the rules on robotics, and citing the development of AI as one of the most prominent technological trends of our century,” he summarised.

“The report calls for a set of core fundamental values, an urgent regulation on the recent developments to govern the use and creation of robots and AI. [It] acknowledges the possibility that within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship.

“Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which will decide what legislative steps it will take.”

Creating artificial intelligence for the world

No one can say for certain whether AI will truly be a force for positive or negative change, but – despite the headlines – Hawking was positive about the future.

“I am an optimist and I believe that we can create AI for the world that can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance,” he said. “Perhaps some of you listening today will already have solutions or answers to the many questions AI poses.”

You all have the potential to push the boundaries of what is accepted or expected, and to think big

However, he stressed that everyone has a part to play in ensuring AI is ultimately a benefit to humanity.

“We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfill our potential and create a better world for the whole human race,” he said.

“We need to take learning beyond a theoretical discussion of how AI should be, and take action to make sure we plan for how it can be. You all have the potential to push the boundaries of what is accepted or expected, and to think big.

“We stand on the threshold of a brave new world. It is an exciting – if precarious – place to be and you are the pioneers. I wish you well.”