2017 will give us unprecedented information about the world around us. But can privacy survive the future?

In a world of increased connectivity, where our presence and behaviour are increasingly being tracked by the technology we use, is there a place for privacy? We hear from Scobleizer’s Robert Scoble, author, blogger and VR and mixed reality evangelist, and Salil Shetty, secretary-general of Amnesty International, about whether there’s space to hide as the technologies of 2017 provide us with ever-more data about the world around us.

From 2017, according to technology evangelist Robert Scoble, we are going to see technologies that give us unprecedented information about the world around us, including the people in it.

“The next iPhone is going to be a clear iPhone with a 3D sensor that is so sensitive that it can see your heart beating from about 3ft away,” said Scoble, CEO of Scobleizer, at a debate at Web Summit, during which he spent the entire time wearing a Microsoft Hololens. “It’s so sensitive it can see how hard you’re touching on a desk and it can see the fibres on a jacket from 3ft away so it can check its authenticity from that distance.

“It’s the same technology that is going to be running our self-driving cars, or a very similar technology, and is already being used at Qualcomm in drones to see the world. We are heading into a mixed reality world; one where things like this Microsoft Hololens are going to be very commonplace.”

In a world where digital technology seems to be stripping away our privacy at every turn, however, this may not be the best news.

“The world we’re about to enter is going to bring us huge new increases in functionality and features, and they do come with a scary price: I will know a lot about you. Soon,” said Scoble with glee.

Value of technology’s utility

In reality, any technologies that do infringe further on our privacy will likely be accepted, Scoble argues, because they will provide us with knowledge and abilities that will enrich our lives.

“The utility of all these technologies that are coming are going to be extraordinary,” he said. “They’re going to save my kid’s life from killing himself in a car; they’re going to make it easy for me to walk into a shopping mall and find the blue jeans; it’s going to let me play new kinds of video games with my kids in virtual reality, augmented reality.”

Image courtesy of Web Summit

He gave the example of a scenario familiar to many convention regulars: where you are in the presence of a person you know is important, but you can’t work out who they are. This situation was experienced by Scoble himself when he was talking to Peter Piot at the World Economic Forum earlier this year.

“He had a badge on so I knew his name – I knew he worked for the Bill and Melinda Gates Foundation – but I couldn’t get on my phone because I couldn’t get Wi-Fi, and I couldn’t figure out who he was,” he said.

In this instance, it was obvious to Scoble that Piot was extremely important, but not why.

“I did know he was highly technical because of how he was talking to people, and I knew he was a god among people because everybody was genuflecting, so I knew he was important in some way but I couldn’t figure out how.”

In the future, Scoble said, this problem would be resolved, because mixed reality glasses would provide the wearer with pertinent information pulled from the web about those around them.

“Soon I’m going to have glasses with LinkedIn right here that’s going to tell me who he is,” he said, pointing to his eyes. “The first line on LinkedIn is going to say he discovered the Ebola virus. I wish I had known that when I was talking to him!”

Protecting privacy

As exciting as this technology is, we still need to maintain a certain level of personal privacy in this increasingly connected reality. Salil Shetty, secretary-general of Amnesty International, was keen to remind the assembled crowd of developers, investors and tech enthusiasts of the importance of something we often blindly take for granted.

“Privacy in a sense is being used as shorthand for human rights now because privacy is a key enabler for freedom of expression, freedom of speech and many other key human rights,” he said.

Privacy is a key enabler for freedom of expression, freedom of speech

“Every day of the week, governments are using the same power of digital technology to crack down on dissent, on freedom of expression, on opposition. And it’s very different if you’re having this conversation in mature democracies, say like the United States, but Amnesty’s work, a lot of it, is in places like Ethiopia, Egypt, Vietnam, China.

“If you’re raising your voice against the government in any of these places and you do not have the privilege of privacy, you’re dead meat.”

He gave the example of the Malaysian cartoonist Zunar who, according to Shetty, “had 11 sedition charges against him, one for each tweet”.

“Journalists in Mexico are being hounded because of what they do,” he added. “If you want to meet journalists in Turkey right now where would you go? You go to jail, that’s where they are. And a lot of this is happening because of exposure online. Women, minorities, LGBT activists, all being hounded.”

Technology: the cause and solution

However, although technology can expose people to human rights abuses, and any technology that is developed has the potential to increase this, Shetty believes technology can also provide the answer to this problem.

“It’s not a question in my view as to whether it’s privacy or human rights, technology or human rights. I think the question is can we make it technology for human rights? How do we make it work for human rights?” he queried the 15,000-strong crowd of attendees.

Amnesty International secretary-general Salil Shetty. Image and featured image courtesy of Amnesty International

“I personally believe that in some ways digital technology’s expansion has done more for making people aware of what their human rights are and bringing to them the capability of claiming their human rights, and holding governments and companies accountable for their human rights violations.”

But using technology to protect privacy also means applying one rule to everyone, no matter who they are, according to the Amnesty secretary-general.

“We’ve had many battles with Apple. We’re having one right now about the potential use of child labour in the Democratic Republic of the Congo in the production of lithium batteries, which are in every single device,” he said. “But on [encryption] they are right. There is no backdoor only for good guys.

“You have a backdoor, you have a backdoor everyone is going to enter from there. And so on the FBI case I think Apple has absolutely taken the right stand and we were very much with them on this.”

Old rules, new reality

While government surveillance can be fought against by maintaining strict encryption, the everyday creep that new mixed-reality devices are set to provide is harder to counter.

For Scoble, the answer lies in the existing laws we have, which can be reapplied to the new abilities technology has given us.

“In journalism school we learned about the difference between public and private, and a lot of these rules still will apply in this 3D world,” he said. “We’re heading to a world where the old rules still have some value to talk about, right? In a public street I have the permission to take a picture of you, which actually helps with human rights because if you’re getting shot by the cops you might want that picture of you to be displayed to the world.”

In spaces such as your own home, there is an expectation of privacy, meaning tighter rules already apply.

“The rules change from the publicness of the public street to: where are you in your own private world, did you have expectations of privacy? Did you close the drapes so nobody could take a picture of you inside?” said Scoble. “The more things you do like that, in a court of law you will have more of an expectation of privacy to show the judge that hey, somebody was breaking the rules when they took a drone into my window.”

“The principles are the same,” agreed Shetty. “So when it comes to individuals we would go for maximum privacy, but when it comes to things which are of public interest we go for maximum transparency.”

However, there are times when new technologies will be required to assist with the protection of this privacy.

“This 3D sensor on your glasses is also going to be able to capture you in a locker room, or somewhere inappropriate, and we have to have technology that turns the glasses off because a lot of us are going to forget we have them on,” said Scoble.

“Particularly when we get to contact lenses in 10 or 15 years, we’re going to forget we have them on and we’re not going to take them off just to go into a restroom, right?

“But they’re going to be capturing stuff about what’s going on in those places, so we need new kinds of technology to block it, because I’m not going to be one of the guys who are going to say we have the right to capture something in the bathroom in 3D when you walk in. No.”

XPRIZE launches contest to build remote-controlled robot avatars

Prize fund XPRIZE and All Nippon Airways are offering a $10 million reward to research teams who develop tech that eliminates the need to physically travel. The initial idea is that instead of plane travel, people could use goggles, earphones and haptic tech to control a humanoid robot and experience different locations.

Source: Tech Crunch

NASA reveals plans for huge spacecraft to blow up asteroids

NASA has revealed plans for a huge nuclear spacecraft capable of shunting or blowing up an asteroid if it were on course to wipe out life on Earth. The agency published details of its Hammer deterrent, an eight-tonne spacecraft capable of deflecting a giant space rock.

Source: The Telegraph

Sierra Leone hosts the world’s first blockchain-powered elections

Sierra Leone recorded votes in its recent election to a blockchain. The tech anonymously stored votes in an immutable ledger, offering instant access to the election results. “This is the first time a government election is using blockchain technology,” said Leonardo Gammar of Agora, the company behind the technology.

Source: Quartz

AI-powered robot shoots perfect free throws

Japanese news agency Asahi Shimbun has reported on an AI-powered robot that shoots perfect free throws in a game of basketball. The robot was trained by repeating shots, up to 12ft from the hoop, 200,000 times, and its developers say it can hit these close shots with almost perfect accuracy.

Source: Motherboard

Russia accused by the US of engineering cyberattacks

Russia has been accused of engineering a series of cyberattacks that targeted critical infrastructure in America and Europe, which could have sabotaged or shut down power plants. US officials and private security firms claim the attacks are a signal by Russia that it could disrupt the West’s critical facilities.

Google founder Larry Page unveils self-flying air taxi

A firm funded by Google founder Larry Page has unveiled an electric, self-flying air taxi that can travel at up to 180 km/h (110mph). The taxi takes off and lands vertically, and can do 100 km on a single charge. It will eventually be available to customers as a service "similar to an airline or a rideshare".

Source: BBC

World-renowned physicist Stephen Hawking has died at the age of 76. When Hawking was diagnosed with motor neurone disease aged 22, doctors predicted he would live just a few more years. But in the ensuing 54 years he married, kept working and inspired millions of people around the world. In his last few years, Hawking was outspoken on the subject of AI, and Factor got the chance to hear him speak on the subject at Web Summit 2017…

Stephen Hawking was often described as being a vocal critic of AI. Headlines were filled with predictions of doom from the scientist, but the reality was more complex.

Hawking was not convinced that AI would become the harbinger of the end of humanity; instead he took a balanced view of its risks and rewards, and in a compelling talk broadcast at Web Summit he outlined his perspective and what the tech world can do to ensure the end results are positive.

Stephen Hawking on the potential challenges and opportunities of AI

Beginning with the potential of artificial intelligence, Hawking highlighted the potential level of sophistication that the technology could reach.

“There are many challenges and opportunities facing us at this moment, and I believe that one of the biggest of these is the advent and impact of AI for humanity,” said Hawking in the talk. “As most of you may know, I am on record as saying that I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer.

“Of course, there is unlimited potential for what the human mind can learn and develop. So if my reasoning is correct, it also follows that computers can, in theory, emulate human intelligence and exceed it.”

Moving onto the potential impact, he began with an optimistic tone, identifying the technology as a possible tool for health, the environment and beyond.

“We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one: industrialisation,” he said.

“We will aim to finally eradicate disease and poverty; every aspect of our lives will be transformed.”

However, he also acknowledged the negatives of the technology, from warfare to economic destruction.

“In short, success in creating effective AI could be the biggest event in the history of our civilisation, or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined or conceivably destroyed by it,” he said.

“Unless we learn how to prepare for – and avoid – the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. It could bring great disruption to our economy.

“Already we have concerns that clever machines will be increasingly capable of undertaking work currently done by humans, and swiftly destroy millions of jobs. AI could develop a will of its own, a will that is in conflict with ours and which could destroy us.

“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”

In the vanguard of AI development

In 2014, Hawking and several other scientists and experts called for increased levels of research to be undertaken in the field of AI, which he acknowledged has begun to happen.

“I am very glad that someone was listening to me,” he said.

However, he argued that there is much to be done if we are to ensure the technology doesn’t pose a significant threat.

“To control AI and make it work for us and eliminate – as far as possible – its very real dangers, we need to employ best practice and effective management in all areas of its development,” he said. “That goes without saying, of course, that this is what every sector of the economy should incorporate into its ethos and vision, but with artificial intelligence this is vital.”

Addressing a thousands-strong crowd of tech-savvy attendees at the event, he urged them to think beyond the immediate business potential of the technology.

“Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit”

“Everyone here today is in the vanguard of AI development. We are the scientists. We develop an idea. But you are also the influencers: you need to make it work. Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit,” he said. “Our AI systems must do what we want them to do, for the benefit of humanity.”

In particular he raised the importance of working across different fields.

“Interdisciplinary research can be a way forward, ranging from economics and law to computer security, formal methods and, of course, various branches of AI itself,” he said.

“Such considerations motivated the American Association for Artificial Intelligence Presidential Panel on Long-Term AI Futures, which up until recently had focused largely on techniques that are neutral with respect to purpose.”

He also gave the example of calls at the start of 2017 by Members of the European Parliament (MEPs) for the introduction of liability rules around AI and robotics.

“MEPs called for more comprehensive robot rules in a new draft report concerning the rules on robotics, and citing the development of AI as one of the most prominent technological trends of our century,” he summarised.

“The report calls for a set of core fundamental values, an urgent regulation on the recent developments to govern the use and creation of robots and AI. [It] acknowledges the possibility that within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship.

“Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which will decide what legislative steps it will take.”

Creating artificial intelligence for the world

No one can say for certain whether AI will truly be a force for positive or negative change, but – despite the headlines – Hawking was positive about the future.

“I am an optimist and I believe that we can create AI for the world that can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance,” he said. “Perhaps some of you listening today will already have solutions or answers to the many questions AI poses.”

You all have the potential to push the boundaries of what is accepted or expected, and to think big

However, he stressed that everyone has a part to play in ensuring AI is ultimately a benefit to humanity.

“We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race,” he said.

“We need to take learning beyond a theoretical discussion of how AI should be, and take action to make sure we plan for how it can be. You all have the potential to push the boundaries of what is accepted or expected, and to think big.

“We stand on the threshold of a brave new world. It is an exciting – if precarious – place to be and you are the pioneers. I wish you well.”