Combining ecological and cyber threats: Author Thomas Waite on writing Trident Code

Soon-to-be-released cyber thriller Trident Code charts the events of a near future under threat from cyber and ecological terrorism. We speak to author Thomas Waite to find out more

Cyber thriller writer Thomas Waite has a new book out, and it couldn’t come at a better time.

Out on Tuesday, 26th May, Trident Code takes on what Waite describes as a trifecta of cyber terrorism, ecological terrorism and nuclear submarines, following a week where scientists have reported unforeseen ice loss in Antarctica and a now-arrested British submariner warned that the Trident submarine weapon system was a “disaster waiting to happen”.

The second in Waite’s Lana Elkins series, following Lethal Code, Trident Code is a thrilling near-future tale, showing a world where cyber threats risk far more than a single city.

We caught up with Waite to learn more about the book and how possible the events it charts really are.

Cyber terrorism and ecological terrorism make for an interesting combination in Trident Code. Why did you choose to combine these two threats?

After I had finished Lethal Code and started to think about the next novel in the series, which is now Trident Code, I was searching for a unique story with an unusual terrorist threat, and I was actually – of all places – watching television.

Probably some authors won’t admit that they get ideas from television, but I don’t mind, because it’s true.

Trident Code author Thomas Waite

I was watching CNN one night last year, and I remember there was a short segment on ISIS, a segment on the negotiations over Iran’s nuclear program, a recent report about a hacker trying to penetrate a US government agency, and a program about an attack on a very vulnerable part of our environment.

There was a report about the West Antarctic Ice Sheet and there was a NASA animation of Antarctica’s most threatened glaciers, showing the ice draining into the Amundsen Sea and an ominous warning that it could result in more than 10 feet of sea level rise.

But of course that was over a century or two, and that triggered the thought: well, what if you could make that happen sooner, and much sooner?

So that, in combination, led me to think: wow, no terrorist act, not even a nuclear bomb set off in a major city, could so unalterably change the Earth, and if you could make that happen quickly, that would be great from a plot standpoint. That was gold.

I realised that climate change could actually become a weapon of choice for terrorists. I looked at what countries might fare relatively well with a particular type of climate cataclysm, and Russia stands out when you do the research.

So I had two kinds of terrorism for my novel: cyber and environmental, and I wanted to go for the trifecta, because that’s always the big winner, right?

So that’s when the nuclear submarine with Trident II missiles cruised into my novel, and I had my story.

I understand you spoke to a number of experts for Trident Code. Who did you speak to, and how did they react?

For all my novels I do. I had a career in the technology sector, so I’m comfortable and familiar with technology and I’d been involved with various companies, including cybersecurity firms, but when I think about a book like this I do a lot of primary and secondary research.

Research only gets you so far, so for Trident Code I consulted some leading experts

Like other writers I go to the Internet, I read authoritative books and articles. But research only gets you so far, so for Trident Code I consulted some leading experts: CEOs of some major cybersecurity firms, and former and current senior government officials.

For example, the head of the FBI’s cyber terrorism unit, and I benefited a lot from a retired admiral and from a vice president of the Chiefs of Staff, as well as a submarine warfare expert.

They thought it was sinisterly… creative. And to my pleasure, they were very willing to assist, although in some cases, particularly the former government folks, they wanted to make sure that they didn’t disclose anything that was confidential or top secret. In some cases they also didn’t want any attribution.

So I’ve acknowledged some of them in my acknowledgements, but not others, for a variety of reasons – mostly policy reasons; a couple of them are currently in their positions and didn’t want to be named.

But it was vetted – it was cleared – with the “authorities”.

Rapid sea level rise plays a key role in Trident Code’s premise – is it a threat you see literature in general exploring more?

I think there’s a trend in literature, particularly genre fiction, that is – even more than normal – leading writers to basically blur the line further between fiction and reality. I see that happening; I think there could be reasons for that.

When you try to write novels that are truly different and unique I think you cast your net wider, so to speak, looking at current events. For me, the piece that was on the news about a pending, although relatively long-term, environmental catastrophe was very different, and I worked hard to come up with the trifecta that I mentioned earlier.

I would expect other authors to do the same. Now for pure science fiction, you could do that a lot more easily. I try to create essentially near-future thrillers that are well-researched and based in a reality that the reader can relate to, as scary as it could be.

What are your thoughts on the way people view cyber threats?

Quite honestly, I write cyber thrillers, and I’m still concerned and rather surprised that the vast majority of people – I can only speak for the folks I know, largely here in the US – really don’t understand the risks that cyber warfare and cyber attacks pose to a nation.

They tend to think of it as only an individual problem, namely hacked emails, credit card data, that sort of thing, or going after retailers.

But we’re looking now at very sophisticated industrial-level attacks; the most famous one that some people know is Stuxnet.

When I do my research I come up with a lot of stories, whether it’s Stuxnet or – you may be aware of this – a steel mill in Germany that was attacked last year, causing physical destruction.

When I look at those things, the vast majority of people I talk to – even, frankly, some of the people I interview – aren’t very aware of them and aren’t really connecting the dots about what the risk really is.

So just as the Sony hack put the hacking of emails and that kind of threat into the minds of most Americans, it’s unfortunate, but it’s probably going to take something like that, and that far-reaching, to get people to understand what nation states are doing and what the threat to industrial controls and other important parts of our infrastructure could be.

Do you feel you help inform people about cyber threats?

I hope so! I have to walk a line between treating it too lightly and not credibly in my novels and going so deep that it’s sort of inside baseball, as the expression goes, and you lose the readers and they’re not interested, their eyes glaze over and they close the book.

It’s probably going to take something that far-reaching for people to understand what nation states are doing and what the threat to our infrastructure could be

So I try to walk that line between educating and entertaining at the same time, but yes, I am trying to do that.

Usually my author note praises people who are fighting these kinds of crimes and has a warning for the public, and certainly when I talk to people or give interviews like this I mention that, because I think it’s important.

How possible do you think the primary events in Trident Code are?

I think it’s possible. If I thought of it, I can’t imagine no one else has.

Now, as any thriller writer does, I’ve put together an exciting story that would require an enormous amount of sophistication and sort of a worst-case scenario, just like I did with Lethal Code, but I think it’s plausible.

I’m careful. For example, in my book a submarine is hijacked, but it’s not technically hacked; I call it hacked because what they’ve done is hack into the communications systems. Nuclear submarines are very secure, and many of their systems aren’t connected to the Internet, obviously, so you have to create that vérité between kinetic, or regular, warfare and cyber warfare.

Your main villain, Oleg, is very unusual. Why did you choose that character?

[Laughs] Well, in my life I’ve read a lot of thrillers, and whenever there’s been a Russian villain it’s very classic: they’re a Cold War villain, and it’s become almost a stereotype of what the evil Russian Cold War villain looks like.

I wanted to do something different. I write cyber thrillers so I wanted to create what I call a Code War villain, and when I thought about that, it occurred to me that that person is going to be very different; they’re going to be younger, they’re going to be very contemporary, and they’re going to have technology skills that are never mentioned in the classic Cold War era, certainly not by the central villain.

I wanted to create what I call a Code War villain

So I decided that I would create Oleg, and in all honesty he became much larger than life, and took on a much larger role in my book than I had originally envisioned for him, but I really loved it.

One of the reviewers said he’s the villain that you love to hate, and that’s really what I was going for, so I’m glad that people see it that way.

A number of people who have read advance copies of my book have commented on his character and how interesting it is. It’s kind of sad to say you love him, but you do love to hate him because he’s just so despicable!

The contrast with him is his Russian counterpart Galina – why did you choose her?

I wanted to show a female character – Lana’s the protagonist, and readers who are following the series know about her, but I wanted to introduce another female character, one that has a developmental arc throughout the novel.

So in the early parts of the novel the reader will probably view her as very young and innocent and sweet, and perhaps naive. Over the course of the novel she matures; she becomes stronger and more determined; she’s battling for her daughter, who has leukemia; and, without ruining the novel, she figures out what Oleg’s been up to.

He’s led her along because her intentions, with the theft of the ambient air capture device, were noble from an environmental standpoint, and towards the end of the novel she ends up playing a central role in Oleg’s downfall and in stopping the impending catastrophe.

I gather you’re planning to continue the series after Trident Code. Is there anything you can tell us about the sequel?

Yes I am. I’m actually working on my next one right now.

I don’t like to give away too much, but I will say that in the new one – and this will be a first – Lana is looking at threats within the homeland, America, that are emanating from there.

In the first two, the threats were always foreign threats. In this one it’s more of a home-grown threat, at least primarily.

Finally, what technologies in development are you personally excited for?

I think that things around the generation and capture of power are intriguing, so Tesla – Musk’s work – is very impressive, and the battery technology that’s advancing has, I think, the potential to be enormously powerful.

I would say the driverless car concept is another technology that I think in twenty or thirty years could literally change the map. What I mean by that is: I live in the city – I live in Boston – and I would gladly book a ride, walk out and get into an Uber-owned vehicle or whatever that would take me to my destination, and give up my own personal car.

The driverless car concept is another technology that in twenty or thirty years could literally change the map

Now that’s not necessarily for everybody, but I think once it becomes reliable that’ll be interesting.

As far as computer technology goes – boy, I’m reading the debate about AI with fascination. You’ve probably read Hawking’s warning that it could be the end of mankind.

I’m not that pessimistic. I actually think that a lot of technologies that emerge are feared at first, certainly computers were when they first came out, so I think there is AI that is incredibly exciting and that can dramatically improve human life.

The other one is in the genetics realm, because I think it’s going to improve the quality of healthcare, so that all of us are getting personalised care down to the genetic level.

XPRIZE launches contest to build remote-controlled robot avatars

Prize fund XPRIZE and All Nippon Airways are offering a $10 million reward to research teams who develop tech that eliminates the need to physically travel. The initial idea is that instead of plane travel, people could use goggles, earphones and haptic tech to control a humanoid robot and experience different locations.

Source: Tech Crunch

NASA reveals plans for huge spacecraft to blow up asteroids

NASA has revealed plans for a huge nuclear spacecraft capable of shunting or blowing up an asteroid if it were on course to wipe out life on Earth. The agency published details of its Hammer deterrent, an eight-tonne spaceship capable of deflecting a giant space rock.

Source: The Telegraph

Sierra Leone hosts the world’s first blockchain-powered elections

Sierra Leone recorded votes from its recent election on a blockchain. The tech anonymously stored votes in an immutable ledger, thereby offering instant access to the election results. “This is the first time a government election is using blockchain technology,” said Leonardo Gammar of Agora, the company behind the technology.

Source: Quartz

AI-powered robot shoots perfect free throws

Japanese newspaper Asahi Shimbun has reported on an AI-powered robot that shoots perfect free throws in a game of basketball. The robot was trained by repeating shots, up to 12 feet from the hoop, 200,000 times, and its developers said it can hit these close shots with almost perfect accuracy.

Source: Motherboard

Russia accused by the US of engineering cyberattacks

Russia has been accused of engineering a series of cyberattacks that targeted critical infrastructure in America and Europe, which could have sabotaged or shut down power plants. US officials and private security firms claim the attacks are a signal by Russia that it could disrupt the West’s critical facilities.

Google founder Larry Page unveils self-flying air taxi

A firm funded by Google founder Larry Page has unveiled an electric, self-flying air taxi that can travel at up to 180 km/h (110mph). The taxi takes off and lands vertically, and can do 100 km on a single charge. It will eventually be available to customers as a service "similar to an airline or a rideshare".

Source: BBC

World-renowned physicist Stephen Hawking has died at the age of 76. When Hawking was diagnosed with motor neurone disease aged 22, doctors predicted he would live just a few more years. But in the ensuing 54 years he married, kept working and inspired millions of people around the world. In his last few years, Hawking was outspoken on the subject of AI, and Factor got the chance to hear him speak on the subject at Web Summit 2017…

Stephen Hawking was often described as a vocal critic of AI. Headlines were filled with predictions of doom from the scientist, but the reality was more complex.

Hawking was not convinced that AI would become the harbinger of the end of humanity; instead, he was balanced about its risks and rewards, and in a compelling talk broadcast at Web Summit he outlined his perspectives and what the tech world can do to ensure the end results are positive.

Stephen Hawking on the potential challenges and opportunities of AI

Beginning with the potential of artificial intelligence, Hawking highlighted the potential level of sophistication that the technology could reach.

“There are many challenges and opportunities facing us at this moment, and I believe that one of the biggest of these is the advent and impact of AI for humanity,” said Hawking in the talk. “As most of you may know, I am on record as saying that I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer.

“Of course, there is unlimited potential for what the human mind can learn and develop. So if my reasoning is correct, it also follows that computers can, in theory, emulate human intelligence and exceed it.”

Moving onto the potential impact, he began with an optimistic tone, identifying the technology as a possible tool for health, the environment and beyond.

“We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one: industrialisation,” he said.

“We will aim to finally eradicate disease and poverty; every aspect of our lives will be transformed.”

However, he also acknowledged the negatives of the technology, from warfare to economic destruction.

“In short, success in creating effective AI could be the biggest event in the history of our civilisation, or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined or conceivably destroyed by it,” he said.

“Unless we learn how to prepare for – and avoid – the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. It could bring great disruption to our economy.

“Already we have concerns that clever machines will be increasingly capable of undertaking work currently done by humans, and swiftly destroy millions of jobs. AI could develop a will of its own, a will that is in conflict with ours and which could destroy us.

“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”

In the vanguard of AI development

In 2014, Hawking and several other scientists and experts called for increased levels of research to be undertaken in the field of AI, which he acknowledged has begun to happen.

“I am very glad that someone was listening to me,” he said.

However, he argued that there is much to be done if we are to ensure the technology doesn’t pose a significant threat.

“To control AI and make it work for us and eliminate – as far as possible – its very real dangers, we need to employ best practice and effective management in all areas of its development,” he said. “That goes without saying, of course, that this is what every sector of the economy should incorporate into its ethos and vision, but with artificial intelligence this is vital.”

Addressing a thousands-strong crowd of tech-savvy attendees at the event, he urged them to think beyond the immediate business potential of the technology.

“Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit”

“Everyone here today is in the vanguard of AI development. We are the scientists. We develop an idea. But you are also the influencers: you need to make it work. Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit,” he said. “Our AI systems must do what we want them to do, for the benefit of humanity.”

In particular he raised the importance of working across different fields.

“Interdisciplinary research can be a way forward, ranging from economics and law to computer security, formal methods and, of course, various branches of AI itself,” he said.

“Such considerations motivated the American Association for Artificial Intelligence Presidential Panel on Long-Term AI Futures, which up until recently had focused largely on techniques that are neutral with respect to purpose.”

He also gave the example of calls at the start of 2017 by Members of the European Parliament (MEPs) for the introduction of liability rules around AI and robotics.

“MEPs called for more comprehensive robot rules in a new draft report concerning the rules on robotics, and citing the development of AI as one of the most prominent technological trends of our century,” he summarised.

“The report calls for a set of core fundamental values, an urgent regulation on the recent developments to govern the use and creation of robots and AI. [It] acknowledges the possibility that within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship.

“Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which will decide what legislative steps it will take.”

Creating artificial intelligence for the world

No one can say for certain whether AI will truly be a force for positive or negative change, but – despite the headlines – Hawking was positive about the future.

“I am an optimist and I believe that we can create AI for the world that can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance,” he said. “Perhaps some of you listening today will already have solutions or answers to the many questions AI poses.”

You all have the potential to push the boundaries of what is accepted or expected, and to think big

However, he stressed that everyone has a part to play in ensuring AI is ultimately a benefit to humanity.

“We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfill our potential and create a better world for the whole human race,” he said.

“We need to take learning beyond a theoretical discussion of how AI should be, and take action to make sure we plan for how it can be. You all have the potential to push the boundaries of what is accepted or expected, and to think big.

“We stand on the threshold of a brave new world. It is an exciting – if precarious – place to be and you are the pioneers. I wish you well.”