Scientists develop real-life ‘What-If Machine’ to produce AI-created fiction

In yet another example of how Futurama’s year 3000 is coming faster than we might think, scientists have created a What-If Machine (WHIM), one of the first pieces of software to use artificial intelligence to write fiction.

Although not capable of the full visual renderings its fictional counterpart achieved in Futurama’s Anthology of Interest episodes, the machine can take ‘true’ facts from the web and twist them to create ‘what-if’ scenarios.

However, while the Futurama machine is used by members of the Planet Express crew to determine what would happen if they undertook certain personal actions or changes, the real-life version produces general ‘what-if’ scenarios and, in some cases, their likely results, with humans able to rate them for their narrative potential.

The intention is to expand these into full works of fiction, eventually using these for movie and video game storylines.


Not quite at the level of the fictional What-If machine just yet. Image and featured image: screenshots from Futurama S2 E20: Anthology of Interest I.

“WHIM is an antidote to mainstream artificial intelligence which is obsessed with reality,” said Simon Colton, project coordinator and professor in computational creativity at Goldsmiths College, University of London.

“We’re among the first to apply artificial intelligence to fiction.”

At present WHIM generates short ‘what-ifs’ under five fictional categories: Kafkaesque, alternative scenarios, utopian and dystopian, metaphors and Disney.

Some of the results are more bizarre than compelling, such as this gem from the alternative scenarios section:

“What if there was an old refrigerator who couldn’t find a house that was solid? But instead, she found a special style of statue that was so aqueous that the old refrigerator didn’t want the solid house anymore.”

And there are also those that show that mining historical data from the web doesn’t always result in fictional premises with mass appeal, such as this snoozefest from the utopian and dystopian section:

“What if the world suddenly had lots more queens? Then there would be more serfs, since queens establish the monarchies that contain serfs.”

However, there are some with the potential to become genuinely good works of fiction.

“What if revered artists were to be abandoned by their muses, develop rivalries and become hated rivals?” from the metaphors section could be the basis for quite a good comedy movie, and “What if there was a little atom who lost his neutral charge?” from the Disney section sounds rather like the premise of a Pixar film.

The real-life What-If machine, which can be accessed here.


Over time, WHIM is expected to develop the ability not only to write premises, but also to judge how good they are.

This will be achieved using a machine-learning system, which will learn about what makes good fiction and what doesn’t from the ratings people give different ideas.

The result should be that WHIM will gain the ability to judge if something has potential for mass consumption, flying in the face of the convention that creativity cannot be achieved with a scientific approach.
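The project has not published details of its learning system, but the idea of learning a quality score for premises from human ratings can be sketched in miniature. In this toy version (the sample premises, ratings and word-averaging scheme are all invented for illustration), each word inherits the average rating of the premises it appears in, and a new premise is scored by averaging its words:

```python
# A minimal sketch of learning a premise-quality scorer from human ratings.
# The sample premises, ratings, and scoring scheme are hypothetical --
# WHIM's actual machine-learning system is not public.
from collections import defaultdict

def train_word_scores(rated_premises):
    """Average the human ratings seen for each word across the training premises."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for text, rating in rated_premises:
        for word in text.lower().split():
            totals[word] += rating
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def score_premise(text, word_scores, default=0.5):
    """Score a new premise as the mean learned score of its words."""
    words = text.lower().split()
    return sum(word_scores.get(w, default) for w in words) / len(words)

ratings = [
    ("what if a little atom lost his neutral charge", 0.9),
    ("what if there were more queens and therefore more serfs", 0.2),
]
scores = train_word_scores(ratings)
print(score_premise("what if a little atom lost his charge", scores))
```

A real system would of course use far richer features than individual words, but the principle is the same: human ratings become training signal for an automatic judge of narrative potential.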

“One may argue that fiction is subjective, but there are patterns,” said Colton.

“If 99% of people think a comedian is funny, then we could say that comedian is funny, at least in the perception of most people.”

The European Union-funded project is very much in its infancy, but there are research teams around Europe working to make it a genuine creator of fiction for use in movies and video games.

At the University of Cambridge, UK, researchers are working to improve the web-mining system so that WHIM comes up with better ideas, while at University College Dublin, Ireland, researchers are working to produce better irony and metaphorical insights.

Perhaps most importantly, at the Universidad Complutense de Madrid, in Spain, researchers are working to expand the short premises into full narratives, which could be used for film plots and other forms of fiction.

WHIM’s creators even believe it could be used by scientists to explore potential scenarios by asking ‘what-if’ questions, perhaps even making it a realistic AI ringer for Professor Farnsworth’s solid gold creation.

Soviet report detailing lunar rover Lunokhod-2 released for first time

Russian space agency Roskosmos has released a previously unpublished scientific report into the lunar rover Lunokhod-2, revealing previously unknown details about the rover and how it was controlled back on Earth.

The report, written entirely in Russian, was originally penned in 1973 following the Lunokhod-2 mission, which began in January of that year. It had remained accessible to only a handful of experts at the space agency prior to its release today, which marks the 45th anniversary of the mission.

Bearing the names of some 55 engineers and scientists, the report details the systems that were used to both remotely control the lunar rover from a base on Earth, and capture images and data about the Moon’s surface and Lunokhod-2’s place on it. This information, and in particular the carefully documented issues and solutions that the report carries, went on to be used in many later unmanned missions to other parts of the solar system.

As a result, it provides a unique insight into this era of space exploration and the technical challenges that scientists faced, such as the low-frame-rate television system that functioned as the ‘eyes’ of the Earth-based rover operators.

A NASA depiction of the Lunokhod mission. Above: an image of the rover, courtesy of NASA, overlaid onto a panorama of the Moon taken by Lunokhod-2, courtesy of Ruslan Kasmin.

One detail that may be of particular interest to space enthusiasts and experts is the operation of a unique system called Seismas, which was tested for the first time in the world during the mission.

Designed to determine the precise location of the rover at any given time, the system involved transmitting laser pulses from ground-based telescopes to a photodetector onboard the lunar rover. When the laser was detected, this triggered the emission of a radio signal back to Earth, which provided the rover’s coordinates.
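The report’s description of Seismas is far more involved, but the basic geometry of such a ranging scheme is simple: light travels out and a reply comes back, so (ignoring the rover’s internal response delay, a simplification made here for illustration) the one-way distance falls out of half the round-trip time:

```python
# Illustrative back-of-the-envelope ranging calculation in the spirit of
# the Seismas scheme: a laser pulse travels to the rover and a radio reply
# returns, so half the round-trip time gives the one-way distance.
# The numbers below are illustrative, not taken from the Soviet report.
C = 299_792_458.0  # speed of light, m/s

def one_way_distance_km(round_trip_seconds, response_delay_seconds=0.0):
    """Distance to the rover implied by the measured round-trip time."""
    travel = round_trip_seconds - response_delay_seconds
    return C * travel / 2 / 1000.0

# The Moon is roughly 384,400 km away, so a round trip takes about 2.56 s.
print(one_way_distance_km(2.564))
```

In practice the dominant engineering difficulty is not this arithmetic but detecting the faint laser pulse on the lunar surface and timing the exchange precisely enough for the result to be useful.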

Other details, while technical, also give some insight into the culture of the mission, such as the careful work to eliminate issues in the long-range radio communication system. One issue, for example, was worked on with such thoroughness that it resulted in one of the devices using more resources than it was allocated, a problem that was outlined in the report.

The document also provides insight into on-Earth technological capabilities of the time. While it is mostly typed, certain mathematical symbols have had to be written in by hand, and the report also features a number of diagrams and graphs that have been painstakingly hand-drawn.

A hand-drawn graph from the report, showing temperature changes during one of the monitoring sessions during the mission

Lunokhod-2 was the second of two unmanned lunar rovers to be landed on the Moon by the Soviet Union within the Lunokhod programme, having been delivered via a soft landing by the unmanned Luna 21 spacecraft in January 1973.

In operation between January and June of that year, the robot covered a distance of 39 km, meaning it still holds the lunar distance record to this day.

One of only four rovers to be deployed on the lunar surface, Lunokhod-2 was the last rover to visit the Moon until December 2013, when Chinese lunar rover Yutu made its maiden visit.

Robot takes first steps towards building artificial lifeforms

A robot equipped with sophisticated AI has successfully simulated the creation of artificial lifeforms, in a key first step towards the eventual goal of creating true artificial life.

The robot, which was developed by scientists at the University of Glasgow, was able to model the creation of artificial lifeforms using unstable oil-in-water droplets. These droplets effectively played the role of living cells, demonstrating the potential of future research to develop living cells based on building blocks that cannot be found in nature.

Significantly, the robot also successfully predicted their properties before they were created, even though this could not be achieved using conventional physical models.

The robot, which was designed by Glasgow University’s Regius Chair of Chemistry, Professor Lee Cronin, is driven by machine learning and the principles of evolution.

It has been developed to autonomously create oil-in-water droplets with a host of different chemical makeups and then use image recognition to assess their behaviour.

Using this information, the robot was able to engineer droplets to have different properties. Those which were found to be desirable could then be recreated at any time, using a specific digital code.
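The create–assess–recreate loop described above is, at heart, an evolutionary search. A heavily simplified sketch of that loop follows; the “recipe” vectors, the toy fitness function, and the mutation scheme are all invented for illustration, whereas the real platform evaluates physical droplets rather than a formula:

```python
# A minimal evolutionary-loop sketch of the droplet workflow: generate
# candidate recipes, score their "behaviour", keep the best, and mutate.
# The recipe vectors and fitness function are hypothetical stand-ins for
# real chemical makeups and image-recognition measurements.
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = [0.2, 0.5, 0.3]  # hypothetical ideal mixture ratios

def fitness(recipe):
    """Higher is better: negative squared distance from the target behaviour."""
    return -sum((a - b) ** 2 for a, b in zip(recipe, TARGET))

def mutate(recipe, scale=0.05):
    """Perturb each ratio slightly, clamped to the valid [0, 1] range."""
    return [min(1.0, max(0.0, r + random.uniform(-scale, scale))) for r in recipe]

population = [[random.random() for _ in range(3)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]  # keep the best recipes
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]  # refill by mutation

best = max(population, key=fitness)
```

The ‘specific digital code’ for recreating a desirable droplet corresponds here to the recipe vector itself: store it, and the same mixture can be generated again on demand.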

“This work is exciting as it shows that we are able to use machine learning and a novel robotic platform to understand the system in ways that cannot be done using conventional laboratory methods, including the discovery of ‘swarm’ like group behaviour of the droplets, akin to flocking birds,” said Cronin.

“Achieving lifelike behaviours such as this are important in our mission to make new lifeforms, and these droplets may be considered ‘protocells’ – simplified models of living cells.”

One of the oil droplets created by the robot

The research, which is published today in the journal PNAS, is one of several research projects being undertaken by Cronin and his team within the field of artificial lifeforms.

While the overarching goal is moving towards the creation of lifeforms using new and unprecedented building blocks, the research may also have more immediate potential applications.

The team believes that their work could also have applications in several practical areas, including the development of new methods for drug delivery or even innovative materials with functional properties.