To infinity and beyond: Teaching drones to interact and work together

Drones can reach places that humans cannot, and by teaching them to work together researchers hope they can be used in crisis situations such as search and rescue missions.

One such robotics project at the University of Sheffield, UK, is trying to teach quadcopters to learn from the environment they are in by 3D mapping what is in front of them.

The team from the university is also trying to enable the quadcopters to interact so that they can work together.

Researchers are trying to program the drones with intelligence to allow them to complete more complex tasks in environments that are unsafe for humans, such as areas affected by nuclear radiation or outer space.

The new programming developments in these robots enhance their learning and decision-making capabilities.

Professor Sandor Veres, who is leading the project, said: “We are used to the robots of science fiction films being able to act independently, recognise objects and individuals and make decisions.

“In the real world, however, although robots can be extremely intelligent individually, their ability to co-operate and interact with each other and with humans is still very limited.

“As we develop robots for use in space or to send into nuclear environments – places where humans cannot easily go – the goal will be for them to understand their surroundings and make decisions based on that understanding.”


A team from the university is trying to teach the drones to achieve this level of intelligence by using a mathematical concept called game theory.

In game theory, robots treat their tasks as a game, record and learn from the behaviour of the other robots they encounter, and draw from their experiences to try to ‘win’.

Though the theory is based around competition, it encourages cooperation and teamwork within a group of robots. As they learn to predict each other’s next moves, they avoid collisions and increase efficiency.
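This idea of predicting another robot’s move and picking a best response can be sketched in a few lines. The grid world, move names and payoff values below are purely illustrative assumptions, not the Sheffield team’s actual model:

```python
# Illustrative game-theory sketch: drone A picks the move that maximises its
# payoff given the move it predicts drone B will make. All names and numbers
# here are our own assumptions, not the Sheffield project's code.
MOVES = {"hold": (0, 0), "north": (0, 1), "east": (1, 0), "northeast": (1, 1)}

def payoff(pos_a, move_a, pos_b, move_b, goal_a):
    """Reward progress toward drone A's goal; heavily penalise a collision."""
    next_a = (pos_a[0] + MOVES[move_a][0], pos_a[1] + MOVES[move_a][1])
    next_b = (pos_b[0] + MOVES[move_b][0], pos_b[1] + MOVES[move_b][1])
    if next_a == next_b:                     # both drones on the same cell
        return -100
    # Negative Manhattan distance to the goal: closer is better.
    return -(abs(goal_a[0] - next_a[0]) + abs(goal_a[1] - next_a[1]))

def best_response(pos_a, pos_b, goal_a, predicted_move_b):
    """Choose the move that maximises A's payoff given B's predicted move."""
    return max(MOVES, key=lambda m: payoff(pos_a, m, pos_b, predicted_move_b, goal_a))

# Drone A at (0, 0) wants cell (1, 1); it has learned that drone B at (1, 0)
# tends to move north, which would put B on (1, 1) too, so A sidesteps
# instead of flying straight at its goal.
print(best_response((0, 0), (1, 0), (1, 1), "north"))
```

Recording which moves another drone tends to make, and responding to the prediction rather than the goal alone, is what turns the competitive “game” into collision-free teamwork.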

The quadcopters collect data through attached forward-facing cameras that allow them to create 3D maps of their surroundings, supplemented by barometric and ultrasonic sensors that add to their understanding.
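As a rough illustration of how those extra sensors contribute, a barometer gives coarse absolute altitude via the international barometric formula, while an ultrasonic rangefinder gives precise height only at short range. The fusion rule and the 4 m cutoff below are assumptions for the sketch, not the project’s actual pipeline:

```python
def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """International barometric formula: approximate altitude (m) from pressure (hPa)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

def fused_height(baro_altitude_m, ultrasonic_range_m, ground_altitude_m,
                 max_ultrasonic_m=4.0):
    """Prefer the short-range ultrasonic reading while it is in its reliable band;
    otherwise fall back to the barometric estimate minus the ground altitude."""
    if ultrasonic_range_m is not None and ultrasonic_range_m <= max_ultrasonic_m:
        return ultrasonic_range_m
    return baro_altitude_m - ground_altitude_m
```

At standard sea-level pressure (1013.25 hPa) the formula returns 0 m; near the ground the ultrasonic reading wins, and once the drone climbs out of ultrasonic range the barometer takes over.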

The improved processing of this data will allow them to work with both humans and other robots, a skill that will be crucial if the robots are to work in high-pressure situations.

While quadcopters are being developed for emergency aid and for use in dangerous environments, other flying robots are being honed for recreational purposes.


AirDog, an action sports drone, acts as a flying video crew. It follows its users through a tracking bracelet as they participate in sports like BMX, surfing and wake-boarding, taking high-quality videos and photographs.

The AirDog is manufactured by 3D printing, which allows for a lighter, less expensive design that can be sold as an accessible consumer product.

Essentially a quadcopter for the extreme sports market, the AirDog can record angles that a human could only achieve by filming from a helicopter.

Users program the desired distance, height and speed before releasing the drone, which then follows them according to those specifications.
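The geometry of that follow behaviour is simple to sketch: given the user’s position and heading from the tracking bracelet, place the drone the chosen distance behind them at the chosen height. This is illustrative geometry only, not AirDog’s actual firmware:

```python
import math

def follow_target(user_x, user_y, user_heading_rad, distance_m, height_m):
    """Place the camera drone `distance_m` behind the user, at `height_m` altitude.
    An illustrative sketch assuming flat ground and an instantaneous bracelet fix."""
    drone_x = user_x - distance_m * math.cos(user_heading_rad)
    drone_y = user_y - distance_m * math.sin(user_heading_rad)
    return (drone_x, drone_y, height_m)

# A surfer at (10, 0) heading along the x-axis, filmed from 5 m behind at 3 m up:
print(follow_target(10.0, 0.0, 0.0, 5.0, 3.0))  # (5.0, 0.0, 3.0)
```

A real controller would also cap the drone’s speed at the programmed limit and smooth the bracelet’s position fixes, but the target point is computed the same way on every update.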

These different devices show just a small range of the possible applications for advanced flying robots.

Their ability to easily travel to places that humans cannot reach without the aid of a plane or helicopter makes them incredibly useful in all kinds of situations, from search-and-rescue missions to package deliveries. What other uses will we find for these sky-roaming drones?

Featured image courtesy of Kaometet, first body image courtesy of Steve Lodefink, second body image courtesy of Helico Aerospace Industries.

Talking with robots: Training future robot chefs with speech

Without a detailed knowledge of coding and programming languages, it is almost impossible for the average person to interact with most robots.

It is one of the biggest hurdles that need to be overcome if our future is to include living with robots.

Thankfully, researchers at Cornell University, US, are teaching robots to understand instructions that are given to them by voice – and are teaching one robot to cook.

The robot is able to scan the room it is in with a 3D camera, and has been trained to associate the objects it sees with what they can do.

For example, it understands that water can be put in a pan and that the pan can be heated up on a stove.

It has a built-in programming language with commands that allow it to find, grasp, carry or fill objects such as a pan.

The researchers have made the robot clever enough to perform the same actions at a later time – even if the pan has been moved.
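The combination described above – objects tagged with what can be done to them, commands that re-locate their targets at execution time – can be sketched as follows. The command names mirror those mentioned (find, grasp, carry, fill), but the data structures are our own assumptions, not Cornell’s actual code:

```python
# Illustrative sketch: objects carry "affordances" (what can be done with them),
# and each plan step re-resolves its object's location from the latest scan.
AFFORDANCES = {
    "pan": {"graspable", "fillable", "heatable"},
    "water": {"pourable"},
    "stove": {"heats"},
}

SCENE = {"pan": (2, 3), "stove": (0, 0)}   # object -> location from the latest 3D scan

def find(obj):
    """Resolve an object's current position from the latest scan."""
    return SCENE.get(obj)

def execute(plan):
    """Run (action, object) steps, re-finding each object at execution time."""
    log = []
    for action, obj in plan:
        pos = find(obj)
        if pos is None:
            log.append(f"cannot find {obj}")
        elif action == "fill" and "fillable" not in AFFORDANCES.get(obj, set()):
            log.append(f"cannot fill {obj}")
        else:
            log.append(f"{action} {obj} at {pos}")
    return log

print(execute([("grasp", "pan"), ("fill", "pan")]))
# Because `find` is re-run at each step, the same plan still works after the pan moves:
SCENE["pan"] = (5, 5)
print(execute([("carry", "pan")]))
```

Looking positions up at execution time rather than baking them into the plan is what lets the robot repeat an action even after the pan has been moved.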

While the work is in its early stages, it shows the promise of robots that can learn by themselves from instructions given in a human language.

On the researchers’ website, the three team members primarily behind the work say that they want a robot to be able to take generic instructions and act on them, as well as understand them.

“In order for robots to perform tasks in real world, they need to be able to understand our natural language commands,” the team wrote on its website.

“While there is a lot of past research that went into the task of language parsing, they often require the instructions to be spelled out in full detail which makes it difficult to use them in real world situations.

“Our goal is to enable a robot to even take an ill-specified instruction as generic as ‘Make a cup of coffee’ and be able to figure out how to fill a cup with milk or use one if it already has milk etc. depending on how the environment looks.”

The team are also collecting data to help with the robot’s development and have created an online simulator that anyone can try out.

Understandably the robot is not perfect yet and during testing the researchers found that it performed correctly 64% of the time.

This included cases where the commands were changed and the robot had to fill in the missing step.

For us to be able to utilise the potential of robots to aid us in everyday tasks such as cooking a meal, they need to be able to understand what we say to them in our native language.

Then, when combined with artificial intelligence, they can learn over time.

Japanese juggernaut SoftBank recently announced that from 2015 it will sell a robot nanny called Pepper that can respond to the emotions of humans.

It is mainly being touted as a companion for children, and will report children’s positive emotional responses to their mothers.

Featured image courtesy of Cornell University