Scientists create self-learning brain-machine interface for prosthetics

Brain-machine interfaces allow those with artificial limbs to control them with nothing but their thoughts. They are, however, difficult to control and take patience and persistence to master.

But this could be about to change, as researchers have come up with an approach that may make the control of artificial limbs easier.

They have created a way for an artificial limb to store correct movements. When a patient who is missing a limb uses a brain-controlled prosthetic – a device that officially comes under the discipline of neuroprosthetics – and the device makes a mistake, the brain sends out an error-related potential (ErrP), effectively an error signal.

Scientists from École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, have now used this signal to create new brain-machine interfaces that they say can learn full movements.

“If we fail to grasp a glass of water placed in front of us, the neuroprosthesis will understand that the action was unsuccessful and the next movements will change accordingly until the desired result is achieved,” the institution says in a press release.

“The machine knows that the goal is reached when the actions performed no longer generate an ErrP.”

In essence, the system works by trial and error, learning whether each movement has been successful or not.
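To make the idea concrete, here is a minimal sketch, in Python, of what such a trial-and-error loop could look like. Everything in it is an assumption for illustration: detect_errp() stands in for a real EEG-based ErrP classifier, and a movement is reduced to a single reach angle.

```python
import random

# Illustrative sketch of an ErrP-driven trial-and-error loop.
# None of these names come from the EPFL system.

TARGET_ANGLE = 40.0   # the correct movement, unknown to the system
TOLERANCE = 5.0       # how close counts as a successful grasp


def detect_errp(angle):
    """Stand-in for an ErrP detector: True when the user's brain would
    register the attempted movement as a mistake."""
    return abs(angle - TARGET_ANGLE) > TOLERANCE


def learn_movement(max_trials=100):
    """Perturb the movement after every detected error until no ErrP is
    generated, then store the successful movement."""
    angle = random.uniform(0.0, 90.0)  # initial guess
    step = 15.0
    for trial in range(max_trials):
        if not detect_errp(angle):
            # No error signal: the goal is reached and the movement kept.
            print(f"trial {trial}: no ErrP, movement stored at {angle:.1f} deg")
            return angle
        # An ErrP was detected, so try a corrected movement next time.
        # With only a binary error signal the direction of the error is
        # unknown, hence the random perturbation.
        angle += random.uniform(-step, step)
    return None


learn_movement()
```

The loop stops exactly when the error signal disappears, which mirrors the researchers' description: the machine knows the goal is reached when its actions no longer generate an ErrP.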

“According to our expectations, this new approach will become a key element of the next generation of brain-machine interfaces that mimic natural motor control,” said lead researcher José Millán.

“The prosthesis can function even if it does not have clear information about the target.”

The study used 12 subjects, all of whom were asked to train their prosthesis to detect the error signal.

They were then fitted with an electrode headset while the machine completed 350 separate movements. However, to teach the system when it was wrong, the machine was programmed to fail 20% of the time.
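As a rough illustration of that calibration step, the sketch below trains a binary error detector on synthetic “EEG epochs”, roughly 20% of which are labeled as programmed failures. The feature layout and the logistic-regression classifier are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 350 observed movements, ~20% deliberate failures, give labeled
# epochs from which a binary ErrP detector can be trained.
# Synthetic data stands in for real EEG recordings.

rng = np.random.default_rng(0)
n_trials, n_features = 350, 64          # e.g. one flattened EEG epoch per trial
labels = rng.random(n_trials) < 0.2     # True = programmed failure (ErrP expected)

# Synthetic epochs: error trials get a small added "ErrP-like" offset.
epochs = rng.normal(size=(n_trials, n_features))
epochs[labels] += 0.5

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, epochs, labels, cv=5)
print(f"cross-validated error-detection accuracy: {scores.mean():.2f}")
```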

Those being studied were then required to complete three experiments using their prosthetic arms; in the final of these, they were asked to identify a target two meters away.

The researchers found that the artificial arm stores the correct movements, building up a range of movements over time.

The research was published in the Nature journal Scientific Reports.

Digital doubles: Researchers create accurate 3D avatars from smartphone selfies

It’s now possible to reproduce someone’s head, digitally, using no more than a smartphone camera and an algorithm.

Researchers from École Polytechnique Fédérale de Lausanne, in Switzerland, created the algorithm, which they say will one day be able to replicate a person’s whole body digitally.

They say the mapping tool could be used to create avatars for gaming, virtual reality and video conferencing, and potentially in some medical situations as well.

“We wanted the process to be fast and easy: all you have to do is take a video of yourself and then snap a few more shots to get facial expressions, and our algorithm does the rest,” said researcher Alexandru Ichim.

“The goal was to make the process accessible to anyone with a smartphone, even an old model, as long as it can take video,” said the researcher.

The set-up works by filming a video around a person’s head, along with a few still images of their face; the algorithm is then able to create a virtual 3D version of what has been captured.

They say that the digital double can be shown on a screen and animated in real time, using a video camera that follows the movements of the person it represents.
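A common way to achieve this kind of real-time re-posing is a blendshape model, in which the animated face is the neutral mesh plus a weighted sum of expression offsets. The toy sketch below illustrates the idea; whether EPFL’s system uses this exact formulation is an assumption, and the tiny mesh and hard-coded weights are purely illustrative (in practice a face tracker would supply the weights for each video frame).

```python
import numpy as np

# Blendshape-style animation: animated mesh = neutral face +
# weighted sum of per-expression vertex offsets.

rng = np.random.default_rng(0)
n_vertices = 4                                  # toy mesh for illustration
neutral = rng.normal(size=(n_vertices, 3))      # reconstructed neutral face
blendshapes = {                                 # per-expression vertex offsets
    "smile": rng.normal(size=(n_vertices, 3)) * 0.1,
    "brow_raise": rng.normal(size=(n_vertices, 3)) * 0.1,
}


def animate(weights):
    """Re-pose the mesh from a dict of expression weights."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * blendshapes[name]
    return mesh


# For every video frame, a tracker estimates the person's expression
# weights and the avatar is re-posed accordingly.
print(animate({"smile": 0.8, "brow_raise": 0.2}))
```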

The researchers say that for this sort of technology ever to be used in a real-world scenario, they needed to get it to work with low-quality images – ones that are blurry, poorly lit, or both.

And, as with almost any technology, a poor first impression can put a person off using it again.

“A small detail will turn people off immediately,” said Ichim. “The avatar has to have the right facial geometry and reproduce the texture, color and details like face wrinkles.”

Images courtesy of EPFL

The technology still has a little way to go, however: at the moment, generic teeth, ears and hairstyles are slapped onto the 3D faces.

Creating individual textures for a person’s hair is still too challenging for the technology to do in a short amount of time.

The researchers’ paper, “Dynamic 3D avatar creation from hand-held video input”, can be found here.