Orchestra of Samples: How Addictive TV uses tech to bring world music supercuts to the stage

When British audio-visual electronic DJ duo Addictive TV begin touring their Orchestra of Samples project this week, 200 musicians from around the world will be joining them. But thanks to sophisticated technology – plus an astonishing ear for how hugely diverse genres could blend into perfect harmonies – they won’t need an enormous stage or a massive pizza delivery on their rider.

Addictive TV are Graham Daniels and Mark Vidler, known for gigs where they splice together music, movies and videos, creating unique, immersive dance music. A typical Addictive TV set would consist of a mash-up of film supercuts and remixes with music videos. They’d take sounds like a car door slamming from Transformers or phaser fire from Star Trek and mix them into a rhythmic bassline. Over the top would come unlikely musical pairings like Stevie Wonder with Red Hot Chili Peppers, Rihanna and Blur, or Azealia Banks and The Clash.

The Guardian said of them: “Addictive TV continue to take hip-hop’s scratch philosophy into the cyberpunk age”. Or as Grandmaster Flash put it: “next level shit”.

With a tour starting on 5th May and an album launch on 2nd June, their latest project, Orchestra of Samples, breaks new ground by sampling audiovisual recordings of musicians from around the world and mixing them together.

The five-year sample hunt

The pair first met when Vidler approached Daniels to make the video for a mash-up he’d created between Blondie and The Doors that was going to be released by EMI, back in the mid-noughties. Appropriately, when the two speak, it’s a seamless mix of constant interruptions, talking over each other and finishing each other’s sentences.

We wanted to collaborate with as many people as we could and do more than the DJ or band thing where you fly in, do the gig and fly back out

According to Daniels, the idea for Orchestra of Samples came about because they wanted to do something that involved more people than just themselves.

“Because we were travelling a lot, we wanted to collaborate with as many people as we could and do more than the DJ or band thing where you fly in, do the gig and fly back out,” he says. “We found that pretty much everyone we were working with was more than happy to introduce us to musician friends of theirs. Then we’d build up a pool of musicians and an archive to sample from, and that became the project.”

Vidler adds: “Because our recording equipment was small enough to fit into our hand luggage, it was a great way of capturing audio and video on the road and building up an orchestra from that in our spare time.”

The pair emphasise that the human aspect of Orchestra of Samples means they don’t use anonymous samples downloaded from YouTube. The samples took over five years to collect in person, and there’s a story behind every one. Surprisingly, the musicians are given no strict direction as to pitch and tempo; the magic happens in the mix.

Goat bagpipes and stone xylophones

To capture the tracks, Addictive TV used a palm-sized TASCAM DR-40 stereo digital recorder with a Shure SM57 microphone, recording onto an SD card, together with an Apple Mac running audio software. As a guide track, they also recorded onto the camera via XLR microphone cables.

But perhaps the most surprising technology involved was the instruments some of the musicians used.

“What surprised me the most was the boudègue, a French bagpipe made from a whole goat,” says Daniels. “And in Mexico a guy who’s an expert in prehistoric musical instruments thinks one of the earliest instruments humans would have made would have been a stone xylophone. He spent many years looking for naturally-tuned fragments of rock. He lays them out in a scale and hits them with another piece of rock.”

“The circuit-bent instrument made from children’s toys was good,” adds Vidler. “You could get quite musical, psychedelic sounds out of that. And our friends from Kazakhstan had a dombyra two-string guitar which they amplified to give a Jimi Hendrix effect.”

Given the raw material, it’s difficult to comprehend how these radically diverse sounds merge together harmoniously.

“There’s a little bit of maths involved,” says Vidler. “We get the tempo of the riffs and samples and find out the keys. But we never re-pitch the samples. If you start time-stretching and retuning things, you’re moving away from the natural origin of the sound and it’s very noticeable.”

Image courtesy of Addictive TV / Joe Haydon. Featured image courtesy of Addictive TV / Alexis Maryon

“Some instruments, like the Hang [a UFO-shaped steel drum type instrument] are only in one key, you can’t retune them,” adds Daniels.

The pair label every sample by country and instrument, along with its key and tempo. The genius comes when they remember that, say, the riff from the dombyra in Kazakhstan was in the same key as the singer in Mexico. It may not work with the goat bagpipe, but it’s perfect with the mandolin. They then construct their own riffs using a few notes from each.
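In software terms, that archive behaves like a simple database query: filter by key, then by tempo. Here’s a minimal sketch in Python of how such a catalogue might be searched – the entries, field names and tempo tolerance are invented for illustration, as the duo’s actual tooling isn’t documented:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    country: str      # where the recording was made
    instrument: str   # e.g. "dombyra", "Hang", "koto"
    key: str          # musical key as labelled, e.g. "D minor"
    tempo: float      # beats per minute

def compatible(catalogue, sample, tempo_tolerance=2.0):
    """Find samples sharing a key with `sample` and close enough in
    tempo to layer without time-stretching or re-pitching."""
    return [s for s in catalogue
            if s is not sample
            and s.key == sample.key
            and abs(s.tempo - sample.tempo) <= tempo_tolerance]

# Invented entries for illustration only
catalogue = [
    Sample("Kazakhstan", "dombyra", "D minor", 92.0),
    Sample("Mexico", "voice", "D minor", 91.0),
    Sample("France", "boudègue", "G major", 110.0),
]
print(compatible(catalogue, catalogue[0]))  # finds the Mexican singer
```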

Some of the more surprising combinations they found were the Japanese koto, which worked really well with the Turkish/Iranian tanbur, and the Hang, which sounded perfect with voices.

“One that worked well for me was the Cristal Baschet [an instrument played by stroking glass rods with wet fingers] and the viola-guitar, which have quite unique tunings that go really well together,” says Vidler. “We gave it a more contemporary song arrangement; it was one of the first tracks we started and one of the last we finished.”

A borderless musical journey

Because of the way the samples are mixed live, if you go to see a show on the Orchestra of Samples tour you’ll be guaranteed a unique experience.

“There’s a base bed, because you have to have a foundation to build upon, but it’s highly portable,” explains Vidler. “We could be playing in Leeds and have a blues harmonica playing, or we could be travelling to Russia and invite balalaikas.”

During a show, the audio comes from one laptop and the video from another, but they are networked together and one is slaved to the other, to keep the music and video in sync.

“The software we’re using, Traktor and Arena, are commercially available. But we’ve got specialised versions that the manufacturers are allowing us to use,” says Daniels. “One is sending MIDI signals to the other, so the audio is triggering the video live. All the video is muted on one computer and all the audio WAVs are on another computer, so when you load an audio sample or bass track, it automatically triggers the corresponding video on the other laptop.”
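As a rough illustration of that audio-triggers-video idea, the sketch below uses the open-source mido MIDI library to fire a note whenever a sample loads. The port name, filenames and note mapping are all invented; this stands in for, rather than reproduces, the duo’s modified Traktor and Arena setup:

```python
# pip install mido python-rtmidi
import mido

# Invented mapping: each audio sample is paired with a MIDI note
# that the video laptop maps to the corresponding clip.
SAMPLE_TO_NOTE = {
    "dombyra_riff.wav": 60,
    "koto_phrase.wav": 61,
}

port = mido.open_output("Video Laptop")  # assumed networked MIDI port

def load_sample(filename):
    """Loading an audio sample fires the matching note, so the video
    machine launches the right clip in sync with the audio."""
    port.send(mido.Message('note_on',
                           note=SAMPLE_TO_NOTE[filename],
                           velocity=127))
    # ...audio playback would begin here...

load_sample("dombyra_riff.wav")
```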

This enables the audience to see where the samples come from and the artists behind them.

“Audiences can expect a musical journey without borders,” says Daniels. “One of the key components of the project is demonstrating how technology can be used to bring people together in new, artistic ways.”

The sound of tomorrow

Looking to the future, the Orchestra of Samples project will continue to grow as Addictive TV’s reputation spreads, and new technology will enhance the experience.

“We’re looking to use something called Stems that Native Instruments do,” says Daniels. “You can perform live with individual parts of different tracks. You could solo the trumpet, drums, or singer on a track, for example, effectively doing a live mix of the elements within a track. But there currently isn’t a visual version of that. We’re going to see the software developers about that next month.”

“It’d be great if they could, because on a night where we have live trumpets, we can mute the trumpet and bring up the bouzouki,” says Vidler. “It means you can build unique versions of a track on every performance.”
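A Stems file bundles a track’s individual parts – typically drums, bass, melody and vocals – alongside the full mix, so soloing or muting a part becomes a per-stem gain change. A toy sketch of that mixdown logic, with placeholder audio and invented stem names:

```python
import numpy as np

# Placeholder mono stems at 44.1kHz; in practice these would be
# decoded from the separate stem tracks in the file.
stems = {
    "drums":   np.zeros(44100),
    "bass":    np.zeros(44100),
    "trumpet": np.zeros(44100),
    "vocals":  np.zeros(44100),
}

def mixdown(stems, gains):
    """Sum the stems with per-part gains; muting the trumpet and
    bringing up another part is just changing two numbers."""
    mix = sum(gains.get(name, 1.0) * audio for name, audio in stems.items())
    return np.clip(mix, -1.0, 1.0)  # keep the master bus from clipping

# A live trumpeter on stage tonight: silence the recorded trumpet
live_mix = mixdown(stems, {"trumpet": 0.0})
```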

Given the combination of musicians who’d never normally perform together, and their instruments which wouldn’t normally be heard together, it’s fair to say Orchestra of Samples promises a unique technology-driven audio-visual experience.

Soviet report detailing lunar rover Lunokhod-2 released for first time

Russian space agency Roskosmos has released a scientific report into the lunar rover Lunokhod-2 for the first time, revealing previously unknown details about the rover and how it was controlled back on Earth.

The report, written entirely in Russian, was originally penned in 1973 following the Lunokhod-2 mission, which began in January of the same year. It had remained accessible to only a handful of experts at the space agency prior to its release today, which marks the 45th anniversary of the mission.

Bearing the names of some 55 engineers and scientists, the report details the systems that were used both to remotely control the lunar rover from a base on Earth, and to capture images and data about the Moon’s surface and Lunokhod-2’s place on it. This information, and in particular the carefully documented issues and solutions that the report carries, went on to be used in many later unmanned missions to other parts of the solar system.

As a result, it provides a unique insight into this era of space exploration and the technical challenges that scientists faced, such as the low-frame-rate television system that functioned as the ‘eyes’ of the Earth-based rover operators.

A NASA depiction of the Lunokhod mission. Above: an image of the rover, courtesy of NASA, overlaid onto a panorama of the Moon taken by Lunokhod-2, courtesy of Ruslan Kasmin.

One detail that may be of particular interest to space enthusiasts and experts is the operation of a unique system called Seismas, which was tested for the first time in the world during the mission.

Designed to determine the precise location of the rover at any given time, the system involved transmitting laser pulses from ground-based telescopes, which were received by a photodetector onboard the lunar rover. When the laser was detected, this triggered the emission of a radio signal back to Earth, which allowed the rover’s coordinates to be determined.
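The report’s figures aren’t reproduced in this article, but the ranging side of such a system comes down to round-trip timing: the laser pulse up and the radio reply down both travel at the speed of light. A back-of-envelope sketch, where the onboard delay would come from calibration and a real position fix would combine several such measurements:

```python
C = 299_792_458.0  # speed of light, m/s

def one_way_distance(round_trip_s, onboard_delay_s=0.0):
    """Half the corrected round-trip time, times c: the laser pulse up
    and the radio reply down both travel at light speed."""
    return C * (round_trip_s - onboard_delay_s) / 2

# An Earth-Moon round trip takes roughly 2.56 seconds
print(one_way_distance(2.56) / 1000)  # ~384,000 km
```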

Other details, while technical, also give some insight into the culture of the mission, such as the careful work to eliminate issues in the long-range radio communication system. One issue, for example, was worked on so thoroughly that one of the devices ended up using more resources than it had been allocated, a problem that was duly outlined in the report.

The document also provides insight into on-Earth technological capabilities of the time. While it is mostly typed, certain mathematical symbols have had to be written in by hand, and the report also features a number of diagrams and graphs that have been painstakingly hand-drawn.

A hand-drawn graph from the report, showing temperature changes during one of the monitoring sessions during the mission

Lunokhod-2 was the second of two unmanned lunar rovers to be landed on the Moon by the Soviet Union within the Lunokhod programme, having been delivered via a soft landing by the unmanned Luna 21 spacecraft in January 1973.

In operation between January and June of that year, the robot covered a distance of 39km, meaning it still holds the lunar distance record to this day.

One of only four rovers to be deployed on the lunar surface, Lunokhod-2 was the last rover to visit the Moon until December 2013, when Chinese lunar rover Yutu made its maiden visit.

Robot takes first steps towards building artificial lifeforms

A robot equipped with sophisticated AI has successfully simulated the creation of artificial lifeforms, in a key first step towards the eventual goal of creating true artificial life.

The robot, which was developed by scientists at the University of Glasgow, was able to model the creation of artificial lifeforms using unstable oil-in-water droplets. These droplets effectively played the role of living cells, demonstrating the potential of future research to develop living cells based on building blocks that cannot be found in nature.

Significantly, the robot also successfully predicted their properties before they were created, even though this could not be achieved using conventional physical models.

The robot, which was designed by Glasgow University’s Regius Chair of Chemistry, Professor Lee Cronin, is driven by machine learning and the principles of evolution.

It has been developed to autonomously create oil-in-water droplets with a host of different chemical makeups and then use image recognition to assess their behaviour.

Using this information, the robot was able to engineer droplets to have different properties. Those which were found to be desirable could then be recreated at any time, using a specific digital code.
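The paper’s actual method isn’t reproduced here, but the loop the article describes is a classic evolutionary algorithm: propose recipes, score the resulting droplets’ behaviour, keep the best and vary them. A generic sketch, with an invented four-ingredient recipe and a stand-in fitness function in place of the real image-recognition scoring:

```python
import random

def random_recipe():
    """A recipe is a set of ingredient ratios summing to 1."""
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    return [w / total for w in weights]

def fitness(recipe):
    """Stand-in for the real system, where image recognition scores
    each droplet's observed behaviour."""
    return -sum((r - 0.25) ** 2 for r in recipe)  # toy objective

def mutate(recipe, amount=0.05):
    """Small random perturbation, re-normalised to keep ratios valid."""
    child = [max(0.0, r + random.uniform(-amount, amount)) for r in recipe]
    total = sum(child)
    return [c / total for c in child]

population = [random_recipe() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]   # variation

best = max(population, key=fitness)  # a reproducible "digital code"
print([round(r, 3) for r in best])
```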

“This work is exciting as it shows that we are able to use machine learning and a novel robotic platform to understand the system in ways that cannot be done using conventional laboratory methods, including the discovery of ‘swarm’ like group behaviour of the droplets, akin to flocking birds,” said Cronin.

“Achieving lifelike behaviours such as this is important in our mission to make new lifeforms, and these droplets may be considered ‘protocells’ – simplified models of living cells.”

One of the oil droplets created by the robot

The research, which is published today in the journal PNAS, is one of several research projects being undertaken by Cronin and his team within the field of artificial lifeforms.

While the overarching goal is moving towards the creation of lifeforms using new and unprecedented building blocks, the research may also have more immediate potential applications.

The team believes that their work could also have applications in several practical areas, including the development of new methods for drug delivery or even innovative materials with functional properties.