IBM and MIT plot 10-year, $240 million partnership to advance artificial intelligence

Two pioneers of artificial intelligence research, IBM and MIT, announced today that they will combine forces to create the MIT–IBM Watson AI Lab.

IBM plans to make a 10-year, $240 million investment in the new lab, which will aim to advance AI hardware, software, and algorithms related to deep learning and other areas; increase AI’s impact on industries such as health care and cybersecurity; and explore the economic and ethical implications of AI for society.

“I am delighted by this new collaboration,” said MIT President L. Rafael Reif. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”

MIT President L. Rafael Reif, left, and John Kelly III, IBM senior vice president, Cognitive Solutions and Research, shake hands at the conclusion of a signing ceremony establishing the new MIT–IBM Watson AI Lab. Image courtesy of Jake Belcher

The new lab will draw on more than 100 AI scientists, professors, and students pursuing joint research at IBM’s Research Lab in Cambridge, Massachusetts, and on the neighbouring MIT campus.

In addition to research, a distinct objective of the new lab will be to encourage MIT faculty and students to launch companies that will focus on commercialising AI inventions and technologies that are developed at the lab.

The lab’s scientists will nonetheless publish their work, contribute to the release of open-source material, and foster adherence to the ethical application of AI.

“The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” said John Kelly III, IBM senior vice president, Cognitive Solutions and Research.

“The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade.”

This latest collaboration between IBM and MIT builds on a decades-long research relationship between the two.

Just last year, IBM Research announced a multiyear collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence.

That partnership brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision.

In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and genomics.

The MIT–IBM Watson AI Lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. IBM and MIT plan to issue a call for proposals inviting MIT researchers and IBM scientists to submit ideas for joint research that pushes the boundaries of AI science and technology.

Automating composition: AI composer generates original melodies by ‘reading’ sheet music

In a world first, a deep-learning algorithm has been developed that generates original melodies in a given musical style, without any knowledge of music theory.

Developed by scientists at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, the algorithm, dubbed the ‘Deep Artificial Composer’ or DAC, “trains” on sheet music of a given musical style before producing an original score of its own in the same style.

As a result, it does not produce audio files, but instead provides finished sheet music that humans can perform. At present DAC has been trained on Irish and Klezmer folk music, but it is designed to work with, and mimic, any musical style given to it.

“The deep artificial composer can produce complete melodies, with a beginning and an end, that are completely novel and that share features that we relate to style,” said EPFL scientist Florian Colombo, who developed DAC under the supervision of Wulfram Gerstner, director of the Computational Neuroscience Laboratory.

“To my knowledge, this is the first time that an artificial neural network model has produced entire and convincing melodies.”

Other AI composers have been developed before, but DAC is unique in that it does not encode or apply music theory. Instead it uses neural networks to learn probability distributions from existing melodies – that is, the likelihood of particular pitches and note lengths occurring after each successive note.
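The idea can be illustrated with a much simpler stand-in. The minimal sketch below estimates next-note distributions from transition counts rather than from a neural network, which is what DAC actually uses; the note names and melodies are invented for illustration:

```python
from collections import Counter, defaultdict

# Each melody is a list of (pitch, duration) events; "END" marks the close.
melodies = [
    [("D4", 0.5), ("E4", 0.5), ("F#4", 1.0), "END"],
    [("D4", 0.5), ("F#4", 0.5), ("A4", 1.0), "END"],
]

# Count how often each event follows each other event across the corpus.
transitions = defaultdict(Counter)
for melody in melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current][nxt] += 1

# Normalise the counts into a conditional probability distribution.
def next_event_distribution(current):
    counts = transitions[current]
    total = sum(counts.values())
    return {event: n / total for event, n in counts.items()}

print(next_event_distribution(("D4", 0.5)))
# -> {('E4', 0.5): 0.5, ('F#4', 0.5): 0.5}
```

A count-based model like this only looks one note back; DAC’s neural networks condition on the melody so far, which is what lets it keep a coherent style across a whole piece.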

DAC learns how music transitions from one note to the next – the probability distribution over both the pitch and the duration of the next note. It then works through multiple scores, correcting its predictions and building up a model of that musical genre. Once it correctly predicts 50% of successive pitches and 80% of successive note durations, it is considered ‘trained’ and ready to compose its own music.
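That stopping rule is easy to express in code. In the rough sketch below the predictor and the held-out melody are hypothetical placeholders (DAC’s actual predictor is a neural network); only the 50%/80% thresholds come from the article:

```python
# Toy stand-in for a trained predictor: maps a (pitch, duration) event
# to its most probable successor. DAC would use a neural network here.
most_likely_next = {
    ("D4", 0.5): ("E4", 0.5),
    ("E4", 0.5): ("F#4", 1.0),
}

# A held-out melody to score the predictor against (invented for illustration).
validation = [[("D4", 0.5), ("E4", 0.5), ("F#4", 1.0)]]

pitch_hits = duration_hits = total = 0
for melody in validation:
    for current, (true_pitch, true_duration) in zip(melody, melody[1:]):
        pred_pitch, pred_duration = most_likely_next.get(current, (None, None))
        pitch_hits += pred_pitch == true_pitch
        duration_hits += pred_duration == true_duration
        total += 1

# The article's criterion: 50% of pitches and 80% of durations correct.
trained = pitch_hits / total >= 0.5 and duration_hits / total >= 0.8
print(pitch_hits / total, duration_hits / total, trained)
```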

Working note by note, DAC builds up an entire melody that is completely original but in the style of the music it has trained on. The result is melodies that sound as though they were written by a human, yet are entirely produced by artificial intelligence.
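Generation, in other words, is repeated sampling from the learned distributions until the model signals the end of the piece. A minimal sketch, with hard-coded probabilities standing in for DAC’s trained network and invented note names:

```python
import random

# Learned next-event distributions, keyed by the current event. In DAC
# these probabilities come from the trained network; here they are
# hard-coded placeholders.
distributions = {
    "START":      {("D4", 0.5): 1.0},
    ("D4", 0.5):  {("E4", 0.5): 0.5, ("F#4", 0.5): 0.5},
    ("E4", 0.5):  {("F#4", 1.0): 1.0},
    ("F#4", 0.5): {("A4", 1.0): 1.0},
    ("F#4", 1.0): {"END": 1.0},
    ("A4", 1.0):  {"END": 1.0},
}

# Sample one event at a time until the model emits END, so every
# generated melody has an explicit beginning and end.
melody, current = [], "START"
while True:
    events, weights = zip(*distributions[current].items())
    current = random.choices(events, weights=weights)[0]
    if current == "END":
        break
    melody.append(current)

print(melody)  # e.g. [('D4', 0.5), ('F#4', 0.5), ('A4', 1.0)]
```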

Colombo playing a composition created by DAC on the cello. Images and music courtesy of EPFL

At present DAC can only produce melodies for one instrument or voice at a time, but Colombo hopes in the future to extend it to compose scores for entire orchestras in real time.

This would realise an idea first proposed by the mathematician Ada Lovelace in the 19th century, but it is also likely to cause unease among human composers, who join a growing list of professions that look increasingly ready to be replaced by machines.

For others, though, it could be hugely beneficial, bringing original music within reach of, for example, indie game developers and filmmakers.

The research was presented today at the Evostar conference in Amsterdam, the Netherlands.