Google’s Brain2Music AI interprets brain signals to reproduce the music you liked

Brain2Music was trained on fMRI data recorded while people listened to various genres like hip-hop, jazz, rock and classical.

Data gathered by Brain2Music was fed to Google's MusicLM AI. (Image Source: Pixabay)

Google is working on a new AI called ‘Brain2Music’ that uses brain imaging data to generate music. Researchers say the AI model can generate music that closely resembles parts of songs a person was listening to when their brain was scanned.

According to a research paper by Google and Osaka University recently published on arXiv, the AI model can reconstruct music by analysing a listener's brain activity captured with functional magnetic resonance imaging (fMRI).

For the uninitiated, fMRI works by tracking the flow of oxygen-rich blood in the brain to see which regions are most active.

During the research, scientists examined fMRI data from five people who listened to the same 15-second-long music clips from various genres like classical, blues, disco, hip-hop, jazz, metal, pop, reggae and rock.

The brain activity data was then used to train a deep neural network to determine the relationship between brain activity patterns and musical elements such as rhythm and emotion. The researchers then labelled the mood of each clip using categories like tender, sad, exciting, angry, scary and happy.
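
The paper itself does not include code, but the core of this step, learning, per listener, a mapping from brain responses to a numeric description of the music, can be sketched with a simple regularised regression. Everything in the snippet below (array names, dimensions, the choice of ridge regression) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of the decoding step: map fMRI responses to music features.
# All names, shapes and the choice of ridge regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated stand-in data: one row per 15-second music clip heard by one subject.
n_clips, n_voxels, n_music_dims = 480, 2000, 128
fmri_responses = rng.standard_normal((n_clips, n_voxels))      # brain activity per clip
music_features = rng.standard_normal((n_clips, n_music_dims))  # features of the clip that was heard

X_train, X_test, y_train, y_test = train_test_split(
    fmri_responses, music_features, test_size=0.2, random_state=0
)

# Fit one regularised linear decoder for this subject: voxel responses -> music-feature vector.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)

predicted_features = decoder.predict(X_test)  # estimated music features for held-out clips
print(predicted_features.shape)               # (96, 128)
```

Because such a decoder is fitted on one subject's scans, its weights describe that brain only, which is why the study built a customised model for each participant.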

Brain2Music was customised for each person in the study and was able to successfully convert brain data into music resembling the original song clips. The decoded data was then fed to Google's MusicLM AI model, which can generate music from text descriptions.
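
Put together, the reported pipeline has two stages: decode a music representation from an fMRI scan, then hand that representation to a generative model. The outline below is purely hypothetical; MusicLM has no public API of this kind, so the generation step is shown as a labelled placeholder.

```python
# Hypothetical outline of the two-stage pipeline; not Google's code.
# `decoder` is assumed to be a fitted fMRI-to-music-features model (see the earlier sketch).
import numpy as np

def decode_music_features(decoder, fmri_scan: np.ndarray) -> np.ndarray:
    """Map a single fMRI response vector to an estimated music-feature vector."""
    return decoder.predict(fmri_scan.reshape(1, -1))[0]

def generate_music(music_features: np.ndarray) -> bytes:
    """Placeholder for conditioning a music generator such as MusicLM.
    MusicLM is not publicly callable like this, so this stands in for that step."""
    raise NotImplementedError("stand-in for the generative model")

# Usage (illustrative only):
# scan = ...                                    # fMRI data for a clip the decoder never saw
# features = decode_music_features(decoder, scan)
# audio = generate_music(features)              # would yield a clip resembling the original
```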

During the research, it was observed that 'when a human and MusicLM are exposed to the same music, the internal representations of MusicLM are correlated with brain activity in certain regions'. Yu Takagi, professor of computational neuroscience and AI at Osaka University and a co-author of the paper, told LiveScience that the main aim of the project was to understand how the brain processes music.

However, the research paper suggests that because every person's brain is wired differently, a model created for one individual cannot be applied to another person.

Also, it is highly unlikely that this type of technology will become practical in the near future, since recording fMRI signals requires a person to spend hours inside an fMRI scanner. But future studies might explore whether AI can reconstruct music that people merely imagine in their heads.

First uploaded on: 08-08-2023 at 16:37 IST