Google is working on a new AI called ‘Brain2Music’ that uses brain imaging data to generate music. Researchers say the AI model can generate music that closely resembles parts of songs a person was listening to when their brain was scanned.
According to a recently published research paper by Google and Osaka University on arXiv, the AI model can reconstruct music by tracking your brain activity using functional magnetic resonance imaging (fMRI) data.
For the uninitiated, fMRI works by tracking the flow of oxygen-rich blood to the brain and seeing which parts of the brain are most active.
During the research, scientists examined fMRI data from five people who listened to the same 15-second-long music clips from various genres like classical, blues, disco, hip-hop, jazz, metal, pop, reggae and rock.
The brain activity was then used to train a deep neural network to determine the relationship between brain activity patterns and music elements like emotion and rhythm. Researchers then labelled the mood in various categories like tender, sad, exciting, angry, scary and happy.
Brain2Music was customised for every person in the study and was able to convert their brain data into music resembling the original song clips. The decoded data was fed to Google’s MusicLM AI model, which can generate music from text descriptions.
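The decoding step described above can be illustrated with a toy sketch. This is not the paper’s actual code: it simply shows the general idea of a linear decoder, commonly used in fMRI studies, that maps brain responses to music-embedding vectors on synthetic data. All array sizes and the ridge penalty here are illustrative assumptions.

```python
import numpy as np

# Toy illustration (synthetic data, not the Brain2Music pipeline):
# learn a linear map from fMRI voxel responses to music embeddings.
rng = np.random.default_rng(0)

n_clips, n_voxels, emb_dim = 40, 200, 16     # illustrative sizes
true_W = rng.normal(size=(n_voxels, emb_dim))

X = rng.normal(size=(n_clips, n_voxels))     # fMRI response per music clip
Y = X @ true_W + 0.1 * rng.normal(size=(n_clips, emb_dim))  # embeddings

# Ridge regression: W = (X^T X + alpha I)^-1 X^T Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)

pred = X @ W                                 # decoded music embeddings
corr = np.corrcoef(pred.ravel(), Y.ravel())[0, 1]
print(pred.shape, round(corr, 2))
```

In the actual study, vectors like `pred` would stand in for the music features handed to a generative model such as MusicLM, which then synthesises audio from them.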
During the research, it was observed that ‘when a human and MusicLM are exposed to the same music, the internal representations of MusicLM are correlated with brain activity in certain regions’. Yu Takagi, professor of computational neuroscience and AI at Osaka University and co-author of the paper, told LiveScience that the main aim of the project was to understand how the brain processes music.
However, the research paper suggests that because every person’s brain is differently wired, it won’t be possible to apply a model created for an individual to another person.
Also, this type of technology is unlikely to become practical in the near future, since recording fMRI signals requires users to spend hours inside an fMRI scanner. But future studies might explore whether AI can reconstruct music people imagine in their heads.