Published on February 11, 2021
AHMEDABAD: American poet Henry Wadsworth Longfellow famously said, ‘Music is the universal language of mankind’. Local research has found, however, that every person’s brain listens to this language differently.
A study, ‘Guess the Music: Song Identification from EEG Responses’, by a team from IIT Gandhinagar and Delft University, Netherlands, highlighted that the brains of different participants stored sample songs in different manners – so much so that researchers could identify songs from just a one-second sample with 85% accuracy.
“We chose 12 tracks from different genres – from EDM (Concept 15 by Kodomo) to Indian classical (Albela Sajan by Shankar Mahadevan), and rock (Red Suit by DJ David G) to New Age (Aurore by Claire David),” said Krishna Prasad Miyapuram, associate professor of computer science & engineering and coordinator of the Centre for Cognitive and Brain Sciences at IIT-Gn.
Music of the mind: Study explores how brain reacts to food of love
What if two people, one trained in Indian classical music and the other with no knowledge of the discipline, listen to a track by a maestro? What differs at the neurological level for the two?
Experts can now answer with some confidence that the ‘song picture’ — a term used by a group of researchers at IIT Gandhinagar and Delft University in the Netherlands — of the same piece is different for these two individuals!
Recently, a paper titled ‘Guess the Music: Song Identification from EEG Responses’ was published. The authors are Dhananjay Sonawane and Bharatesh Rayappa, IIT-Gn students; Krishna Prasad Miyapuram, associate professor of computer science & engineering and coordinator of the Centre for Cognitive and Brain Sciences at IIT-Gn; and Derek J Lomas of Delft University. They released the results of EEG scans of 20 individuals who listened to 12 tracks of different genres.
“The responses of the participants as they listened to different tracks were recorded through EEG (electroencephalography) using a 128-channel electrode cap,” said Prof Miyapuram.
The songs were the same, but ‘Albela Sajan’, for example, created a different ‘song picture’ for different listeners. Once the EEG scans for all the songs were obtained, the researchers trained an artificial intelligence algorithm on one-second clips of the participants’ EEG responses to see whether the model could reliably identify which song had been played.
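The pipeline described above — slicing multi-channel EEG into one-second windows and training a classifier to guess the song — can be sketched as follows. This is a simplified illustration, not the authors’ code: the data here is synthetic, and the sampling rate, the spectral features, and the logistic-regression model are all assumptions (only the 128 channels and 12 songs come from the article).

```python
# Hypothetical sketch: identify a song from one-second EEG clips.
# Synthetic data; 128 channels and 12 songs match the article, while
# the 125 Hz sampling rate, features, and classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_CHANNELS, SFREQ, N_SONGS = 128, 125, 12

def one_second_windows(recording, sfreq=SFREQ):
    """Split a (channels, samples) recording into one-second clips."""
    n = recording.shape[1] // sfreq
    return np.stack([recording[:, i * sfreq:(i + 1) * sfreq] for i in range(n)])

X, y = [], []
for song in range(N_SONGS):
    # Fake EEG: each "song" induces a distinct rhythmic signature,
    # buried in per-channel noise (30 s of recording per song).
    t = np.arange(30 * SFREQ) / SFREQ
    signal = np.sin(2 * np.pi * (4 + song) * t)
    eeg = signal + 0.5 * rng.standard_normal((N_CHANNELS, t.size))
    for clip in one_second_windows(eeg):
        # Crude spectral features: FFT magnitudes (1-19 Hz bins),
        # averaged across channels.
        feats = np.abs(np.fft.rfft(clip, axis=1))[:, 1:20].mean(axis=0)
        X.append(feats)
        y.append(song)

X, y = np.array(X), np.array(y)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"clip-level accuracy: {clf.score(Xte, yte):.2f}")
```

On this toy data the classifier separates the songs easily; the study’s harder task was doing the same from real brain responses, and across participants whose ‘song pictures’ differ.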
“The identification, even with that limited input, was surprisingly accurate,” said Prof Miyapuram. “In general parlance, we can say that each participant’s brain — based on history, taste, and interest — encoded and stored the music differently.”
Why the difference? The paper says: “The possible reason could be people focus on a different tone, vocals during music entertainment, thereby reducing performance for cross-participant song identification tasks.”
The researchers said this is why those with a particular interest in music perceive it differently.
“Our hypothesis is that deep learning neural networks will be able to pick up on the common frequency patterns in the brain and in music,” said Lomas. “However, machine learning algorithms seemed to identify other brain-based factors such as emotional associations or higher-order musical features.”
What are the implications of the study? The researchers said that, at its most basic, the study gives insight into a person’s tastes and preferences in an area like music, and the approach could be extended to other stimuli such as pictures or food. The deep-learning model could also be applied in fields such as brain-computer interfaces, music therapy, or the rehabilitation of persons with movement-related disorders.