Researchers Use Unsupervised Machine Learning To Understand And Visualize The Evolution In Classical Music

Many people may not be able to define what a minor mode in music is. Still, they can intuitively distinguish notes belonging to the minor scale (which tend to sound dark, tense, or sad) from those in the major scale (which more often sound light and happy). Historically, however, this distinction was not always so clear-cut: many other modes were used alongside major and minor.

Researchers at EPFL’s Digital and Cognitive Musicology Lab (DCML) have conducted a study to understand and visualize these differences over time. Daniel Harasim, Martin Rohrmeier, Fabian Moss, and Matthias Ramirez developed an unsupervised machine learning model to analyze more than 13,000 Western classical music pieces from the 15th to the 19th centuries. The study reveals how modes such as minor and major have changed throughout history.

The team employed mathematical modeling to infer the number and characteristics of modes across five historical periods of Western classical music. The results yielded novel data visualizations showing how Renaissance composers (like Giovanni Pierluigi da Palestrina) used four modes, while Baroque composers (like Johann Sebastian Bach) used the major and minor modes. They found no clear separation into modes in the complex music of Late Romantic composers, like Franz Liszt.

This is the first time that unlabeled data have been used to analyze modes: the pieces in the dataset had not been previously categorized into modes by a human. The team says they wanted to see what would happen when the computer analyzed the data without introducing human bias. Therefore, they applied unsupervised machine learning methods that allow the computer to ‘listen’ to the music and figure out the modes on its own, without metadata labels.

Although the unsupervised approach is more complicated to execute, it has yielded fascinating results that are more cognitively plausible, reflecting how humans hear and interpret music. The team explains that musical structure can be very complex and that mastering it takes musicians years of training. Yet humans learn these structures unconsciously, just as a child learns a native language. The researchers therefore built a simple model that reverse engineers this learning process using a Bayesian model class.
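To give a rough sense of what clustering pieces into modes without labels can look like, here is a minimal illustrative sketch, not the authors' implementation: each piece is summarized as a 12-bin pitch-class histogram, synthetic pieces are sampled from two hypothetical template profiles, and a plain k-means pass (standing in for the study's Bayesian mixture model) recovers the two groups. The template weights and all names below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical template profiles over the 12 chromatic pitch classes.
# Illustrative weights only -- not empirical major/minor profiles.
major = np.array([5, 1, 3, 1, 4, 3, 1, 4, 1, 3, 1, 2], dtype=float)
minor = np.array([5, 1, 3, 4, 1, 3, 1, 4, 3, 1, 2, 1], dtype=float)
templates = [major / major.sum(), minor / minor.sum()]

def sample_piece(profile, n_notes=400):
    """Simulate a piece as a normalized histogram of notes drawn from a profile."""
    counts = rng.multinomial(n_notes, profile)
    return counts / counts.sum()

pieces = np.array([sample_piece(templates[i % 2]) for i in range(60)])
true_labels = np.array([i % 2 for i in range(60)])

def kmeans(X, k=2, iters=50):
    """Plain k-means clustering (a simple stand-in for a Bayesian mixture)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels

labels = kmeans(pieces)
```

Because the two template profiles differ mainly in a few pitch classes (e.g. the third scale degree), pieces drawn from the same template end up in the same cluster, which is the intuition behind inferring modes from unlabeled pitch-class data.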

Harasim and his co-authors carried out this study as a class project as students in EPFL professor Robert West’s course, Applied Data Analysis. The team hopes to take the project further by applying their approach to other musical subjects and genres. Harasim states that for pieces in which the mode changes, it would be interesting to identify precisely where such changes occur. He would also like to apply the same methodology to jazz, whose tonality is much richer than just two modes.
