
The Biology of Music
Is there a common ground between Music AI and Biological AI research?

Biological research and music have gone hand in hand for years, as research explores the effect of music and sound waves on our bodies and minds, down to the DNA level. Aiode's Music AI technology and tools are architected around building a new LMM (Large Music Model), similar in its composition to current DNA research.
By Dr. Teddy Lazebnik, Aiode Advisory Board

July 17, 2024

Music, an art form deeply rooted in human culture, has often been seen as a distinctly human endeavor. Biology, on the other hand, examines the complexities of living organisms and their interactions with the environment. At first glance, these two fields may seem worlds apart. After all, the latter is naturally occurring, while the former is man-made.
One can, of course, find spontaneous examples of music in nature, such as birdsong and whale song. These serve various purposes, including attracting mates, defending territories, and communicating. Hence, naturally occurring music has a lot in common with human-made music from social and cultural perspectives. This intriguing similarity raises a quirky question: is there a more fundamental connection between music and biology?

Many studies have tried to answer this question, coming up with a wide range of interesting answers [1].
However, for the scope of this article, we would like to focus on the fundamental mathematical connection between music and biology.
Indeed, when viewed through the lens of artificial intelligence (AI), fascinating connections and intersections emerge. This article explores how AI bridges the gap between biology and music.

The Intersection of Music and Biology

Let us start with an important note: music is more than just a series of pleasing sounds. It is a complex form of communication that resonates with our historical-evolutionary process. The simplest example, the octave (the interval between one musical pitch and another with double or half its frequency), results in a 2:1 ratio. That is, the sound waves align very well, creating a sense of natural resonance and consonance. Our ears and brains are wired to recognize and be pleased by these harmonious relationships. Taking this a step further, from a biological perspective, music also involves rhythm and timing, elements crucial for coordination and movement. These aspects of music can be linked to evolutionary biology, where rhythmic patterns and sounds have played roles in social bonding and communication within communities [2]. Understanding these connections helps AI engineers create more sophisticated algorithms that can compose, perform, and even appreciate music.
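To make the 2:1 intuition concrete, here is a minimal sketch in Python. It assumes standard 12-tone equal temperament, where a pitch n semitones above A4 (440 Hz) has frequency 440 · 2^(n/12); the function name is ours, not from any particular library.

```python
# A minimal sketch of the 2:1 octave relationship described above.
# Assumes 12-tone equal temperament: f(n) = 440 * 2**(n / 12),
# where n is the number of semitones away from A4 (440 Hz).

def note_frequency(semitones_from_a4: int) -> float:
    """Frequency in Hz of a pitch n semitones away from A4 (440 Hz)."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

a4 = note_frequency(0)    # 440.0 Hz
a5 = note_frequency(12)   # 880.0 Hz, one octave up
e5 = note_frequency(7)    # ~659.3 Hz, a perfect fifth up

print(f"Octave ratio: {a5 / a4:.3f}")         # 2.000 -> strong consonance
print(f"Perfect fifth ratio: {e5 / a4:.3f}")  # ~1.498, close to 3:2
```

The near-integer ratios (2:1, roughly 3:2) are exactly the wave alignments our ears register as consonant.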
Figure: A visualization of neural networks (nodes and connections) superimposed on a musical score, highlighting the role of AI in music creation.

Music AI: AI's Role in Understanding Music

AI has made significant progress in analyzing the biological underpinnings of music. Machine and deep learning algorithms can now process vast amounts of data to uncover how music is composed and what decisions musicians make, in particular the complex, interdependent decisions musicians make in response to what their bandmates are playing in order to create harmonious songs.

Technically, these tasks are mathematically defined as time-series or sequence analysis, which is also common in biology. For example, cancerous mutations in DNA can be viewed as wrong chords played in a musical piece.
Formally, DNA is commonly represented as a long sequence over a four-letter alphabet, and cancerous mutations alter that sequence to create undesired biological properties (for example, uncontrolled and rapid cell division). When a song is represented in an identical format, wrong chords cause undesired breaks in its harmony.
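The analogy can be made concrete in a few lines of Python. The toy sequences below are ours for illustration: both a DNA strand and a chord progression are sequences over a small alphabet, and a mutation (or a wrong chord) is simply a position where the observed sequence deviates from a reference.

```python
from typing import Sequence

def find_deviations(reference: Sequence, observed: Sequence) -> list:
    """Return (position, expected, found) for every mismatched symbol."""
    return [
        (i, ref, obs)
        for i, (ref, obs) in enumerate(zip(reference, observed))
        if ref != obs
    ]

# DNA: a four-letter alphabet; a point mutation at position 3.
print(find_deviations("ACGTACGT", "ACGAACGT"))  # [(3, 'T', 'A')]

# A chord progression: same machinery, different alphabet;
# a "wrong chord" at position 2 breaks the expected harmony.
reference_chords = ["C", "G", "Am", "F"]
observed_chords  = ["C", "G", "B",  "F"]
print(find_deviations(reference_chords, observed_chords))  # [(2, 'Am', 'B')]
```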

Taking it a step further, one can imagine a generative music AI model that takes a famous musician's recorded performance and transfers it to another musical style. Such an accomplishment is driven by the current state of the art in computer science: transformer models trained on vast amounts of labeled data.
Such capabilities are being developed by Aiode, an Israeli startup company focusing on AI for music.
With these capabilities, music producers can integrate virtual musicians into their songs, controlling their behavior with a simple control dashboard. To make sure the songs are unique for different virtual musicians, a distinct musical "DNA" can be extracted from a musician's previous recordings and used during the generation of new music.
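In pseudocode terms, the workflow might look like the following sketch. To be clear, `StyleEncoder`, `MusicGenerator`, and the file names are hypothetical placeholders invented for this post, not Aiode's actual API.

```python
# A hypothetical sketch of the "musical DNA" workflow described above.
# Class names, methods, and paths are illustrative placeholders only.

class StyleEncoder:
    """Extracts a fixed-length 'musical DNA' vector from prior recordings."""
    def encode(self, recordings: list[str]) -> list[float]:
        # A real system would run a trained model over the audio;
        # here we just return a dummy 128-dimensional embedding.
        return [0.0] * 128

class MusicGenerator:
    """Generates new material conditioned on a style embedding."""
    def generate(self, style: list[float], bars: int = 16) -> str:
        return f"<{bars} bars conditioned on a {len(style)}-dim style vector>"

# Extract a distinct "DNA" from a musician's back catalog, then reuse it
# so every generated part stays in that musician's voice.
style = StyleEncoder().encode(["take_01.wav", "take_02.wav"])
print(MusicGenerator().generate(style, bars=32))
```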

But why stop here? Today, machines do not understand emotions; however, thanks to the capabilities of Large Language Models (LLMs), this will probably not be the case in the (near) future.
Around the corner, music producers will be able to mix these two capabilities to "talk" with their virtual musicians and guide them with both technical (e.g., quicker, more bass) and emotional (e.g., "play it happier") guidelines.
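As a purely speculative sketch, such guidance could boil down to translating natural-language hints into changes on the control dashboard. The parameter names and the hand-written mapping below are invented for illustration; a real system would presumably use an LLM for the translation step.

```python
# A speculative sketch: map text instructions to control-dashboard changes.
# All parameter names and the mapping table are invented for illustration.

CONTROLS = {"tempo_bpm": 100, "bass_gain_db": 0.0, "valence": 0.5}

INSTRUCTION_MAP = {
    "quicker":         {"tempo_bpm": +20},      # technical guideline
    "more bass":       {"bass_gain_db": +3.0},  # technical guideline
    "play it happier": {"valence": +0.3},       # emotional guideline
}

def apply_instruction(text: str) -> dict:
    """Nudge the control dashboard according to a natural-language hint."""
    for phrase, deltas in INSTRUCTION_MAP.items():
        if phrase in text.lower():
            for key, delta in deltas.items():
                CONTROLS[key] += delta
    return CONTROLS

print(apply_instruction("Quicker, and play it happier"))
# {'tempo_bpm': 120, 'bass_gain_db': 0.0, 'valence': 0.8}
```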
Figure: A close-up of a musician playing an acoustic guitar on stage during a live performance, with the focus on the guitar and the musician's hands.

The Technicalities

Deep learning models have revolutionized the field of music in numerous ways, providing innovative tools for composition, performance, and analysis. At the core of these achievements are neural networks (NNs) in general, and recurrent neural networks (RNNs) in particular, which are adept at handling sequential data. For instance, RNNs, including their more sophisticated variants like Long Short-Term Memory (LSTM) networks, excel at processing sequences of musical notes, enabling them to generate melodies and harmonies that exhibit coherence over time.
The same models are also common in biology and medicine, in cases such as blood pressure prediction during surgical operations or multi-animal decision making; in a similar manner, interactions between social animals (for instance, apes) follow sequential patterns. Trained on extensive datasets of musical compositions, these models learn the intricate patterns and structures that characterize different genres and styles. This capability allows them to produce original music that can mimic the style of classical composers like Bach or contemporary artists, offering composers and musicians powerful tools for creative experimentation.
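For readers who prefer code, here is a minimal next-note LSTM in PyTorch. The vocabulary size, dimensions, and random batch are illustrative choices, but the setup (predict each note from the ones before it) is the sequence-modeling framing described above.

```python
# A minimal sketch of next-note prediction with an LSTM (PyTorch assumed).
# Each note is a token id (e.g., a MIDI pitch); the model learns to predict
# the next token from the sequence so far.

import torch
import torch.nn as nn

class MelodyLSTM(nn.Module):
    def __init__(self, vocab_size: int = 128, embed_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # one id per pitch
        self.lstm = nn.LSTM(embed_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)         # next-note logits

    def forward(self, notes: torch.Tensor) -> torch.Tensor:
        x = self.embed(notes)       # (batch, time, embed_dim)
        out, _ = self.lstm(x)       # (batch, time, hidden)
        return self.head(out)       # (batch, time, vocab_size)

model = MelodyLSTM()
batch = torch.randint(0, 128, (8, 32))  # 8 toy melodies, 32 notes each
logits = model(batch)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 128),    # predictions for steps 1..31
    batch[:, 1:].reshape(-1),           # targets shifted by one step
)
print(logits.shape, loss.item())
```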

At this point in the article, most technical readers expect to see a mention of the latest and greatest: the Transformer architecture.
Well, these readers are not wrong. Transformers, originally developed for natural language processing tasks, have emerged as powerful tools in the field of music. These models, such as Google's Music Transformer, leverage the self-attention mechanism to capture long-range dependencies within musical sequences.
It is worth mentioning that, unlike traditional RNNs that process data sequentially, transformers process the entire sequence simultaneously, allowing them to capture complex musical relationships over extended time periods. This capability is particularly beneficial for music, where patterns and motifs can span large segments of a composition. Just to emphasize the signal's length: at the standard audio sample rate of 44,100 samples per second, a 3-minute song yields a sequence of roughly 7.9 million values, a non-trivial input even for large-scale deep learning models.
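Two quick illustrations in plain NumPy: the sequence-length arithmetic, and a single self-attention head, the core mechanism of the transformer (toy sizes, no masking or multiple heads).

```python
import numpy as np

# 1) Raw-audio sequence length: at 44,100 samples per second,
#    a 3-minute song is already a multi-million-step sequence.
print(44_100 * 180)  # 7,938,000 samples

# 2) One self-attention head in a few lines: every position attends to every
#    other position at once, which is how transformers capture long-range
#    musical dependencies without stepping through time like an RNN.
def self_attention(x: np.ndarray, w_q, w_k, w_v) -> np.ndarray:
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                # mix values by attention

rng = np.random.default_rng(0)
seq = rng.normal(size=(16, 32))                       # 16 tokens, 32-dim each
w = [rng.normal(size=(32, 32)) * 0.1 for _ in range(3)]
print(self_attention(seq, *w).shape)                  # (16, 32)
```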

Conclusion: Bridging the Gap

The biology of music and the field of AI are more interconnected than one might initially think. Through AI, we can clearly see the fundamental mathematical similarity between biology and music, and even adopt methods from one field in the other. Furthermore, AI's ability to create and personalize music opens new avenues for a wide range of applications, making the fusion of these fields not only fascinating but also beneficial.

To this end, we are currently seeing the rise of AI in the musical field, with many promising startups and well-established companies creating sophisticated AI tools for musicians, technicians, and even the audience. One such company is Aiode, which develops virtually modeled musicians. If you are reading this post during 2024, I strongly suggest you take a look at their product to see how AI is revolutionizing the music industry.

References

[1] Fitch, W. T. (2006). The biology and evolution of music: A comparative perspective. Cognition, 100(1), 173-215.
[2] Tarr, B., Launay, J., & Dunbar, R. I. M. (2014). Music and social bonding: "self-other" merging and neurohormonal mechanisms. Frontiers in Psychology, 5, 1096.