Music: A Synthetic Snapshot

Thomas Graham
Metaphysic.ai
Feb 15, 2022 · 3 min read


In the 1960s, synthesizers started to change how people thought about the creation of music. Where musicians once relied on organic instruments by plucking a string or hitting a drum, synthesizers opened a new world of sounds that defined many of the pioneering artists of the 70s and 80s.

Analog synthesizers, such as the Moog 960, were first released in 1968.

Fast forward to 2022, and almost all modern pop and dance music is defined by digital synthesizers, drum machines, and complex digital production tools such as Ableton or Logic.

Now, AI is at the forefront of musical innovation in the form of generative music, where algorithms can generate entirely new synthetic musical compositions and mediate the creation of new sonic tools.

Old artists, new music

In 2021, an algorithm was trained on a broad range of classical music and was then tasked with completing Beethoven’s unfinished 10th symphony based on the machine’s understanding of classical composition and the maestro’s unique style.

This is the world’s first hyperreal symphony and an incredible example of the promise of generative music. Only a few months prior, Over the Bridge, a non-profit that supports people in the music industry struggling with mental health, released a series of synthesized songs in styles that mimicked Kurt Cobain, Amy Winehouse, Jimi Hendrix, and other members of the so-called “27 Club.” To make it happen, the organization used Magenta, an open-source project built on Google’s TensorFlow that can produce new songs in the style of any artist whose work is used as training data.
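To get an intuition for how a model can “learn a style” from training data and then generate something new in that style, consider this deliberately simplified sketch. It is not Magenta’s actual architecture (Magenta uses deep neural networks); it is a toy first-order Markov chain that learns note-to-note transitions from example melodies and samples new sequences that statistically resemble them. All names and the example melodies are illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy illustration only (NOT Magenta's approach): learn which note tends
# to follow which from training melodies, then sample a new sequence.
def train(melodies):
    """Build a table of observed note-to-note transitions."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current].append(following)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        options = transitions.get(notes[-1])
        if not options:  # dead end: no observed continuation
            break
        notes.append(rng.choice(options))
    return notes

# Hypothetical training data: two short motifs as MIDI note numbers.
melodies = [[60, 62, 64, 62, 60], [64, 62, 60, 62, 64]]
model = train(melodies)
print(generate(model, start=60, length=8, seed=1))
```

Real systems like Magenta replace the transition table with neural networks that capture long-range musical structure, but the core idea is the same: statistics extracted from training data steer the generation of new material.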

Credit: Over the Bridge

Synthetic media as a new kind of instrument

Yet generative music isn’t only about replicating musicians’ work; it also provides new forms of expression and ‘co-creation’ between humans and machines. In late 2021, the American musician Holly Herndon created Holly+, a tool that uses deep neural networks to transform uploaded music and other recordings into Herndon’s singing voice. Herndon refers to Holly+ as her ‘digital vocal twin’ and describes its pioneering conception as ‘the first tool of many to allow for others to make artwork with my voice’.

Holly Herndon’s Holly+ tool allows users to create music and sounds with her ‘digital vocal twin’. Credit: Andrés Mañón

As generative music improves and becomes more accessible, it will undoubtedly become a growing part of our everyday sonic landscape. While it may replace humans in some contexts, the technology is already opening a new creative frontier for musicians and artists to experiment with previously unheard ways of making music, just as analog synthesizers and digital software did before it.

Metaphysic builds software to help creators make incredible content with the help of artificial intelligence. Find out more: www.metaphysic.ai

For more info please contact info@metaphysic.ai or press@metaphysic.ai

About the author: in a galaxy far away, I was a lawyer turned internet & society researcher. In the 7 years before co-founding Metaphysic, I built tech companies in SF and London. I have always been obsessed with computational photography and computer vision, so it is a thrill to work alongside amazing people on the next evolution in how we build and perceive reality — one pixel at a time.

© Thomas Graham & Metaphysic Limited 2021.
