Are there algorithms that can write music?

The bio-based composers will survive

Digital algorithms compose without emotion and without consciousness. What is particularly unsettling is that the results can convince us humans.

by Herbert Köhler

Since the last CeBIT and this year's Hanover Fair at the latest, terms such as big data, artificial intelligence (AI), algorithms, deep learning and machine learning, the platform economy and the "Internet of Things" have entered the discourse of a broader public. The German federal government's research union is already speaking of Industry 4.0 in order to anticipate the future of a high-tech society.

If you want a realistic look into such a possible world, watch the films "Her" (2013) and "Ex Machina" (2014). Both sensitize the viewer to an initially cool, even cold and hybrid-seeming, yet plausible scenario of anthropocentric and technocentric collisions, in which the central question is: Can consciousness be simulated by a purely techno-digital system? And if so, are we not hindering the development of an extra-biological, far more efficient kind with such a requirement?

For the programming side of the IT industry, the lay scene's excitement about AI is just hype that will eventually settle back down to the level of reality. But the thrust is there in full force.

Everything will change, so the prognosis goes, and much is already irreversible or within reach. It is time not only to address the effects on society as a whole, but also to take a closer look at possible impending paradigm shifts within cultural subfields. Music is one of them. Above all the composers, who usually bring music into the world.

However, no composer should feel threatened or superfluous in the face of the algorithmic relentlessness presumed to be within reach. Not every future brings strong competition. But the self-image of composing will soon have to be thoroughly reorganized. It could become a trap to view artificial intelligence exclusively as a simulation of anthropological specifications, that is, merely as support for increasing efficiency. It is still difficult for us to grasp that there could be a non-biological simulation of human intelligence at all, especially since the anthropological concept of intelligence itself is still quite frayed.

Urgent questions for the composers of this world could be: Does music generated independently by depersonalized systems reach people's emotional world to the same extent as music by conventional composers? Does the music not lose its emotional and ideal significance accordingly? Or does it wear down immediately into pure functional music?

This already puts us in the middle of the discourse on the concept of creativity. After all, creative activity has so far been an undisputed human unique selling point. And not every vision of the future has to be a digitally generated algorithm from now on. So why not be more optimistic? Perhaps the new possibilities are even prostheses for optimizing one's own creativity. Support, in other words! Wasn't that also the case with the advent of electronic media in music decades ago? Pierre Schaeffer, Karlheinz Stockhausen, Wendy Carlos, Isao Tomita, for example, just to recall the early celebrities.

Here, too, creative activity still meant turning an abstract idea into a concrete product, that is, creating something out of nothing. Until recently, the capacity for creativity was considered a human domain, not transferable to computer systems.

In the digitally generated world of algorithms, the "nothing" of immateriality and ideality assumed by humans is replaced by algorithmic access to big data. Human inspiration and ingenuity leave the cognitive system and become stochastics, pattern recognition and consequent output in computer science. The self-learning programs of AI seem to do this convincingly. And so the future could be about the separation of two categorical forms of reality: the biological-analog and the inorganic-digital.

Artificial intelligence demands that we deal with precisely this system change. What consequences does it have for conventional composing, for listening habits and media consumption, if musical competence is delegated wholly or in part to an inorganic digital algorithm? A look back helps.

The American David Cope wrote a computer program for generating music back in the 1980s, which he called "Experiments in Musical Intelligence", EMI for short. It could analyze compositional patterns in Johann Sebastian Bach's music and then use an algorithm to compose "new" pieces in the composer's style.

An artificial intelligence was thus able to produce music without a brain or interest, without human talent and without any consciousness. In test performances its output was even identified as particularly typical of Bach. What was new: David Cope's program did not imitate music, it simulated it, in a completely alien way, using algorithms.
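Cope's actual system is far more elaborate, but the basic idea of extracting local patterns from a corpus and recombining them into "new" music in the same style can be illustrated, purely as a simplified sketch and not as EMI's method, with a first-order Markov chain over pitches:

```python
import random

def build_model(corpus, order=1):
    """Count which note follows which in a training corpus."""
    model = {}
    for melody in corpus:
        for i in range(len(melody) - order):
            state = tuple(melody[i:i + order])
            model.setdefault(state, []).append(melody[i + order])
    return model

def generate(model, start, length, order=1, seed=0):
    """Compose a 'new' melody by sampling the learned transitions."""
    rng = random.Random(seed)
    melody = list(start)
    while len(melody) < length:
        state = tuple(melody[-order:])
        choices = model.get(state)
        if not choices:
            break  # dead end: no transition learned from this state
        melody.append(rng.choice(choices))
    return melody

# Toy corpus: two short scale-like phrases as MIDI pitch numbers
# (invented for illustration, not actual Bach data).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 65, 67],
]
model = build_model(corpus)
print(generate(model, [60], 8, seed=1))
```

The output recombines transitions that occur in the corpus, so it stays "in style" without copying either phrase verbatim; that recombination of analyzed patterns is the core of what makes such output sound plausible.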

How could an artificial entity, a digital stand-in for a specific person, be so musically convincing? Here an abstraction of algorithms had written music devoid of any sense perception, intuition or emotion, and without any understanding of the world. And not only that: this music evoked in the listener exactly what the algorithm completely lacked: empathy and feeling, even identification.

Here the categorical system change from biochemical to digital algorithms becomes comprehensible.

David Cope's pioneering work already raised the explosive questions of extra-human composing: Can a soulless something create feelings without having feelings, and why should it? Or more concretely: Can the unique human ability to invent sound relationships and shape them into music be taken over by computers with the same persuasive power?

For now, humans and computers still coexist symbiotically, in a kind of waiting position. Facing each other are the bio-neural intelligence of the human brain and an artificial intelligence whose mode of operation is entirely alien to it.

Research into this artificial intelligence is concerned with simulating, and thus replacing, human intelligence with machines, using suitable software and hardware. At first this is quite simply a kind of copying into a more efficient medium. Only then does machine learning begin: processes that enable machines to generate knowledge independently from experience. The technical term for the deepest form of this is deep learning.
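What "generating knowledge from experience" means can be shown in miniature, as a toy sketch rather than a deep network: a single parameter is adjusted step by step from example data by gradient descent, the same principle that deep learning applies to millions of parameters stacked in layers.

```python
# Toy "learning from experience": fit y = w * x to example data.
# The data here is invented for illustration (examples of y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate: how big each correction step is
for _ in range(200):
    # Average gradient of the squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # adjust the weight against the error

print(round(w, 3))  # converges toward 2.0
```

Nothing here "understands" multiplication; the program merely reduces its error on the examples it was given, which is exactly the experience-driven adjustment the paragraph describes.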

It seems almost a by-product that a host of developers is active in the field of computer-generated composition. So far they mainly produce programs that support amateur and professional musicians in composing: algorithmic assistance for those short on creativity, or for self-optimizers determined to do everything themselves. Program-assisted composing, a kind of composition doping?

In 2010, software specialists at the Universidad de Málaga developed a program for their supercomputer "Iamus" that could independently compose music in all known genres. With its Opus One, the developers succeeded for the first time in having artificial intelligence compose new music without access to a pool of stored big data, and thus without any compositional model: completely autonomously, solely on the basis of musical parameters. The AI showed that it can generate innovative music independently, that is, without relying on the accumulated data of previous music history. It does not retrieve that history in order to continue its sequence; it takes up the singular position of a composer. Here, too, the program did not imitate but simulated, on a new level that is neutral with respect to music history. The algorithm, called "Melomics", has composed countless pieces in recent years. The better ones have also been performed on classical instruments; even orchestras like the London Symphony Orchestra have recorded pieces from this algorithmic music pool with well-known soloists. That is a milestone.
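Melomics rests on evolutionary, developmental algorithms far richer than anything that fits here; purely as an illustration of composing from parameters rather than from a corpus, the following toy evolutionary loop breeds a melody against a simple rule-based fitness function, with no stored music involved at all:

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, one octave (MIDI pitches)

def random_melody(rng, length=8):
    """A melody invented from scratch: random notes of the scale."""
    return [rng.choice(SCALE) for _ in range(length)]

def fitness(melody):
    """Reward stepwise motion, penalize large leaps: a toy stand-in
    for the far richer rule systems of real generative composers."""
    score = 0
    for a, b in zip(melody, melody[1:]):
        leap = abs(a - b)
        score += 2 if leap <= 2 else -leap
    return score

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [random_melody(rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(child))] = rng.choice(SCALE)  # mutate one note
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

The only inputs are parameters (scale, melody length, the fitness rules); no existing piece of music is analyzed or recombined, which is the categorical difference the paragraph draws between Iamus and corpus-based systems like EMI.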

But can the consequences of such advances in learning in the technosphere be thought through further in human terms? For the time being, it looks as if things will remain at amazement and assistance. Soon, however, it could become apparent that two lines of development fork here.

Perhaps it helps to recall the protagonist of George Orwell's dystopian novel "1984", Winston Smith, who looks through a window into the courtyard and watches a singing woman hanging up laundry. He listens to her, fascinated; something seems to grip him. Then Smith says sadly: "How is it possible that a song that was written by a machine sounds so beautiful?" So there is still hope, even if, as with Pandora, it still lies dormant in the box.

What is to be expected: AI will broaden the peak performance of the composing world as long as coexistence on a hybrid basis is guaranteed. After that, the paths will have to part. But the figure of the bio-musical composer is not finished yet, just because feasibility fantasies in the form of simulative digital intelligence could render him obsolete. This may also be largely a matter of energy sources. AI is fed with electrical energy and is not (yet) able to procure it autonomously. So humans have to supply it. In doing so, they slip into a role for the technosphere similar to the one nature provides for the biosphere.

One thing is relatively certain: as long as the composers of this world succeed in keeping direct control over the energy question, the digitally simulated algorithms cannot slip away from them. If not, only one thing remains: pull all the plugs!