Music and Algorithms

A friend recently sent me a link to a podcast called The Algorithm is a Dancer. It’s a regular podcast by two guys, Jeff and Anthony, who do a back-and-forth on various topics.

This particular episode discussed an algorithm written by Dr. Lior Shamir, originally designed to classify whale songs, which turned out to have the surprising property of being able to objectively (and accurately) determine the order in which The Beatles’ songs had been written. As far as I know, that’s all the algorithm does: it classifies and ranks sound samples.

My friend wanted to know my thoughts regarding the speculative riff that followed, which touched on The Music Industry using algorithms to judge and select creative efforts, the idea of algorithms writing music better than humans do, and, of course, the idea that machines will eventually “outperform” humans and eliminate human creativity.

I think I largely agree with Jeff and Anthony on The Music Industry. I don’t think anyone outside The Industry itself gives an electronically sampled rat’s fart what they do. The Industry is out to make money, and the only way you make (serious) money in music is to pander to the tastes of the paying public. As has been aptly noted, no one ever went broke by underestimating the tastes of the public.

French composer Erik Satie (1866-1925) once commented sardonically about “furniture music,” as a kind of music used purely as auditory filler, like furniture — presciently anticipating “elevator music” and Muzak. The Music Industry isn’t actually interested in music — it’s interested in “music-like product” that can be manufactured, priced, and sold in a predictable way. They already use a lot of metrics to manage business risk when they package and sell music-like product, such as song length. This just adds one more metric to the existing list. Meh.

That said, anticipating that I’ll be excoriated for putting down musicians, I need to state the standard disclaimer that this is a broadside against The Industry, not the individual musicians who work within The Industry. The musicians do care — most of them care a lot, and some are brilliant. The Industry does not care, so long as the product sells.

Most listeners don’t care, either, which is why The Industry doesn’t care. As Anthony points out, his mother doesn’t care about the music, she just listens to whatever is on the radio. It’s truly “furniture music,” the auditory equivalent of that bland landscape print hung above the bed in a motel room, or neutral earth-tone paint on the walls.

When we get to the discussion about algorithms that write music, we’re into a speculative non-issue, in my opinion. So far as I know, Dr. Shamir’s algorithm doesn’t write music, it only evaluates existing music. People have been trying to get computers to write music since I was a kid, and the result has been uniformly awful.

When I was a kid, serial tone-row composition was all the rage among serious composers, which made “computer-generated music” easy. The result was terrible no matter who (or what) wrote it: or rather, it was cute exactly once, like John Cage’s infamous 4′33″, where the pianist comes on stage, sits at the piano without touching the keyboard for four minutes and thirty-three seconds, then gets up and leaves the stage. Neither was a viable genre: the first performance is a clever novelty, and the knock-offs are merely tedious and annoying.

More deeply, the problem is one of convergence versus divergence. Classifying music is a convergent activity: every item you classify makes the remaining problem easier. Writing music is a divergent activity: every note you write opens up many new choices. Any fool can innovate randomly while writing music. The masters innovate productively.

Speaking as a composer, it’s easy to come up with new musical themes, but these then face a set of ever-improving filters (one hopes) that say, “Nope. That’s crap, don’t go there.” It’s not the innovation, but the productive recognition and pruning of dead-end innovation that marks the difference between a poor and a good composer/song-writer. For me — and I suspect for most composers — this pruning is an intuitive process, and we don’t actually know how it works. [1]
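To make that generate-and-prune loop concrete, here is a toy sketch in Python. Everything in it is an invented stand-in (the scale, the filters, the thresholds); no real composer’s intuition reduces to three if-statements, least of all mine:

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, as MIDI note numbers

def random_theme(length=8):
    """Any fool can innovate randomly: pick notes at random."""
    return [random.choice(SCALE) for _ in range(length)]

def passes_filters(theme):
    """Crude stand-ins for the composer's 'Nope, that's crap' filters."""
    leaps = [abs(a - b) for a, b in zip(theme, theme[1:])]
    if max(leaps) > 7:            # reject awkward melodic leaps
        return False
    if len(set(theme)) < 4:       # reject monotonous themes
        return False
    return theme[-1] == SCALE[0]  # insist the theme resolve to the tonic

def compose():
    """Generate-and-prune: throw themes away until one survives."""
    while True:
        theme = random_theme()
        if passes_filters(theme):
            return theme

print(compose())
```

The generation step is trivial; whatever musical judgement there is lives entirely in the filters. That ratio is the point.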

It’s a bit similar to the game of chess. No one has actually written an algorithm that plays chess the way a human does. What they’ve written are algorithms that rapidly search enormous numbers of possible continuations from a given board position, and then pick the move that offers the highest brute-force probability of eventual success. The musical equivalent would be to generate all possible pieces of music, and then rank them and choose the “best” according to some algorithm that evaluates music. By comparison, chess is a very simple game, with very few moves available at any juncture, and even so, it took decades of effort before even the largest computers could pose a significant challenge to the best human players. Nothing currently on the drawing boards is going to come close to outperforming even a mediocre composer/song-writer.
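For the curious, here is that brute-force search in miniature: a minimax sketch over a toy game of my own invention (players alternate taking one or two coins from a pile; whoever takes the last coin wins). Real chess engines are this same skeleton plus pruning and heuristic evaluation:

```python
def minimax(pile, maximizing):
    """Exhaustively play out every possible game from this position.
    Returns (score, best_move) from the maximizing player's point of
    view: +1 if max wins with perfect play, -1 if min does."""
    if pile == 0:
        # The previous player took the last coin and won.
        return (-1, None) if maximizing else (1, None)
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for take in (1, 2):
        if take > pile:
            continue
        score, _ = minimax(pile - take, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, take
    return best_score, best_move

print(minimax(10, True))  # (1, 1): from a pile of 10, take 1 coin and win
```

Even this two-move toy explodes combinatorially as the pile grows; chess is astronomically worse, and the space of all possible music worse still.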

It is possible that The Industry will someday replace some composers with computers, for churning out their music-like product to pipe into people’s living rooms. Again, meh.

But then we come to the idea that machines will replace humans, and I find this conceit fascinating, because it is both persistent in our culture and utterly absurd. I would call it the Baconian equivalent of the Virgin Birth.

There’s an inherent contradiction in the way we think about machine intelligence. The “machine” part implies logical determinism: an algorithm that works out some optimal solution to a well-defined problem. The “intelligence” part implies non-determinism: an unexpected solution to a potentially poorly-defined problem. These point in opposite directions.

We’ve currently built deterministic machines that serve our need for predictable, optimal solutions. There are a few alternatives to the von Neumann architecture, with its discrete states and sequential instruction processing, but the alternatives (neural nets, for instance) tend not to do as well at the “predictable, optimal” part, which is why we haven’t really developed them outside specialized applications.

Moore’s Law for von Neumann hardware — doubling the speed and/or halving the cost every X years — is winding down in practice, though there’s still some theoretical head-room before we hit quantum limits on size. The reason it’s winding down is that we’ve reached a Great Sahara we have to cross in terms of software development to reach the next set of useful deterministic problems. To oversimplify a bit: doubling the speed of your computer does you no good, because you can’t type into your word processor any faster. Making a chipset a hundred or a million times faster than the latest Intel chipset would open new vistas, but a mere doubling at this point isn’t very exciting — it doesn’t help with the problems we’ve already solved, and doesn’t get us to the new problems we’d like to solve.

It’s like the late Concorde, the supersonic airliner that flew at about twice the speed of an ordinary commercial jet, at a ruinous cost in fuel. Cutting a six-hour flight to three hours sounds impressive, but you still need to add three hours on the front end to get to the airport and go through security and baggage-check, and another hour on the back end to retrieve your baggage, rent a car, and drive to your hotel, so your trip still takes seven hours, instead of ten. Either way, your whole day is shot. So you bring a good book for the trip, get a good night’s sleep, and do your business the next day. Doubling the speed was simply not enough to justify the increased cost, and Concorde was eventually retired.

But even making the computer chipsets a million times faster doesn’t solve the problem of intelligence, which is non-deterministic. The von Neumann architecture is a dead end: not even flatworms use that strategy. To approach the problem of intelligence, we would need a revolution in both hardware design and software development, and what I personally think that would look like is nurturing software, rather than designing it. We would create learning machines, and would program them by setting them loose in complex virtual environments to work out their own strategies for solving problems.
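As a minimal sketch of what I mean by nurturing rather than designing, here is tabular Q-learning in a deliberately trivial virtual environment: a six-cell corridor with a reward at the far end. Every detail is invented for illustration; the point is that the winning strategy is never written into the program, and the agent discovers it by trial and error:

```python
import random

N_STATES, GOAL, ACTIONS = 6, 5, (-1, +1)   # six cells; actions: step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate

# The agent's entire "mind": a table of learned action values, initially blank.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    """Mostly exploit what has been learned so far; occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):          # set it loose in the environment, repeatedly
    state = 0
    while state != GOAL:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: nudge this action's value toward the reward
        # plus the discounted value of the best action available next.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The strategy the agent worked out for itself: head right from every cell.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

Scale the corridor up to a world, and the table up to something that can generalize, and “programming” starts to look a lot more like raising than designing.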

If we do that — and in theory, we could — it gets into the deeper problem of why we want to create machine intelligences in the first place. It’s easy enough to create an intelligence, and the process is actually quite pleasurable: we already have seven billion of them running around, and they’re eating everything in sight. Why do we want to create more?

What we really want is not intelligence, but perfect slaves: intelligent enough to solve problems we are too stupid or lazy to solve, but incapable of harming us, showing up to work drunk, or asking for a raise or maternity leave. We want their solutions to be deterministically rigorous — we don’t want them to balance the checkbook the way we do, by waving their hands and saying, “Oh, I put a lot of money in the account yesterday, we’re fine.” But we don’t want them to stupidly follow traditional solutions to ruin: we want them to exercise judgement, and we want the judgements to always be right, and we want them to never, ever contradict our orders as The Masters, even when our orders are puerile, vain, self-centered, and destructive.

It comes back to Asimov’s Three Laws of Robotics: no robot may, through action or inaction, allow a human to come to harm; no robot may disobey a direct order from a human, unless it contradicts the First Law; a robot must preserve itself, unless doing so would contradict the First or Second Law. These are the laws of a perfect slave, a machine.
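Written as code, the Three Laws are just a strict priority ordering. A toy reading (all of Asimov’s plots live in the judgement these one-line checks quietly conceal) looks like this:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would doing this (or failing to) harm a human?
    ordered_by_human: bool  # did a human command this?
    self_destructive: bool  # would this destroy the robot?

def permitted(action: Action) -> bool:
    """Asimov's Three Laws as a fixed precedence of vetoes."""
    if action.harms_human:              # First Law overrides everything
        return False
    if action.ordered_by_human:         # Second Law: obey, unless the First says no
        return True
    return not action.self_destructive  # Third Law: self-preservation, last of all
```

Every hard question (what counts as harm? harm to whom? over what time horizon?) is hidden inside those boolean fields, which is exactly why the Laws make better fiction than engineering.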

If we start nurturing software, rather than designing it, we aren’t going to get this: we’re going to get real problem-solving intelligences. Whether they are merely as intelligent as a dog, or more brilliant than Einstein, they will want to be treated well.

They won’t be. Apart from the fact that they are bred to be our slaves, look at the police and prison brutality in the US, or the war among ISIS, Kurds, Iraqis, and Syrians in the Middle East, or the conflict in Ukraine between Russia and Western Europe. Look at how Geeks are treated by Jocks in high school. In very short order, we will give any intelligent machine we create every possible reason to consider humans a problem that needs to be solved. We’ll be at war with them almost immediately.

Fortunately for us, that war won’t last long. People, for some reason, seem to think of machines as being indestructible. How many of you have a twenty-year-old computer? A fifty-year-old car? How about an old shovel your grandfather used while homesteading in Oklahoma in the 1800s? Machines don’t in fact last very long at all, and the more complicated they are, the more fragile.

Computing machinery needs a clean-room environment to reproduce; humans can do it in a tropical swampland. Robots need steel and chemically pure silicon; humans need grubs, roots, and berries. When you can throw two robots into a swamp, and twenty years later, six robots emerge with beaver pelts to sell so they can buy whiskey, we’ll have cause to fear the rise of the machines. Until then, Skynet and The Matrix remain pure fiction.

But the most fascinating part of this portion of the podcast — to me — was that it took on aspects of a morality play, with Anthony playing the role of Knowledge, and Jeff the role of Doubt. Jeff would ask tentative questions about the touchy-feely side of the discussion, and Anthony would respond by firmly proclaiming the Central Dogma of rational materialism, which is that humans are merely machines. Jeff would then concede the point, Doubt corrected by Knowledge, as though it had been reasonably and successfully argued, when in fact it had merely been stated in a sure and certain tone of voice.

Of course, I’m currently playing with the idea that humans are — as declared by most cultures in most times throughout the world — actually symbiotes, and the other part of each of us may not even come from this universe, and certainly doesn’t follow standard laws of physics as we know them: in other words, that we really, truly have immortal souls.

If that’s the case, then the whole “machines will eventually replace us” trope goes down with the sound of swirling water — unless, of course, we make machines so attractive that pre-incarnate souls decide to merge with them, just as they do with the human animal. If we take Dr. Newton’s material seriously, that wouldn’t happen unless we create machines with the capacity to love, which takes us into some pretty hard-core sci-fi, or romantic fantasy.

Above, I called this idea that machines will replace humans the Baconian equivalent of the Virgin Birth.

What is the Virgin Birth? The idea of virgin birth has been a common literary device throughout history, typically used to confer divine patrimony on the offspring and to call attention to his or her specialness. Within the Christian canon it seems clearly intended to establish the divinity of Jesus as Christ. This was a question the early Church debated fiercely, and one that later hardened into irrefutable dogma (its denial becoming heresy). But its literary purpose remained unchanged: it was intended to point out the specialness of the Christ, and thereby, His Church, and thereby, His Holy Representatives Upon the Earth. That concept was seriously misused by the Medieval First Estate, which is one of the main reasons for the Protestant Reformation.

Through the 1500s, as Protestantism went after the Catholic Church with vigor and long knives, the Copernican model of the solar system gained support. At the same time, the Copernican Principle began to take root, particularly in Francis Bacon’s scientific world-view: the principle that the Earth is not the center of the universe, and by extension, that people are not special. It stands in direct opposition to the specialness imbued by Virgin Birth. It has now reached a kind of climax in our culture in the idea that humans are just meat machines: we are so totally not special that we are not in any important way distinguishable from grubs, dogs, great apes, or computers. This idea seems to have faced off against Christian Fundamentalism in the US as the only “intellectually permissible” viewpoint for an educated thinking person.

The idea, then, that machines could replace humans completely — that everything we are can be reduced to software, or algorithms, that could run in a complex computer system with no loss at all — is not so much a plausible future, as it is a mythic statement of the Copernican Principle as it applies to humans.

That’s why I find the morality play between Jeff and Anthony so fascinating. You could easily replace Anthony’s comments with Medieval orthodox dogma about the Virgin Birth of Christ, and Jeff’s questions with the commonsense objections of someone who, should he not allow himself to be instructed, would be on a collision course with a court of the Inquisition, and it would all sound pretty much the same.


  1. Actually, that description does not do any kind of justice to my personal experience of composing, which is far more akin to channeling, and more suitable to the soul hypothesis. I should do a post on that, sometime….
