
Bad Horse


Beneath the microscope, you contain galaxies.


Writing (and composing): Mahler, Beethoven, Faulkner, House Made of Dawn, & the Wundt curve · 4:25am Apr 28th, 2017

The first part of this post is long, and explains how bad mathematics can lead to bad music. The second part is short, and uses the first part to explain something that I think Beethoven did right in his Moonlight Sonata, and that Faulkner and Momaday did poorly in As I Lay Dying and House Made of Dawn.

But first, a word about our sponsors.


You’ve probably noticed I’ve made a lot of blog posts this month--about 40,000 words. I ought to be trying to start my career as a freelance writer, but I’m in the middle of a big project: figuring out what can be said about what makes art and literature good or bad. I want to get enough of it written down that I can move on before I forget it all.

My problem has been where to start. Everything I want to post seems to depend on everything else. Meanwhile I kept reading and building up a bigger backlog of things to blog about, until I realized I couldn’t remember what the things I put on my list of things-to-blog a year ago meant.

This month I gave up on integrating everything neatly, and have mostly just fixed up and posted blog posts I already had in draft form. Unfortunately it still takes too much time. When I have to read books and look stuff up and cite references, I can write 200-300 words of blog post an hour. If you want me to keep posting more, money speaks louder than words.

Thanks very much to my Patreon supporters: equestrian_sen, horizon, Bradel, chukker, Forderz, Hupman, Wallacoloo, Super Trampoline, AugieDog, and pwildani. Patreon is the only way I make money right now. The smallest amount you can pledge is $1 per 10,000 words that I post. (This stuff above the line doesn't count, BTW.) That's like paying $10 for a 100,000-word novel, so it's not bad, but not cheap, either. You can be cheap by turning it on and off, or by setting a max of one payment per month.


Music and the Wundt curve

In Thoughts on listening to Mahler's Fifth Symphony three times in a row, I talked about how artists had come to praise complexity, failed to distinguish complexity from unpredictability, and so idolized randomness. I mean “idolize” (or “fetishize”) literally; they made a shoddy graven image of what they worshiped, and worshiped the image (randomness) instead of the original thing (complexity). I used Mahler’s Fifth Symphony as an example:

It seemed like a good rule to say that the less-predictable music became, the more complex and better it would be. And in fact, the commentaries on Mahler’s Fifth are full of references to the “complexity” and “interest” generated by its dissonances and irregularities.

But after some point, increasing unpredictability makes music less complex. Instead of complexity, we get mere noise. [Assuming other people use the word "complexity" the way I do.]

That’s what happened. Composers internalized the theoretical belief that unexpectedness made music more complex and interesting... They kept making things less and less predictable, even after passing the point where complexity was maximal.

Once they’d passed that point, unpredictability only made the music boring, not complex. Like Mahler’s Fifth.

A day or two after posting that, I read the first chapter of Muses and Measures: Empirical research methods for the humanities (M&M). It talked about the Wundt curve. In its most basic form, the curve plots enjoyment against signal intensity. The data comes out like a bell curve.
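One common way to model that inverted U goes back to Berlyne: a reward process and a later-onset aversion process that both grow with intensity, with hedonic value as their difference. Here is a minimal sketch of that idea--the sigmoid parameters are made up for illustration, not fitted to any data:

```python
import math

def sigmoid(x):
    """Logistic function, rising from 0 to 1."""
    return 1.0 / (1.0 + math.exp(-x))

def wundt(intensity, reward_onset=2.0, aversion_onset=5.0, aversion_gain=1.5):
    """Hedonic value as reward minus aversion (illustrative parameters).

    Both processes grow with stimulus intensity, but aversion starts
    later and eventually dominates, producing the inverted-U shape.
    """
    reward = sigmoid(intensity - reward_onset)
    aversion = aversion_gain * sigmoid(intensity - aversion_onset)
    return reward - aversion

# Hedonic value rises, peaks at a moderate intensity, then goes negative.
values = [wundt(i * 0.5) for i in range(21)]   # intensities 0.0 .. 10.0
peak = max(range(21), key=lambda i: values[i])
print("peak near intensity", peak * 0.5)
```

In this toy model, raising `aversion_onset` (a listener who tolerates more intensity before it becomes unpleasant) shifts the peak to the right--one crude way to picture acquired taste.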

This applies to things like the pleasantness of a particular temperature, or the appeal of music at a given volume (low to high). M&M said that this curve applied in general to human aesthetics, and also used Mahler’s symphonies as an example:

Suppose you ask people to listen to a simple song. Chances are high that most people will find it of medium to high hedonic value. Now have them listen to a Mahler symphony. In all likelihood, the ratings for hedonic value will be lower. The explanation for these different ratings lies in the different complexities of the two pieces of music. For various reasons, we must call the Mahler symphony more complex than the song: It is much longer, is executed by a much larger orchestra, containing more different instruments that build, moreover, ever–changing combinations, and its melodic patterns are more intricate and unusual (hence it is also more “novel”). ...

Suppose we expose listeners to the song repetitively, and we do the same with the Symphony. What one will observe is that after several trials, the hedonic value ratings for the song will start falling, while those of the Symphony may start rising. The complexity of the sound texture of the Symphony makes it nearly impossible for most untrained listeners to be appreciative on a first or second hearing: its richness is simply not taken in. With repeated exposures, listeners may begin to grasp its variations of melodic and orchestral patterns, its structure of repetitions and contrasts, and its multilayered levels of tone and rhythm.

In this standard view, complexity equals unpredictability, which equals the opposite of novelty; and where the peak in the curve is depends on how novel the stimulus is to the observer. In my view, the peak appeal in the curve is at the amount of unpredictability (or, equivalently, information) where complexity (relative to the listener) is maximal.

So far we just use different terms: the standard view uses the word “complex” to mean “unpredictable”, and says that people have some arbitrary level of unpredictability that they like best. I use the word “complex” to mean “the component of aesthetic appeal due to unpredictability.”

The difference is nominal, but not trivial. I’m naming the amount of appeal some artwork has due just to its degree of unpredictability “complexity”, to make it a thing we can study. If we can reliably predict where its maximum will be, we will thereby know, if not understand, part of what makes good art good [1].

In practice, the disagreement is worse. Instead of teasing out the relations between complexity / appeal, randomness, and novelty, people using the standard view usually simply declare they all mean the same thing [2], as van Peer absent-mindedly does in the paragraphs above, and as (Berlyne 1970) and (Heyduk 1975) do as well--as if the curve were simply appeal = randomness. (In which case it would not be a curve, but a straight line. The labelling of the Y-axis on the figure above makes it not an axis at all, but a mysterious mixture of two components--all to preserve the absurd belief that "complexity" means "randomness".)

This leads to bad music. It says that, if you start with something random enough, it starts out sounding ugly, but every time you play it, it becomes less novel, and thus more beautiful, until after some number of exposures it becomes…

the most beautiful thing ever.

Becoming an art connoisseur then means training yourself to like more and more noise and randomness. No justification is ever offered for why we don’t stop liking the canonized noise-art after hearing or viewing it enough times.

It’s true that people will come to like a Mahler symphony, or noise by Ferneyhough, better the more often they listen to it. But if we take this as proof of beauty, it would mean that anything we dislike at first is beautiful, while the things we used to call beautiful--songs, paintings by Dutch masters--are so ugly that they rate a zero on the objective scale of artistic merit. Mathematically speaking, the fraction of all possible songs or paintings that are no more random than those songs and paintings--and hence, on this view, no more beautiful--is zero.



Complexity Structure

As I explained in that post, in the bad old days before 1992, “complexity” meant computational complexity, and “complexity” measures like entropy and Kolmogorov complexity are actually measures of information. They say that random sequences of numbers have the most bits of information (which they do), but it sounded like they were saying they have the highest possible complexity.
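You can see the conflation concretely by using compressed size as a rough, computable stand-in for Kolmogorov complexity (which is uncomputable). In this sketch, random bytes score highest--they carry the most information--even though they are the least structured sequence of the three:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Bytes after zlib compression: a crude, computable stand-in for
    Kolmogorov complexity (which itself is uncomputable)."""
    return len(zlib.compress(data, 9))

random.seed(0)
n = 10_000
periodic = b"ab" * (n // 2)                              # dead stasis
english = (b"the quick brown fox jumps over the lazy dog " * 250)[:n]
noise = bytes(random.randrange(256) for _ in range(n))   # pure randomness

for name, data in [("periodic", periodic), ("english", english), ("noise", noise)]:
    print(name, compressed_size(data))

# Noise "wins" by this measure -- it has the most bits of information --
# but it is the least interesting sequence of the three.
```

That is the mismatch in a nutshell: information-style measures rank noise at the top, while any measure of structure worth the name should not.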

Mathematicians in the study of complex systems knew that was wrong. “Complexity”, if it means the adaptability or interestingness of a system, is maximal at a point between the realms of boring, dead stasis and random chaos.

Coming up with a definition of complexity that didn’t give random sequences high complexity wasn’t hard. The hard part is that there are lots of measures that do that, and it isn’t obvious that any one of them is more right than the others. (Feldman & Crutchfield 1997) concluded that we ought to stop using the term “complexity”, say more specifically what we’re trying to do and what we want to measure, and use the term “structure” instead.

“Complexity” had always seemed intuitively clear to me before, but it derives from Latin “complex” (a collection of parts). It’s an interesting comment on how new our notions of structure and organization are, that we had to appropriate the word “complex” to mean a thing with an intricate causal structure and many behaviors, when in the Middle Ages it just meant a thing with many parts. “Complicated”, perhaps a related word, meant “things folded together”, which again does not have any notion of complex function or causality. “Structure” meant “to build”; “organism” and “organization” come from “organ”, which means “an instrument”. “Elaborate” is from the 16th century; “mechanism” from the 17th. Latin and Middle English had an abundance of synonyms for “complicated”, including “intricate” and “perplexing”, but neither medieval Latin nor Middle English seem to have had any word for productive complication, in which the number of behaviors, or the sophistication of behavior, grows faster than the number of parts. (I'm making a big deal of this because it's another indication that the medievals didn't understand creativity.) The closest they had to a word for describing complex organization was “hierarchy”, from “hierarch” (sacred ruler), meaning a top-down chain of command like that of the angels and heavenly beings.

This may be why, as I argued in Modernist and Medieval Art, one of the tenets of modern art and modernist writing is the rejection of structure in art. The purpose of structure, once we get past the Renaissance with its mystical principles of composition in paintings, is to combine parts in relationships that multiply rather than merely add their power. Examples include the structures and dynamic arcs built out of repetitions, inversions, and variations on the theme in a Bach fugue, or the interlocking plots, themes, and character arcs in a novel. If modernism is a reversion to medieval thought patterns, which focused on timeless, hierarchical relationships between abstract essences or types, rather than dynamic interactions between real individuals, then modernism will be similarly less interested in structuring components in space and time.

This prediction is confirmed by Schoenberg’s modernist 12-tone music. Its basic principle is to use all the tones in a scale before re-using any of them. This tends to maximize the entropy and randomness of the music. It’s as if he designed his theory specifically to make structure in music impossible.
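The entropy claim about the row principle is easy to check for pitch classes, at least: a 12-tone row uses each pitch class exactly once, so row-based music has a perfectly uniform pitch-class distribution, which is the entropy maximum. A sketch--the “tonal” comparison melody below uses made-up weights, not statistics from any real corpus:

```python
import math
import random
from collections import Counter

def pitch_class_entropy(notes):
    """Shannon entropy (bits) of the empirical pitch-class distribution."""
    counts = Counter(n % 12 for n in notes)
    total = len(notes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(1)

# Serial sketch: concatenate shuffled 12-tone rows. Every pitch class
# appears exactly once per row, so the distribution is exactly uniform.
serial = []
for _ in range(20):
    row = list(range(12))
    random.shuffle(row)
    serial.extend(row)

# Tonal sketch: notes drawn from a C-major scale, weighted toward the
# tonic triad. (Illustrative weights, not from any corpus.)
scale = [0, 2, 4, 5, 7, 9, 11]
weights = [6, 2, 4, 2, 5, 2, 1]
tonal = random.choices(scale, weights=weights, k=240)

print(pitch_class_entropy(serial))  # exactly log2(12), about 3.58 bits
print(pitch_class_entropy(tonal))   # noticeably lower
```

Pitch-class entropy is only one dimension--rhythm and dynamics can still carry structure in serial music--but the row principle, by construction, flattens this particular distribution to its maximum.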



Maximal musical complexity: Already attained

Let’s say music can be complex in three ways: melody, rhythm, and harmony. Consider the first movement of Beethoven’s Moonlight Sonata:

Melodically and rhythmically, this music is dead simple. Why did Beethoven write such a boring, simple melody and rhythm?

Because harmonically, it’s crazy. Here’s the chord progression in the first 13 measures:

c#m c#m7
A D G#7 c#m G#7 Cdim c#m/E c#m
G#7 c#m f#m
E B7 E em
G7 C em F#7 bm

I’ve never heard any other tune use the chord progression C#m F#m E B7 E Em G7 C Em F#7 Bm. Pick any equally-long pattern using C, F, G, Em, and Am, and you could find thousands of songs that used it.

The melody and rhythm are simple because otherwise you wouldn’t be able to follow the chord transitions. The chords are played one note at a time, so a busier melody or rhythm would obscure where one chord ends and the next begins.

Beethoven decided that the chord progression was so unpredictable that it used up all his allowable unpredictability. He simplified everything else so that people could perceive the chord progression.

Similarly, a Bach fugue will have great polyphonic novelty, with perhaps eight different voices at the same time, but based on a repetitive melody, with a constant rhythm and few key changes. Negro spirituals and ragtime usually have simple tonality and melody, but complex rhythm and timing. Dixieland and some other forms of jazz alternate between complex and simple stretches. Pushing the novelty envelope in one way to achieve a distinctive effect always requires scaling back the novelty somewhere else.



Faulkner and Momaday: Not Enough Structure

Which brings us to Faulkner’s As I Lay Dying and Momaday’s House Made of Dawn. Both of them had all of the following:

- strangely-styled, strangled sentences
- connections between events that were not revealed until much later
- chapters narrated by tertiary characters whose relevance to the story wasn’t revealed until later

I realized this was too much unpredictability when I was trying to figure out whether I bought equestrian_sen’s explanation of the second passage I quoted in my review of House Made of Dawn. In most novels, I’d have been able to guess whether his interpretation was correct by how well it fit the events that led up to the passage, and what the old man did afterwards. But I couldn’t do that with HMoD, because there was no continuity in time between the scenes. One person does one thing at one time in one place, then some other person does something else in some other place at some other time. If the meaning were clear, I could use it to figure out the connections between the scenes. If the connections between the scenes were clear, I could use them to figure out the meaning of each scene. As nothing was clear, it was difficult to figure out anything.

I think that these modernist books, like Mahler's Fifth, have objectively too little structure for contemporary American readers. They are so unpredictable that, though this high unpredictability means they convey many bits of information, they convey less meaning than they could with less unpredictability--where "meaning" is, as with "complexity" or "structure", information minus randomness.


[1] Did you think it was odd that I said we might “know, if not understand” something? Later, I’m going to talk about the difference between rationality and empiricism. (It’s super-important, honest!) One of the non-obvious ways of distinguishing them is that rationalists believe you must understand things before you can know anything about them. This is epitomized by the obsession that Socrates and medieval scholastics had with defining terms. Empiricists, by contrast, believe you must begin with a collection of known facts, some of which might use unanalyzable terms you just made up, before you can hope to understand things. A classic example is gravity. Isaac Newton just made it up to name a force in his equations, with no understanding of how it worked. We know a lot about gravity, but we still don’t understand it. (This observation is from Popper 1966, vol. 2 chapter 11, “The Aristotelian Roots of Hegelianism”, part 2. Its truth is due to the scientific practice of operationalization, which means you create terms as shorthand for what you can measure rather than as shorthand for definitions of what you want to measure. When I say complexity is what the Wundt curve measures, I'm operationalizing complexity rather than defining it.)

[2] With novelty being 1 - randomness, or 1 / randomness, or some other inverse measure of randomness.


References

Berlyne, D. E. “Novelty, Complexity and Hedonic Value.” Perception & Psychophysics 1970, 8: 279-286.

Feldman, David P., & James P. Crutchfield 1997. “Measures of statistical complexity: Why?”

Heyduk, Ronald. “Rated preference for musical compositions as it relates to complexity and exposure frequency.” Perception & Psychophysics 1975, 17(1): 84-91.

Popper, Karl 1966. The Open Society and Its Enemies, 5th ed., Princeton University Press. 1st ed. 1945.

van Peer, Willie, Jemeljan Hakemulder, & Sonia Zyngier 2007. Muses and Measures: Empirical Research Methods for the Humanities. Cambridge Scholars Publishing.

Comments (31)

I continue: To disagree with you vehemently about Mahler--I always think of his symphonies as popcorn movies for the era before movies existed--but I'm so glad to see you getting your thoughts organized and out into written form!

Mike

Making a prediction that bears out gives the human brain a little hit of happy chemicals. Structure in writing and music provides the framework that we use as a basis to make little micro-predictions into the immediate future of the work. Removing the reader or listener's ability to do that removes a major source of hedonic value.

If it's too easy to make predictions, the certainty breaks the loop and the work feels boring. If it's too hard, the reward is never dispensed. In either case, the reader or listener doesn't get as much from the work.

4512353 I think so. I originally had about 2000 words in the middle of the post arguing that the right amount of unpredictability has aesthetic value because it triggers our natural pleasure in learning things (even if we're not really learning anything applicable outside of a song). It was too wordy, though. What you just wrote is better.

4512358
Hmm, that makes me think of using something like conditional entropy rather than just pure entropy as a measure of "complexity." Like, I can imagine that something with good "structure" would be something for which there are lots of possibilities, but which isn't constantly randomly changing. So, before listening to the song, you have no idea what it is, but once you start listening to it, you get a better idea of what it is and what's going to happen next. To put it in mathematical terms, the entropy of the "state" of a piece of music at "time" t is high, but the entropy of the "state" at "time" t given that you know "time" t-1,t-2,...,0 is low.

And of course, if we're comparing entropy and conditional entropy, the natural measure that arises is mutual information, which is the difference between the entropy and the conditional entropy and gives an idea of how much information you can glean from one variable if you know something about the other one. In mathematical terms:

I(X;Y) = H(X) - H(X|Y)

I think we can try to define "structure" as something for which there is some Y that the consumer possesses that leads to a high mutual information. Or, to put it another way, if the mutual information is low, one of two things must be true: either the entropy of the system is low in general, which makes it very predictable and thus boring, or the entropy of the system is very high but there is no side information we can use to predict it, which makes it "random," and thus also boring. The sweet spot requires high entropy in the art piece, but also allows the consumer some way of predicting it ("Y").
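You can try this definition out on toy "music", taking Y to be the previous note. A sketch (the three sequences are invented for illustration, and I(X;Y) = H(X) - H(X|Y) is computed in the equivalent form H(X) + H(Y) - H(X,Y)):

```python
import math
import random
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mutual_information(seq):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), where Y is the previous symbol."""
    pairs = list(zip(seq, seq[1:]))
    h_x = entropy(Counter(x for x, _ in pairs))
    h_y = entropy(Counter(y for _, y in pairs))
    h_xy = entropy(Counter(pairs))
    return h_x + h_y - h_xy

random.seed(0)
alphabet = "ABCDEFGH"

frozen = "A" * 1000                                      # dead stasis
noise = [random.choice(alphabet) for _ in range(1000)]   # pure randomness
# Structured: usually repeat the previous note, occasionally jump --
# high entropy overall, but each note helps predict the next.
structured = ["A"]
for _ in range(999):
    if random.random() < 0.9:
        structured.append(structured[-1])
    else:
        structured.append(random.choice(alphabet))

print(mutual_information(frozen))      # 0.0: predictable, hence boring
print(mutual_information(noise))       # near 0: unpredictable, hence boring
print(mutual_information(structured))  # clearly positive: the sweet spot
```

Both the frozen and the random sequences score near zero, and only the one with run-length structure scores high--which is exactly the inverted-U intuition in information-theoretic terms.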

Not sure if this is useful, but as someone who dabbles in Information Theory, it's cool to try to phrase these high-level art questions in terms of pure mathematics.

4512358

I think it's an important component. It helps explain how art communities disappear up their own butts.

Someone who gets immersed in twelve-tone composition will develop an ear that picks out the initial sequence and the reversals and inversions that will appear in the rest of the work, and suddenly they'll start to be able to make some predictions and get those pleasure center hits while listening to Schoenberg.

So it goes for other art, as well--a lot of ridiculous-on-its-face art is actually under twenty layers of prior work, all of which must be internalized in order to recognize the structure that the artist is providing. Every art community that disappears up its own backside does so because the artists became their own primary audience, basically. That makes the standards for sufficient novelty into a runaway arms race, while also allowing artists to rely on much deeper knowledge on the part of the viewer/listener/reader. With everyone competing for attention under those conditions, the general result starts to look inevitable.

Thank you for the kind words regarding my concision, by the way.

Melodically and rhythmically, this music is dead simple. Why did Beethoven write such a boring, simple melody and rhythm?

Because harmonically, it’s crazy.

I dunno, is "aesthetic tension" not a thing anymore? Or just not an artistic virtue?

4512401
:rainbowlaugh:

Seems to me the Moonlight Sonata's not about 'chord progression' nearly as much as it's about picking out voices (top note, bottom note, etc.) and sending them on little chromatic journeys. This also happens a lot in Chopin: three-note 'chord progressions', but the top note's doing some kind of ramp, or some other recognizable pattern. Patterns against patterns: it even works for Deadmau5.

Listen to Chopin's nocturnes, for how often he goes on a little 'ramp' that's chromatic but is obviously not a chord progression so much as it's a melodic line somewhere that's just slowly ascending or descending. You'll also hear the little patterns within lines, over and over: regular variations in the top line, especially quick lines, producing a variation that itself has regularity in its departure from the pattern.

For really wacked out chord progressions you have to look to jazz: stuff like Giant Steps improvises melodies around chord sequences that don't keep tonal centers, so you get stuff that sounds like 12-tone because it's covering such a range of notes, but the logic isn't 'play all the tones', it's 'stay on the shifting harmonic center as it rapidly cycles, and show where it went'. One of my all-time favorites is John Scofield's 'Protocol' for how the guitar produces seemingly dissonant lines that strikingly highlight where the chords went. Compare the guitar lines with the keyboard lines for different approaches to the same chords: it really takes on the mantle of Giant Steps. It's very much more Coltrane than Schoenberg: if you have a set of strange chords you can accentuate common notes (the keyboard) or the more outre notes (the guitar, and the written lead lines).

Part of what's going on here (and in Chopin, and Beethoven) is a boundary-pushing behavior: intentionally finding ways to offend what the mainstream ear will accept, but with a hidden structure not apparent to the uninitiated. (Sounds like your earlier posts!) In particular, bebop was known to do this, very much on purpose: at the time it was developed, it departed from the mainstream of jazz by finding harmonic progressions and chords beyond the norm (say, going from minors to sevenths, or from sevenths to ninths: look into the circle of fifths and how you can change keys with either minimal alterations of notes, or drastic ones). Giant Steps produces giant shifts in the underlying 'scale', or you can just change a chord from a minor to a major (or vice versa), which will then imply a key change and different chords to accompany.

The bebop jazz guys doing this were not part of any cabal and had no power of any sort, so they CREATED their own hip cabal through innovating into difficult chords and key changes. If you were a traditional jazz player and you tried to solo over Giant Steps, it would do your head in: the mental workout of making a melodic line across a chord sequence where all melodies suddenly become 'wrong notes' is a brutal shock, and when you hear people effortlessly doing just that, it's daunting. Hence, the cabal that bebop created, complete with a very serious 'yeah man, maybe you're not smart enough to get it' arrogance as both bait and barrier :)

And what is good, Phaedrus,
And what is not good-
Need we ask anyone to tell us these things?

4512432

Seems to me the Moonlight Sonata's not about 'chord progression' nearly as much as it's about picking out voices (top note, bottom note etc) and sending them on little chromatic journeys.

Jinx, I have to say outrageous things to get you to talk. You know a lot, but you don't blog it.

Actually I didn't say the Moonlight is "about" chord progression. The left hand plays tritone sequences, which we can call chords, and the right hand stays in tune with them. It's a useful way to compare the complexity of different pieces, but not a critique or appreciation.

>One of my all-time favorites is John Scofield's 'Protocol' for how the guitar produces seemingly dissonant lines that strikingly highlight where the chords went.

I tried it. Didn't get it. I probably need to study Coltrane first. Jazz is hard. I have intellectual reservations about art where you have to understand the theory to understand the art. I'm okay with it as long as people don't say "this theory-inspired art is better." Actually it relates to the long footnote in my post. The idea that you can construct a theory first, and then construct art to fit that theory, is a rationalist way of thinking. The idea that you observe a lot of art, then fit a theory to the art that you liked, is an empiricist way. I favor the empiricist way as more legitimate, and historically less likely to end in killing people.

4512422 >I dunno, is "aesthetic tension" not a thing anymore? Or just not an artistic virtue?

I don't understand aesthetic tension well. You want to blog about it? :pinkiehappy:

PresentPerfect
Author Interviewer

I enjoy listening to the various YouTube videos you use in these. Which is to say I heard both about 40 minutes of Mahler's 5th (minus 15 minutes in the middle to eat an artichoke) and maybe 2 minutes of Ferny-hoo. And I'll say the former at least has some structure in comparison, randomly applied though it may be. You're not gonna get it stuck in your head, but it was at least pleasant to experience for a while. (Gotta love that dude in the background with the rapid-fire clapping instrument whose name I cannot bring to mind.)

4512762 The thing you have to understand about me and 'Protocol' (which I've loved madly since I was a teenager) is this: I'm also the guy who was a homeless drug addict, who grew up with severe-ish autism when they treated and trained you like an animal to minimize your strangeness. It's NOT the same today as it was in the eighties. I barely survived it, and I'm still more than a bit weird :rainbowdetermined2:

That's why, when interfacing with your analyses of art and creation, I sometimes ricochet off your theses quite violently. If artists and musicians and creators are rational actors trying to connect with an audience, their choices can be interpreted as negotiations with that audience, expectations of the audience. But I crave 'Protocol' not because it's difficult for me, but because it's a musical expression of how the world seemed to me as a teenager. Everything seemed wrong and rather dangerous and twisted in a way I couldn't understand. 'Pretty' music was like a mockery of how easy things were supposed to be… but this, this daunting outburst of dissonance and strange harmony, this spoke directly to me. That's how I felt.

In that light, there's no point 'studying', say, Ornette Coleman or Eric Dolphy or Coltrane: the idea that you have to work and study is an excuse, sort of a dominance behavior. What happens is this: people make their art, and when they're strange sick people (quite a few bebop jazzers were heroin addicts: they felt like that, it wasn't an exercise in alienation) the art comes out real twisted and odd. And other twisted people naturally connect to that, and go 'yes! that is MY music!' (and yes, you do have to work at it to get it 'right')

Then, when a normal person comes and goes 'I think it's crap. It's tainted and sick-sounding, and what do you have to say for yourself?' because they have no such connection… that's when you get the argument 'you just aren't advanced enough to understand it, maaaaan'. And this doesn't only come from the artist, but also from the listeners and audience. If you connect with the art you're put on the defensive by challenges, and you seek to rearrange the authority structure because it's invalidating you and your preference. So you say, 'no, it's not that what I like is bad, it's that YOU don't understand it'.

The opposite can be true. Matisse (iirc) struggled for a while, because his work and vision were too pretty, and the times were geared towards tough, challenging art. Pretty wasn't cool. He had to persist and get better at what he did, because he was too NICE for the zeitgeist. And as you might imagine he came into his own in the age of reproduction, and people loved to buy postcards of his work because it was pretty. But that was his vision.

Don't study Coltrane (to learn to appreciate it). If it lights your fires, then it's for you. Otherwise you're a carpetbagger and should focus on the stuff that gives you the 'aha!' feeling, where you want to write blog posts about how it's clearly the best sort of art.

If you also remember the sheer relativity of it all, then you'll really be on to something :ajsmug:

You know, Bad Horse, I just realized something. You talk about understanding why 'good art is good', but I've been reading your blog posts for a while, and I don't think I know what you consider 'good art'. Do you have a definition you use, when you're trying to figure out what makes it good? Is it in here somewhere, and I just missed it? Popularly acclaimed? Critically acclaimed? Personally enjoyable? Some mix? Gut instinct?

Also, the more I read your posts, the more tempted I am to start writing my own blogs on this stuff, since my ideas are somewhat different - or, at least, approaching the same ideas from somewhat different (read: less academic :P ) directions. I should probably start taking notes, so I can forget what they mean in a year or so when I get around to starting. :P

4512996 No, I don't have a definition. See footnote 1 about definitions; it applies here. To start with a definition would just be to declare my own preferences.

I have a few different ideas or theories. One of them, which this post leads to, is the idea that humans derive pleasure from learning to better-predict the world, and this leads to them liking things which let them feel successful at making predictions. We can use information theory to say what kind of art might make people feel successful at making predictions, and what dynamics artistic styles would follow as people acquired experience with them.

Another theory, which I didn't expect at all, is that there are two opposed personality types (or theories, or sets of circumstances) which have caused fights over art as far back as we have records: one the Platonic static idealist totalitarian rationalist, the other (very roughly) the empiricist liberal progressive humanist. The former is the most popular worldview in the West. When it has political power, it always tries to get exclusive control over art and use it as propaganda to sustain a totalitarian regime; its art always rests on certain fundamental philosophical assumptions (at least in the West); and it perpetuates its own power by developing an epistemology and system of philosophy that is closed to external influence or falsification, self-consistent, and destructive of its adherents' ability to think heretical thoughts or to see its own contradictions. Its most notable and prototypical representatives are Plato, the Catholic Church, Puritanism, Hegel, Marx, Nazism, and Stalinism. The art meant to perpetuate these regimes seems, on average, to be just plain bad for people, making their lives, or the lives of those around them, unhappier and shorter.

Cultural battles are often not between Platonists and humanists, but between Platonists and other Platonists, and the "humanists" sometimes manage to get enough control of the intelligentsia to let their art be made while the various Platonist factions are fighting each other, as for instance in 14th-century Renaissance Italy, where artists and the middle class took advantage of the fighting among the Church, the King, and the nobility.

4513129

One of them, which this post leads to, is the idea that humans derive pleasure from learning to better-predict the world, and this leads to them liking things which let them feel successful at making predictions. We can use information theory to say what kind of art might make people feel successful at making predictions, and what dynamics artistic styles would follow as people acquired experience with them.

Keep in mind, with this theory, that people seem to learn in different styles, and we haven't yet been able to figure out exactly how to define those styles. It might be relevant in terms of different kinds of art appealing to different people, as they're better able to predict some things than others.

4513129

Cultural battles are often not between Platonists and humanists, but between Platonists and other Platonists, and the "humanists" sometimes manage to get enough control of the intelligentsia to let their art be made while the various Platonist factions are fighting each other, as for instance in 14th-century Renaissance Italy, where artists and the middle class took advantage of the fighting among the Church, the King, and the nobility.

Keeping powerful parts of society in civil conflict with each other so they don't have time to mess with normal people keeps looking like a good goal when I contemplate political philosophy.

Complexity Structure

I've sunk more hours on this subject than I'd like to admit. Specifically, the relationship between complexity and structure.

Isaac Newton just made it up to name a force in his equations, with no understanding of how it worked.

I guess, but it works both ways. On one side, Newton observed gravity and decided to give it a label rather than decompose it. On the other, Newton observed a failure of his conservation and optimality laws, and he theorized a force to explain it. (Note that Newton didn't have to call gravity a force, as he did with various other things in his theory of mechanics. There are other ways to deal with the effects of gravity.)

Newton is a complicated case when considering the merits of empiricism and rationalism. If you see science as a bridge between observation and theory, then science (the bridge itself, not a particular theory or field) can seemingly be constructed from equivalence and optimality relations. Newton happened to have found a close approximation, which he then exploited to the fullest extent he was able.*

* The adjunction between syntax and semantics seems to be well-accepted. There's a similar adjunction between theory and experiment.** Adjunctions can be constructed from equivalence and optimality relations. The simplest such case is where equivalence and optimality imply one another.
** For any theory, there is a maximal collection of experiments consistent with it, and for any collection of experiments, there is a minimal theory consistent with all of them.

Preference for one over the other isn't really necessary though. Empiricism is the fuel that adds to scientific understanding, and burning that fuel (converting observations to plausible theories) is pretty squarely in the domain of rationalism. You need both to make meaningful progress.

Having said that, I guess your preference for empiricism is a reflection of the excess of untested theories that you see (presumably in literature) whereas my preference for rationalism is reflection of the dearth of theories that I see in my own field. I think ultimately what we want to do is unify all the gunk we see so we can explain things simply and consistently. Theories unify experiments, and experiments unify theories.

What complexity gives us is the chance to unify at any level of scale we choose to focus on. In music, that focus can be at the level of notes or chords, up to the level of albums or beyond. In writing, that can translate to an appreciation of the structure of individual sentences or even the usage of individual words, up to an appreciation of the collective vision of a group of authors.

If someone uses an author's tendency to create new words as a basis for saying that the author is any good, I would consider that someone a reductionist, because they believe that the value of a piece of writing can be determined from its smallest units. Same for someone who argues for a musician's use of unusual chords. I don't think it's bad, in the same sense that I don't think quantum mechanics is bad. There's a limit, though, to the amount we can learn of fluid dynamics from quantum mechanics. In the same sense, I imagine there's a limit to the amount we can learn of stories from reductionist literature, or the amount we can learn of songs from the study of chords.

And this is where excessive navel gazing gets you, folks. I'm going to stop myself there before I go too far off the deep end.

4513129

Another totally unexpected theory is that there are two opposed personality types or theories or sets of circumstances which have caused fights over art as far back as we have records, one being the Platonic static idealist totalitarian rationalist, and the other being (very roughly) the empiricist liberal progressive humanist. The former is the most popular worldview in the West. If it has political power, it always tries to get exclusive control over art and use it as propaganda to sustain a totalitarian regime; its art always has certain fundamental philosophical assumptions (at least in the West); and it perpetuates its own power by developing an epistemology and system of philosophy that is closed to external influence or falsification, is self-consistent, and destroys its adherents' abilities to think heretical thoughts or to understand its own contradictions. Its most notable and prototypical representatives are Plato, the Catholic Church, Puritanism, Hegel, Marx, Nazism, and Stalinism. The art that was meant to perpetuate these regimes seems, on average, to just plain be bad for people, making their lives or the lives of those around them unhappier and shorter.

But look--Communist Christmas kittens!

4513905 What do you mean by Newton's optimality laws? I don't know what you mean by "optimality relations." You seem to be talking category theory, so I doubt I'm going to be able to understand.

Empiricism is the fuel that adds to scientific understanding, and burning that fuel (converting observations to plausible theories) is pretty squarely in the domain of rationalism. You need both to make meaningful progress.

Except that by "rationalism" I mean what has historically been called rationalism, which does not convert observations into theories, but denies that one should even do that. It is a collection of philosophical ideas which claim that axiomatic logic is the right representation for real life, and empirical observation will only mislead you. It often includes some form of Platonism (belief in a transcendental realm), religious components such as teleology and eschatology, and disbelief in real numbers and avoidance of physical measurements.

4515414

Bad Horse What do you mean by Newton's optimality laws?

I am talking about category theory, but it includes statements like "X follows the path that minimizes the time it takes to get from A to B." The Principle of Least Action is also covered. They're statements of the form "Physical observable X is the optimal Y."

Feynman had a brief explanation about the relationship between Least Action and Newton's three laws of motion.

Quoted from Wikipedia.

According to Richard Feynman, the principle of least action is mathematically more specific than Newton's 2nd law and more fundamental in theoretical physics because it explains a wider range of physical law. You can derive Newton's 2nd law from least action, but the converse is not true without also applying Newton's 1st and 3rd laws and disallowing non-conservative forces like friction. By being more specific and thereby explaining only conservative forces, the principle of least action is able to solve problems Newton's 2nd law can't, but the converse is not true. The principle of least action can be used to derive the conservation of momentum and energy if its symmetry in space and time are assumed (see Noether's theorem). It correctly does not allow non-conservative potential fields, but Newton's 2nd law allows for them by allowing for non-conservative momenta and forces (such as friction) which are not fundamental forces. The mathematical basis for the difference is that Newton's 2nd law (stated as F=dp/dt instead of F=ma) allows for momentums p(t)=q(t)+C where q(t) are conserved momenta allowed by least action and C is a constant that can be non-zero in Newton's 2nd law but not in least action. The constant allows for non-conservative momenta and therefore non-conservative forces and potentials in Newton's 2nd law. Newton's 2nd law explains conservation of energy and momentum and can be used to show equivalency with least action when forces are properly conserved, e.g. when forces are summed to zero in accordance with Newton's 1st and 3rd laws and when accounting for heat generated by friction. Derivations of Lagrangian and Hamiltonian methods do not begin with Newton's 2nd law, but with a more modern mathematical formulation of it that requires forces to be conservative.

Bad Horse Except that by "rationalism" I mean what has historically been called rationalism, which does not convert observations into theories, but denies that one should even do that.

Fair enough. I think you mentioned that in an earlier post, but I can't find it now. I think most people refer to the a priori vs. a posteriori knowledge distinction when they talk about the merits of rationalism and empiricism.

4514071 The artistic greatness of that video has destroyed my theory.

It's nice to read your old blogs and find that I actually understand a decent amount of the math behind them now. Maybe in a few more years I'll be able to reasonably understand your dissertations on art.

One of the things that confused me is why modern visual art is increasingly simple but modern orchestral music is increasingly unpredictable. From what I understand of your post, they are similar in that they lack structure. That makes sense, but why do they diverge in direction? How come stupidly simple orchestral music isn't critically acclaimed, or likewise with extremely unpredictable art (I mean, maybe it is--which side do Jackson Pollock paintings lie on?). I imagine the reasons are related to your posts on modernism, but I have trouble grokking them (feel free to direct me to the appropriate post if you've already explained it).

Re: maximal complexity, how does this affect the idea of "progress" in art? Did Beethoven achieve maximal complexity, and if so, does that mean that his works are forever the pinnacle of orchestral music? Not that I mean Beethoven specifically, but if the quality of orchestral music has diminished over time, then we must have reached a high point somewhere. If it was the highest possible point, it makes sense that over time the quality of orchestral music would degrade. Should it have stayed at that point instead?
How does this compare with literature? You can say Death of a Salesman > Hamlet (I don't know enough to say how (i.e. which direction) it became more complex), so at some point we must have been approaching the right direction. Is it possible we reached maximal complexity (or a close approximation of it) with literature, and thus it follows the same path as orchestral music?

Maybe I'm taking the bell shape of the Wundt curve too literally.

Becoming an art connoisseur then means training yourself to like more and more noise and randomness.

How does this apply to modern visual art, which is ostensibly not noisy at all? Maybe I just don't understand the difference between simple/random very well, but this piece looks simple enough. Do modern visual artists train themselves to like less randomness? I'm guessing that ultimately it's about training oneself to resist structure.

4530659

How come stupidly simple orchestral music isn't critically acclaimed, or likewise with extremely unpredictable art (I mean, maybe it is--which side do Jackson Pollock paintings lie on?)

Good question. I can only speculate. There are 2 "different" modernist pieces of music which consist of nothing but silence, but nobody AFAIK has had an orchestra play just one note, then a different note, then stop.

Maybe, if you do that, you'll be famous. :rainbowderp:

I'd guess it's because the people acclaiming that music would have to sit thru it now and then, and it would be long and boring.

Pollock is extremely unpredictable. You can tell that his paintings are on the high-unpredictability end by looking at the size of them as compressed JPG files. Let's compare these paintings:

jackson-pollock.org/images/paintings/number-5.jpg
Pollock 1948, "Number 5"

artsunlight.com/artist-photo/Rembrandt-van-Rijn/aristotle-with-bust-of-homer-by-Rembrandt-van-Rijn-218.jpg
Rembrandt 1653, "Aristotle before the bust of Homer"

i1.wp.com/kevinrmuller.net/wp-content/uploads/2015/01/Rothko-No-14-1960.jpg
Rothko 1960, "Number 14"

They were probably compressed with different parameters, so I opened each in MS Paint, reduced them to 500 pixels wide, and saved them as JPGs. Their filesizes were then:

Pollock: 500*294 = 147000 pixels, 170,297 bytes, 147000 / 170297 = 0.863 pixels / byte
Rembrandt: 500*525 = 262500 pixels, 70,787 bytes, 262500 / 70787 = 3.71 pixels / byte
Rothko: 500*537 = 268500 pixels, 44,306 bytes, 268500 / 44306 = 6.06 pixels / byte

So the Pollock is ~4.3 times as random as the Rembrandt, which is ~1.6 times as random as the Rothko, when we're considering a nearly pixel-perfect compression. My claims about complexity or structure amount to saying that if you only had to reconstruct something that most people who had seen the original painting would recognize as being that painting--for instance, the Pollock could be turned upside-down and few people would notice--then both the Pollock and the Rothko would have more pixels / byte than the Rembrandt.
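The same kind of ratio falls out of any general-purpose lossless compressor, so the experiment can be repeated without MS Paint. Here's a minimal sketch using Python's zlib on synthetic stand-ins rather than the real paintings (the dimensions and grey values are made up; a random field plays the Pollock, two flat colour fields play the Rothko):

```python
import random
import zlib

def pixels_per_byte(pixels: bytes) -> float:
    """Image size in pixels divided by its losslessly-compressed size."""
    return len(pixels) / len(zlib.compress(pixels, 9))

random.seed(0)
w, h = 500, 500

# "Pollock": every pixel an independent random grey value -- nearly pure noise.
noise = bytes(random.randrange(256) for _ in range(w * h))

# "Rothko": two large flat fields of colour -- almost nothing but structure.
flat = bytes([40] * (w * h // 2) + [200] * (w * h // 2))

print(pixels_per_byte(noise))  # close to 1: noise barely compresses
print(pixels_per_byte(flat))   # far higher: structure compresses enormously
```

The real images sit between these two extremes, which is why the pixels-per-byte ordering Rothko > Rembrandt > Pollock tracks how much structure versus noise each painting contains.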

Did Beethoven achieve maximal complexity, and if so, does that mean that his works are forever the pinnacle of orchestral music? Not that I mean Beethoven specifically, but if the quality of orchestral music has diminished over time, then we must have reached a high point somewhere. If it was the highest possible point, it makes sense that over time the quality of orchestral music would degrade. Should it have stayed at that point instead?

My argument is that, when people said "complexity" was good, they should have specified that complexity meant structure, not unpredictability. However, complexity is relative to the observer. A chess master considers some chess positions complex, and others random, that seem the same to chess novices, because the chess master can compress the complex position (express it as a few common structures, plus remaining details of difference) using his or her knowledge of common chess structures.
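The chess-master point can be made concrete with zlib's preset-dictionary feature, which primes a compressor with structures it "already knows." A sketch (the move strings below are hypothetical stand-ins for familiar chess patterns):

```python
import zlib

# Structures the "master" has already internalized (hypothetical opening moves).
known = b"Nf3 Nc6 Bb5 a6 Ba4 Nf6 O-O Be7 Re1 b5 Bb3 d6 c3 O-O h3 "

# A new "position" built largely out of those familiar structures.
position = b"Bb5 a6 Ba4 Nf6 O-O Be7 Re1 b5 Bb3 d6"

def compressed_size(data: bytes, zdict: bytes = b"") -> int:
    c = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(9)
    return len(c.compress(data) + c.flush())

novice = compressed_size(position)         # no prior structures to draw on
master = compressed_size(position, known)  # familiar structures as a dictionary
print(novice, master)  # the master describes the same position in fewer bytes
```

The same data is objectively shorter for the observer with the right dictionary, which is exactly the sense in which complexity is relative to the observer.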

We don't know whether maximum complexity is really best, or whether instead there's some maximum amount of complexity that each human wants to deal with. It would be very neat and easy to implement if maximum complexity (meaning the stimulus which takes eg the most bytes/pixel in the observer's brain) were best, but we don't know that.

Either way, I think the quality of orchestral music has diminished (subjective judgement). At the same time (judging both from the music itself and from what critics have said about it) it has become more random. I would guess its quality has diminished because it became too random and passed the maximum on the Wundt curve, whether that maximum is indicated by maximum structure, or by the amount of structure that most humans like the most, and that music should have stayed at that level of complexity.

That wouldn't mean it should have stayed the same, or that Beethoven's kind of music is the best kind of music, or that nobody can make better music than Beethoven. It would mean that you couldn't make better music simply by making it more or less predictable.

>>Becoming an art connoisseur then means training yourself to like more and more noise and randomness.
>How does this apply to modern visual art, which is ostensibly not noisy at all?

Yeah, good point. I retract that statement. That is often what people do, but there's a lot more to it than that. Also see modernist sculpture and architecture for more cases where things got simpler instead of more complex. I think now that the most-basic modernist principles aren't to go in one particular direction with complexity--that's just a thing that some people did, in some art forms. What gets something accepted or rejected as modernist art is whether it breaks down the existing order and structure. (That, for instance, makes war poems get counted as modernist even if they're not at all modernist in style.)

One of my large-scale hypotheses is that modernism is a resurgence of very old, very conservative religious ideas, descending from Plato, and the unifying theme in all modernist and post-modernist art is just that it is attempting to destroy liberal humanist civilization, so that we can return to something like a tribalist, religious, or totalitarian state, in which knowledge is certain and all society is unified and in agreement. I came to this idea very slowly, over several years, beginning by noticing that when I studied all the art forms (literature, drawing, sculpture, music) across Western history, primitive, ancient Egyptian, archaic Greek, medieval Christian, Puritan, Restoration / Jacobite / neo-classical, modern, Nazi, Stalinist, and post-modern art and art theory all cluster together due to their similar philosophical / religious / political assumptions. Modernism and post-modernism are unique in that they are the art forms of an elite that is "in power" of art, but not of society--and so it is designed less to express tribalist / Platonist / communitarian views, than to critique and destroy whatever social order we have.

4512372 Wow, I'm sorry I didn't reply to this earlier. I have the bad habit of procrastinating replies to complex comments.

To put it in mathematical terms, the entropy of the "state" of a piece of music at "time" t is high, but the entropy of the "state" at "time" t given that you know "time" t-1,t-2,...,0 is low.

Yes. To state it more simply, "predictable" = "low information content". Shannon's first studies of information content had humans read texts that had been cut off, and try to guess the next letter.
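The point is easy to verify numerically: for any sequence with structure, the entropy of the next symbol drops once you condition on the symbol before it. A sketch with a made-up three-note "melody":

```python
from collections import Counter, defaultdict
from math import log2

def entropy(counts: Counter) -> float:
    """Shannon entropy (bits) of the distribution given by a Counter."""
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

# A toy "melody": mostly alternating, with an occasional surprise note.
seq = "ABABABABCABABABAB" * 50

# H(X_t): entropy of a note heard with no context.
h_uncond = entropy(Counter(seq))

# H(X_t | X_{t-1}): average entropy of a note given the note before it.
following = defaultdict(Counter)
for a, b in zip(seq, seq[1:]):
    following[a][b] += 1
total = len(seq) - 1
h_cond = sum(sum(c.values()) / total * entropy(c) for c in following.values())

print(h_uncond, h_cond)  # conditioning on the past lowers the entropy
```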

I think we can try to define "structure" as something for which there is some Y that the consumer possesses that leads to a high mutual information.

Yes. And the trick is finding out anything about Y. This is a bit puzzling, because MI seems like the "correct" approach, but Kullback–Leibler divergence also seems like the "correct" approach, and they're different. Kullback–Leibler divergence is often easier to use--you have a software agent that's compressing some signal using some probability distribution, so you can compute the K-L divergence.
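For what it's worth, the two measures coincide in one important case: mutual information is exactly the K-L divergence of the joint distribution from the product of its marginals. A toy check (the joint probabilities are made up):

```python
from math import log2

# Joint distribution of two correlated binary variables (hypothetical numbers).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(v for (a, _), v in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in joint.items() if b == y) for y in (0, 1)}

# I(X;Y) = D_KL( p(x,y) || p(x)p(y) )
mi = sum(v * log2(v / (px[x] * py[y])) for (x, y), v in joint.items())
print(mi)  # positive, because knowing X tells you something about Y
```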

4534878
I mean, MI can be written in terms of K-L divergence, and at the end of the day I'm thinking in higher-level terms than pure numbers. After all, I'm pretty sure K-L divergence exists purely because it makes a lot of our theorems work out nicely and because it fits nicely into the MI definition. Personally, I'm more fond of Earth-Mover's distance for comparing probability distributions since K-L does weird things when zeros are in play.

Not that it matters because Y is likely some very complex thing that's a product of biology, history, sociology, and other factors that we won't get anywhere close to quantifying in a way that makes sense to us. Maybe a neural net might come up with some features that can capture Y, but it sure as hell won't be human-readable.

4535031 I wasn't aware of EM distance. It's a minimization, so it takes a great deal more computation. It's also not information-theoretical, so it doesn't give me that nice feeling of being correct, but it's not obvious to me that it would fail to serve as well for any particular purpose. Also I don't see how you'd apply EM distance in real-time if you're getting one datapoint after another and must update after each datapoint.

Probabilities of zero are always a problem; you shouldn't normally allow them anyway.
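The zero problem, and the contrast with earth-mover's distance (which in one dimension reduces to the area between the two CDFs), fits in a few lines; the distributions here are made up:

```python
from itertools import accumulate
from math import inf, log2

# Two 3-bin distributions; q puts zero mass where p has some.
p = [0.5, 0.5, 0.0]
q = [0.5, 0.0, 0.5]

def kl(p, q):
    """K-L divergence D(p || q) in bits; infinite when q is zero where p isn't."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue
        if qi == 0:
            return inf  # the zero problem
        total += pi * log2(pi / qi)
    return total

def emd_1d(p, q):
    # In 1-D, earth-mover's distance is the area between the two CDFs.
    return sum(abs(a - b) for a, b in zip(accumulate(p), accumulate(q)))

print(kl(p, q))      # inf
print(emd_1d(p, q))  # finite, and sensitive to how far the mass moved
```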

All 3 approaches have the same problem, which is that they count configurations as different that humans don't recognize as different, like a Jackson Pollock painting vs. the same painting turned upside-down.

4536209
If you want to apply any of these concepts to anything in the real world in a way that isn't just assuming they're Gaussians or Binary Erasure Channels, you're going to have to solve a lot of problems about what information is and how to deal with the vast network of interconnectedness between people, cultures, and data, which the AI community has been throwing neural nets at for the past decade. My research is on data compression, and at the heart of it is essentially the question "what is the entropy of the set of all natural images?" and it's amazing (though unsurprising) how difficult a question it is to answer, due to the complexity of the natural world. Also, an upside-down painting isn't the same as a right-side up one; the entropy of "Jackson Pollock's paintings" is much lower than "all linear transformations of Jackson Pollock's paintings," and I'm sure some post-modernist has flipped or otherwise modified an existing painting in a minor way and called it an original piece of art, so that's not really a problem for the info theory thing.

And yeah, EM distance is terrible for a lot of things due to the optimization thing, but it's much more intuitively obvious, whereas KL is some weird term that comes up when doing entropy/MI stuff.

4536382

Also, an upside-down painting isn't the same as a right-side up one; the entropy of "Jackson Pollock's paintings" is much lower than "all linear transformations of Jackson Pollock's paintings," and I'm sure some post-modernist has flipped or otherwise modified an existing painting in a minor way and called it an original piece of art, so that's not really a problem for the info theory thing.

The problem with all information measures is that some information is more informational than other information. Convert the set of all random bit strings to paintings by setting one pixel to black for 0 and white for 1. Most of the paintings in that set would be recognized as equivalent by a human observer. That's the basic problem with using information measures to measure "complexity". You could rearrange all the lines on a Pollock and few people would notice, so the large amount of information it takes to reproduce it pixel-by-pixel is a huge overestimate of the information conveyed or remembered or usable by a typical human. For the purposes of art, the upside-down Pollock is equivalent to the original, while an upside-down Rembrandt (or Thomas Kinkade) is not. (A left-right-flipped copy of either probably is equivalent to the original.)

BTW, you may find 4534855 interesting.

4537313
I already replied to your post on my user page about this, but for the sake of continuity I'll quote some major points that might be relevant here:

There's a fundamental problem in trying to use image compression algorithms to determine the amount of information in an image, which comes from the idea of a source model. All image compression algorithms are designed with some kind of underlying probabilistic model of the data they are trying to compress. In the case of, say, JPEG, the underlying assumption is that the image is a "natural image," which is to say a picture of something you might encounter in the real world. JPEG2000 and a couple of other standards use this same assumption, and develop different models based on it. JPEG, for instance, decided that natural images tended to have a lot of energy in their low-frequency components, and very little in the high frequencies, and so developed a frequency-based compression scheme for images.
So in essence, all image compression algorithms give you an estimate of entropy conditioned on the source model. And since the Pollock painting deviates greatly from this source model of "stuff you might see in real life," it will naturally be difficult to compress using any common compression algorithm.
And yes, you could design a parameterized system for storing paintings based on brush-strokes, which would work very well on paintings but not on real-world images, because now you're conditioning on the fact that it is a painting.

What I'm trying to say is that trying to use domain-specific data compression algorithms to determine entropy isn't going to work in many cases, because all you'll get is an estimate of entropy conditioned on an underlying assumption (which means that modern art will likely be less compressible, because it deviates more from our idea of natural images, which is another interesting discussion one could have if so inclined).

I think it all comes down to source models, which are our ideas of what should be, or at least what we expect to see. If you use that as your "Y" for mutual information, you'll immediately see that the really noisy thing isn't interesting because it has high entropy whether or not you have any expectations, and the really plain things aren't interesting because they have low entropy whether you have expectations or not. It's the in-between that's interesting, where there are lots of possibilities, but all these possibilities at least make some kind of sense to us.
