Bad Horse

Thoughts on listening to Mahler's Fifth Symphony three times in a row · 3:37am Feb 13th, 2014

In “The annihilation of art”, I griped about the path toward ever greater chaos and dissonance that orchestral composition has taken, to the point where it sounds random to me. I tried to appreciate Brian Ferneyhough’s music, but couldn’t. The folks who like it claim that it’s a natural progression from Beethoven to Ferneyhough. I figured that to understand Ferneyhough, I’d have to back up a half-century or so and first try to appreciate something in-between Beethoven and Ferneyhough. So while driving across Pennsylvania, I popped in a CD of Mahler’s Fifth Symphony (1902).

I’ve long been frustrated by my inability to remember Mahler’s compositions. Beethoven’s can get stuck in my head for days, to the point where they give me migraines. Mahler’s, I can only remember snatches of. I was determined to play the CD until I could remember how it went.

I played it all the way to Pittsburgh, and still can’t remember it. Mahler’s Fifth isn’t going to get stuck in my head anytime soon.

The symphony opens with a single trumpet repeating a few ambiguous notes, then rising in a dramatic minor chord. Suddenly, the entire orchestra joins in a triumphant shift to a major key. And just as suddenly, it shifts back to minor. That exemplifies everything that is wrong with Mahler’s Fifth Symphony.

When you have a host of brass make a sudden dramatic reversal like that shift from minor to major, it should mean something. But it doesn’t, because we only stay there for a few seconds before there’s another, equally dramatic reversal by that same brass section back into a minor key. And that doesn’t mean anything either, because we were in major for all of about two measures.

Observer 1: Look, up in the sky!
Observer 2: It’s a bird!
Observer 3: It’s a plane!
Observer 1: Naw, it’s a bird.

The dramatic equivalent of the opening of Mahler’s Fifth.

The piece didn’t earn that shift back to minor. And that’s what it’s like throughout: Sudden, ostensibly dramatic transitions between keys, tempos, rhythms, and motifs, in a desperate attempt to be unpredictable. All those transitions did nothing for me, because they were so unpredictable that I didn’t care where the music went. It was like an action adventure flick that, to keep you entertained, jumps from one cliff-hanging action sequence to another without ever letting you find out who the characters are. Too try-hard, Gustav.

This is especially apparent in the fourth movement, which is the most boring piece of classical music I’ve ever heard. I am definitely in the minority about this, as it’s regularly found on “The Most Soothing Classical Music” collections, but then I don’t listen to music in order to cure insomnia. I could not pay attention to nine minutes of very pretty but disorganized wandering about in various major and minor keys; I find myself zoning out every time I listen to it. Music this slow and lacking in harmony needs more repetition and regularity for me to grasp hold of.

In “Information theory and writing”, I said art should have high entropy. The entropy of a thing is the number of bits of information you would need to replicate that thing. Something with high entropy is unpredictable. The huge caveat is that random strings have very high entropy, and yet random strings are boring.
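That caveat is easy to see concretely. A first-order Shannon-entropy estimate (my own toy illustration, computed from character frequencies alone, so it ignores the order of characters) rates a random string near-maximal while rating a repetitive one low, even though the random string is the boring one:

```python
import math
import random
from collections import Counter

def entropy_per_char(s):
    """Estimate Shannon entropy in bits per character from character frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
ordered = "ab" * 500  # utterly predictable: two symbols, strict alternation
noise = "".join(random.choice("abcdefgh") for _ in range(1000))  # random junk

print(entropy_per_char(ordered))  # 1.0 bit/char
print(entropy_per_char(noise))    # close to 3 bits/char (eight near-equiprobable symbols)
```

By this crude measure the noise “wins” threefold, which is exactly the problem: raw entropy rewards randomness, not interest.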

The British mathematician G. H. Hardy once visited the Indian mathematician Srinivasa Ramanujan in the hospital:

I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."

If we could perceive the unique qualities of each random string, we might find each random string as interesting as Ramanujan found each number. But we don’t. Random strings are boring because we can’t tell them apart. What we want is an entropy measurement that tells us how many bits of information it would take to replicate something like the item of interest, from an equivalence class for that item. Something sufficiently similar that we wouldn’t care if one were substituted for the other. (Assume we have a random number generator available for free; randomness does not require information.) A random string of 16 bits has 16 bits of information, but it would take zero bits of information to make another string “like” it, if any string will do.
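(As an aside, Ramanujan’s claim about 1729 is easy to verify by brute force; a quick sketch, searching cubes up to 20³, which is more than enough to cover every candidate below 1729:)

```python
from collections import defaultdict
from itertools import combinations_with_replacement

# Tally every way to write a number as a sum of two positive cubes a^3 + b^3, a <= b <= 20.
ways = defaultdict(list)
for a, b in combinations_with_replacement(range(1, 21), 2):
    ways[a**3 + b**3].append((a, b))

# The smallest number expressible as the sum of two cubes in two different ways:
taxicab = min(n for n, pairs in ways.items() if len(pairs) >= 2)
print(taxicab, ways[taxicab])  # 1729 [(1, 12), (9, 10)]
```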

This equivalence-adjusted entropy would be a measurement of complexity. Measuring complexity is a difficult problem in the study of complex systems (an early review of the problem is here).

Cellular automata (CAs) are simple models of complex systems. A CA is a set of rules that operate on cells. The cells are usually laid out as squares. Each cell is in one of K states. (For the game of Life, the most famous CA, K = 2.) Each rule says which state a cell in state k should change to on the next turn, given the states of itself and of its neighbors in the current turn.

Stephen Wolfram, studying cellular automata, found that there was a class of rules that quickly produced static, unchanging CAs, and a class that quickly produced random noise, and a narrow class in-between that produced strange, beautiful, non-repeating patterns. He called these patterns “complex”. Chris Langton then found a single parameter that predicted whether a CA would be complex. Probably he could have used entropy, but he did not. He used λ (lambda), which he defined as the fraction of transition rules that do not turn a cell “off”.
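A minimal one-dimensional sketch of this setup (my own illustration, not Langton’s code; here λ is read as the fraction of rule-table entries that map to a non-quiescent state): build a random rule table with a target λ and iterate it on a ring of cells:

```python
import random

def random_rule_table(K, radius, lam, rng):
    """Random rule table over neighborhoods of 2*radius+1 cells, each in one of
    K states. lam is (approximately) the fraction of entries mapping to a
    non-quiescent state -- Langton's lambda."""
    size = K ** (2 * radius + 1)
    return [rng.randrange(1, K) if rng.random() < lam else 0 for _ in range(size)]

def step(cells, table, K, radius):
    """Apply the rule table synchronously to a ring of cells."""
    n = len(cells)
    out = []
    for i in range(n):
        idx = 0
        for j in range(i - radius, i + radius + 1):  # encode the neighborhood in base K
            idx = idx * K + cells[j % n]
        out.append(table[idx])
    return out

rng = random.Random(1)
K, radius = 4, 1
table = random_rule_table(K, radius, 0.50, rng)
lam = sum(1 for t in table if t != 0) / len(table)  # realized lambda
cells = [rng.randrange(K) for _ in range(64)]
for _ in range(20):
    cells = step(cells, table, K, radius)
print(round(lam, 2), sum(1 for c in cells if c != 0))  # realized lambda, live cells after 20 steps
```

With λ = 0 the ring dies in a single step; with λ near 1 it churns as noise; the interesting regimes lie in between.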

These three graphs below from (Langton 1992) show typical results, for four-state CAs: A set of rules with λ = .40 quickly leads to a static, “dead” state, and a set with λ = .65 quickly blows up into random noise, while a set with λ = .50 shows interesting, non-repeating patterns for quite some time:



The curious thing is that entropy (unpredictability) is maximal for these four-state CAs when λ = .75. Increasing λ increases the apparent complexity up to a point, but past that point, although it is still increasing unpredictability, it generates noise, not complexity.

Figure 3 from (Langton 1992) plots transient length (one measure of complexity) versus lambda. Transient length peaks suddenly in the area with middling lambda, then just as suddenly falls off again as lambda and unpredictability continue to increase:

Gregorian chant was very predictable: one part only, no instruments, and almost no rhythmic or dynamic variation. Music became steadily more complex and less predictable over the next several hundred years.

It seemed like a good rule to say that the less-predictable music became, the more complex and better it would be. And in fact, the commentaries on Mahler’s Fifth are full of references to the “complexity” and “interest” generated by its dissonances and irregularities.

But music does not become more complex the more unpredictable it is. After some point, increasing unpredictability makes it less complex. Instead of complexity, we get mere noise.

This, I speculate, is what happened to music. Composers internalized the theoretical belief that unexpectedness made music more complex and interesting, rather than just listening to it and saying whether they liked it or not. They kept making things less and less predictable, even after passing the point where complexity was maximal.

Once they’d passed that point, unpredictability only made the music boring, not complex. Like Mahler’s Fifth. That created a vicious circle: New music was noisy, unstructured, and boring. Composers believed the way to make it less boring was to make it less predictable, which only made it even more boring, pushing them to make newer music that was even less predictable. This led inevitably to Ferneyhough’s random-sounding music.

And the inevitability of the entire progression was taken as evidence that this was progress!

“But, Bad Horse,” you might protest, “you’ve based this on the idea that there are equivalence classes of musical compositions. But what counts as equivalent depends on the listener. To someone who understands music perfectly, each composition might be distinct! Then each equivalence class has exactly one member, and randomness equals complexity.”

There is something to that objection. The more one studies music, the more distinctions one can easily make in music. But if you really believe that’s a valid objection, you must conclude that all possible music is equally good.

I don’t know how to deal with subjective equivalence classes, but we don’t have to base our measurements on something subjective. We can use an objective information-theoretic measure of complexity. Mutual information, for instance. The mutual information between two variables is the information they have in common. If both are very low-entropy, this is low, since neither contains much information. But if both are high-entropy and uncorrelated, it’s low again, since you can’t predict one from the other. Here’s a plot of mutual information versus lambda, again from (Langton 1992):

This appears to have a maximum around lambda = .25 instead of .5, which might be a problem. But I don’t think lambda makes sense as our measurement, since it depends so much on the arbitrary choice of which state is the “off” state. Entropy would probably be a better measure, and using it might remove the discrepancy between which lambda gives maximum MI and which gives maximum transient length.
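Whatever measure wins out, mutual information itself is easy to estimate from data. A toy plug-in estimate (my own sketch; the variable names are invented) showing the behavior described above: low MI for a low-entropy stream, low again for two high-entropy but uncorrelated streams, and high only when the streams actually share structure:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired samples (plug-in estimator)."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    total = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        total += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return total

rng = random.Random(0)
xs = [rng.randrange(4) for _ in range(5000)]
uncorrelated = [rng.randrange(4) for _ in range(5000)]  # high entropy, independent
constant = [0] * 5000                                   # low entropy

print(mutual_information(xs, constant))      # 0.0: nothing to share
print(mutual_information(xs, uncorrelated))  # near 0: can't predict one from the other
print(mutual_information(xs, xs))            # ~2 bits: full information shared
```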

My point is that we can choose some objective scheme for measuring the complexity in a score. For instance, go through the score three measures at a time. Call three measures in a row A, B, and C. You can measure P(C|A,B) and P(C|A) for each set of three measures, and then compute how much information about measure C you get from measure B but not from measure A. This will be small for compositions so predictable that measure B doesn’t add much information, and it will be small for compositions that are so random that neither B nor A helps you predict C.
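Under the simplifying assumption that measures can be boiled down to symbols, that quantity is the conditional mutual information I(B;C|A) = H(C|A) − H(C|A,B). A toy sketch over symbol sequences rather than real scores (the sequence constructions are mine, purely for illustration):

```python
import math
import random
from collections import Counter

def cond_entropy(pairs):
    """Estimate H(Y|X) in bits from (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    return -sum((c / n) * math.log2(c / px[x]) for (x, _), c in pxy.items())

def info_from_b_not_a(seq):
    """I(B;C|A) = H(C|A) - H(C|A,B), over consecutive triples (A, B, C)."""
    triples = list(zip(seq, seq[1:], seq[2:]))
    h_c_given_a = cond_entropy([(a, c) for a, b, c in triples])
    h_c_given_ab = cond_entropy([((a, b), c) for a, b, c in triples])
    return h_c_given_a - h_c_given_ab

rng = random.Random(0)
periodic = [0, 1, 2, 3] * 500                    # so predictable that A alone determines C
noise = [rng.randrange(4) for _ in range(2000)]  # so random that nothing predicts C
structured = [0, 1]
for _ in range(1998):                            # each symbol depends on the previous two
    structured.append((structured[-1] + structured[-2]) % 4)

print(info_from_b_not_a(periodic))    # 0.0: B adds nothing beyond A
print(info_from_b_not_a(noise))       # near 0: neither A nor B helps
print(info_from_b_not_a(structured))  # larger: B carries information that A lacks
```

The measure peaks only for the sequence with genuine two-step structure, which is the behavior we want.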

We could argue about how to make the measurement, but we could actually make such measurements (if, say, you got an NEA grant to spend a few months on the problem). I believe that any reasonable measurement would prove that Ferneyhough’s compositions are less, not more, complex than Beethoven’s.

ADDED: That wouldn't mean everyone should start chasing complexity. I think the problems with modernism that I complained about can be summarized as "doing art according to a theory rather than according to what seems good". Ideally, the result of proving this would be to incline people to trust their feelings more and their theories less.


Chris Langton (1992). Life at the edge of chaos. Artificial Life II.

Comments ( 71 )

Ice--not conducive to life

Steam--not conducive to life.

Water--conducive to life.

Scotch--why, yes, thank you...

I can't help but feel that trying to objectively measure music (or art, or literature) is kind of missing the point.

So, according to a friend, you are looking for the mathematical equations that will make you a good writer.

Now, I'm no fancy, big-city scientist, but I'm going to go ahead and guess that you won't be finding 'em :ajsmug:

A very interesting notion, and I quite like the concept of subjective equivalence classes, but I'm wary of your formalism. It seems to work okay right now, but isn't that the sort of insufficient evidence that led to the "entropy is interesting" error?

(For the sake of completeness, a counterexample: random noise is not made interesting by the use of error correction.)

1828660 No, I'm not looking for equations to make me a good writer. Not currently. Maybe later. But all writing advice, such as "show, don't tell", can be expressed as equations.

1828651 There's nothing more objective about saying "Unpredictability is interesting!" And that's what composers have been doing.

Comment posted by Bad Horse deleted Feb 13th, 2014

It seems to work okay right now, but isn't that the sort of insufficient evidence that led to the "entropy is interesting" error?

I don't quite follow, but if we come across more evidence later that contradicts it, we can deal with it then. :ajsmug: Right now, I think composers aren't using the evidence that they already have.

1828664 (For the sake of completeness, a counterexample: random noise is not made interesting by the use of error correction.)

I don't understand what that's a counterexample to.

1828697 Well, that just means that they're missing the point too then. :pinkiehappy:

1828708

It seems to work okay right now, but isn't that the sort of insufficient evidence that led to the "entropy is interesting" error?

I don't quite follow, but if we come across more evidence later that contradicts it, we can deal with it then. :ajsmug: Right now, I think composers aren't using the evidence that they already have.

The "more entropy" meme was a good rule of thumb when it was first introduced, but people kept using it even after it stopped being a good idea. Similarly, the "more complexity" rule seems like a good idea now, but in the future – well, humans demonstrably can't be trusted to deal with things that need fixing.

Maybe I'm just being overcautious.

(For the sake of completeness, a counterexample: random noise is not made interesting by the use of error correction.)

I don't understand what that's a counterexample to.

To the mutual-information measure of interestingness. The five bits of a two-out-of-five code have mutual information, but if the encoding is being used to represent random noise, it's still not interesting.

I like Eliezer Yudkowsky's definition of creative surprise as "the idea that ranks high in your preference ordering but low in your search ordering". Worthwhile music/art/literature/information is data that you weren't expecting to find, but that you are pleased to have found. (Edit: or, to put it another way, one's subjective equivalence classes depend directly on one's utility function.)

...except that doesn't quite cover it either, I think. I still like listening to songs that I've heard before. Or is that only because I don't have eidetic memory? Generally there's little value in redownloading a file that I already have on my hard drive.

This is too thought-heavy for me to have any sort of decent response to it right now. I went and listened to some of Mahler's Fifth Symphony, and the word I'd tend toward would be "forgettable". Most of the thoughts I had about it were things you expanded on in the blog post, and interestingly (to me), you basically stood my own idea of what makes for interesting music on its head. I've been looking for "unexpected" for a while, and it's why I greatly prefer a Styx to a Foreigner, but the flip side of that (which you're hitting here) is that I really have no appreciation for La Boheme.

This refines my ideas considerably, and I look forward to spending some more time with this.

Also, some nights it's Paris in the '20s. Some nights it's Los Alamos in the '40s.

It's a crazy place.

I'm not good enough at math to fully understand your point, but I have read that lifelike systems -- ones which are both meaningful and dynamic -- lie at a sort of "sweet spot" between chaos and order. Highly-chaotic systems are dynamic but meaningless, while highly-ordered systems may have meaning but are static. Between them is the realm in which systems can be both meaningful and dynamic.

Perhaps music is a lifelike system? It's a meme -- and it definitely reproduces and evolves (look at variations or remixes of popular symphonies and tunes). As a lifelike system, if too orderly it is meaningful but static (the most orderly possible "music" would be one note endlessly repeated in the same rhythm); if too chaotic it is dynamic but meaningless (a hash of randomly-altering noise). Between them is the realm at which music can actually be interesting and enjoyable, because it is dynamic but meaningful.

BH, I know a guy who'd like to hire you to inspect his island of cloned dinosaurs. I honestly can't imagine why.

Math is the language of the universe. Its principles are perfect and uniform, and they are discovered, not invented. It is as alien and unchanging as the stars.

Storytelling, on the other hand, is profoundly human. It is predicated upon conflict and imperfection. It is the system by which humans understand and relate to the world around them, so human imperfection is an essential part of that system.

Science and mathematics are invaluable tools. They can supply us with an endless quantity of facts, which we can use to inform our decisions. But the one thing science and math cannot do is tell us how we should feel about those facts. They can tell us "If you do X, then Y will happen." But if we ask "Would Y be a good thing?" they are silent.

Bad Horse, if you're not trying to determine "What would be the best story/symphony?" mathematically, then I'm not sure what you are doing. And that approach is doomed to failure, because art is a medium of communication, and deciding what you value and what you want to communicate is an essential first step in the storytelling process.

I honestly think you have something here. It's worth researching, if only to start an interesting discussion. And 'Computational Aesthetics' has a nice ring to it. :twilightsmile:

A more serious comment has to wait, of course. I need to think on this, and figure out if I have anything of value to say. I do think music might be your best bet for research on this.

1828651
Well, that is the received wisdom on the matter, sure. But think about it for a moment. Either it is 100% subjective, in which case saying something is good or bad is meaningless and all the critics are just putting on airs, or there is some influence from an objective metric of some sort which people of a certain training can access, possibly in a faulty manner.

And if there is any objectivity to it, there must be a way to quantify it. Or, at the very least, it is possible to discuss quantifying it in a meaningful way.

1828660
Rather obviously, Bad Horse doesn't really need equations that might make him a good writer, now does he? Equations that might teach him about this human thing called 'happiness,' now, that's a different kettle of fish. :pinkiehappy:

I'm not even going to touch the theoretical end of this, since 1) it's about music, and 2) involves math I'm unfamiliar with. Plus I'm probably too dumb haha.

But I will say: I admire your attempts to show you can objectively determine the complexity of something, and in regards to music I believe you're right. Yet I don't see complexity equaling quality. Furthermore, I'm always hesitant to bring something as regimented as math or science into any form of art (and I went to college for physics).

Out of curiosity, can I ask: why might you (as in you, Bad Horse) want to objectively determine the complexity of something like a piece of music?

1828932 Out of curiosity, can I ask: why might you (as in you, Bad Horse) want to objectively determine the complexity of something like a piece of music?

One motivation is that I think composers have fetishized unpredictability, calling it "complexity", and as a result have wasted a century of musical potential. If I can show that the pieces they call highly complex are merely highly unpredictable, they may rethink that.

That doesn't mean that they should all then chase complexity as the measure of music. Ideally, they'd listen more to intuitions about what music feels like, and less to theory.

But the main motivation is just that aesthetics is one of the great mysteries, and any insights into it would teach us a lot more about the world than most other discoveries would.

1828779 To the mutual-information measure of interestingness. The five bits of a two-out-of-five code have mutual information, but if the encoding is being used to represent random noise, it's still not interesting.

Ah. But those five bits are a single digit. It wouldn't make any sense to look for the mutual information between them. The digits are what the message is made of; you'd look for MI between digits.

1829114
That was only an example.

A message of b bits can be treated as a single number from 0 to 2^b - 1, and an m-of-n (or other redundant) encoding can be chosen to represent such a number. So regardless of the size of the message, it can be treated as a single "digit" in some enormous base, and then MI within the message is necessarily MI within a digit.
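For what it's worth, that construction is easy to exhibit (a sketch with one arbitrary digit-to-codeword assignment, not the standard telephony weights): the bits of a 2-of-5 codeword have positive mutual information purely from the code's structure, even when the digits being encoded are uniform noise:

```python
import math
import random
from collections import Counter
from itertools import combinations

def mi(xs, ys):
    """Estimate I(X;Y) in bits from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Arbitrary 2-of-5 code: each digit 0-9 gets a 5-bit word with exactly two 1s.
codewords = [tuple(1 if i in pos else 0 for i in range(5))
             for pos in combinations(range(5), 2)]

rng = random.Random(0)
digits = [rng.randrange(10) for _ in range(20000)]  # pure noise, maximal entropy
words = [codewords[d] for d in digits]

bit0 = [w[0] for w in words]
bit1 = [w[1] for w in words]
coin_a = [rng.randrange(2) for _ in range(20000)]   # independent coin flips, for contrast
coin_b = [rng.randrange(2) for _ in range(20000)]

print(mi(bit0, bit1))      # positive: the exactly-two-ones constraint couples the bits
print(mi(coin_a, coin_b))  # near 0: no structure to share
```

So the mutual information inside the codewords is real, yet the message is still noise -- which is the objection in a nutshell.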

1829097
Ah, in that case, I wish you the best of luck. I understand how...irritating...it can be when people take an idea which they don't think all the way through and turn it into an industry which continually feeds itself. I'm sure you're not alone in your feelings on this.

As stated in your last blog, you believe that beauty (or aesthetic) is, at least on some level, objective, yes?

First off, I really liked this blog post, and it is very interesting to think about. I've heard people talk about similar things before, but applying it to music, specifically, seems very reasonable, and I suspect you could actually make a reasonable case for this.

Is there some sort of like, musical informatics journal out there somewhere that this sort of thing might have been researched in? Because if not, it really should be.

1828660
I think the more realistic question (which is what he is posing here) is whether it is possible to analyze something and figure out if there is some sort of optimal range. In other words, the numbers here are not a means of making something, but a means of analyzing it quantitatively.

Technically speaking there must be some sort of equation for producing good writing, given that A) good writing exists and B) it is created by a computer (namely, the human brain). Whether we will ever determine said equation is difficult to say, though.

1828779
Well, the question is, will decreasing the entropy of the music make it sound better to people?

Really to determine this, the best way of doing it would be to take a bunch of music, and give it to a bunch of people, and have them rank it best to worst, as well as a possible 2 category characterization (liked it or didn't like it). Include in it some really random stuff (possibly even machine created randomized music), random noise even possibly, as well as a variety of actual human composed pieces.

Another interesting thing to do would be to create a program which created highly random musical pieces (possibly following some basic rules), and then fed it to said musicians and see if they liked it, and if it could be distinguished from human-created pieces. If the random music did well compared to the human pieces, then we would either have to assume that randomization is good according to those people (which confirms the theory that that is the way they're going) and/or that possibly they have gotten past the point where they even can recognize genuine talent (which would indicate they are incompetent to actually rate music in the first place).

1828779

I like Eliezer Yudkowsky's definition of creative surprise as "the idea that ranks high in your preference ordering but low in your search ordering". Worthwhile music/art/literature/information is data that you weren't expecting to find, but that you are pleased to have found. (Edit: or, to put it another way, one's subjective equivalence classes depend directly on one's utility function.)

Unfortunately, this is precisely the sort of false profundity I expect from Yudkowsky; it sounds very clever, but when you think about it, you realize it is actually an entirely useless thing to say, as it is merely a rephrasing of "thinking of something useful that others haven't thought of before", which is easier to grok (and therefore a better definition). But actually creativity isn't even about thinking of useful things, necessarily, but thinking about new things in general, some of which happen to be useful. Moreover, it is possible for someone to come up with a lot of new ideas, but some people are better at separating the chaff from the wheat than others; I suspect that these two skills are, in fact, entirely unrelated. I think a lot of people are capable of coming up with good ideas; I think fewer people are good at actually recognizing a good idea right off the bat. And fewer still are good at both. And in truth, a lot of creative people ARE highly prolific and do end up putting out garbage as well - there are lots of garbage patents, after all, even from very respected people, who patented ideas that never ended up panning out.

1828910

Math is the language of the universe. Its principles are perfect and uniform, and they are discovered, not invented. It is as alien and unchanging as the stars.

Physics, not math, is the language of the universe. You can create math systems which have absolutely nothing whatsoever to do with reality, and indeed, people have done so. They aren't especially useful, mostly.

And indeed, physics tells us that uh... well, the universe is really, really weird. Most of space is empty, and yet empty space, isn't. The position of particles is uncertain, and indeed, it is impossible to measure some properties of particles simultaneously with perfect accuracy. Time is not a constant. The very fabric of reality is bent by mass. Going very quickly changes the shape of objects as perceived by outside observers.

Storytelling, on the other hand, is profoundly human. It is predicated upon conflict and imperfection. It is the system by which humans understand and relate to the world around them, so human imperfection is an essential part of that system.

Humans are a product of the universe, though. We run off of the same physics as literally everything else. Humans are not separate from the universe; we're an inherent part of it. An especially complicated part of it, and possibly the most extreme example of evolution in existence - intelligence allows us to adapt ourselves within our own lifetime. We don't HAVE to rely on gradual change - if we're cold, we can build a fire, or make warm clothing, or build a house, or do any number of other things. We're incredibly adaptable.

But that doesn't mean we don't run off of the same thing as the rest of the universe. Humans are, in essence, incredibly complicated meat machines with the most powerful computers in existence running in our skulls, and those computers run off of Doritos and Mountain Dew. But we're still computers, made out of meat.

That means, technically speaking, anything we do CAN be modelled by a computer, if we know the proper way to program it. We don't, obviously, and we don't have a computer which is capable of simulating a human brain in anything even approximating real time... but that doesn't mean we're somehow utterly different. In reality, there IS some sort of rule (or rules) for making good music. We don't know what those rules are, but if they are rules, we could write a program that did it for us, in principle.

Given an infinite amount of time and an infinite amount of computing power, we could even figure it out. We lack both, of course, so I don't know if it will really be solved, at least not within the next century, possibly not within the next millennium... but it may be possible even within the next DECADE to dynamically create music via a computer that works for creating some sort of ambience. I suspect you could probably do it today, to some extent. It wouldn't be real music, but it would work for, say, a video game's BGM, I'd wager.

But the one thing science and math cannot do is tell us how we should feel about those facts. They can tell us "If you do X, then Y will happen." But if we ask "Would Y be a good thing?" they are silent.

Actually, this is untrue. While science cannot tell us what our values should be, it is in fact quite good at answering questions posed by values. Say we take the question, "Should we vaccinate the population against smallpox?" The question is, what is the cost imposed by smallpox, what is the cost imposed by the possibility of smallpox (which is on top of the actual cost of smallpox), and what is the cost of the vaccination program? Exterminating smallpox was an easy question, economically, and given our goal (minimize cost/maximize productivity), the expense of a worldwide smallpox program was obviously worthwhile. Indeed, eradicating almost all such diseases is worth the expense; sadly, many are nearly impossible to eradicate due to animal reservoirs, but eliminating and controlling them is very worthwhile, by and large.

Now, you may say "well, maximizing productivity is an arbitrary goal", which is somewhat true (only somewhat, though, because maximizing productivity is actually selected for by nature if it improves your reproductive success, so there is some empirical basis to it - and indeed it is worth noting that agricultural societies have completely taken over the globe as a result of our way of doing things). But if you have a goal, science can tell you what to do, and science can even help you figure out which goals you should have.

1829138 The original 5-bit digit is a unit of information taken from the real world, which it does not make sense to look inside of. The message glommed into one very large number is not an atomic unit in that way.

1829207
Concepts like sense and meaning don't apply to random noise.

My point is that it's possible to construct a "message" which passes any given set of tests for the formal structure of a worthwhile message, but which is in fact empty of meaning. MI is insufficient as a measure, because the presence of mutual information does not ensure that the information being mooted is worth knowing.

1829236 If you try to game the system, sure. Don't do that. You seem to be asking for some measure that's powerful enough to construct music, which is not what I have in mind.

Concepts like sense and meaning don't apply to random noise.

They do, because "noise" is noise within a formal system. In the case of music, it is a series of notes. You can't look inside the individual notes! What you were proposing is like taking the fact that there's a middle-C sixteenth note, finding that's encoded as 011010, and looking for mutual info between those bits. That would be silly.

1829250

If you try to game the system, sure. Don't do that.

And yet, isn't that exactly what modern orchestral composers have done – tried to game the system? If the purpose of this exercise wasn't to create a measure that's resistant to idiot geniuses, then what was it?

1829190 And very quickly we get to the heart of the matter: I don't think humans are simply products of the universe. I believe we're the product of a Creator.

If humans are simply a product of evolution, then you're 100% correct. If we're not--if there's something about being human that transcends the mechanical dance of atoms--then science can't tell us everything we need to know about being human.

While science cannot tell us what our values should be, it is in fact quite good at answering questions posed by values.

That's exactly my point. Science provides answers; values provide questions. Two people with different values can look at the same science and come to very different conclusions about the right thing to do.

As you say, given the goal of "We want people to not die", science says eradicating smallpox is a good idea. What I'm saying is that we need to look carefully at where we're getting that goal. Is maximizing reproductive success really the ultimate goal of the human race? If that was what people really cared about, wouldn't they be having as many kids as possible before death, to the exclusion of most other goals?

Science can help us achieve any goal we set before ourselves. What those goals are is up to us.

1829190

You can create math systems which have absolutely nothing whatsoever to do with reality, and indeed, people have done so. They aren't especially useful, mostly.

Mostly. Then again, if memory serves, matrix multiplication was dreamed up by someone who just wanted to show you could come up with a meaning for multiplication that did dumb things like lose commutativity. No possible practical use.

We all know how that turned out.
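For anyone who hasn't seen it, losing commutativity is easy to check by hand; a minimal Python sketch (2x2 case only, my own example):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # swaps coordinates

print(matmul(A, B))  # [[2, 1], [4, 3]] -- columns of A swapped
print(matmul(B, A))  # [[3, 4], [1, 2]] -- rows of A swapped: AB != BA
```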

That means, technically speaking, anything we do CAN be modelled by a computer, if we know the proper way to program it. We don't, obviously, and we don't have a computer capable of simulating a human brain in anything even approximating real time... but that doesn't mean we're somehow utterly different. In reality, there IS some sort of rule (or rules) for making good music. We don't know what those rules are, but if there are rules, we could in principle write a program that applied them for us.

To be fair, this is heavily speculative. The idea that the universe is fundamentally mechanistic can have a certain appeal, but the only real evidence backing it up is "hey, I can explain some things!" and "physics measurements look pretty repeatable."

I'm not saying it's not a perfectly good stance to take, and I'm probably on board for the idea that you can effectively capture the quality of music by doing something like this; I just think it bears pointing out that you seem to be taking something unprovable as axiomatic.


1829280

If the purpose of this exercise wasn't to create a measure that's resistant to idiot geniuses, then what was it?

I thought the point was more to uncover whether a wholly objective criterion measure of music quality could be created—not so much as a way of saying, "Hey, your stuff sucks," but more as a way of indicating to the field that conventional notions about quality may leave something to be desired, and that trying to pull the musicians of the world in the direction of "more complexity" may be counterproductive.

My brain fried halfway through that last sentence, and it took me 5-10 minutes to write. I officially need to retire for the night.

1829359

I believe we're the product of a Creator.

Yeah that's... not going to lead to a productive argument.

That's exactly my point. Science provides answers; values provide questions. Two people with different values can look at the same science and come to very different conclusions about the right thing to do.

Ehhhhh...

I've generally found that in reality, what ends up happening here is actually quite the opposite. Where people differ is where science cannot provide an answer.

For instance, is the proper response to global warming to A) cut back on greenhouse gas emissions or B) say "screw it" and prepare for the impacts? Science cannot tell you the answer, because the answer depends on assumptions which, by their very nature, aren't scientific. Our assumptions may be based on science, but that doesn't make them scientific themselves. What you should do depends on the cost of losing that much land versus the cost of cutting back on greenhouse gas emissions and forever capping them at a certain level, and on whether you think it is realistic that people will actually do so. If you don't think some major player (say, China or the US or India or the EU) will deal with it properly, then nobody else should even bother, because it won't actually stop anything from happening. Switching to better energy sources should happen regardless, but the investment necessary to stop global warming is very large, and an argument can be made that we shouldn't spend that money because it is cheaper just to deal with the consequences, or because people will refuse to deal with it properly in the present.

But this all depends on your assumptions: how bad you think it is likely to get before we naturally reduce our greenhouse gas emissions, how much land we lose under each scenario (differences in land loss make a pretty big difference), whether you think people will realistically do what needs to be done to prevent the bad stuff from happening, whether our estimates of the thresholds for that bad stuff are accurate enough, and a lot of other things besides.

If science has an answer (say: we can stop smallpox via a vaccine, and it has a fatal reaction rate of one per one hundred thousand people who get it), you can easily figure out whether or not you should eradicate smallpox, because the cost is very small relative to the benefit (seriously, smallpox sucks). And indeed, almost every major communicable illness imposes immense expense on individuals, corporations, governments, hospitals, and everyone else, so if the cost is even remotely reasonable, it is generally advisable to eradicate the disease entirely, if that is possible, because it saves money for the foreseeable future.

As you say, given the goal of "We want people to not die", science says eradicating smallpox is a good idea. What I'm saying is that we need to look carefully at where we're getting that goal. Is maximizing reproductive success really the ultimate goal of the human race? If that was what people really cared about, wouldn't they be having as many kids as possible before death, to the exclusion of most other goals?

Well, if you think about it, if humanity lasts ten times as long with a population of 1/10th as many people, then you're no better off one way or the other from the standpoint of average number of offspring - and indeed, if you feel like you have particularly good genes, you're actually better off with the longer timespan/smaller population because your genes are more likely to outcompete other genes over time.

Though you could argue that the goal is arbitrary (there is, in fact, no actual purpose to life in terms of some sort of reason of being, save that which we ascribe to ourselves) there are reasons why life exists in the empirical, scientific sense - it is self-replicating and self-perpetuating and good at both of those things.

Science can help us achieve any goal we set before ourselves. What those goals are is up to us.

Well, yes, but on the other hand, science can tell us "killing everyone" isn't a very good goal because you're very likely to fail at it. I mean, again, this depends on how you're defining terms, but there's a lot of ways in which science can inform your goals.

1829364

Mostly. Then again, if memory serves, matrix multiplication was dreamed up by someone who just wanted to show you could come up with a meaning for multiplication that did dumb things like lose commutativity. No possible practical use.

Matrix multiplication is consistent with the rest of math, though. Like, you can use it to solve systems of equations, and you can prove why it works using ordinary math. It isn't actually a separate KIND of math; it is just a different way of looking at the same math.

One of my professors in college worked on infinite matrices and mentioned that they had found some use for them in quantum mechanics. I asked him if that was why he had worked on them originally, and he said no; they just happened to find out that they were useful.

There are mathematical systems which don't have anything to do with reality and which are purely abstract. The universe appears to use a certain set of mathematical postulates in its operation, but you could make different postulates which contradict the ones we have and create a logical mathematical system with no relationship with reality.

The bigger problem with math is that it is based on assumptions and is, in fact, purely logical, which is why the universe doesn't run on it - the universe runs on probability, not logic.

In fact...

To be fair, this is heavily speculative. The idea that the universe is fundamentally mechanistic can have a certain appeal, but the only real evidence backing it up is "hey, I can explain some things!" and "physics measurements look pretty repeatable."

It is, in fact, nearly certain that the universe is not deterministic. Science can never absolutely prove anything, but reality does appear to be mechanistic, to a very high level of probability, at least on the scales we study.

I'm not saying it's not a perfectly good stance to take, and I'm probably on board for the idea that you can effectively capture the quality of music by doing something like this; I just think it bears pointing out that you seem to be taking something unprovable as axiomatic.

It is falsifiable, though. Determining whether or not a system is mechanistic is definitely something you can, in fact, do.

If you're talking about "proof" in the mathematical sense, no, obviously not. But if you're talking about proof in the more layman's term, science proves all sorts of things (and disproves all sorts of things), and there's no particular reason that this should be impossible to prove.

Indeed, it should be possible to simulate a human brain inside a computer someday. If we cannot do so, then it means that either the human brain is more complicated than computers are capable of simulating via silicon transistor based architecture, or that the human brain is not mechanistic, and there should be ways of distinguishing between the two via comparison.

I am not holding my breath for said simulation, though; I would say that it is at least 20 years off, and possibly several centuries off, depending on how the brain functions and what level of simulation is necessary.

1829097

One motivation is that I think composers have fetishized unpredictability, calling it "complexity", and as a result have wasted a century of musical potential.

Because as we all know, solo piano composition is the only genre of music!

In all seriousness, I can see what you're getting at here. I personally know very little about this variety of music, having almost called it "classical music" even though a fact check told me that it's not the right term to apply to Mahler or Ferneyhough. The only reason I recognized Gustav Mahler's name is because I heard it referenced on Brows Held High, and even then I didn't know he was a composer. But I am a bit worried about promising musicians going too far up their own asses, the example that comes to mind being that of The Knife.

The Knife are a brother-sister team of electronica musicians from Sweden who I've been interested in ever since I first heard the sister's solo Fever Ray album. They started out with two albums that were okayish but nothing special, the products of an act that was still getting into the swing of things. Then they released Silent Shout, a pitch-perfect, cold-blooded grotesquerie that did everything right and stands out as my favorite electronica album ever. I breathlessly recommend it to anyone who's interested, though I gotta warn you that Karin Dreijer Andersson's vocals can be hard to get used to. If you hate Bjork's singing, you probably won't like hers either, especially because she loves pitch-shifting it up and down several octaves.

I bring them up because after Silent Shout, they spent time apart, and after nothing but that solo album and a commissioned opera score, they finally released another album, Shaking the Habitual, last year. I've tried my best to like it, and there are a couple of good tracks on it. "A Tooth for an Eye" and "Without You My Life Would Be Boring," while not as immediate and impactful as "Marble House" and "We Share Our Mother's Health," are catchy and rhythmic and memorable. But a lot of Shaking the Habitual is ambient and droning and unengaging, and while it's cool that they've frontloaded the album with feminist, anarchist, and revolutionary spirit, I can't help but feel that Silent Shout's own feminist touches are being undersold by comparison.

I feel bad typing this all out because I really want to like Shaking the Habitual, but your talk about music couldn't help but remind me of it. Nothing about the album's note-by-note composition seems as random and careless as the Ferneyhough piece you linked, but the part about inaccessibility to the average listener and pie-in-the-sky ambition is something I can't ignore (Shaking is about twice as long as Silent). Maybe when I learn more about music theory I can appreciate the album a little more, but I doubt I'll ever say it's as good as Silent Shout. Even critics who like both albums seem to be in agreement with me there. And I guess the worst-case scenario of The Knife just being one album wonders wouldn't be so bad, considering what a wonder Silent Shout is.

Watch how good I am at going to bed.

1829404

It is falsifiable, though. Determining whether or not a system is mechanistic is definitely something you can, in fact, do.

It's falsifiable (for probabilistic epistemology) in the small-scale sense. My point was just that you seem to be taking it as axiomatic in the larger sense, which runs afoul of two things: (1) nothing is axiomatic in science and (2) we really have no way of saying anything meaningful about how the universe works short of wild extrapolation and/or speculation.

But I think we agree on most points here. I'm just very wary of counterfactual claims: we could do Y if we had X. We had a guy from Harvard in for a seminar a couple weeks ago, and one of the things he's famous for is trying to justify causal inference. His talk was... crap, basically[1]. I'm always very wary of anyone claiming that science, at the limit, can do really anything. We have no idea what science can and can't do at the limit, and it's not a terribly fruitful discussion. But it's the best tool we have for dealing with the data we can get our hands on, and it'd be crazy not to use it within that context.


[1] People from Harvard seem to be shit researchers, in general. I've never had a run-in with a person employed there where I didn't come away feeling like said person was skipping over a lot of work and getting away with it just because OMG Harvard. I'm sure this can't hold up for everyone there, but it's held up with a very uncomfortable number of people so far, in a whole range of disciplines.

Based on the progression of BH's posts, I expect an objective music quality evaluator within a year, an objective visual art evaluator within two, and an objective literature evaluator shortly thereafter.

1829437
1829364
Honestly I'm not even sure what you're arguing here.

If you're arguing that the universe does not behave according to physical laws, we have better than six sigmas that say you're wrong. Everything from particle accelerators to the space program wouldn't work if the universe did not.

If you're saying the universe is not deterministic, then yes, we know that is absolutely the case.

If you're saying that the universe is not deterministic on the macro scale, then you're technically right, but missing the point. Yes, the universe is not, in the long term, perfectly predictable on the macro scale, because it is not perfectly predictable on the micro scale. But even on the scale of human beings, while there is some chance involved - DNA doesn't replicate perfectly and the like - the reality is that we're pretty bloody deterministic. All life is, really; we work in very orderly and organized ways, and while there is an error rate, it is small. If we didn't, we wouldn't be able to stay alive.

In other words, while yes, the universe isn't perfectly deterministic, that doesn't mean it cannot be modeled, and we do, in fact, model it, in many cases to an extremely high level of fidelity. As there is no evidence that the human brain is somehow fundamentally different from everything else in the universe, and considerable evidence that it isn't (evolution, material composition, signal measurements, etc.), it is pretty unreasonable to argue that we are not, in the end, meat machines, because according to all available science, that is exactly what we are.

Whether we will ever be able to replicate the human brain well in real time via transistor-based technology is an open question to which the answer is a big fat "maybe". But I don't think anyone in the relevant fields doubts that it is possible from a programming point of view; the question is whether it is feasible from a computational-resources and economic point of view. If there are certain things we have to model at full fidelity, it simply isn't feasible, but it isn't yet clear what level of fidelity IS required. We'll probably not know for at least 10-20 years, at which point we'll have a better idea.

But given that we are essentially computers made out of meat (albeit not transistor-based ones), and given that we do produce said works, it is obviously true - it must be true, in fact - that there is some "program" for writing said works, because it has been done. Whether or not we can replicate that program is an open question, but whether it exists is pretty much certain, because we've produced the works. Indeed, one could view the entire earth as a computer of sorts, with various individual processing units doing independent calculations which are later shared via various means. It isn't a computer in the same sense as the one sitting on your desk, but from a certain point of view it can be seen as such.

Good heavens. :duck: Artificial Life II? I've only read the Steven Levy one.

http://www.amazon.com/Artificial-Life-Frontier-Computers-Biology/dp/0679743898

Clearly one of us should be charging the other rent for living in one's head, but I'm not sure who. Next you'll tell me you are into Douglas Hofstadter as well as classy gimp suits and then where shall we be? :raritywink:

This is perfectly valid when considered inside a closed system (i.e., a single work of art). However, art never exists in a vacuum, so external context must be taken into account. Maybe, given a large enough sample and some kind of feedback or initial condition taken into account, you can end up in a situation where more unpredictability will, to some extent, result in more complexity.

1829190 Can't believe you used the phrase "musical informatics journal" here. Some durned fool will go make one now and set the field of music back another century.

So BH, what you're saying, roughly, is that music seems to have taken random noise and determined it to be art? If so, it's not really a new thing. Visual art has done that for a very, very long time, and made quite a bit of money at it. (I have photographs that will prove that, and I live in Kansas. I can barely imagine what the East and West coasts have to suffer through.)

1829593
https://en.wikipedia.org/wiki/Music_informatics

Too late~

Also, really, as far as that sort of "art" goes, there's actually not that much out here. Almost all art is not part of the modern art movement, but rather part of the movements which greatly post-date it. Honestly, digital art of actual stuff is by far the most common and profitable as far as I can tell. The whole "random paint spatters" thing honestly is basically a very cliquish community which probably can't sell to anyone but very wealthy people trying to impress other very wealthy people.

1829473
I think you seem to be misinterpreting me. I'm not arguing the philosophical point, at all.

If you're saying the universe is not deterministic, then yes, we know that is absolutely the case.

I'm arguing that you sound awfully sure about a lot of things that fundamentally cannot be known. Unless you've completely switched over to a probabilistic epistemology and you mean something wholly different from other people when you use phrases like "we know that is absolutely the case," I don't know how you expect to back that up. Absolute statements have always seemed, to me, to mark the end of scientific inquiry. If you think you've already got the truth with p=1, then there's no amount of data in the world that can make you change your mind about something.

Absolutism sets off big red flags for me, especially when used in the context of science. I always come back to the George Box quote: "all models are wrong, but some are useful".

As there is no evidence that the human brain is somehow fundamentally different from everything else in the universe, and considerable evidence that it isn't (evolution, material composition, signal measurements, ect.), it is pretty unreasonable to argue that we are not, in the end, meat machines, because according to all available science, that is exactly what we are.

...

But given that we are essentially computers made out of meat (albeit not transistor based ones), and given that we do produce said works, it is obviously true - it must be true, in fact - that there is some "program" for writing said works, because it has been done.

This is the sort of reasoning I'm talking about. This reads, at least to me, like you're saying that strong probabilistic evidence is sufficient to make an idea axiomatic. I'm on board with you, right up until you say that your conclusion is obviously and necessarily true. In effect, you're saying, "I'm pretty sure A is true. A logically implies B. So B must be absolutely true, because A looks like it's probably true."

I don't even need to wade into the mire of how messy our data on some of this stuff is. The point I'm trying to make holds for things as non-controversial as gravity, and tends to be why I find myself a little flummoxed whenever I hear people acting like they've pinned down the structure of the universe. All our theories about how things work three superclusters over are based on what we can figure out sitting on our little rock. We have some darn good guesses, but we're extrapolating so far away from what we know, with no ability for experimental verification. We can't even rule out variable speed of light hypotheses.

There are lots of things we can do well. Low-error modeling in science is what gives us the ability to develop technology. But if we can't at present get anywhere close to low-error modeling, however good our theory may look, I don't know how we can extrapolate to the point of saying we know with absolute certainty that we'll get there.

Yes, I'm aware that I'm harping on a point a lot of people would consider purely semantic: the difference between things being probably true and absolutely true. But that's kind of a critical distinction in science.

1829280 The purpose is to understand music and ourselves. One more specific purpose is to figure out whether Ferneyhough's music really is great, as part of a larger project to understand the whole modernist / postmodernist movement.

1829404 The bigger problem with math is that it is based on assumptions and is, in fact, purely logical, which is why the universe doesn't run on it - the universe runs on probability, not logic.

You just told a statistician that statistics isn't math. :rainbowderp:

1829404 I think we're pretty much on the same page. Well said!

1829473

it is pretty unreasonable to argue that we are not, in the end, meat machines, because according to all available science, that is exactly what we are.

This, I guess:

Is what always confuses me. If the computer we're trying to emulate is made out of meat, why are the computers we build made out of plastic and metal? Plastic and metal, it seems to me, are very different substances from meat: they behave in different ways when subjected to heat and cold, for instance, or when plugged into a wall socket. To expect a computer made of plastic and metal to work the same way as a computer made of meat just seems odd to a poor little humanities major like me. To propose a bad analogy, isn't it like expecting a pebble to work the same way as an apple seed?

Mike, Wondering

1829920
I think my point was that the universe runs off of science, which uses math to represent itself, but is not itself math, as it is not axiomatic. So while we use math to model science, science is not math.

Which is kind of pointless as I'm sure everyone already knows that, and thus is an utterly worthless statement to make.

It was midnight and I was rambling. I'm sorry. :fluttercry:

1829687

Yes, I'm aware that I'm harping on a point a lot of people would consider purely semantic: the difference between things being probably true and absolutely true. But that's kind of a critical distinction in science.

In science, absolutely nothing is absolutely certain. Ergo, any argument you have with anyone over their use of the term absolutely is automatically going to be incredibly dumb, because they obviously are using it in the standard colloquial sense where it is used emphatically. There is no "critical distinction" here because science is based on probability.

Lord knows I do this too, but I'm calling "you're arguing with someone on the internet without disagreeing with them" on this one.

Unless you're suggesting that brain damage doesn't cause people to lose function and memory, and cause personality changes and mental illness, in which case I can point you towards a great deal of research which says that, no, the structure and signal transmission of the brain are what cause it to function. That there is no ghost in the shell is incredibly likely - we're talking many sigmas of accuracy here. We know that the brain is the seat of consciousness. We know that the brain is the end product of evolution. We know that the structure of the brain and the signals which pass through it are what allow it to process information, because damaging various parts of the brain causes loss of function. We know that the brain can reprogram itself to some extent, both via memory formation and learning and via responses to brain damage, where other areas are sometimes repurposed but the original ability does not regenerate and instead has to be relearned. The list goes on. The brain shows no sign of exotic physics in its functionality as far as anyone has observed. All of this suggests very strongly that nothing untoward is happening here.

But I think we're wandering wildly off-topic here, and I'm not even sure if you're arguing the above or not, and I don't think that you actually are, I think you just got wrapped up in an argument. If you do disagree and you want to argue with me about this further, PM me. And I'll try not to respond today as I want to get something else done.

:ajbemused: Aaw shucks! I'm terrible at mathematics; it's hard to wrap my head around. But I shouldn't give up, I guess, so could anyone recommend literature on the subject at hand?

At this point of personal understanding, it seems that higher complexity in any given system will cancel itself out, demanding more and more energy to produce more effect. Inherently, all things in the universe seek out inert uniformity, so to keep a system from reaching that inactive state you have to add work, or energy, to keep it going, up to a point where adding more can't increase what's being produced or valued.
So, if I understand this — which I see as improbable at this point — there's a maximum of complexity to music, and to genres of music, that'll produce interest, attentiveness, longevity, and admiration in the listener, but which will lose these desirable traits if the composer crawls too far up his/her ass or plays it too safe. Am I right? :ajsmug:

1830678

In science, absolutely nothing is absolutely certain. Ergo, any argument you have with anyone over their use of the term absolutely is automatically going to be incredibly dumb, because they obviously are using it in the standard colloquial sense where it is used emphatically. There is no "critical distinction" here because science is based on probability.

Lord knows I do this too, but I'm calling "you're arguing with someone on the internet without disagreeing with them" on this one.

Yeah, that's basically what I was trying to say 13 hours ago, but you seemed to want to continue with it.

:rainbowhuh:

1830628

If the computer we're trying to emulate is made out of meat, why are the computers we build made out of plastic and metal?

The substrate doesn't matter; all computers are fundamentally equivalent (up to the limits of clock speed, memory capacity, etc.). That's essentially the Church–Turing thesis.

1829920

You just told a statistician that statistics isn't math.

Actually, I always believed that to be true myself, but never told that to a statistician. :rainbowlaugh:
Think about it: every other field of mathematics is based on unquestionable certainty, rock-solid rules that can give a variable its value with exact precision. Statistics and the whole probability calculus? It throws it all out of the window, stating that something could be or not, based on some unspecified probability. Where's the solid mathematical logic and certainty then? :pinkiecrazy:

"Getting stuck trying to understand Kullback–Leibler distance? Try reading this blog post about music on this fanfiction site."

It's amazing what you manage to sneak in here sometimes.

1830682 I'm saying that people are mistaking unpredictability for complexity. "Complex" should mean something more like "interesting". Increasing unpredictability increases something's interest up to a point, and then makes it less interesting as it becomes so unpredictable that we call it noise.
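One crude way to see that unpredictability isn't interest: use compressed size as a rough stand-in for unpredictability (a toy sketch of mine, using zlib as a cheap proxy for Kolmogorov complexity, not a serious measure of music). A fully periodic signal and pure noise sit at opposite ends of the scale, and neither one is interesting.

```python
import random
import zlib

def incompressibility(data: bytes) -> float:
    """Compressed size / raw size: near 0 for predictable data, near 1 for noise."""
    return len(zlib.compress(data)) / len(data)

random.seed(0)
periodic = b"AB" * 5000                                     # maximally predictable
noise = bytes(random.randrange(256) for _ in range(10000))  # maximally unpredictable

print(incompressibility(periodic))  # tiny: boring because it never surprises
print(incompressibility(noise))     # ~1.0: "surprising" at every step, and still boring
```

Whatever "interesting" is, it lives somewhere between those two extremes, which is the point about noise.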

1830628
So, meat machines and silicon transistor-based computers work pretty differently, which is why humans are only sort of smarter than computers are.

Computers are extremely good at precise arithmetic, and indeed, at many forms of math. They are very good at modelling physical systems. This is a consequence of the way that they function.

Human brains, on the other hand, seem to be very good at quick and dirty estimations, lateral thinking, modelling 3D spaces, and modelling very abstract situations. Humans are not very good at calculation relative to computers, but are extremely good at reasoning - a computer cannot easily do many things that humans take for granted, or put things together as easily on its own.

However, in the end, the way the human brain works is via electrochemical signalling between neurons. Every neuron has thousands of connections to other neurons, and they communicate by sending a signal known as an action potential along those connections. This signal is either on or off... which is exactly what a transistor is, 0 or 1. While the actual "calculation" a neuron does when deciding whether or not to send along another signal afterwards is more complicated than this, in the end it can be modelled on a computer.

The issue is that our brain is probably closer to analog than to digital: while the action potentials are digital, whether or not a neuron sends signals elsewhere is probably not digital but analog in nature, which means that modelling it requires more than just a 0 and a 1, but enough precision to reasonably model the analog nature of reality.

The human brain, according to some estimates, runs at about 1 exaflop (10^18 floating-point operations per second). This sounds pretty ridiculous - and honestly is pretty ridiculous - but it also probably isn't an accurate way of modelling how our brain works, because we're actually really terrible at FLOPs. It is more or less an approximation of how much work our brain does, and the equivalent amount of work a computer would need to do.

In reality, the brain has about 100 billion (10^11) neurons, each with 10,000 or so (10^4) connections, running at about 60 Hz, meaning you're talking about at least 6*10^16 potential signals to worry about per second. But this is merely how many signals could be sent across the brain at once; in reality, most of those will be 0s at any given moment, though over time the whole brain does get used, which means you can't be too lazy in your modelling, because the rest of the brain matters. So at a minimum you need to model 6*10^16 bits per second to model a human brain in real time... though it is probably even worse than that, because action potentials are not simultaneous, and the time lag between different neurons talking to each other is very likely to be significant.
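The arithmetic above can be checked directly; these are of course rough order-of-magnitude estimates, not measured values:

```python
neurons = 10**11       # ~100 billion neurons
connections = 10**4    # ~10,000 connections each
rate_hz = 60           # assumed ~60 Hz firing/update rate

signals_per_second = neurons * connections * rate_hz
print(signals_per_second)             # 60000000000000000, i.e. 6*10^16
print(signals_per_second // rate_hz)  # 10^15 potential signals per timestep
```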

This gets worse when you realize that this isn't all that is happening: the neurons which receive the signals then have to decide whether or not to fire in response, which is itself some sort of calculation - basically determining how much positive vs. negative signal a neuron has gotten (there are inhibitory signals which tell other neurons not to fire as well), and this means more numbers. While there are only 10^11 neurons or so, this adds a bunch more bits to the calculation, because you need to add the inputs up, subtract the inhibitory ones, and compare the totals against thresholds. Neurons also resist signalling more often than at a certain rate, which would also have to be included in your model.
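The "add up positive and negative signals against a threshold" idea can be sketched as a toy digital neuron. Real neurons are analog, integrate over time, and have refractory behavior, so this is only the cartoon version:

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Toy neuron: sum weighted inputs (negative weights model
    inhibitory connections) and fire iff the total crosses the
    threshold. Purely illustrative, not a biophysical model."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Two excitatory spikes: fires.
print(neuron_fires([1, 1, 0], [0.6, 0.6, -0.5]))  # True (0.6 + 0.6 = 1.2)
# Same spikes plus an inhibitory one: stays quiet.
print(neuron_fires([1, 1, 1], [0.6, 0.6, -0.5]))  # False (1.2 - 0.5 = 0.7)
```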

Thus while there's nothing in there which is impossible for a computer to replicate, we're talking about an extraordinarily large amount of information. 6*10^16 signals per second is only 10^15 bits per 60 Hz timestep, roughly a petabit of state, but in reality you have to model the times at which the action potentials reach the neurons, and the various states of all the neurons, and very possibly the interactions of non-neuronal cells with the neurons as well, and possibly the status of the intercellular medium (because of the rate of uptake of the various chemicals used to signal between cells), and the impact of hormones on the cells' function... well, there's a lot of stuff up there, and we aren't sure exactly at what level we have to model everything to get a functional brain model. According to the most common supposition about the level of fidelity necessary, a supercomputer comparable to those of today, running on future hardware at the peak of what is possible with transistors, is within an order of magnitude of doing said simulation in real time.

However, there are other models which suggest modelling the human brain is several orders of magnitude harder than that, and if those models are correct, then said hypothetical supercomputer may still be anywhere from a thousand to a billion times too slow. This is not to say that the human brain is, in fact, that much more intelligent than said computer, but rather that because of the differences between how the analog, ridiculously parallel human brain and the digital, far more linear computer work, the human brain is difficult to simulate exactly. It may be possible to create something with human-like intelligence which behaves in real time well before that point, but without any real understanding of how humans work, designing a truly intelligent AI would be... incredibly difficult.

This is indeed part of why understanding how human brains function is helpful, as it allows us to create intelligences via a method other than the most popular one used today, though given the general inefficiency of silicon computers versus meat-based ones, fears of mechanical overlords are likely unfounded. The joke that the best computers in existence are produced via unskilled labor which is generally found highly pleasurable by those involved, and run off of Doritos and Mountain Dew, has more than a grain of truth to it.

So, it is very possible in principle, because any given neuron is basically a very simple computer connected to a bunch of other computers; but in practice it is extremely difficult, because there are 10^11 of said very simple computers hooked together, and a bunch of other considerations (signal speed, chemistry of the intercellular medium, absorption rate of chemicals, etc.) apply.

1830708
The math underlying statistics is as certain as any other math. The complexity lies in that you end up with a distribution of answers, rather than a single answer.
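For example, the distribution of heads in a handful of fair coin flips is pinned down exactly; the answer just happens to be a whole distribution rather than one number:

```python
from math import comb

def binomial_pmf(k, n, p):
    """Exact probability of k successes in n independent trials
    with success probability p: ordinary, fully certain math."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability of 0..4 heads in 4 fair coin flips.
dist = [binomial_pmf(k, 4, 0.5) for k in range(5)]
print(dist)       # [0.0625, 0.25, 0.375, 0.25, 0.0625]
print(sum(dist))  # 1.0 -- the probabilities sum exactly to one
```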

The real tragedy of stats is that most people never really get much education in them, but they're by far the most useful form of math in real life.

1830628
Creating computation devices out of meat is slow, inefficient, expensive, and generally considered immoral. It took off because it was shown that, as far as employers and consumers are concerned, some meat computers (formerly known as Computers, people who were trained to do computations) were entirely replaceable by metal computers. We continued doing more of it because it was shown that many kinds of meat computers were replaceable by metal computers for specific tasks.

It is suspected that arbitrary physical systems can be simulated by metal computers. If this is true, and if meat computers are purely physical systems, then it follows that metal computers can simulate meat computers.

So it's suspected to be possible, fast, efficient, cheap, and moral. That's why we try to do it.
