
Bad Horse


Beneath the microscope, you contain galaxies.


How and Why Humans Dichotomize · 2:23am Sep 11th, 2017

Language Decay

Someone--I can't remember who--observed that adjectives and adverbs in all languages gradually decay and must be replaced with new ones, because they are gravitationally drawn over the centuries towards meaning simply "good" or "bad".  Consider the original meanings of these words:

Good: From Proto-Indo-European *gʰedʰ- (“to unite, be associated, suit”)
Wonderful: Full of wonder
Awful: Full of awe, so holy or transcendent as to inspire fear
Awesome: Full of awe, so holy or transcendent as to inspire fear
Terrible: Terrifying
Terrific: Terrifying
Horrible: Barbarically hairy
Rotten: Decayed, putrefied
Lousy: 14th century, infested with lice
Great: Of extreme importance or status; from Old English grēat, "big".  Its German cognate "groß" still simply means "large".
Stupid: From Latin stupidus, struck senseless (by a sudden shock)
Idiot: From Greek idiōtēs, a person who did not participate in politics
Amazing: From a Proto-Indo-European root meaning "confounding".  "Maze" is probably derived from "amaze".
Bad: Possibly from Old English bǣddel, a womanish man or hermaphrodite

(Yeah, yeah.  Laugh it up in the comments.)

Adjectives & adverbs in general become simpler over time.  They often arise from some common understanding of the world which is then cast off, and the word remains, its meaning now only inferred from context.  So "lunatic" once meant "one under the influence of the moon", which was associated with insanity.  That belief was the remains of an earlier enumeration of the qualities of the moon, based on viewing the sun as masculine (orderly, powerful, harsh) and the moon as feminine (hard to predict, gentle, subordinate, a little crazy).

Note in some cases a word split into two words with the same meaning which drifted in opposite directions, until one became "good" and another became "bad" (awful / awesome, terrible / terrific).  We have other word pairs which mean the same thing, except that one also means "good" while the other also means "bad": thrifty / miserly, prudent / cowardly, inquisitive / nosey.


Principal Components of English

In "Information theory and writing" I wrote,

(Samsonovic & Ascoli 2010) used energy-minimization (one use of thermodynamics and information theory) to find the first three principal dimensions of human language. They threw words into a ten-dimensional space, then pushed them around in a way that put similar words close together. Then they contrasted the words at the different ends of each dimension, to figure out what each dimension meant.

They found, in English, French, German, and Spanish, that the first three dimensions are valence (good/bad), arousal (calm/excited), and freedom (open/closed). That means there are a whole lot of words with connotations along those dimensions, and owing to their commonality, they seldom surprise us. Read an emotional, badly-written text—a bad romance novel or a political tract will do—and you'll find a lot of words that mostly tell you that something is good or bad, exciting or boring, and freeing or constrictive.
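The approach can be sketched in a few lines of Python.  This is only a toy: the word list and ratings matrix are invented, and a plain SVD stands in for the paper's energy-minimization procedure, but it shows how "dimensions of meaning" fall out of word data:

```python
import numpy as np

# Hypothetical per-word ratings (rows) on three affective scales (columns).
# Real studies use hundreds of words; these values are invented for illustration.
words = ["calm", "terrible", "free", "boring", "wonderful", "trapped"]
ratings = np.array([
    [ 1.0, -2.5,  0.3],   # calm: good, very calm
    [-1.2,  0.8, -0.4],   # terrible: bad, exciting
    [ 0.9,  0.5,  1.8],   # free: good, open
    [-0.5, -1.5, -0.2],   # boring: mildly bad, calm
    [ 1.8,  0.9,  0.4],   # wonderful: very good, exciting
    [-1.0,  0.2, -1.9],   # trapped: bad, closed
])

# Center the data and take the SVD; the rows of Vt are the principal axes,
# ordered by how much variance each explains.
X = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Project each word onto the first principal component.  With real data,
# this axis comes out as valence: bad words at one end, good at the other.
pc1 = X @ Vt[0]
for w, v in sorted(zip(words, pc1), key=lambda p: p[1]):
    print(f"{w:10s} {v:+.2f}")
```

The interesting empirical fact is not that such axes exist (PCA always produces them), but which axes come first, and in how many unrelated languages.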

Look at the first two figures from that paper:

How to interpret these graphs:  In 1A, in the upper-left corner, the X-axis represents PC1 ("valence"; - = bad, + = good), and the Y-axis represents PC2 ("arousal"; - = calm, + = exciting).  So "calm" is at <1, -2.5>, indicating it's pretty good, and very, very calm.  "Terrible" (valence = -1.2, arousal = .8) is bad but exciting.  The fuzzy blue dots each represent one word, so there are a lot of words wherever the dot cloud is dense.


Here you want to pay attention to the inset graph, which shows a separate curve for each of the first three principal components (valence, arousal, freedom).  The height of the curve at a point along the X-axis, from -2 to 2, shows what proportion of English words have that value for that principal component.

Both figures show, in different ways, that the first principal component (or dimension) of human language is valence, the goodness or badness of a thing; and that, unlike the other principal dimensions of language, the valence dimension has a hole in the middle of it.  You can see the hole clearly in figure 1A; the cloud of words has a hole in the middle.  You can also see it in the curve labeled "PC1" (Principal Component 1 = valence) in figure 2, which has 2 humps, instead of having just 1 hump in the middle like PC2 (arousal) and PC3 (freedom) do.  That shows that most words have a "freedom" (PC3) value close to 0 (neutral), and an "arousal" (PC2) value between -.5 and .5, but a "valence" (PC1) value that's close to either -0.8 or +0.8, with relatively few having a neutral value.
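The two-hump shape is easy to reproduce with a toy model.  The distributions below are invented (Gaussians with made-up parameters) purely to mimic the curves described in figure 2: a bimodal "valence" with humps near ±0.8, and a unimodal "arousal" centered at 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: valence (PC1) as a mixture of two humps at +/-0.8,
# vs. arousal (PC2) as a single hump at 0.
valence = np.concatenate([rng.normal(-0.8, 0.25, 5000),
                          rng.normal(+0.8, 0.25, 5000)])
arousal = rng.normal(0.0, 0.4, 10000)

def density(samples, x, width=0.1):
    """Fraction of samples within +/-width of x (a crude histogram bin)."""
    return np.mean(np.abs(samples - x) < width)

# The valence curve dips in the middle; the arousal curve peaks there.
print("valence at -0.8 / 0 / +0.8:",
      density(valence, -0.8), density(valence, 0.0), density(valence, 0.8))
print("arousal at -0.8 / 0 / +0.8:",
      density(arousal, -0.8), density(arousal, 0.0), density(arousal, 0.8))
```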

There are many words that imply a thing is good or bad; there are surprisingly few words that are neutral--as nearly all words should be, if the purpose of language were to be precise or efficient.  Humans really do over-dichotomize, but only along the good-bad dimension.

Why might they do this?


Why Insects (Probably) Dichotomize

Drosophila melanogaster, the fruit fly, is a relatively simple animal, and we know a great deal about its neurophysiology, yet it is still beyond the current state-of-the-art in artificial intelligence.  (Arena & Patanè 2014) summarizes the experimental data so far on the behavior and neurophysiology of the fruit fly, and describes a computational model for its behavior and some results using that model with simulated and robotic fruit flies. [4]

One inelegant aspect of their model is the frequent use of binary rather than continuous classification. Rather than recording how good or bad an encounter with an object was, their model merely records in memory that the encounter was good, or that it was bad (Arena & Patanè 2014, chapter 1). If it was indifferent, it does not remember it at all.


The top row of boxes in figure 1.8 represents the protocerebral bridge (pb), a series of 16 nerve clusters which hold the left/right position of the object (represented by a black circle) ahead of and to the left of the fly.  The two activated clusters in the pb (indicated here by 2 large arrows) transmit their data on the object to the fan-shaped body (fb), which Arena & Patanè believe classifies the object by several scalar or cardinal properties, such as color and size, and by the binary property "good" or "bad".  If the object was good, neurons leading from the fb to the fly's legs on its right side are activated (dotted lines), causing it to take larger steps with its right legs than with its left legs and turn towards the object.  If it was bad, neurons leading to the legs on its left side are activated, causing it to turn away from the object.

There are 3 main advantages to dichotomizing (merely remembering whether an object was good or bad, rather than how good or bad it was):

  1. Dichotomizing saves energy.  If neurons stored an object's goodness as numbers on a scale, they would have to all be positive numbers, because you can't have a negative number of neural spikes.  Then neutral objects would trigger an intermediate level of spiking from the sensory neurons, but have no effect, and this would waste a great deal of energy.
  2. Dichotomizing saves time.  If neurons stored an object's goodness as numbers on a scale, either badness or goodness would have to be encoded by slow spiking, and so the fly would have to count spikes for a long time to discriminate between them.
  3. Neurons decide whether to fire based on how many incoming spikes they get per unit time.  If bad objects were represented by slow spiking, then several bad objects in the same place would look like one good object.  If good objects were represented by slow spiking, several good objects in the same place would look like one bad object.  That would be a problem.
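The energy argument in point 1 can be illustrated with a toy simulation.  The rates, threshold, and stimulus distribution below are invented, not fly physiology; the point is only that a scalar rate code pays spikes for every neutral encounter, while a thresholded good/bad code stays silent for them:

```python
import numpy as np

rng = np.random.default_rng(1)
WINDOW = 0.1  # seconds of spike counting per encounter

def scalar_code(goodness, max_rate=100.0):
    """One neuron encodes goodness on a rate scale: -1 -> 0 Hz, +1 -> max_rate.
    Neutral objects sit at half the maximum rate, so they still cost spikes."""
    rate = max_rate * (goodness + 1) / 2
    return rng.poisson(rate * WINDOW)

def binary_code(goodness, rate=100.0, threshold=0.3):
    """Two labeled lines: one fires for clearly good objects, one for clearly
    bad objects, and neutral objects cost nothing at all."""
    good = rng.poisson(rate * WINDOW) if goodness > threshold else 0
    bad = rng.poisson(rate * WINDOW) if goodness < -threshold else 0
    return good + bad

# Most encounters are with unremarkable, near-neutral objects.
stimuli = np.clip(rng.normal(0.0, 0.2, 10000), -1, 1)
scalar_cost = np.mean([scalar_code(g) for g in stimuli])
binary_cost = np.mean([binary_code(g) for g in stimuli])
print(f"mean spikes per encounter: scalar {scalar_cost:.2f}, binary {binary_cost:.2f}")
```

The more neutral encounters dominate an animal's day, the bigger the savings from refusing to represent "meh" at all.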

Do Humans Dichotomize?

The neurophysiological evidence on whether humans dichotomize good and bad the same way is... inconclusive, but consistent with "humans dichotomize the high-level concepts that are important to them into good and bad".  I had to write this entire section to figure that out, but you don't really have to read it--you can skip ahead to the TL;DR.

There's no doubt that humans have a special system devoted to categorizing things as good or bad.  The question here is whether they do it in a way that keeps track of shades of gray and the different plusses and minuses of an object (or person), or if they have two separate tracks, so concepts get slotted into the "good" track or the "bad" track.

Dichotomy and abstraction

Humans think at much higher levels of abstraction than flies do, so you might suppose that they don't need to categorize abstract concepts as "things to approach" and "things to avoid".  (Vigliocco et al. 2014) tested this by presenting people with concrete words, abstract words, and pseudo-words, measuring how long it took them to decide whether each was a word, and checking with fMRI which areas were activated by abstract but not concrete words.  They found that people took only 0.95 times as much time to recognize abstract words as concrete words, and that abstract words activated the rostral anterior cingulate cortex, an area associated with emotion processing, more than concrete words did.  They also checked some already-existing ratings of goodness / badness ("valence") and emotional intensity of abstract and concrete words, and found that abstract words were more emotional than concrete words.

From Vigliocco et al. 2014. Concrete words were more likely to have neutral valence and low emotional intensity.

That last conclusion was something of a given--there are words about emotions, like "love", "fear", and "hate", all of which were classified as "abstract". Part of speech also had an effect; "hatred" (a noun) was measured as being considerably more concrete than "hate" (a verb). But I think we can say that Vigliocco et al. at least showed that abstract words probably aren't especially unemotional. That's what we were worried about.

[Bloviation here saying emotions are concrete excised thanks to mishun's comment below].

In fact, if you look at the left graph in figure 1, you see that the valence (goodness / badness) of concrete words has the opposite distribution from that of English words as a whole, shown in figures 1 & 2 from (Samsonovic & Ascoli 2010) above: most English words are strongly positive or negative, but most concrete English words are neutral.  This is the opposite of the (hypothetical) situation in the fly brain, in which objects (which are concrete) are categorized as good or bad.

(Also, it seems impossible. Aren't there more nouns than verbs?)

This suggests that in all organisms, the concepts we decide to pursue or avoid are tagged as "good" or "bad", while we may feel neutral about the less-abstract concepts we use to identify the presence of those good or bad concepts.  Insects might make decisions along the lines of "fly towards things that might be bananas", so they might have strong "emotional" feelings about bananas and other concrete objects.

But even the perception of a banana is, computationally, very abstract; it is made up of millions of perceptions of lines, corners, colors, gradients, and movements--and insects, I'm sure, don't consider individual line segments good or bad.

For most organisms, the most-abstract concepts their mind can hold are probably the things they pursue or run away from, which their brains label "good" or "bad".  For humans, that can be abstract concepts like "safety", "social stability", or even "justice".  Your brain isn't "designed" to enable you to reason about how valuable social flexibility is, or how to make optimal trade-offs between that and social stability; it's "designed" to identify concepts as "good" or "bad" and then determine whether they are present or absent, Yes / No (check only one). [1]

Good / bad versus approach / avoid

Various work has shown that humans have more activity in left lateral prefrontal cortex (PFC) when they perceive things they like, and more in right lateral PFC when they perceive things that they don't like. This suggests separate systems for processing the good and the bad, which would suggest that we dichotomize perceptions into good and bad.

However, (Berkman & Lieberman 2010) (who reviewed the previous work) argued that humans can choose to approach things they don't like, or not to approach things they do like, and that since the prefrontal cortex is involved in task planning, these two systems could be for approach and avoidance goals rather than having anything to do with good versus bad.  I must object in passing to Berkman & Lieberman's de-anthropomorphizing [2] assumption that only humans can act that way.  But their simple experiment, which gave people a task in which they had to imagine approaching things they found repulsive, or avoiding things that they liked, provided some evidence for their very reasonable hypothesis--that the PFC asymmetry has to do with planning to approach or avoid, rather than with categorizing things as good or bad.

One linear system, two linear systems, or nonlinearity?

(Lindquist et al. 2015) did a big meta-analysis to answer whether humans dichotomize or not.  (A meta-analysis analyzes the data from lots of other analyses.) They reviewed the literature on whether humans judge "good" and "bad" along a single dimension (non-dichotomizing, called the "bipolarity hypothesis"), using two separate systems as we think the fruit fly does (dichotomizing, called the "bivalent hypothesis"), or using "a flexible set of valence-general regions" (the "affective workspace hypothesis").  This included reviewing 397 functional magnetic resonance imaging studies.  They wrote:

There has been a long and tortured debate over the structure of affect, largely because behavioral studies to date have been unable to show clear evidence for one model or the other (Barrett and Bliss-Moreau 2009). Bipolar and bivalence hypotheses are relatively untested in the domain of neuroscience, but each model makes unique predictions for how valence might be represented in neuronal activity. Support for the bipolarity hypothesis would be found if a given network of regions responds monotonically as affect changes from negative, to neutral, to positive or vice versa. In this view, neurons associated with increased positive affect would also be associated with reduced negative affect, and vice versa. Support for the bivalence hypothesis would be found in separate and independent networks for positivity and negativity, such that across studies, the same regions show consistent increases in activity for positive but not negative affect, and other regions show consistent increases in activity for negative but not positive affect.
--p. 2

Note that they unfortunately did not consider the possibility of using two dimensions within a single system, as the fruit fly does.  Using the above approach, the fruit fly model shown in figure 1.8 could not be identified as bivalent, because the neurons which categorize things as good or bad are in the same place.  What you'd have to look for to identify systems that work like that would be brain regions that were more active in both the "good vs. neutral" (being shown a good stimulus vs. being shown a neutral stimulus) and "bad vs. neutral" comparisons.
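In contrast-map terms, that test is just a conjunction of two boolean maps.  Here is a minimal sketch with made-up voxel data (the maps and base rates are invented; real meta-analyses work with thresholded statistical images):

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 1000

# Invented stand-ins for across-study fMRI statistics: True where a voxel
# was reliably more active in that contrast.
good_vs_neutral = rng.random(n_voxels) < 0.2
bad_vs_neutral = rng.random(n_voxels) < 0.2

# The bivalence test looks for disjoint "good-only" and "bad-only" sets.
only_good = good_vs_neutral & ~bad_vs_neutral
only_bad = bad_vs_neutral & ~good_vs_neutral

# A single shared good/bad system (like the fly's fan-shaped body) would
# instead show up in the conjunction: voxels active in BOTH contrasts.
valence_general = good_vs_neutral & bad_vs_neutral

print("good-only:", only_good.sum(), " bad-only:", only_bad.sum(),
      " both (shared-system candidates):", valence_general.sum())
```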

What they really wanted to do was test their own theory, the "affective workspace hypothesis", which says… um...

Taken together, the findings from non-human animals imply that a third hypothesis on the structure of valence is possible: A representation of positivity or negativity emerges at the population level, as a “brain state” (Salzman and Fusi 2010) but is not necessarily consistently associated with a specific brain region or set of regions….

A given neuron might participate in both instances of negativity and positivity across contexts, with its receptive field being determined by the neural context. Because neuronal assemblies are flexible, a given neuron need not participate in every brain state within a class (e.g., positivity), or even in the exact same mental state at 2 different points in time (e.g., positivity at seeing a friend at work vs. at a pub).

The discussion of the "affective workspace hypothesis" never makes it clear whether the key point of their hypothesis is that there are multiple systems, or that they use distributed representations.  Worse yet, the more-detailed explanation of it talks about individual neurons that respond differently to the same stimuli in different contexts, and implies that those are the sort of neurons they're looking for.

Unfortunately, this is not a testable theory, because it gives no way to distinguish representations that are spread out across different and varying brain regions, processing valence at the population level in such a way that no neuron's output is predictable from the valence being computed, from, well, anything else the brain does.  Surely at least one of these representations represents the object, and will respond to its presence consistently.  Won't that get interpreted as affirming the affective workspace hypothesis?

I need to raise another problem with this study.  Remember how the entire point of (Berkman & Lieberman 2010) above was to carefully separate out the measurement of valence from the measurement of action motivation?  There are other papers in this area on isolating the measurement of valence from the measurement of arousal or intensity.  This study?  Doesn't do any kind of careful separation like that, because it is a study of other studies, and each study gathered different data.  That's the problem with big meta-studies:  You get a lot more data than in a single study, but you can never control it as well.  fMRI meta-analyses in particular are inherently sketchy--the correlations in the data depend critically on exactly what the experiment was, but each study used a different experiment.

But let's keep reading. I mean, 397 studies. That's a lot of data. Maybe we'll find something interesting. (Hint: We will.)

Results

The Bipolar Hypothesis: Regions that Respond Monotonically along a Single Valence Dimension?

First, we assessed whether any clusters of voxels were more frequently engaged during “positive” versus “negative” study contrasts than during “positive” versus “neutral” study contrasts across studies….  This analysis revealed a cluster in a ventral portion of the rostral anterior cingulate cortex (ACC) and medial prefrontal cortex (MPFC) ….  We next tested for clusters of voxels that were more often engaged during “negative” versus “positive” than by “negative” versus “neutral” study contrasts... but were unsuccessful in identifying any. These findings suggest that the ventral MPFC and ACC areas may be candidate regions of interest coding for valence along the lines specified by the bipolarity hypothesis.

So, they found two areas that might be doing non-dichotomous good / bad judgements.  ACC is involved in noticing when you make mistakes.  Could some of these fMRI studies have confounded "good things" with "right judgements" and "bad things" with "mistakes"?  I don't know, but the ACC seems an unlikely candidate for computing valence.

The precise function of medial prefrontal cortex is still unknown.  One highly cited paper suggests it is "to recall the best action or emotional response to specific events in a particular place and time."  That would include remembering whether events were good or bad.  It sounds a lot like what our flies were doing. So, maybe.  Non-dichotomous processing is not ruled out.

One intriguing thing is that this area is very far forward in the brain, so if it's tagging things as good or bad, it's not tagging things perceived through one sensory modality--that is, it's not recognizing physical objects, or sounds, or smells, as good or bad.  It would be tagging abstract concepts.

How about the dichotomous hypothesis?

The Bivalence Hypothesis: Two Unipolar Dimensions?

First, we tested for voxels that responded exclusively to positive affect (i.e., a unipolar dimension ranging from positive to neutral), by assessing whether any voxels were more frequently engaged during “positive” versus “negative” study contrasts than during “negative” versus “neutral” study contrasts. Contrary to the bivalence hypothesis, no voxels displayed a significant profile of increased activation exclusively for positivity across studies. Next, we performed a complimentary analysis to test for voxels that responded selectively to negative affect (i.e., a second unipolar dimension); we were again unsuccessful in identifying any. These findings suggest that the bivalence view that positivity and negativity correspond to spatially separable and distinct brain systems is not a viable framework for understanding the brain basis of valence. [5]

The idea that humans dichotomize into two spatially-separated good-and-bad systems is not a viable framework!  But let's keep reading.

The Affective Workspace Hypothesis

We found the conjunction of ["valence-general"] voxels that showed consistent increases in activation during study contrasts comparing “positive” versus “neutral” baselines and “negative” versus “neutral” baselines using the global null conjunction [3].... In essence, these are regions of the brain that respond more frequently to positive AND negative valence than to neutral valence....  Consistent with the hypothesis that valence-general voxels make up the brain’s affective workspace, our conjunction revealed valence general increases in activity in the bilateral anterior insula, bilateral lateral orbitofrontal cortex, bilateral amygdala, the ventral striatum, thalamus, dorsomedial prefrontal cortex (∼BA 9), dorsal ACC, supplementary motor area (∼BA 6), bilateral ventrolateral prefrontal cortex, and lateral portions of the right temporal/occipital cortex.

This means Lindquist et al. expected the entire brain area that does brain-state computations to contain neurons with higher activations when they produce a positive or negative valence result than when they produce a neutral one.  But that's not how distributed representations work.  If the outputs of all your neurons are correlated, there's no point in using a distributed representation; you only need one neuron.  Distributed representations decompose a classification problem into many dimensions, spreading the training examples out roughly equally in the space represented by the spiking rate at each neuron.  Activity levels should be uncorrelated with the computed valence result.  Whatever they found in these regions, it isn't what they were looking for.
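To see why uncorrelated single-neuron activity is the signature of a distributed representation, here is a minimal sketch (random binary codewords; the sizes are invented).  The population decodes valence exactly even though no single neuron tracks it:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_levels = 50, 9

# Hypothetical distributed code: each valence level gets a random binary
# population pattern ("codeword").
codewords = rng.integers(0, 2, size=(n_levels, n_neurons))
valences = np.linspace(-1, 1, n_levels)

def decode(pattern):
    """Recover valence by nearest-codeword (Hamming distance) matching."""
    dists = np.abs(codewords - pattern).sum(axis=1)
    return valences[np.argmin(dists)]

# Population decoding is exact for every stored pattern...
assert all(decode(codewords[i]) == valences[i] for i in range(n_levels))

# ...while any single neuron's activity correlates only weakly with valence.
# (Neurons that happen to be constant across all levels are skipped.)
per_neuron_r = [abs(np.corrcoef(codewords[:, j], valences)[0, 1])
                for j in range(n_neurons) if codewords[:, j].std() > 0]
print(f"mean |r| between one neuron's activity and valence: {np.mean(per_neuron_r):.2f}")
```

So averaging voxel activity against valence, as the meta-analysis does, is exactly the wrong instrument for detecting this kind of code.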

Wait a minute…  the set of all brain regions which had higher activation both in good versus neutral, and bad vs. neutral comparisons...

Yes, that was what I said we should look for if we wanted to find bivalent representation of good and bad which occupied the same physical space (and / or neurons), just like in Drosophila.  The thing I said they should check for but weren't checking for.

So, though the study claims to have found a brain region which computes goodness and badness of stimuli in a complex, holistic, emergent, non-dichotomizing way… it didn't.  It found something that might be doing it in a non-dichotomous way (ventral MPFC and ACC), and it found areas that look like what a single brain region that categorizes stimuli into good and bad, just like in Drosophila, would look like.

The results are, however, impossible to evaluate as they didn't give any statistical measures such as p-values or Z-values to indicate how likely it would be for them to have arisen by chance. This is a shocking failure, and the paper shouldn't have passed peer review without them.

But the regions they found are mostly regions you'd expect to be involved in valence, being ones associated with valence labeling and go / no-go judgements (ventrolateral prefrontal cortex), emotion (insula, amygdala), attention (orbitofrontal cortex, thalamus), & motivation and reward (ventral striatum, dorsal anterior cingulate cortex).  Dorsomedial prefrontal cortex, supplementary motor area, and right temporal/occipital cortex are unexpected, but meh.

On the other other hand, it would be hard to imagine a brain architecture consistent with what we know so far in which those areas didn't have greater activation for positive- or negative-valence stimuli than for neutral ones.  You would have to get those results regardless of how valence was computed, because when you encounter an emotionally-charged stimulus and might have to do something about it, all those areas will become more active than when you encounter an emotionally-neutral stimulus that you can ignore.

TL;DR: The hypothesis that the human brain dichotomizes many concepts and stimuli into "good" and "bad" in a way beyond your conscious control is not confirmed.  Some predictions made by supposing that the human brain has a valence system analogous to Drosophila's have been confirmed, but they were predictions likely to be verified regardless of how the brain judges valence.


Conclusion

Whatever the neuroscience eventually concludes, the linguistic data indicates that humans dichotomize more than reality does. This is a good thing if you're running away from cave lions, for the same reasons that it's a good thing for Drosophila. But it's a bad thing if you're trying to manage a complex civilization. You need accurate information about the world, and reasoned responses that optimally distribute your resources among the multiple activities, but your brain keeps labeling things "good" and "bad" and telling you to run after the former and away from the latter, right now.

BTW, when we're talking about human behavior rather than neurophysiology, we usually call it "dualism" instead of "dichotomizing".  (Probably to help maintain the dichotomy between humans and non-humans.)

Next time I'll talk about one or more of: how dualism ruins philosophy and religion, how the ancient Greek skeptics were right about everything, why the most-reputable colleges often give the worst educations, and the 2000-year-old conspiracy to prevent you from noticing any of this.  Eventually I hope to explain how Plato's philosophy led to Trump's election.

The truth is out there, man.

That will take at least 20 long posts, though.  Shit.  That's like two books.  What am I doing with my life?


[1] If this rule is generally true, scientists and mathematicians are an exception to it.  They construct abstractions more abstract than their goals.  In basic research, they don't really have any goals; they're just trying to build further abstractions.  This causes a lot of misunderstandings with people in the humanities, who often begin trying to understand someone's words not by understanding their content, but by trying to figure out what goal they're directed toward.

[2] The term "anthropomorphism" is used to refer to people attributing human attributes to animals. It is supposedly a bad thing, and scientists are supposed to avoid doing it. However, it is mostly used to criticize people for supposing that humans are pretty much like other animals in situations where we have every reason for believing that they are, and no reasons for suspecting any difference.   For example, someone who claims that a dog feels "fear" at the approach of the human who has beaten it frequently, and that we may infer this from the way it whines, shivers, and tries to hide, is likely to be told he is anthropomorphizing the dog.

To instead describe the dog's reactions without calling them fear I call de-anthropomorphizing, and that is IMHO usually the worse error.  The real purpose of harping on anthropomorphizing is not to reduce errors, but to preserve the dichotomy between humans and other animals.

[3] A little Googling shows that a "global null conjunction" is, counter-intuitively, a disjunction of failed null hypotheses (all the elements in a data set which failed at least one out of a set of null hypotheses). From the context, it's clear that Lindquist et al. mean "conjunction null conjunction", which uses the word "conjunction" twice to say that, honest, guys, we're really using a conjunction this time. I just wish Schoolhouse Rock would do a song about it.

In this case, that means it's the set of all brain regions which had higher activation ( = failed the null hypothesis of having no significant difference) both in good ("positive") versus neutral, and bad ("negative") vs. neutral comparisons.  Which is what they said it was, so... good.  Just checking.

[4] If you're interested in philosophy, there are few things more helpful in resolving epistemological questions than studying the neurophysiology of insects.  They are simple enough to understand, but complex enough to address any question about the acquisition of knowledge that any ancient Greek ever thought up.

[5] (Berkman & Lieberman 2010) said there was a consistent association in the literature of left lateral prefrontal cortex (PFC) with good things, and of right lateral PFC with bad things, but that this could be explained as being caused by plans to approach or avoid the things, rather than as systems representing goodness and badness.  But Lindquist et al. didn't, so far as I noticed, eliminate studies that confounded good/bad with approach/avoid.  Why didn't they rediscover this "consistent association" here?  I don't know.


References

Paolo Arena & Luca Patanè, 2014. Spatial Temporal Patterns for Action-Oriented Perception in Roving Robots 2: An insect brain computational model. Heidelberg: Springer.

Elliot T. Berkman & Matthew D. Lieberman, 2010. Approaching the Bad and Avoiding the Good: Lateral Prefrontal Cortical Asymmetry Distinguishes between Action and Valence. J Cogn Neurosci. 2010 September ; 22(9): 1970–1979. doi:10.1162/jocn.2009.21317.

Kristen A. Lindquist, Ajay B. Satpute, Tor D. Wager, Jochen Weber, and Lisa Feldman Barrett, 2015. The Brain Basis of Positive and Negative Affect: Evidence from a Meta-Analysis of the Human Neuroimaging Literature. Cerebral Cortex 2015, 1–13 (Advanced Access pagination). doi: 10.1093/cercor/bhv001

Alexei V. Samsonovic & Giorgio A. Ascoli, 2010. Principal Semantic Components of Language and the Measurement of Meaning. PLoS One 5(6):e10921, June 2010.

Gabriella Vigliocco, Stavroula-Thaleia Kousta, Pasquale Anthony Della Rosa, David P. Vinson, Marco Tettamanti, Joseph T. Devlin, & Stefano F. Cappa, 2014. The Neural Representation of Abstract Words: The Role of Emotion. Cerebral Cortex July 2014, 24:1767–1777. doi:10.1093/cercor/bht025

Comments ( 35 )

As always, you amaze me with your work. Really wish you could get these blogs published.

Have I mentioned before that neurolinguistics is really interesting?

Neurolinguistics is really interesting.

EDIT: Just to make this comment more relevant: The part of your brain that stores swears isn't the same part as the language center. So someone with expressive aphasia can still swear fluently and intelligibly, even as the rest of their speech is basically a liquid mess.

"onfournurnr FUCK onfurnurru"

Whatever the neuroscience eventually concludes, the linguistic data indicates that humans dichotomize more than reality does. This is a good thing if you're running away from cave lions, for the same reasons that it's a good thing for Drosophila. But it's a bad thing if you're trying to manage a complex civilization. You need accurate information about the world, and reasoned responses that optimally distribute your resources among the multiple activities, but your brain keeps labeling things "good" and "bad" and telling you to run after the former and away from the latter, right now.

Not unrelated: A Thrive/Survive Theory of the Political Spectrum

Before I explain, a story. Last night at a dinner party we discussed Dungeons and Dragons orientations. One guest declared that he thought Lawful Good was a contradiction in terms, very nearly at the same moment as a second guest declared that he thought Chaotic Good was a contradiction in terms. What’s up?

I think the first guest was expressing a basically leftist world view. It is a fact of nature that society will always be orderly, the economy always expanding. Crime will be a vague rumor but generally under control. All that the marginal unit of extra law enforcement adds to this pleasant state is cops beating up random black people, or throwing a teenager in jail because she wanted to try marijuana.

The second guest was expressing a basically rightist world view. The prosperous, orderly society we know and love is hanging by a frickin’ thread. At any moment, terrorists or criminals or just poor management could destroy everything. It is really really good that we have police in order to be the “thin blue line” between civilization and chaos, and we might sleep easier in our beds at night if that blue line were a little thicker and we had a little more buffer room.

I propose that the best way for leftists to get themselves in a rightist frame of mind is to imagine there is a zombie apocalypse tomorrow. It is a very big zombie apocalypse and it doesn’t look like it’s going to be one of those ones where a plucky band just has to keep themselves alive until the cavalry ride in and restore order. This is going to be one of your long-term zombie apocalypses. What are you going to want?

[...]

In other words, “take actions that would be beneficial to survival in case of a zombie apocalypse” seems to get us rightist positions on a lot of issues. We can generalize from zombie apocalypses to any desperate conditions in which you’re not sure that you’re going to make it and need to succeed at any cost.

What about the opposite? Let’s imagine a future utopia of infinite technology. Robotic factories produce far more wealth than anyone could possibly need. The laws of Nature have been altered to make crime and violence physically impossible (although this technology occasionally suffers glitches). Infinitely loving nurture-bots take over any portions of child-rearing that the parents find boring. And all traumatic events can be wiped from people’s minds, restoring them to a state of bliss. Even death itself has disappeared. What policies are useful for this happy state?

[...]

I was going to go for ten here too, but you get the picture. This world of infinite abundance is a great match for leftist values. I imagine even a lot of rightists and Reactionaries would be happy enough with leftism in a situation like this.

I should also mention what would no doubt be the main pastime of the people of this latter world: signaling.

When people are no longer constrained by reality, they spend most of their energy in signaling games. This is why rich people build ever-bigger yachts and fret over the parties they throw and who got invited where. It’s why heirs and heiresses so often become patrons of the arts, or donors to major charities. Once you’ve got enough money, the next thing you need is status, and signaling is the way to get it.

[...]

Both rightists and leftists will find much to like in this idea. The rightists will ask: “So you mean that rightism is optimized for survival and effectiveness, and leftism is optimized for hedonism and signaling games?” And I will mostly endorse this conclusion.

On the other hand, the leftists will ask: “So you mean rightism is optimized for tiny unstable bands facing a hostile wilderness, and leftism is optimized for secure, technologically advanced societies like the ones we are actually in?” And this conclusion, too, I will mostly endorse.

How many truly neutral events do you really care to talk about? For something to be interesting, it tends to be something good or something bad. Only in pure science is a truly neutral thing really something to talk about, and science has its own language.

I will also say that this is why zero was discovered last. The early numbering systems didn't have a number for zero because zero is just nothing -- your neutral value.

Well there's my deep intellectual reading done for this month. Back to pony

So we are all idiots... I'm okay with that.

 Eventually I hope to explain how Plato's philosophy led to Trump's election.

Will this involve the Allegory of the Cave?

Fascinating. I'm in no way an expert, so I'll ask: ultimately this is distilled down to adaptive "survival strategy"? In that case it wouldn't surprise me that many words would fall into a "good/bad" valence. Language is a part of our survival adaptation, and in humans it's highly evolved compared to communication in other creatures, but like ourselves (and our brains), it has very primitive roots, and I don't think we've had time as a species to develop our methods of communication very much. I seriously doubt that will happen significantly until (if we survive long enough) we're 'jacked in' and able to comprehend... everything better.

Sorry for the abstraction. I'm definitely going to have to read this again - probably a couple of times - to fully wrap my head around it. In any case, thanks for sharing. You made my brain explode again.

Fascinating. I don't really have anything of substance to say here, other than the fact that I really appreciate that you went through the time and effort to write this.

What am I doing with my life?

Making me cleverer or at least better informed, apparently.

Um. Thank you.

:twilightsmile:

Quick question: would you count something like the henology of Plotinus as being monist or dualist?

Horse gotta horse. In this case I mean that in the best possible sense :raritywink:

What's PC4, the fourth component?

Wouldn't the fact that language drifts in this fashion indicate that we do, in fact, have a great deal of capacity for nuance, and thus constantly invent new words because we need to express more nuanced concepts?

That would really be what I'd expect, honestly - humans have extremely well-developed brains, but in the end, developing novel structures totally ex nihilo is very difficult. Thus, we end up with pre-existing structures getting hijacked and developed - the reason why we feel all warm and fuzzy inside in response to certain emotions is that it is easier to simply add new stimuli to pre-existing positive sensations than it is to just create some entirely new feeling out of nothing.

Thus, while "good" or "bad" may be good enough for lesser animals, it isn't good enough for humans. However, all of our structures for creating nuance are ultimately laid over that original "good" and "bad" structure, so there is a natural tendency which, if left unchecked, causes things to drift into "good" and "bad". But we are constantly coining new words to express nuance - indeed, we can feel ambivalent about things, or feel multiple simultaneous conflicting emotions about them. That suggests we do have the capacity for more-than-dichotomous thinking, and indeed a need for it, and that there is adaptive pressure in that direction, given that humans can feel that way at all and felt the need to create words like "ambiguous" and "ambivalent".

(Yeah, yeah.  Laugh it up in the comments.)

:rainbowlaugh: I didn't actually notice that until you pointed it out. :rainbowlaugh:

...adjectives and adverbs in all languages gradually decay and must be replaced with new ones, because they are gravitationally drawn over the centuries towards meaning simply "good" or "bad".

Okay, but last time you said they all turn into near and far. :trollestia:

Do Humans Dichotomize?

Definitely, and at many levels. It takes conscious effort to not dichotomize in analysis, and it's an essential part of decision-making strategies. This is one of the reasons brainstorming is a valuable technique: if you start evaluating options before they're all on the table, you'll make premature judgments.

It's also relevant in peer groups and prejudice. Back when we lived in tribal groups, this was a handy thing. Today, it's not as useful because it gets turned on members of your own group, but still reasonably useful in some regards—hence its strong persistence.

That will take at least 20 long posts, though.  Shit.  That's like two books.  What am I doing with my life?

Winning :rainbowdetermined2:

4664952 Oh, yes.


4665009

Quick question: would you count something like the henology of Plotinus as being monist or dualist?

And you say you're not an AI. :pinkiesmile:

I refer you to meaningness on how monism often is effectively dualism. Seeking The One led Plotinus to reject the material world as evil and inferior, making him dualist.

Trying to categorize it as monist or dualist is, of course, dualist. :moustache:

Anyway, I introduced the subject not in order to talk about whether the content of various people's beliefs was dualistic, but whether their thinking process was dualistic. Three key things are how dualistic they were about: the visible world versus the real world; the degree of certainty with which you can separate "true" from "false"; and morality. All Platonists score very high for dualism on all three.

4664980

ultimately this is distilled down to adaptive "survival strategy"?

Presumably.

Okay, but last time you said they all turn into near and far.

Oh, you. :trollestia: That's just within "construal", which is a particular mental operation.

What's PC4, the fourth component?

I don't remember--I think it was different for different data sets. But you could click on the link to the paper and find out.

...It is too early for me to understand any of this. I'll be back in a couple hours.

How would you say the creation of/repurposing of existing words as euphemisms relates to language decay? If nothing else, it requires the invention or presence of more-neutral (or opposite-valenced) words with the same concrete meaning, but need not imply anything about the relative ages of the replaced word and the euphemism.

I could see competitive interaction between euphemisms and the words they take the place of generating some of the noted effect: the development of less-charged alternatives drives the use of the more-charged to gradually greater extremes of valence such that the informational mix of valence and content of the replaced words becomes ever more weighted to valence, until only it remains.

4665145
That's why I asked: dualism has a very specific meaning in the history of philosophy, while dualism as used by you would totally apply to the system of Plotinus. I suspect that even a surface-complex system like the iterated tripartite division of Iamblichus and Proclus would fit because of the fundamental notion that the world isn't the world and that there's a single supreme quality or dimension along which one unfolds metaphysics (Form of the Good, oneness, &c).

(Also: Not AI. Real human. Well. Arguably real, probably human[1].)

[1] This needs to be my tagline. "Arguably real, probably human."

PresentPerfect
Author Interviewer

I have tried and failed twice to read this blog. ;_; I feel like a bad person. Can I get a tl;dr?

The term "anthropomorphism" is used to refer to people attributing human attributes to animals. It is supposedly a bad thing, and scientists are supposed to avoid doing it.

Furries ruin everything :trollestia:

4665420
There is a TL;DR in the post which covers about 2/3 of it. But the post isn't critical to what I'll post afterwards. Possibly it was a waste of time. I was hoping to find more dramatic results.

4665262

I suspect that even a surface-complex system like the iterated tripartite division of Iamblichus and Proclus would fit because of the fundamental notion that the world isn't the world and that there's a single supreme quality or dimension along which one unfolds metaphysics (Form of the Good, oneness, &c).

I'd have thought that Plotinus and Proclus would both be considered philosophical dualists, because they both think there are essences and manifestations? They have a complicated chain of being, but at every step along the chain, beings have a material form and a spiritual or Platonic form, don't they? I don't know much about them.

EDIT: Derp. I do remember one of them posited multiple levels of reality, as a kind of buffer between the nasty dirty material world and the sacred transcendent. Proclus, I think? You could call him pluralist, but I doubt I'd find the difference between his many worlds and 2 worlds interesting. I'm really more interested in people who dichotomize "good" and "evil" and "true" and "false".

4665259

the development of less-charged alternatives drives the use of the more-charged to gradually greater extremes of valence such that the informational mix of valence and content of the replaced words becomes ever more weighted to valence, until only it remains

Clever. Wouldn't the result be that the overall distribution would stay the same, but we would always get the impression it was drifting out towards 'good' and 'bad' if we traced the history of individual words?
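A toy simulation of that picture (every parameter below is invented for illustration): each generation, every word's valence drifts a little further toward "good" or "bad", and a word that saturates into a pure valence marker is replaced by a fresh, nearly-neutral coinage. Each individual word's history drifts outward, yet the population's distribution stays put:

```python
import random

random.seed(1)

def step(lexicon):
    """One generation: push each word's valence outward; replace saturated words."""
    out = []
    for v in lexicon:
        v += 0.1 if v >= 0 else -0.1      # drift toward "good" or "bad"
        v += random.uniform(-0.05, 0.05)  # usage noise
        if abs(v) >= 1.0:                 # word has become a pure valence marker...
            v = random.uniform(-0.1, 0.1) # ...and is replaced by a neutral coinage
        out.append(v)
    return out

# Start with 1000 words of uniformly distributed valence and run 200 generations.
lexicon = [random.uniform(-1, 1) for _ in range(1000)]
for _ in range(200):
    lexicon = step(lexicon)

# The population never collapses to pure "good"/"bad"; its mean |valence|
# settles at an intermediate steady state, even though every surviving word
# is partway through an outward drift.
mean_abs = sum(abs(v) for v in lexicon) / len(lexicon)
```

Under these made-up rules the lexicon reaches a steady state in which a roughly constant fraction of the vocabulary turns over each generation: trace any single word and it drifts toward an extreme, but snapshot the whole distribution at any time and it looks the same.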

Words get written
Words get twisted
Old meanings move in the drift of time.
Lift the flickering torches
See candles' shadows change
The features of the faces cut in the living stone.

Finally managed to get prepared enough to read this and understand and... I find no fault in the logic and a sudden need to research this myself.

...and also comb through scientific papers to see why this versus those remains unpublished.

GROUCHO MARX: Time flies like an arrow--fruit flies like bananas!

BAD HORSE: Yes, but why do they like bananas?

GROUCHO: And they call me an absurdist.

4680348 Well, for example, Beethoven's "Große Fuge" is in English called his "Great Fugue", not his "Large Fugue".

I added this to my pocket account so I could read it on my e-reader rather than on my computer, then sorta forgot about it.

Now I've read it, and it was very interesting; you made some good points that seem reasonable, though I'm not well versed enough to have a proper opinion on this.

I have to say I never really expected things like this to pop up on this site.

The moment when you want to share an article but don't want to explain why it's on a pony fanfiction website.

That hole in the middle is unbelievable!

Emotions such as "fear" are not abstract at all; they are directly perceived.

I don't understand that --- fear seems to be precisely a complex abstraction over behavior patterns and is not directly perceivable (although it may kinda feel that way since there's selection pressure on brains towards recognizing fear)

5532720
I see what you mean.
I said fear is directly perceivable because we have a qualia for it. We "feel" it, whatever that means.

Color sense is also called a qualia--the "feeling" of green. So is feeling hunger or pain. Philosophers usually, I think, assume that all qualia are fundamental percepts. But the qualia of green is computed almost directly from specific photoreceptors firing, and isn't at all abstract--if these particular photoreceptors fire, it means there are green light wavelengths coming from that particular direction--while the qualia of fear feels the same for many different causes of fear, which means it is in some sense abstract.

But when you feel fear, is that feeling the activation of a distributed representation of an abstract concept, or is it direct sensory perception of the cortisol and adrenaline that the adrenal gland releases, and/or of your rising blood pressure, pulse, breathing, and so on? That is, does "fear" mean the abstract concept which, when recognized, triggers the fear response; or should it refer to our perception of the fear response?

I don't know. I think I may be stupider now than when I wrote this post. I think you're right that calling fear and love "abstract" is more helpful, but then shouldn't "lust" also be abstract when it arises from visual perception? Yet we also have sense receptors in our genitals which directly activate lust, making it as concrete as "pain", "hot", or "green". And there are pheromone receptors.
There seem to be several entirely separate pathways which all activate the same "lust" response, some long and some short.

So... I'll just say that some qualia are concrete and some are abstract, and some are both. Weird that no philosopher has ever noticed this AFAIK.

5535370

I said fear is directly perceivable because we have a qualia for it.

In my previous comment the assumed approach was something like "observe the behaviour of animals (or humans) from the third person, run some unsupervised learning algorithm, and notice that there are clusters" --- that is as objective as I could come up with. First-person experience is much weirder: it adds speech into the picture (a qualia that is not talked about may as well not exist). People learning to speak presumably must first cluster other people's behavior, correlate it with what is said about that behavior, somehow transfer that onto their own behavior (which is observed from the first person and looks different), and finally correlate it with some internal feeling. That seems like a complicated and faulty process (otherwise psychoanalysis would have worked much, much better).

... if these particular photoreceptors fire, it means there are green light wavelengths coming from that particular direction ...

Well, there's also that:
media.wired.com/photos/59327b99a3126458449954cf/master/w_2560%2Cc_limit/Untitled-12.jpg
(and I remember reading, a long time ago, some paper showing that color perception depends on which color words are present in an individual's native language), but it's probably less abstract than fear in some numerical way that's possible to strictly define (how much does it compress empirical data? how difficult is it to build a robot to do that? Kolmogorov complexity? I now realize that I don't understand the difference between abstract and concrete all that well)

But when you feel fear, is that feeling the activation of a distributed representation of an abstract concept, or is it direct sensory perception of the cortisol and adrenaline that the adrenal gland releases, and/or of your rising blood pressure, pulse, breathing, and so on?

Personally, for me the latter feels closer (I kinda don't have an obvious internal feeling of that stuff, and learned to figure emotions out somewhat by paying attention to physiological changes and behavior). I like how Scott Alexander put it:

So if we want to know why Wanda runs away from a wasp, saying "because her previous encounters with wasps have been negatively reinforced" is more useful than "because she felt scared".

And if Wanda herself says "No, I ran away because I felt scared," we shouldn't be especially interested in her opinion: she has privileged access to a certain type of output of the process generating her behavior, but not to the process itself.

... but then shouldn't "lust" also be abstract when it arises from visual perception?

It seems not that far away from "fear" (pain can cause fear too)

I also noticed that the PCA paper used human-made dictionaries for analysis. I wonder if it's possible to get a similar picture from pure machine learning, like running PCA on word2vec embeddings of adjectives.
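For what it's worth, the shape of that experiment is easy to sketch in pure Python. Everything below is invented for illustration: the six 3-d "embeddings" stand in for real word2vec vectors (which are learned from corpora and have hundreds of dimensions), and they are deliberately built so that one axis carries most of the variance. Power iteration on the covariance matrix recovers the first principal component, and projecting the words onto it separates "good" from "bad":

```python
# Hypothetical toy embeddings; dimension 0 is constructed to track valence.
EMBEDDINGS = {
    "wonderful": [ 1.0,  0.2,  0.1],
    "awesome":   [ 0.9, -0.1,  0.2],
    "terrific":  [ 0.8,  0.1, -0.2],
    "awful":     [-1.0,  0.2,  0.1],
    "terrible":  [-0.9, -0.1,  0.2],
    "lousy":     [-0.8,  0.1, -0.2],
}

def first_pc(rows, iters=200):
    """First principal component via power iteration on the covariance matrix."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(x[i] * x[j] for x in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, centered

def pc1_scores(embeddings):
    """Project each word onto the first principal component."""
    words = list(embeddings)
    v, centered = first_pc([embeddings[w] for w in words])
    return {w: sum(c[j] * v[j] for j in range(len(v)))
            for w, c in zip(words, centered)}

scores = pc1_scores(EMBEDDINGS)
# "Good" words land on one side of PC1 and "bad" words on the other
# (the overall sign of a principal component is arbitrary).
```

With real embeddings you'd load pretrained vectors and use a proper PCA routine; the open question is whether a valence-like axis still dominates when the geometry isn't rigged in its favor, as it is here.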
