
Bad Horse

Beneath the microscope, you contain galaxies.



Kidnapping, Human Trafficking, and Halloween: Where those big numbers come from · 6:30pm Nov 4th, 2021

Trigger warning: Counting of horrible things.

(I don't quite understand how to do a trigger warning. If a subject triggers someone, then just naming that subject will trigger them just as badly. So how are you supposed to write a trigger warning?)

I found a page on missingkids.org that answers the question: How often ARE children abducted in the US?

Six-year-old Louisville Girl Kidnapped – How often DOES it happen?

A “stranger abduction case” is very rare. At the National Center for Missing & Exploited Children (NCMEC), we receive reports of missing children that fall into one of five case types, including nonfamily abductions. A nonfamily abduction occurs when a child is taken by someone known, but not related, to the child, such as a neighbor or an online acquaintance, or by someone unknown to the child. Nonfamily abductions are the rarest type of case and make up only 1% of the missing children cases reported to NCMEC.

Based on over ten years of data, NCMEC identified that:

  • Attempted abductions occur more often when a child is going to or from school or school-related activities
  • School-age children are at greatest risk on school days before and after school (7-9 a.m. and 3-4 p.m.) and after dinner time (6-7 p.m.)
  • Attempted abductions most often occur on the street while children are playing, walking, or riding bikes

In 2020, more than 600 cases of attempted abductions were documented by NCMEC. We found that children evaded abduction in a variety of ways, including:

  • Ignoring or refusing the abductor
  • Using a cellphone to threaten or intervene
  • Fighting back
  • Screaming and/or making noise
  • Another adult or child intervened
  • Abductor left the area or voluntarily released the child

So, applying the 1% figure from the first paragraph to those 600 cases, the NCMEC documented about 6 attempted abductions of children by strangers in 2020. In some cases the child escaped. In some cases the child was recovered quickly. The first half of that webpage is devoted to an abduction on July 2, 2021, in which the police recovered the child in less than half an hour.

Getting children back quickly is the purpose of the AMBER Alert system, so the fact that at present there are very few open cases of child abduction from 2021 doesn't mean that it isn't working. If I were to take a wild guess, just using "50%" for all possibilities--this is dumb, but I have no priors--I'd get

6 attempted abductions
3 abductions
1.5 recovered by police
0.75 recovered by police due to aid from AMBER alerts
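That cascade is nothing deeper than repeated multiplication by 0.5; a throwaway sketch using the same made-up 50% assumption:

```python
# Back-of-envelope cascade from the post: start with the ~6 stranger-abduction
# attempts implied by 1% of the ~600 attempted abductions NCMEC documented,
# then halve at each stage ("50% for all possibilities" -- a dumb prior,
# as the post says, not real data).
attempts = 0.01 * 600

stages = ["attempted abductions",
          "abductions",
          "recovered by police",
          "recovered by police due to aid from AMBER alerts"]

n = attempts
for stage in stages:
    print(f"{n:g} {stage}")
    n *= 0.5
```

Each halving is arbitrary, which is the point: the final 0.75 is a guess stacked on three other guesses.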

The photos in advertisement sections and on milk cartons of teenagers who went missing 10 years ago are, I expect, completely useless. If they're in their 20s and haven't contacted their family, either they're dead, or they don't want to contact their family.

But on the other hand, this 2002 publication from the US Dept. of Justice, "National Estimates of Missing Children: An Overview", says:


Caretaker Missing Children (n = 1,315,600)

Nonfamily abduction: 33,000 (2,000–64,000)
Family abduction: 117,200 (79,000–155,400)
Runaway/thrownaway: 628,900 (481,000–776,900)
Missing involuntary, lost, or injured: 198,300 (124,800–271,800)
Missing benign explanation: 374,700 (289,900–459,500)

Reported Missing Children (n = 797,500)

Nonfamily abduction: 12,100 (<100–31,000)
Family abduction: 56,500 (22,600–90,400)
Runaway/thrownaway: 357,600 (238,000–477,200)
Missing involuntary, lost, or injured: 61,900 (19,700–104,100)
Missing benign explanation: 340,500 (256,000–425,000)

They use the term "caretaker missing", they explain, because "missing" means "missing, from the caretaker's point of view". So if little Billy is hiding in the attic and mommy and daddy don't know it, Billy is "caretaker missing".

The data is presented as "caretaker missing" versus "reported missing". This is misleading; it should be "caretaker missing" versus "caretaker missing reported missing", but I guess that was too long to fit in the column header. "Reported missing" is the number of reported cases of caretaker-missing children. "Caretaker missing" turns out to be an estimate of the real total of caretaker-missing children, estimating unreported cases by interviewing random households to find out if their kids went missing.

BUT, "Reported missing" is ALSO an estimate, as shown by their (even larger!) 95% confidence intervals. So apparently they didn't check police reports, but surveyed families and asked if they'd reported a child missing?

Not quite. "[C]hildren who were missing because of stereotypical kidnappings were added from the Law Enforcement Study data. The Law Enforcement Study was used as the data source for this rare subset of nonfamily abducted children because no reliable estimate could be developed from the Household Surveys." I don't know what the Law Enforcement Study did, but I'm guessing it looked at police reports.

Then, at the bottom of that section, it says

Table 7: Estimated Total Number of Children With Episodes and the Percent Who Were Counted as Caretaker Missing and Reported Missing

Nonfamily abduction: 58,200
Family abduction: 203,900
Runaway/thrownaway: 1,682,900
Missing involuntary, lost, or injured: 198,300
Missing benign explanation: 374,700

So the US DOJ estimates that 58,200 children were abducted by strangers in 1999! This figure seems to come from a different world than the NCMEC's figure of 6 children in 2020.

(And this is a bit of an underestimate: it isn't counting the number of times a child went missing; it's counting the number of children who went missing. "Children who were missing on different occasions, because of multiple episodes, were only counted once in the unified estimate.")

But what is Table 7? Why is the estimated total twice as high as the "Caretaker Missing" total? The report says,

Some children experienced nonfamily or family abduction episodes or runaway/thrownaway episodes but were neither missing from their caretakers nor reported missing to authorities. Examples include children who ran away to the homes of relatives or friends, causing their caretakers little or no concern; children who were held by family members in known locations (e.g., an ex-spouse’s home); and children who were abducted by nonfamily perpetrators but released before anyone noticed that they were missing.

Okay; seems legit. But... 58,200?


  • There were only 12,100 estimated cases of reported nonfamily-abducted children. That means that only 1 in 5 cases where a child was abducted by a stranger were reported to police. That strikes me as unbelievable.
  • The lower bound of the 95% confidence interval for nonfamily-abducted children is 2000. The lower bound for reported nonfamily-abducted children is "<100". That means only 1 in 20 cases where a child was abducted by a stranger were reported to police! That's even more unbelievable.
  • The estimates for "Missing, benign explanation" indicate that 9 out of 10 cases when a child goes missing for some innocent reason are reported to the police, versus only 4 in 11 cases when they were abducted by a stranger. That's just crazy.
  • The lower bound on reported nonfamily-abducted children is <100. That must be at least as high as the number of reported nonfamily-abducted children they were able to count. So their study identified fewer than 100 actual cases in 1999.
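Those ratios are easy to check against the figures quoted above (this is my arithmetic on the report's point estimates, not anything the DOJ computed):

```python
# Point estimates quoted above from the 2002 DOJ report.
total_nonfamily = 58_200      # Table 7 estimate of nonfamily-abducted children
reported_nonfamily = 12_100   # estimated reported nonfamily abductions
benign_total = 374_700        # missing, benign explanation
benign_reported = 340_500     # reported missing, benign explanation

# Stranger abductions reported: roughly 1 in 5.
print(f"nonfamily abductions reported: 1 in "
      f"{total_nonfamily / reported_nonfamily:.1f}")

# Benign disappearances reported: roughly 9 in 10.
print(f"benign disappearances reported: {benign_reported / benign_total:.0%}")

# Lower bounds: 2,000 caretaker-missing vs. <100 reported -> at most 1 in 20.
print(f"lower-bound reporting rate: 1 in {2000 / 100:.0f}")
```

Nothing subtle here; the absurdity is visible in simple division.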

Digging through the report, I find these other relevant statements:

(The Law Enforcement Study found that an estimated 115 of the nonfamily abducted children were victims of stereotypical kidnappings and that 90 of these qualified as reported missing.)

"Stereotypical kidnappings" means "A stereotypical kidnapping occurs when a stranger or slight acquaintance perpetrates a nonfamily abduction in which the child is detained overnight, transported at least 50 miles, held for ransom, abducted with intent to keep the child permanently, or killed."

So they estimate there were 115 cases of "stereotypical kidnappings" in 1999, meaning a stranger abducted a child with an apparent intent to either ransom it, or keep it forever. So the biggest part of the answer to "where do they get the figure of 58,200 abducted children?" is that only 1 out of every 500 cases of "nonfamily abductions" count as "stereotypical kidnappings".

I'm puzzled as to how this could be. Obviously, the definition of "stereotypical kidnapping" is too restrictive: it doesn't count kidnappings where the kidnapper wants to abuse the child and then release him/her. It counts only the potential "your child is gone forever" scenarios. But still: 1 in 500? How many people want to borrow a child for an hour? I can only think of one possible reason.

And... it's still a LOT higher than the NCMEC's report of 6 attempted abductions by strangers in 2020, which does (I assume) include cases of child molestation and cases where the abductor's intent was never clarified.

Anyway. Looking into the question "How many people want to borrow a child?", I find it's the wrong question. The right question is, "How many children do they want to borrow per year?" From a random web hit on "number of convicted pedophiles":

In his study of 561 sex offenders, Dr. Gene Abel found pedophiles who targeted young boys outside the home committed the greatest number of crimes with an average of 281.7 acts …molesters who targeted girls within the family committed an average of 81.3 acts.

Wow... but it turns out that "281.7" doesn't mean 281.7 victims; it means a few victims molested repeatedly.

I found NO googlable information on how many pedophiles there actually are in the US, not even the number convicted or in prison. I've just spent half an hour wading through pages on pedophile statistics, and they seem maliciously designed to give no useful information about this. (Please do not crosspost that to a QAnon website. I'm speaking hyperbolically and in anger.) Either they give numbers for "sex offenders", which includes anybody caught urinating outside; or they talk about pedophiles, but only give statistics like "40% of pedophiles do this; 30% of them do that", and nothing at all about how many of them there are. Possibly because they looked those numbers up and decided they weren't scary enough.

Presumably this information exists somewhere--on the FBI UCR website; in scientific journal articles. A phone call to the police might give me some leads and get me on a watchlist. And I'm out of time.

So one answer to "Why are people saying hundreds of children are abducted by strangers every day in the US?" is "Because the DOJ said so". The answer to "Why did the DOJ say 58,200 children were abducted by strangers in 1999, but only 115 were stereotypical kidnappings?" is still a mystery, as is "Why is the DOJ's estimate of stranger-abductions per year 10,000 times as high as the NCMEC's count?" Here are my remaining hypotheses:

  1. The DOJ study is a statistical estimate which is much, much higher than the NCMEC data found by actually counting reported cases. This could be because

    1. 99.99% of cases are never reported in any news media (not credible)
    2. the NCMEC doesn't know how to use Google (kinda credible)
    3. the statistical confidence estimate is correct, and reality just happened to fall into the 1/20th of cases outside of it (probability 1/20)
    4. Lizardman's constant is 4%: In a 2013 poll, 4% of American survey respondents said "yes" to the question "Q13. Do you believe that shape-shifting reptilian people control our world by taking on human form and gaining political power to manipulate our societies, or not?" The DOJ got their estimates of child abductions by polling people. The number of actual abductions by strangers is much smaller than the number of respondents who like to mess with pollsters, and probably less than the number of respondents who were (a) psychotic and believed that their children had been abducted, or (b) communicated very unclearly, checked the wrong box, or whatever. The problem with this answer is that too few people (at most 1%) said one of their children had been abducted by a stranger. Maybe because saying "yes" to this is (a) less funny, and (b) more likely to get them in trouble, than saying they believe lizardmen rule the Earth.
    5. the DOJ's study did something else so wrong that it should be ignored
  2. Either the definition of "Nonfamily abduction" is so broad that it takes in mostly benign cases, or 499 out of every 500 stranger abductions are by a pedophile, with malign but short-term intent. This is possible; but it would mean that the NCMEC was able to discover evidence of only 6 out of about 60,000 cases of stranger-abducted children in 2020.
  3. Most malicious child abductions are of children whose parents don't keep track of where they are and don't bother to call the police if they lose a child or two.
  4. Maybe the NCMEC article I quoted at the start of this post was using "attempted abductions" to mean "attempted nonfamily abduction". Then their figure would in fact be 600 attempted nonfamily abductions per year, and would be in line with the DOJ's figure of 115 accomplished non-temporary nonfamily abductions in 1999. But it would be strange to spend an entire paragraph defining the distinction between abduction and nonfamily abduction, and then immediately use the wrong term.

Re. (1d), my brother-in-law hires two different guys to mow his lawn who are both self-sufficient and work hard, but neither of whom could ever pass a Turing test. When they talk, what comes out is grammatical English, but most of it is incomprehensible. One speaks as little as possible, so that even trying to get a "yes" or "no" answer is difficult, and when you finally get it, you can't be sure which question it was a response to. The other often answers questions with a torrent of words, and so many incorrect word choices, strange metaphors, and private idioms that he needs an interpreter with him when he goes to the doctor. You'll never meet people like this if you go to college and get a nice job in the suburbs, but they are out there.
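Numerically, hypothesis (1d) is a base-rate problem: when the true rate of an event is far below the survey's false-positive rate, almost every "yes" answer is noise. A sketch with invented numbers (the ~70 million US children and the 0.5% error rate are my assumptions for illustration, not figures from either report):

```python
# Invented illustrative numbers: suppose the true rate of stereotypical
# stranger abduction is ~115 victims among ~70 million US children, but
# 0.5% of survey respondents answer "yes" in error (jokers, psychosis,
# misunderstood questions, wrong boxes checked).
true_rate = 115 / 70_000_000   # actual victims per child per year
false_positive_rate = 0.005    # erroneous "yes" answers per respondent

respondents = 100_000
true_yes = respondents * true_rate
false_yes = respondents * false_positive_rate

print(f"true 'yes' answers:  {true_yes:.2f}")
print(f"false 'yes' answers: {false_yes:.0f}")
print(f"fraction of 'yes' answers that are real: "
      f"{true_yes / (true_yes + false_yes):.4%}")
```

With these numbers the survey's "yes" pile is more than 99.9% noise, so any population estimate extrapolated from it measures the error rate, not the abduction rate.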

Re. (1) as a whole, the DOJ's enormous 95% confidence interval shows that their estimate is statistically a wild guess. You shouldn't take it as an actual 95% confidence interval, because having a CI that wide means the sample size is so small that the necessary randomness assumptions are almost certainly bogus.

In fact, the DOJ's report says that. All those figures about nonfamily abductions have a tiny little "§" after them, with a tiny note at the bottom of the page saying, "§ Estimate is based on an extremely small sample of cases; therefore, its precision and confidence interval are unreliable." Which is... acceptable, in a scientific journal article. But apparently reporters don't check the bottom of the page to see what "§" means.

Comments ( 44 )

(I don't quite understand how to do a trigger warning. If a subject triggers someone, then just naming that subject will trigger them just as badly. So how are you supposed to write a trigger warning?)

I'm going to assume here you're not openly mocking people with PTSD and that this is a legitimate question with a couple of bad assumptions in it.

First off, don't call it a "trigger warning", just call it a "content warning". Triggers are indeed a real thing for people with PTSD, but they've become a tool for the right to mock people with disabilities as being "overly sensitive".

No, naming a subject is nowhere nearly as bad as discussing details without warning. That's the whole point. I've no idea why you'd think that to be the case.

So, for example, before describing details about being raped, saying "CW: rape" or "content warning: rape" in advance would be appropriate, as would be hiding details below a cut (especially they should be hidden if they are images).

If they're in their 20s and haven't contacted their family, either they're dead, or they don't want to contact their family.

Generally, yeah. I think the carton thing is more for the family to feel like they've tried every possibility than for realistic hope.

There are cases where it happens, however. Dissociative fugue, cult induction, literally being chained up for twenty years, or long-term victim of sex trafficking. The last one is the only not-totally-rare one in the US, I think.

EDIT: The cult one isn't rare, but it's rare for children except when sex trafficking is also involved.

Google says:
"trigger warning": 10,800,000 results
"content warning": 6,580,000 results

I think "trigger warning" is the correct usage for words that could trigger a traumatic memory. "Content warning" is for cable TV shows with nudity.

No, naming a subject is nowhere nearly as bad as discussing details without warning. That's the whole point. I've no idea why you'd think that to be the case.

A trigger is something which only needs a tiny pull in order to produce a sudden, powerful response. That's why it's a more-appropriate term than "content warning": it's specifically about things where people can be re-traumatized just by being reminded of something. In my experience, all it takes is a few words, or a smell, or sometimes even an unusual color, to remind me of a bad experience. I might even find a full description preferable, because it would clearly not be my experience. So I still wonder if maybe the only advantage of a warning is that the person reading just gets re-traumatized, instead of wasting a lot of time reading and getting re-traumatized.

Lizardman's constant is 4%: In a 2013 poll, 4% of American survey respondents said "yes" to the question "Q13. Do you believe that shape-shifting reptilian people control our world by taking on human form and gaining political power to manipulate our societies, or not?" The DOJ got their estimates of child abductions by polling people. The number of actual abductions by strangers is much smaller than the number of respondents who like to mess with pollsters

I regret to inform you that most of that 4% were probably not intentionally messing with the pollsters. Most Americans believe in at least one conspiracy theory, and millions of Americans believe absolutely outlandish things.


but neither of whom could ever pass a Turing test

So, quick Wikipedia check. Alan Turing proposed that test in 1950. The "Chinese room" that was designed to contradict it was published in 1980. I'll let you draw your own conclusions with regards to how much we should care about the test's results.

With all due respect, a second poll changes nothing about the argument being made. There's also the simple point that dissenting voices shout louder because that's what it takes for them to be heard at all independently of the belief being insane.

"Trigger warning" is not preferable to "content warning" any more than "retarded" is preferable to "cognitively disabled". When terms get co-opted by people in an injurious manner, psychology changes terms to something more neutral. I agree that "trigger" is more specific, but it isn't remotely necessary and opens a door to mockery of the disabled. "Content warning" works just as well and is a better choice.

I might even find a full description preferable, because it would clearly not be my experience.

I'm guessing that you don't have PTSD and you aren't a psychologist or psychiatrist. Don't make me tap the Dunning-Kruger sign again.

Seriously, you're casting aspersion on experts when you know next to nothing about the subject on which you're opining—and you've admitted as such. I know people who have been raped or attempted suicide, and I know content warnings help some of them. Please stop suggesting that content warnings are useless simply because you imagine them to be. This is very bad advice.

I don't know what you're trying to imply by listing those dates. I don't think Searle's Chinese Room argument has any consequences; it's just a bad argument. If you read Searle's original BBS (Behavioral and Brain Sciences) article, it's followed by many pages of AI researchers explaining why it's wrong, followed by Searle using cheap rhetorical tricks to try to evade the good arguments. I've spoken with Searle in person about it, and my impression is that he has no intellectual honesty. He just wants to win debates in any way possible, including grandstanding, evading questions, and changing his definitions on the fly. IMHO, he doesn't care about getting at the truth. At least not in this particular debate.

The Turing test wasn't, I think, proposed in order to provide a useful test, but as a Gedanken-experiment arguing that we should accept something as a person if it acts like a person. Likewise, the Chinese room wasn't really directed at the test; it was directed against the empiricist philosophy behind it, and in defense of the religious notions of "spirit", "essence", "life force", etc., which are the true reasons some people think a machine can never be intelligent and never be a person.

(I don't think Searle would agree with my characterizing him as religious, but I think the important distinction which we try to get at with the word "religious" is between people who do or don't believe in spiritual essences. The special property of being able to sustain consciousness which Searle ascribes to brains, and believes can't be duplicated mechanically, is a spiritual essence, whether he admits it or not. Spirit is that which is not produced by any "physical" (predictable) mechanism, and Searle doesn't believe consciousness can be produced by physical mechanism.)

Oh, I didn't mean to imply that people who can't pass the Turing Test aren't human. I meant it only to explain how difficult it is to communicate with these particular people. Someone calling them on the phone and asking them questions would be likely to misinterpret what they said. My argument was that you can't use a survey for questions where the number of people who would say "yes" and be correct is much smaller than the number of people who would say "yes" because they were delusional, or didn't understand the question, or were actually trying to say "no" but the pollster misinterpreted their answer.

The Turing test wasn't, I think, proposed in order to provide a useful test,

And yet the only reason people ever seem to bring it up is because they disagree with you on this point. That's the danger of throwaway references...

I understand that you're concerned that I might encourage someone to not use warnings, and you think using warnings is a good idea. But I object strongly to your claim that I shouldn't even be allowed to ask about the subject, because you have already discerned the truth of the matter, and any further discussion of it can therefore only be harmful.

I also understand your concern that I might be poking fun at people. I'm not, at least not consciously; and I appreciate your giving me the benefit of the doubt about that.

I really don't want to traumatize anyone, but I now feel obligated not to comply with any request you make at present on this subject. I think that, in the current social climate, encouraging your claim that I mustn't question or contradict your views on the matter would be far more harmful than adopting the wrong position on warnings.

Sorry. I still like you!

If you can point me to a study of the effect of warnings vs. no warnings, or something like that, I'd be more likely to respond favorably.

5603808 I'll confess to being on the edge of the Asperger's spectrum (which I understand is now called something else because something something), which can make conversations with me rather odd. I have almost no capacity to remember certain things like names and math formulas, and an exceptional memory for odd trivia (which makes me a winner at Trivial Pursuit who can't remember the names of the people I'm playing with). I've learned to roll with it, and I try to start off any long conversation with anybody by telling them to STOP me if I'm going off on a tangent and dragging the conversation over rocks. (Which also applies to internet comments, so I'll stop here.)

If you see me at an event, come over and introduce yourself. I don't take offense at anything. I'm practically untriggerable, but I will trigger people by accident, and don't apologize well. So I'm sorry in advance.

Estimate is based on an extremely small sample of cases; therefore, its precision and confidence interval are unreliable.

I mean, if they can't even find enough cases to do meaningful statistics... shouldn't that in itself suggest that there are probably very, very few cases, and the low end estimates are much more likely to be the closer estimates to reality?


on the edge of the Asperger's spectrum (which I understand is now called something else because something something)

For the record:

  • it was only ever "Asperger's syndrome" and "autism spectrum" so you committed a malapropism of some kind, and
  • the "something something" was among other things the fact that his own preferred term was "autistic psychopathy" and general ties to the Nazis (not hyperbole; he was part of the medical corps for the German army in WW2)

It probably should, but the media opted for mass hysteria instead. Because of course they did.

Comment posted by TheJediMasterEd deleted Nov 5th, 2021

"Well Officer, it started out as an argument about kidnapped children, but it turned into a debate about statistics and that's when the fight broke out."

Everything you are saying is a) correct and b) irrelevant.

Parents are scared of child abductions because, whenever one's children are threatened, the human brain is not hardwired to begin statistical analyses. It is hardwired to go to war.

So you can talk all you want about the minuscule number of stranger-abductions in the U.S. So do the FBI and lots of other relevant agencies. One story about Cleo Smith is all it takes to make parents feel like Pearl Harbor just got bombed and they need to form a Civil Defense brigade.

(BTW, this is a picture from inside the kidnapper's house. Good thing we don't know anybody with shelf upon shelf of little girls' toys, right gang?)


Also, you seem to be incensed that the FBI's public information site is intended to convey basic facts to the general public, rather than withstand attack by an enraged statistician.

Which...is okay, I guess, you being you.

Lizardman is always one of my favorites. People without a background in probability do a bad job understanding just how weird things are even when you get out to the "5% of the population" level. I've probably shared this before, but one of my favorite teaching bits is introducing the sunrise problem and Laplace's rule of succession, and the fact that his contemporaries called him out for having such an unreasonably high probability that the sun wouldn't rise. (For reference, his rule of succession argument had the sun not rising with probability about 0.0001.)

The thing I love about this is how many students latch on to the criticism and think that's an unreasonably large probability. Which is when I point out that any time you consider very small probabilities, you need to consider a whole host of incredibly unlikely things that might drive the whole problem. One really needs to be seriously considering the simulation hypothesis, metaphysical solipsism, and evil demons. What probability does one attach to these possibilities—essentially, what probability does one attach to the proposition that all of their experiential knowledge is fundamentally misleading?

All the time, people are wrong about things they claim to be absolutely certain of. That doesn't even get close to a 0.9999 probability point. 0.0001 is an astronomically small probability, even if most folks don't realize it.
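For reference, the rule of succession mentioned above says that after s successes in n trials, P(success on the next trial) = (s + 1) / (n + 2), so the probability assigned to a never-yet-observed failure is 1 / (n + 2), and the number you get depends entirely on how many past sunrises you assume:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: P(next success) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# After seeing the sun rise n times with no failures, the probability
# assigned to it failing tomorrow is 1 / (n + 2):
for n in (8, 98, 9998):
    p_fail = 1 - rule_of_succession(n, n)
    print(f"n = {n}: P(no sunrise tomorrow) = {p_fail}")
```

The n = 9998 row gives the 1/10,000 figure quoted above; a longer assumed history of sunrises drives the failure probability correspondingly lower.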

The lizardman segment of the population, frankly, drives a lot of survey weirdness. But I think DOJ is probably botching their stats on this badly as well; like 5603808 mentions in the blog, some of the assumptions that underlie these numbers like what percent of abductions get reported seem absolutely bonkers. Assuming that all crimes have an equivalent underreporting rate, which is what it looks like they may well have done, is very cloud cuckoolander here.

By the way, I also agree with 5603798 on the terminology question of "content" vs "trigger" warnings. I'm generally pro-empathy and pro-being-gentle-with-others, so I've been sympathetic to the push for using content warnings. I will say, though, that when I went on Web of Science to look at data, the studies I'm seeing with real data don't appear to suggest that they're helpful on average.

PJ Jones, BW Bellet, & RJ McNally, (2020) "Helping or Harming? The Effect of Trigger Warnings on Individuals With Trauma Histories", Clinical Psychological Science, 8, 905-917, looks like the most recent and relevant. It involves n=451 trauma survivors and is an experiment. The study raises three red flags for me, though: (1) the research team appears to be anti-content-warning based on previous work on non-trauma-survivor populations; (2) the exposure is to selections from world literature, which may again be putting content warnings into a situation outside real original intent, since (non-expert guess here) I suspect anything qualifying as world literature is probably going to treat sensitive issues a lot more carefully than news and internet sources that are really optimized for sensationalism and affect stimulation; and (3) the research team is from Harvard, and my experiences with folks affiliated with Harvard leads me to be more cautious with what they report, because on a few occasions I've seen Harvard-affiliated folks get away with major research laziness and still get a pass because Harvard.

So provisionally, I would say that data appear to show that content warnings don't provide much benefit outside the intended target population and for non-sensationalized stimuli. I'm not prepared to call them a bad deal, though. One of the big takeaways from lit-searching the topic is that there seems to be a real paucity of people gathering good data on the subject, and I didn't find anyone (in my quick search) running the study I'd want to see on whether they're helpful in what I'd consider their main intended context.

If you want to show off your power level, I've seen the term “infohazard warning” used in the wild. In particular, “triggers” are an evocation infohazard.

Regardless, “trigger warning” is too specific, as there are many other classes of infohazards one is likely to encounter in the wild:

  • Spoilers for a piece of media
  • Moral hazards (“PopMart is having a 50% off promotion on cola!”)
  • Scrupulosity-inducing sceptical hypotheses/thought experiments
  • Reminding people of their own mortality

6 attempted abductions
3 abductions
1.5 recovered by police
0.75 recovered by police due to aid from AMBER alerts

1.5 + 0.75 = 2.25
3 - 2.25 = 0.75

Must really suck for that three quarters of a child never found... .

Well, I still like you even more. So there. :flutterrage:

Also, more generally, I have a tendency to go off and sound much more animated than intended. But you know that. No intent here to chill speech, and the issues you raise are important and intelligent.

My Stars! How dare you be reasonable on Bad Horse's blog. :derpytongue2:


Yeah, this one's definitely not going to get me in trouble.

No, it does change it, despite my comment getting ratioed. If 4% of people are effing with pollsters, that doesn't explain why the fraction of people who claim conspiratorial beliefs ranges from 0% to 65% depending on which crazy belief you ask them about.

You could probably put malingering measures into polling to determine if people are screwing with you, come to think. I wonder if that's been tried much.

While you are correct that parents will worry either way and that anecdotes often carry more weight than statistics, I do feel like telling them vastly different numbers of cases occurred would produce somewhat measurably different levels of worry.

I didn't know that. A link on Asperger and the Nazis here. Sorry; I don't know why your comment got down-voted so hard.


So provisionally, I would say that data appear to show that content warnings don't provide much benefit outside the intended target population and for non-sensationalized stimuli.


If "the intended target population" is "people who could be re-traumatized", then I'm not at all concerned about providing benefit to people outside the intended target population.

Is that "and" (and for non-sensationalized stimuli) really an "and" (e.g., content warnings provide a lot of benefit if either the reader is within the intended target population or the stimuli has been sensationalized), or an "or" (content warnings provide a lot of benefit if the reader is in the target pop and the stimuli has been sensationalized)? (In order to be brief, I assumed in my example wordings that failure to reject H implies not(H). That's a horrible and common error and I apologize for propagating it, but I figure you can cope with it and tell me if that assumption is wrong in this case.)

I recently administered a 50-question poll in which I used some pairs of questions for which I thought one answer on one question logically implied giving another answer on the other question. The correlation between these answers was near zero, so either I or a subset of respondents were failing at logic, or I used bad wording without noticing. On the bright side, I also tried inserting the exact same question at the start and the end of the survey, and people nearly always gave the same answer to the same question both times.
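The paired-question check described above can be sketched in a few lines. This is a hypothetical illustration with made-up responses, not the actual survey data: if an answer of "yes" on question A logically implies "yes" on question B, then any (A=yes, B=no) pair is inconsistent, and the fraction of consistent respondents is easy to tally.

```python
# Hypothetical sketch of a paired-question consistency check.
# Premise: a "yes" on question A logically implies a "yes" on question B,
# so the pair (A=yes, B=no) is the only inconsistent combination.
# All response data below is invented for illustration.

def consistency_rate(answers_a, answers_b):
    """Fraction of respondents whose (A, B) answers don't violate A -> B."""
    pairs = list(zip(answers_a, answers_b))
    consistent = sum(1 for a, b in pairs if not (a and not b))
    return consistent / len(pairs)

# Simulated respondents: True = "yes"
a = [True, True, False, False, True, False]
b = [True, False, True, False, True, True]   # respondent 2 is inconsistent

rate = consistency_rate(a, b)
print(rate)  # 5 of 6 pairs respect the implication
```

A near-zero correlation between such pairs, as described above, would show up here as a consistency rate barely better than what random answering would produce.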

Did they set the dolls free? They look like prisoners in some horrible doll Panopticon.

Please tell me that isn't Fluttershy at the top center. :fluttershysad:

In logical terms, I'm using an OR. I saw two recent studies by that research team. The first looked at a sample from the population of healthy controls, so not "people who could be re-traumatized", and concluded (I believe) that content warnings didn't help reduce the severity of negative emotional impact, which I think was how they operationalized whether or not content warnings were working. The second study looked at a sample from the population of "people who could be re-traumatized" and exposed them to world literature involving (I think) neutral and trauma-reminding stimuli. I think they also kept track of what particular stimuli individuals in the sample would find particularly re-traumatizing—but I'm going on memory, and just read through the abstract in any case. The topline conclusion for the second study was, again, that content warnings weren't appearing to reduce the severity of negative emotional impact, and (they claimed in the abstract) this extended to cases where trauma stimuli matched previous trauma. Given that I had good reason to think the research team were biased against content warnings, it would have been good for me to actually look at the results and judge them for myself... but I didn't. Sorry. :twilightoops:

My takeaway was that the research team had, whether conclusively or not, demonstrated with data that in cases where EITHER non-re-traumatizable individuals were involved OR non-sensationalized stimuli were involved, content warnings didn't appear to be meeting the operationalized criteria the researchers wanted.

To the best of what I saw on my search, no one has tested H, which for me would be "content warnings provide a lot of benefit if the reader is in the target pop and the stimuli has been sensationalized". Signs seem like they might point toward rejecting this hypothesis—but like I said, I try to err on the side of being kind and gentle, so I'm not personally prepared to read this as an outright rejection of the core idea at present. Personally, I'm somewhat disappointed this excursion didn't come down more in favor of 5603798's anecdotal comment about content warnings having helped some people she knows—but I gotta call the studies as I find them, if I'm going to go off looking.

Just for an example that predates the concept: a friend and I were joking very rudely about suicide and an acquaintance had to excuse herself to go to the bathroom. We found out it was because her son had recently killed himself. That doesn't mean we can't joke about suicide, but it would have been more polite if I'd asked before bringing up the subject in the first place and probably spared her from some pain.

...but that said, I'm not surprised that the concept is overblown in practice. On the other side of the bit, there's a risk that comes from trying to pander to the lowest common denominator in order to not offend anypony. I used to see new unrealistic demands on behavior daily back when tumblr was a thing. There's a balance to be had, surely.

Tumblr still technically exists, but I like to think that's entirely due to Mark Rosewater and has been since before the people you're talking about showed up.

Here's a HuffPo article that gives a surprisingly good overview of the issue (some of which you already found for yourself): The Futile Quest for Hard Numbers on Child Sex Trafficking.


It looks like a cheap knockoff of a funko-pop version of Equestria Girls Fluttershy.

So um...Yes? No? Ask Dr. Schroedinger?

Nope, the image is a bit fuzzy, but I don't see any ponies in there anywhere. Looks like mostly Barbies, American Girl, and a few other odds and sods I either don't recognize, or can't make out sufficiently.

If you don’t mind heavy podcast episodes, the You’re Wrong About episodes on sex offenders and human trafficking are both highly relevant to this topic. The human trafficking episode, in particular, emphasizes the statistical games involved with reporting on these crimes.

I'm going to bounce off this one here and raise hand as 'Hi, I have PTSD, and here is how this works as someone living with it'

CW : Trauma, etc

Last year I dealt with a stalker. They spent months terrorizing the shit out of me and I had to get a restraining order. I don't always know what will cause me to flash back to that, but when it does it's this rising creepy-crawly sensation in my chest that becomes this aching acute discomfort/anxiety that rapidly shoots down my left arm and when I look I realize my fingers are spasming about in weird ways because the anxiety manifests in palpable motion.

A content warning can be as simple as saying 'Hey, we are going to go into some dark places. If you are not comfortable with darkness, maybe this isn't for you'.

Yes, some people will get hella pissy if you don't tell them every specific bad thing that can be there, but like...I would recommend reading the introduction to Neil Gaiman's short story collection Trigger Warning.

He goes into this subject in well-done nuanced detail.

But really, all a content warning is is a flag saying 'Yo, dark shit ahead', and you are free to fill it in as much or as little as you want to.

Trauma sucks. I'm...I would love to use the word 'healed', but it's not quite that. It's...

The analogy that works is it's an injury that hasn't healed right. A psychic limp. A mental joint that grinds when pressure is put on it the wrong way. A spot in your head that simply doesn't function as originally intended, anymore, and you can't help but /feel/ that because...well, you can feel it going wrong/uncomfy as it is happening.

I'm 'better', as much as I can be, but that better means that mental limp is there and I don't know how or if it will ever get better. I just have to adjust to having this weak spot in me now that I have to...deal with.

It sucks. But all a content warning is is that little bit of empathy of going 'I don't know you, but just in case, Dark Shit Ahead'.

I'm keenly aware that a lot of crime stats are hard to pin down at the national level. Since each state handles sex crimes so differently, this makes complete sense; in some cases the numbers are intentionally obfuscated, as with shootings, and in other cases it's completely impossible to compare places with different reporting methodologies, as with property crimes. It was relatively recently (within the past 10 years?) that a few places even started aggregating firearm shootings, and even now they only speak in the most general terms, like the number of mass shootings, where "mass" means more than 3 or 4 victims.

These are very interesting stats. It's a well-known phenomenon that everyone thinks crime is worse than it is, and within the past few years the numbers around alleged child abuse have exploded to preposterous levels, particularly in Facebook posts. Since you're not afraid of research, I wonder if you would get more accurate results repeating your process but picking out 10 or so states that have better records. There's got to be some state-level numbers where we can get a better picture. The numbers in your post all look way too high or way too low.

It's a good idea, and I'm not afraid of research, but it isn't high enough on my priority list for me to put that much work into it.

Not quite. It means their study was "underpowered". In statistics, you generally collect samples based on two target values: the target p value and the statistical power. The general idea is this: you have two probability distributions, one generated from the "null hypothesis" and one generated from the "alternative hypothesis", and you're trying to figure out if you should ditch the null hypothesis in favor of the alternative hypothesis. (Null hypothesis = status quo beliefs, alternative hypothesis = proposed change to the status quo beliefs.) The p value tells you the probability of getting some sample data *assuming the null hypothesis is true*. Statistical power tells you the probability of getting a low p value *assuming the alternative hypothesis is true*.

Statistical power is affected by sample size. This is because a high sample size tends to make probability distributions more narrow, hence they tend to increase the separation between the probability distributions generated from the null and alternative hypotheses. Increased separation means you're more likely to get a low p value *assuming the alternative hypothesis is true*.

Statisticians use estimates of statistical power to determine how large a sample size should be so they properly test their hypotheses. Huge sample sizes tend to be expensive, so the goal is to get a small sample size that's large enough to test an alternative hypothesis. If the DOJ had a very large confidence interval, that means the probability distributions they were working with were very wide, hence their sample size was too small. In other words, they did a study that would yield useless results with high probability. I would guess that it's because they had a very low budget and were pressured to publish something anyway.
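To make the power/sample-size relationship concrete, here's a back-of-envelope calculation using the textbook normal-approximation formula for a two-sample comparison of means. The effect sizes and targets are illustrative, not figures from the DOJ study:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample z-test.

    Standard normal-approximation formula:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the standardized effect size (Cohen's d).
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at the conventional alpha = .05, power = .80:
print(n_per_group(0.5))   # 63 per group
# Halve the effect you're trying to detect and the required n quadruples:
print(n_per_group(0.25))  # 252 per group
```

That quadrupling is the trap: a study sized to detect only large effects will, with high probability, spit out a wide confidence interval and a "no significant difference" shrug for any real-but-modest effect.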

So, TD found this blog post while he couldn't sleep, and decided to do some looking.

I found this study, which found the (very strange) stat that according to NIBRS, juveniles actually make up a *majority* of sex abuse victims in the US (though it is only the third most common crime against juveniles). They also find a kidnapping number of only 16,200 per year nationally reported to the police, based on a subset count of about 4,000. That's *all* kinds of kidnapping, so obviously some of these numbers are insanely off.

I did, however, find the 93% of sexual assault victims know their abuser figure that RAINN likes to throw around - it's present on page 5, figure 9. However, even 93% is misleading; only 4% were identified as strangers, with the rest "unknown" (though that might indicate a lot of strangers).

They also, conveniently, had a kidnapping figure - which, interestingly, gave a 23% stranger kidnapping rate, suggesting that almost 1 in 4 kidnappings of juveniles is by a stranger, which would suggest close to 4000 such kidnappings per year - at odds with the assertion of Crime Library that only about 100 per year were abductions by strangers.

But I think this is explained by people garbling data while playing telephone.

One possibility is that the police are less likely to involve the NCMEC in cases of stranger abduction than normal because stranger abductions are significantly more dangerous and less likely to be successfully resolved by civilians.

However, another obvious possibility presents itself - stranger kidnappings generally don't last long enough to be reported to them.

Looking around, this article suggested that 840,279 people were reported missing to the FBI's NCIC. Of those, just shy of 1 in 4 of the kidnap victims in there were abducted by strangers - which matches the other study.

That also contains an interesting quote:

About 4,600 children are abducted each year by strangers, according to Ann Scofield of the National Center for Missing and Exploited Children. But she said most are held only briefly before being freed.

Only 100 or so abductions by strangers each year fit into more serious categories — cases in which the child is held for an extended period of time or is killed, she said. But the longer the abduction lasts, the more bleak the prospects become of finding the child alive.

So it sounds like the about 1 in 4 figure is correct - but of those, less than 1 in 40 involves the person actually trying to KEEP the child, with most being them "borrowing" them, as you mentioned. This is the source of the "100" figure as well, where people say only 100 stranger kidnappings per year or whatever.

Indeed, a lot of these kidnapping cases probably don't really involve missing persons at all - the person probably comes back and reports that they were victimized.

The reason why NCMEC only has a tiny number of stranger kidnap victims, then, is almost certainly an artifact of the fact that most of these cases wouldn't be something that the police would seek their help with, because they're over too fast. Moreover, the police only seek out NCMEC's help sometimes - NCMEC apparently only helps on cases where law enforcement asks them to, and an article about missing indigenous children I stumbled across mentioned this and the NCMEC mentioned that a lot of missing children don't end up in their database because the police just don't ask them for their help in those cases.

So, based on looking at this stuff, it sounds like about 4000ish children are kidnapped by strangers per year, of which 100ish are people who are kidnapped and held while the rest are let go (often but not always after being sexually assaulted).

The rate, thus, is about 5-6ish per 100k children per year - or around the same as the general population homicide rate.

Rare, but not unheard of, in other words.

The cases where some stranger kidnaps and keeps a kid are on the order of 1 in a million per year, which is struck-by-lightning levels of probability.
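A quick sanity check on those rates. The ~74 million figure for the US under-18 population is my ballpark assumption, not something from the sources above:

```python
# Back-of-envelope check of the rates quoted above.
# ASSUMPTION: roughly 74 million children (under 18) in the US.
US_CHILDREN = 74_000_000

stranger_kidnappings = 4_000   # per year, all stranger kidnappings
kept_cases = 100               # per year, child actually held long-term

per_100k = stranger_kidnappings / US_CHILDREN * 100_000
per_million = kept_cases / US_CHILDREN * 1_000_000

print(round(per_100k, 1))     # ~5.4 per 100k children per year
print(round(per_million, 1))  # ~1.4 per million -- lightning-strike territory
```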

Finally, it should be noted that (not surprisingly) these numbers are all disproportionately skewed towards black, Hispanic, and Native American youth.

Incidentally, I personally have worked with people who deliberately gave bogus answers to phone polls because they resented getting called by them/thought it would frustrate the people who took those polls. I also knew someone in high school who used to do the same on all the surveys he got - he was the guy who did every single possible drug every day according to the DARE surveys.

As far as I know, he never did any drugs. He just thought it was funny.

When you see 0-65%, you should be immediately worried that whatever answers you're getting are 100% bias based on how you're asking your questions.

The smart money is on 100% bias.

Chapman University did a poll on this which they claimed showed that Americans were really prone to believing conspiracy theories.

In addition to a bunch of well-known conspiracy theories, they asked if people thought that the US government was covering up information about the suspicious car crash in North Dakota.

A car crash, which, notably, never happened, and that they made up out of whole cloth for the survey.

A third of people said that they agreed with that one.

To be clear, that puts it between "Plans for one world government" and "hiding information about global warming".

They thought that this showed how paranoid Americans are, but it really just showed that their poll was poorly designed and was intended to elicit certain kinds of answers. (It also doesn't help that two of the questions were worded in such a way that they weren't even conspiracy theories, so people answering in the affirmative doesn't tell you that they're conspiracy theorists: the US government actually was concealing information about what it knew about some Saudi officials having contacts with Al Qaeda around 9/11, and some information about the JFK assassination is still classified, though not for sinister reasons.)

If you subtract the 33% from the other questions, the only *actual* conspiracy theory that beat the made up one was that 9% of Americans believe that the government is hiding information about aliens.

There's also the issue of polling as attire. Ask the public about whether the world was made in the last 10,000 years by god, and you'll get something around 40% of the population agreeing with that.

But if you ask them if dinosaurs went extinct 65 million years ago, about 70% will say that they did. Almost 80% will correctly tell you that there are fossils dating back hundreds of millions of years.

This means that at least 10% of people will both claim that the Earth was made in the last 10,000 years AND that dinosaurs died out 65 million years ago, and at least 20% will both claim a young Earth AND that there are fossils that are hundreds of millions of years old.

You can also see other badly worded questions: when asked whether dinosaurs went extinct, 46% said they weren't. But, well, as a lot of dinosaur fans will tell you, they're not actually extinct: birds are still around, and birds are dinosaurs.

And note also that the "46% believe dinosaurs are still around" and "69% believe dinosaurs went extinct 65 million years ago" answers are completely contradictory as well, and again show that at least 15% would answer both questions in the affirmative.
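Those "at least X% hold both beliefs" figures all fall out of simple inclusion-exclusion: if fractions a and b of respondents say "yes" to two questions, the overlap can't be smaller than a + b - 1. A sketch, plugging in the percentages quoted above:

```python
def min_overlap(p_a, p_b):
    """Smallest possible fraction answering 'yes' to BOTH questions,
    given the fractions answering 'yes' to each (inclusion-exclusion)."""
    return max(0.0, p_a + p_b - 1.0)

# 40% young-Earth + 70% "dinosaurs died 65 Mya" => at least 10% hold both
print(round(min_overlap(0.40, 0.70), 2))  # 0.1
# 40% young-Earth + 80% "fossils are hundreds of Myr old" => at least 20%
print(round(min_overlap(0.40, 0.80), 2))  # 0.2
# 46% "not extinct" + 69% "extinct 65 Mya" => at least 15% affirm both
print(round(min_overlap(0.46, 0.69), 2))  # 0.15
```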

Reputable pollsters very carefully design and calibrate their questions over time to try and at least get consistent levels of response so that they can at least see if the answers CHANGE.

But most pollsters don't care or are actively looking for certain answers.


But most pollsters don't care or are actively looking for certain answers.

I wouldn't go quite this far. I have no basis whatsoever for estimating the proportion of pollsters who are trying to extract specific answers, there isn't strong evidence for a conspiracy or tainted ties in the field of polling apart from polls which "lean" one direction or another for political reasons, and poll noteworthiness is based more on quality than specific answers (at least to pollsters who analyze them).

But more importantly, I don't leap to conspiracy when incompetence is just as likely an explanation. It isn't easy to design polls so that they aren't leading, given how they're necessarily tied up in language.
