Comment author: Yvain 01 May 2016 05:11:57PM 19 points [-]

Nice work.

If possible, please do a formal writeup like this: http://lesswrong.com/lw/lhg/2014_survey_results/

If possible, please change the data on your PDF file to include an option to have it without nonresponders. For example, right now sex is 66% male, 12% female, unknown 22%, which makes it hard to intuitively tell what the actual sex ratio is. If you remove the unknowns you see that the knowns are 85% male 15% female, which is a much more useful result. This is especially true since up to 50% of people are unknowns on some questions.

If possible, please include averages for numerical questions. For example, there's no data about age on the PDF file because it just says everybody was a "responder" but doesn't list numbers.

Comment author: ingres 26 March 2016 02:46:40AM *  3 points [-]

Yeah, you're right.

Currently trying to figure out how to do that in the least intrusive way.

EDIT: Good news: it turns out that I can edit the calibration question 'answers' after all. The ones where a range would make sense have been edited to include one. Questions such as "which is heavier" have not been, because the ignorance prior should be fairly obvious.

Fri Mar 25 19:50:41 PDT 2016 | Answers submitted on or before this date, for questions where ranges have since been added, will be controlled for at analysis time.

Comment author: Yvain 26 March 2016 02:49:37AM 1 point [-]

If you throw out the data, I request you keep the thrown-out data somewhere else so I can see how people responded to the issue.

In response to comment by Elo on Lesswrong 2016 Survey
Comment author: ingres 26 March 2016 02:27:17AM *  1 point [-]

We will get that suggestion sorted asap.

I actually can't do that. The way our survey engine works, changing the question answers mid-survey would require taking it down for maintenance and hand-joining the current respondents to the new respondents. In general I planned to handle the "within 10 cm" thing during analysis: Fermi-estimate the value and give your closest answer, then the probability you got it right. We can look at how close your confidence was to a sane range of values for the answer.

I.e., if you got it within ten and said you had a ten percent chance of getting it right, you're well calibrated.

Note: I am not entirely sure this is sane, and would like feedback on better ways to do it.

EDIT: I should probably be very precise here. I cannot change the question answers in the software, presumably because it would involve changing the underlying table schema for the database. I can change the questions/question descriptions, so if there's a superior process for answering these I could describe it there.

Comment author: Yvain 26 March 2016 02:42:51AM *  6 points [-]

"In general I planned to handle the "within 10 cm" thing during analysis. Try to fermi estimate the value and give your closest answer, then the probability you got it right. We can look at how close your confidence was to a sane range of values for the answer."

But unless I'm misunderstanding you, the size of the unspoken "sane range" is the entire determinant of how you should calibrate yourself.

Suppose you ask me when Genghis Khan was born, and all I know is "sometime between 1100 and 1200, with certainty". Suppose I choose 1150. If you require the exact year, then I'm only right if it was exactly 1150, and since it could be any of 100 years my probability is 1%. If you require within five years, then I'm right if it was any time between 1145 and 1155, so my probability is 10%. If you require within fifty years, then my probability is effectively 100%. All of those are potential "sane ranges", but depending on which one you choose, the correctly calibrated estimate could be anywhere from 1% to 100%.
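To make the dependence concrete, here's a quick sketch of the arithmetic, assuming a uniform prior over the integer years 1100 to 1200 (the function name and setup are just for illustration):

```python
# Calibration under a uniform prior: if all you know is that the true year
# lies uniformly in [1100, 1200] and you guess 1150, your probability of
# being "right" depends entirely on the tolerance window the grader uses.
def hit_probability(guess, low, high, tolerance):
    """P(|truth - guess| <= tolerance) for truth uniform on the integers low..high."""
    years = range(low, high + 1)
    hits = sum(1 for year in years if abs(year - guess) <= tolerance)
    return hits / len(years)

for tol in (0, 5, 50):
    # exact year: ~1%; within five years: ~11%; within fifty years: 100%
    print(tol, hit_probability(1150, 1100, 1200, tol))
```

Same guess, same state of knowledge, and the well-calibrated confidence swings from about 1% to 100% purely as a function of the unstated tolerance.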

Unless I am very confused, you might want to change the questions and hand-throw-out all the answers you received before now, since I don't think they're meaningful (except if interpreted as probability of being exactly right).

(Actually, it might be interesting to see how many people figure this out, in a train wreck sort of way.)

PS: I admit this is totally 100% my fault for not getting around to looking at it the five times you asked me to before this.

Comment author: Yvain 26 March 2016 02:04:36AM 24 points [-]

Elo, thanks a lot for doing this.

(for the record, Elo tried really hard to get me involved and I procrastinated helping and forgot about it. I 100% endorse this.)

My only suggestion is to create a margin of error on the calibration questions, e.g. "How big is the soccer ball, to within 10 cm?". Otherwise people are guessing whether they got the exact centimeter right, which is pretty hard.

Comment author: Lumifer 15 March 2016 04:49:41PM *  10 points [-]

This post by Eric Raymond should be interesting to LW :-) Extended quoting:

There’s a link between autism and genius says a popular-press summary of recent research. If you follow this sort of thing (and I do) most of what follows doesn’t come as much of a surprise. We get the usual thumbnail case studies about autistic savants. There’s an interesting thread about how child prodigies who are not autists rely on autism-like facilities for pattern recognition and hyperconcentration. There’s a sketch of research suggesting that non-autistic child-prodigies, like autists, tend to have exceptionally large working memories. Often, they have autistic relatives. Money quote: “Recent study led by a University of Edinburgh researcher found that in non-autistic adults, having more autism-linked genetic variants was associated with better cognitive function.”

But then I got to this: “In a way, this link to autism only deepens the prodigy mystery.” And my instant reaction was: “Mystery? There’s a mystery here? What?” Rereading, it seems that the authors (and other researchers) are mystified by the question of exactly how autism-like traits promote genius-level capabilities.

At which point I blinked and thought: “Eh? It’s right in front of you! How obvious does it have to get before you’ll see it?”

... Yes, there is an enabling superpower that autists have through damage and accident, but non-autists like me have to cultivate: not giving a shit about monkey social rituals.

Neurotypicals spend most of their cognitive bandwidth on mutual grooming and status-maintenance activity. They have great difficulty sustaining interest in anything that won’t yield a near-immediate social reward. By an autist’s standards (or mine) they’re almost always running in a hamster wheel as fast as they can, not getting anywhere.

The neurotypical human mind is designed to compete at this monkey status grind and has zero or only a vanishingly small amount of bandwidth to spare for anything else. Autists escape this trap by lacking the circuitry required to fully solve the other-minds problem; thus, even if their total processing capacity is average or subnormal, they have a lot more of it to spend on what neurotypicals interpret as weird savant talents.

Non-autists have it tougher. To do the genius thing, they have to be either so bright that they can do the monkey status grind with a tiny fraction of their cognitive capability, or train themselves into indifference so they basically don’t care if they lose the neurotypical social game.

Once you realize this it’s easy to understand why the incidence of socially-inept nerdiness doesn’t peak at the extreme high end of the IQ bell curve, but rather in the gifted-to-low-end-genius region closer to the median. I had my nose memorably rubbed in this one time when I was a guest speaker at the Institute for Advanced Study. Afternoon tea was not a nerdfest; it was a roomful of people who are good at the social game because they are good at just about anything they choose to pay attention to and the monkey status grind just isn’t very difficult. Not compared to, say, solving tensor equations.

Comment author: Yvain 18 March 2016 08:51:25PM 1 point [-]

This idea of having more "bandwidth" is tempting, but not really scientifically supported as far as I can tell, unless he just means autists have more free time/energy than neurotypicals.

Comment author: Yvain 18 March 2016 08:46:44PM *  7 points [-]

If race were a factor in twin studies, I think it would show up only in shared environment, since it differs between families but never within families (and is equally shared by MZ and DZ twins). That means it would not show in "heredity", unless we're talking about interracial couples with two children, each of whom by coincidence got a very different number of genes from the parents' two races - I think this is rare enough not to matter in real life studies.

Your point stands about the general role of these kinds of things, I just don't think it's counted that way in the twin studies we actually have.

You're right about beauty etc, though. Genetic studies are most informative about interventions to change individuals' standings relative to other individuals, not about interventions to completely change the nature of the playing field.

Comment author: Yvain 17 September 2015 05:33:50AM *  14 points [-]

I don't know if this solves very much. As you say, if we use the number 1, then we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings. But if we start going for less than one, then we're just defining away Pascal's Mugging by fiat, saying "this is the level at which I am willing to stop worrying about this".

Also, as some people elsewhere in the comments have pointed out, this makes probability non-additive in an awkward sort of way. Suppose that if you eat unhealthy, you increase your risk of each of one million different diseases by a one-in-a-million chance. Suppose also that eating healthy is a mildly unpleasant sacrifice, but getting a disease is much worse. If we calculate this out disease-by-disease, each disease is a Pascal's Mugging and we should choose to eat unhealthy. But if we calculate this out in the broad category of "getting some disease or other", then our chances are quite high and we should eat healthy. But it's very strange that our ontology/categorization scheme should affect our decision-making. This becomes much more dangerous when we start talking about AIs.
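Concretely, treating the million risks as independent (an assumption made purely for illustration), the per-disease and aggregate probabilities come apart like this:

```python
# Each of n independent diseases has probability p of striking; the chance
# of getting at least one is 1 - (1 - p)**n.  For n*p = 1 this is roughly
# 1 - 1/e, i.e. about 63% -- far from negligible, even though each
# individual risk looks like a Pascal's Mugging on its own.
n = 1_000_000
p = 1e-6
p_any = 1 - (1 - p) ** n
print(p_any)  # roughly 0.632
```

So whether the decision procedure sees "a million negligible risks" or "one 63% risk" depends entirely on how the outcomes are categorized.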

Also, does this create weird nonlinear thresholds? For example, suppose that you live on average 80 years. If some event which causes you near-infinite disutility happens every 80.01 years, you should ignore it; if it happens every 79.99 years, then preventing it becomes the entire focus of your existence. But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Also, a world where people follow this plan is a world where I make a killing on the Inverse Lottery (rules: 10,000 people take tickets; each ticket holder gets paid $1, except a randomly chosen "winner" who must pay $20,000).
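The per-ticket arithmetic makes the point, assuming the "winner" is drawn uniformly at random:

```python
# Inverse Lottery: of 10,000 ticket holders, 9,999 are paid $1 and one
# randomly chosen "winner" pays $20,000.  Anyone who rounds the 1-in-10,000
# loss probability down to zero sees a sure $1 gain and happily buys in --
# but the expected value per ticket is negative.
n = 10_000
ev = ((n - 1) * 1 - 20_000) / n
print(ev)  # -1.0001 dollars per ticket
```

The house collects $20,000 and pays out $9,999, so the organizer nets $10,001 per round off players who treat sub-threshold probabilities as zero.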

Comment author: [deleted] 07 July 2015 01:19:07PM 0 points [-]

Nope! I remembered a meeting happening in Ann Arbor but it appears that was a once-off thing, as I can't find any links to a group based there. Not to mention I couldn't make it there anyhow, anytime soon.

In response to comment by [deleted] on Open Thread, Jul. 6 - Jul. 12, 2015
Comment author: Yvain 16 July 2015 02:09:49PM 1 point [-]

There are meetings in the area every couple of months. There's no specific group to link to, but if you give me your email address I will add you to the list.

If you tell me where exactly in Michigan you are, I can try to put you in touch with other Michigan LW/SSC readers. Most are in Ann Arbor, but there are several in the Detroit metro area and at least one in Grand Rapids.

Comment author: gjm 07 May 2015 10:25:03AM 15 points [-]

There was a discussion a little while back (I think in another open thread) about the game of looking at the titles of articles linked from the "Recent on rationality blogs" sidebar and guessing who wrote them. Usually this is pretty easy.

Right now, though, the top link in the list is "The Future is Filters", which seems like an obvious Robin Hanson title. But no! It's Scott, not Robin, and it's about "filter bubbles" rather than "great filters".

I wonder whether Scott is aware of the game and deliberately trying to tease...

Comment author: Yvain 10 May 2015 05:08:30AM *  3 points [-]

I liked the sound of "The Future Is Pipes" and saved that sentence structure in case I needed it.

Comment author: DanArmak 25 February 2015 09:58:08PM *  6 points [-]

People here should be aware of fresh Word of God. Apparently we're NOT in the Mirror.

Comment author: Yvain 26 February 2015 08:48:19PM *  10 points [-]

The Mirror did not touch the ground; the golden frame had no feet. It didn't look like it was hovering; it looked like it was fixed in place, more solid and more motionless than the walls themselves, like it was nailed to the reference frame of the Earth's motion.

The Mirror is in the fourth wall. Now that we-the-readers have seen the mirror, we have to consider that our seeing Eliezer saying this isn't in the mirror might just be part of our coherent extrapolated volition.
