Comment author: Unnamed 07 October 2016 06:08:47AM 1 point [-]

This post doesn't have much that addresses the "expanding circle" case for empathy, which goes something like this:

Empathy is a powerful tool for homing in on what matters in the world. By default, people tend to use it too narrowly. We can see that in many of the great moral failings of the past (like those mentioned here), which involved people failing to register some others as appropriate targets of empathy, doing a lousy job of empathizing that involved making up stories more than really putting themselves in others' shoes, or actively working to block empathy by dehumanizing those others and evoking disgust, fear, or other emotions. But over time there has been moral progress, as societies have expanded the circle of who people habitually feel empathy for and developed norms and institutions to reflect their membership in that circle of concern. And it is possible to do better than your societal default if you cultivate your empathy, including the ability to notice the blind spots where you could be empathizing but are not (and the ability to then direct some empathy towards those spots). These blind spots could include people who are far away or across some boundary, people in an outgroup whom you might feel antagonistic towards, people who have been accused of some misdeed, people and nonhumans that are very different from you, those who are not salient to you at the moment, those who don't exist yet, those who are only indirectly affected by your actions, etc.

Comment author: Unnamed 29 July 2016 03:15:08AM 1 point [-]

Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot, also just knowing about a bias is not enough to unbias you)

If you're referring to the calibration questions on the 2014 LW survey, rationalists were pretty well calibrated on them (though a bit overconfident). I described some analyses of the data here and here, and here's a picture:

(where the amount of overconfidence is shown by how far the blue dots are below the black line)

I don't know of any data on whether rationalists believe they are well calibrated on these sorts of questions - I suspect that a fair number of people would guess that they are overconfident.

Comment author: Unnamed 31 July 2016 09:09:25AM 0 points [-]

I'll also note here that I'm planning to do some analyses of the calibration questions on the 2016 LW Diaspora Survey during the next month. I think that there are issues with some of the questions that were on the survey, so before I do any analyses I'll note that my preferred analyses will only include 4 of the questions:

Which is heavier, a virus or a prion?
What year was the fast food chain "Dairy Queen" founded? (Within five years)
Without counting, how many keys on a standard IBM keyboard released after 1986, within ten?
What's the diameter of a standard soccerball, in cm within 2?

For thoroughness I will also do some secondary analyses which include 7 questions, those 4 plus the following 3 (even though I think that these 3 questions have some issues which make them less good as tests of calibration):

I'm thinking of a number between one and ten, what is it?
Alexander Hamilton appears on how many distinct denominations of US Currency?
How many calories in a reese's peanut butter cup within 20?

Comment author: RomeoStevens 28 July 2016 02:11:11AM *  6 points [-]

Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).

Rationalists often fail to compartmentalize, even when it would be highly useful.

Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot, also just knowing about a bias is not enough to unbias you)

Rationalists don't even lift bro.

Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)

Rationalists often presume that others are being stupidly irrational, when really the other people just have significantly different values, and/or operate largely in domains where there aren't strong reinforcement mechanisms for systematic thought, or are stuck in a local maximum in an area where crossing the chasm to a better one would be very costly.


Comment author: James_Miller 11 July 2016 09:46:52PM *  3 points [-]

How does being nervous influence your ability stats? Being nervous improves my mental abilities (I usually did better on standardized tests than I did on practice ones and I can tell that my recall is much better when I'm nervous), but I get clumsier and less articulate. Interestingly, when I'm nervous I come across as being far less intelligent than I normally do, even though the reverse is true.

Comment author: Unnamed 13 July 2016 09:08:19PM 0 points [-]

See: Yerkes-Dodson law and research on "optimal level of arousal".

Comment author: gjm 10 March 2016 12:43:26PM *  14 points [-]

Ignoring psychology and just looking at the results:

  1. Delta-function prior at p=1/2 -- i.e., completely ignore the first two games and assume they're equally matched. Lee Sedol wins 12.5% of the time.

  2. Laplace's law of succession gives a point estimate of 1/4 for Lee Sedol's win probability now. That means Lee Sedol wins about 1.6% of the time. [EDITED to add:] Er, no, actually if you're using the rule of succession you should apply it afresh after each game, and then the result is the same as with a uniform prior on [0,1] as in #3 below. Thanks to Unnamed for catching my error.

  3. Uniform-on-[0,1] prior for Lee Sedol's win probability means posterior density is f(p)=3(1-p)^2, which means he wins the match exactly 5% of the time.

  4. I think most people expected it to be pretty close. Take a prior density f(p)=6p(1-p), which favours middling probabilities but not too outrageously; then he wins the match about 7.1% of the time.

So ~5% seems reasonable without bringing psychological factors into it.
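For anyone who wants to check the four numbers above, here's a minimal sketch in Python (the function names are my own, not from the comment). With a Beta(a, b) prior on Lee Sedol's per-game win probability, two observed losses give a Beta(a, b+2) posterior, and his chance of winning three straight games is E[p^3] under that posterior:

```python
from math import factorial

def beta_fn(a, b):
    # Beta function for positive integer arguments: B(a,b) = (a-1)!(b-1)!/(a+b-1)!
    return factorial(a - 1) * factorial(b - 1) / factorial(a + b - 1)

def p_match_win(prior_a, prior_b, losses=2, wins_needed=3):
    """Chance of winning `wins_needed` straight games after observing
    `losses` losses, starting from a Beta(prior_a, prior_b) prior on the
    per-game win rate. The posterior is Beta(prior_a, prior_b + losses),
    and E[p^wins_needed] = B(a + wins_needed, b) / B(a, b) under it."""
    a, b = prior_a, prior_b + losses
    return beta_fn(a + wins_needed, b) / beta_fn(a, b)

# 1. Point mass at p = 1/2: ignore the first two games entirely.
print(0.5 ** 3)           # 0.125

# 2. Fixed point estimate p = 1/4 (rule of succession, not re-updated).
print(0.25 ** 3)          # ~0.0156

# 3. Uniform prior = Beta(1,1): posterior density 3(1-p)^2.
print(p_match_win(1, 1))  # exactly 1/20, i.e. ~0.05

# 4. Beta(2,2) prior, density 6p(1-p), favouring middling probabilities.
print(p_match_win(2, 2))  # 1/14, i.e. ~0.0714
```

Averaging p^3 over the posterior (rather than cubing a point estimate) is what separates cases 3 and 4 from case 2.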

Comment author: Unnamed 10 March 2016 11:41:04PM 7 points [-]

Laplace's law of succession gives Lee Sedol a 5% chance of winning the match (and AlphaGo a 50% chance of a 5-0 sweep). It gives him a 1/4 chance of winning game 3, a 2/5 chance of winning game 4 conditional on winning game 3, and a 1/2 chance of winning game 5 conditional on winning games 3&4. It's important to keep updating the probability after each game, because 1/4 is just a point estimate for a distribution of true win probabilities and the cases where he wins game 3 tend to come from the part of the distribution where his true win probability is larger than 1/4. It is not a coincidence that Laplace's law (with updating) gives the same result as #3 - Laplace's law can be derived from assuming a uniform prior.
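That sequential updating can be sketched in a few lines of Python (exact arithmetic via the standard `fractions` module; the variable names are mine):

```python
from fractions import Fraction

def laplace_next(wins, games):
    # Rule of succession: P(win next game) = (wins + 1) / (games + 2)
    return Fraction(wins + 1, games + 2)

# Lee Sedol is down 0-2; to win the match he must take games 3, 4 and 5,
# re-applying the rule after each hypothetical win.
p = Fraction(1)
wins, games = 0, 2
for _ in range(3):
    p *= laplace_next(wins, games)
    wins += 1
    games += 1
print(p)  # 1/4 * 2/5 * 1/2 = 1/20, i.e. 5%

# AlphaGo's chance of a 5-0 sweep, updating the same way from 2-0.
q = Fraction(1)
wins, games = 2, 2
for _ in range(3):
    q *= laplace_next(wins, games)
    wins += 1
    games += 1
print(q)  # 3/4 * 4/5 * 5/6 = 1/2
```

Multiplying the re-updated conditional probabilities reproduces the uniform-prior answer from #3 exactly, as the comment says.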

Comment author: Unnamed 21 February 2016 06:47:17AM *  10 points [-]

Coincidentally, Scott Alexander just wrote a post with nonfiction writing advice which includes:

9. Use strong concept handles

The idea of concept-handles is itself a concept-handle; it means a catchy phrase that sums up a complex topic.

Eliezer Yudkowsky is really good at this. “belief in belief”, “semantic stopsigns”, “applause lights”, “Pascal’s mugging”, “adaptation-executors vs. fitness-maximizers”, “reversed stupidity vs. intelligence”, “joy in the merely real” – all of these are interesting ideas, but more important they’re interesting ideas with short catchy names that everybody knows, so we can talk about them easily.

I have very consciously tried to emulate that when talking about ideas like trivial inconveniences, meta-contrarianism, toxoplasma, and Moloch.

I would go even further and say that this is one of the most important things a blog like this can do. I’m not too likely to discover some entirely new social phenomenon that nobody’s ever thought about before. But there are a lot of things people have vague nebulous ideas about that they can’t quite put into words. Changing those into crystal-clear ideas they can manipulate and discuss with others is a big deal.

If you figure out something interesting and very briefly cram it into somebody else’s head, don’t waste that! Give it a nice concept-handle so that they’ll remember it and be able to use it to solve other problems!

I'll add that memorable, idea-crystallizing labels can also be useful for your own thinking, even if you only use them in your own head. Instead of thinking "I'm doing that thing, I should do that other thing instead" or "I'm doing that thing where [20-word description], better switch to [12-word description]" you tell yourself (e.g.) "That feels like doublethink, time to singlethink."

Comment author: Viliam 04 January 2016 03:59:29PM 23 points [-]

Lessons from teaching a neural network...

Grandma teaches our baby that a pink toy cat is "meow".
Baby calls the pink cat "meow".
Parents celebrate. (It's her first word!)

Later Barbara notices that the baby also calls another pink toy non-cat "meow".
The celebration stops; the parents are concerned.
Viliam: "We need to teach her that this other pink toy is... uhm... actually, what is this thing? Is that a pig or a pink bear or what? I have no idea. Why do people create such horribly unrealistic toys for the innocent little children?"
Barbara shrugs.
Viliam: "I guess if we don't know, it's okay if the baby doesn't know either. The toys are kinda similar. Let's ignore this, so we neither correct her nor reward her for calling this toy 'meow'."

Barbara: "I noticed that the baby also calls the pink fish 'meow'."
Viliam: "Okay... I think now the problem is obvious... and so is the solution."
Viliam brings a white toy cat and teaches the baby that this toy is also "meow".
Baby initially seems incredulous, but gradually accepts.

A week later, the baby calls every toy and grandma "meow".

Comment author: Unnamed 05 January 2016 09:44:37AM 0 points [-]

Sounds like she hasn't learned shape bias yet.

Comment author: TimMartin 16 December 2015 12:32:31AM 0 points [-]

Random question - was anything ever done with data from the November 2013 participants? (That's me.)

Comment author: Unnamed 16 December 2015 03:16:21PM 1 point [-]

Unfortunately not. We made a bunch of changes to the survey right after your workshop, so we ended up with only a tiny dataset with your workshop's version of the survey. (More here.)
