Comment author: robertskmiles 12 October 2012 05:07:29PM 2 points [-]

A few minor clarity/readability points:

  1. The second paragraph opening "The statisticians who discovered the nature of reality" reads rather oddly when taken out of the context of "The Fabric of Real Things".
  2. When considering the three causal models of Burglars, Alarms and Recessions, tackling the models in a "First, third, second" order threw me on first reading. It would probably be easier to follow if the text and the diagram used the same order.
  3. Perhaps giving each node a different pastel colour would make it easier to follow what is changing between different diagrams.

And this has probably been said, but using exercise and weight is probably distracting, since people already have opinions on the issue.

All in all though, a great article.

Comment author: chaosmosis 06 October 2012 10:48:43PM 3 points [-]

I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.

How does one increase their curiosity levels?

Comment author: robertskmiles 08 October 2012 11:59:42AM *  4 points [-]

There's a post about this.

@Eliezer Perhaps it's worth making "try to increase them" a link to lukeprog's "Get Curious" article?

Comment author: Wei_Dai 07 October 2012 12:16:04AM 9 points [-]

I use "I think X" to indicate more uncertainty than just "X" all the time, and so does Eliezer. I just checked his recent comments, and on the first page, he used "I think" two times to indicate uncertainty, and one time to indicate that others may not share his belief. The statement "Consider the following deflations, all of which convey essentially the same information about your own opinions" just seems plain wrong.

Comment author: robertskmiles 08 October 2012 11:57:42AM *  3 points [-]

Agreed. The use of "I think" relies on its connotations, which are different from its denotation. When you say "I think X", you're not actually expressing the same sentiment as a direct literal reading of the text suggests.

Comment author: simplicio 21 September 2012 11:22:38PM 16 points [-]

One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person who replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).

I think the response of the passerby is quite reasonable, actually. Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (b) every time, especially considering that I make similar mistakes all the time (confusing people with each other immediately after having encountered them). I know that this is probably not a phenomenon that occurs at the conscious level, but we should expect the unconscious level to be even more automatic.

Comment author: robertskmiles 25 September 2012 06:07:42PM *  5 points [-]

A rational prior for "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions" would be very small indeed (that naturally doesn't happen, and psych experiments are rare). A rational prior for "I just had a brain fart" would be much bigger, since that sort of thing happens much more often. So at the end, a good Bayesian would assign a high probability to "I just had a brain fart", and also a high probability to "This is the same person" (though not as high as it would be without the brain fart).
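The update described above can be sketched numerically. Every probability below is an invented illustration, not data from the study; the point is only that even after a strong mismatch signal, "same person plus brain fart" dominates "spontaneous replacement" because the priors differ by orders of magnitude.

```python
# Toy Bayesian update for the door study. All numbers are made up
# for illustration; only their relative sizes matter.

# Prior probabilities for what's going on when a face seems "off":
priors = {
    "swapped":    1e-6,   # people are almost never spontaneously replaced
    "brain_fart": 1e-2,   # misremembering a stranger's face is common
}
priors["same_and_remembered"] = 1 - sum(priors.values())

# Likelihood of a perceived mismatch under each hypothesis:
likelihoods = {
    "swapped":             0.9,
    "brain_fart":          0.9,
    "same_and_remembered": 0.01,
}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

# "This is the same person" covers every hypothesis except "swapped":
p_same = 1 - posteriors["swapped"]
```

With these numbers the posterior on "brain fart" ends up far larger than the posterior on "swapped", and the probability that it's the same person stays very high, just as the comment argues.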

The problem is that the conscious mind never gets the "I just had a brain fart" belief. The error is unconsciously detected and corrected but not reported at all, so the person doesn't even get the "huh, that feels a little off" feeling which is in many cases the screaming alarm bell of unconscious error detection. Rationalists can learn to catch that feeling and examine their beliefs or gather more data, but without it I can't think of a way to beat this effect at all, short of paying close attention to all details at all times.

Comment author: Eliezer_Yudkowsky 12 September 2012 01:43:50PM 16 points [-]

The criticism is that a martial artist or scientist is actually trying to attain a highly specific brain-state in which neurons have particular patterns in them; a feeling of emptiness, even if part of this brain state, is itself a neural pattern and certainly does not correspond to the absence of a mind.

The zeroth virtue or void - insofar as we believe in it - corresponds to a particular mode of thinking; it's certainly not an absence of mind. Emptiness, no-mind, the Void of Musashi, all these things are modes of thinking, not the absence of any sort of reified spiritual substance. See also the fallacy of the ideal ghost of perfect emptiness in philosophy.

Comment author: robertskmiles 18 September 2012 06:09:02PM 1 point [-]

Cf. Mushin

In response to comment by mbh007 on 9/26 is Petrov Day
Comment author: Jimmc 29 September 2010 07:39:08PM 6 points [-]

This is actually not that relevant. The only question is whether humanity could survive a Nuclear Winter...

In response to comment by Jimmc on 9/26 is Petrov Day
Comment author: robertskmiles 13 September 2012 10:27:07PM *  9 points [-]

He didn't say "Wipe out humanity", he said "destroy the world". I'd say a global thermonuclear conflict would do enough damage to call the world destroyed, even if humanity wasn't utterly and irrevocably annihilated.

If I smashed your phone against a wall, you'd say I'd destroyed it, even if it could in principle be repaired.

Comment author: ChristianKl 19 November 2010 04:45:14PM 4 points [-]

From my own experience of waking after days in an artificial coma, I find it unlikely that the person would go through good Bayesian reasoning.

It's very hard to form the belief that it's Thursday when the last day you remember is a Sunday. It goes against fundamental principles that evolution taught us over millions of years. It's one of those cases where our availability heuristic is really bad.

PS: I know this is only a thought experiment.

Comment author: robertskmiles 19 August 2012 12:37:31AM 5 points [-]

The ancestral environment didn't contain a lot of artificial comas, but to be fair it didn't contain many named week days either.

Comment author: [deleted] 07 August 2012 07:10:30AM 0 points [-]

I found this post very disturbing, so I thought for a bit about why. It reads very much like some kind of SF dystopia, and indeed if it were necessary to agree to this lottery to be part of the hypothetical rationalist community/country, then I wouldn't wish to be a part of it. One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same). No government should have the right to compel its citizens to become soldiers, and that's what it would become, after the first generation, unless you're going to choose to exile anyone who reaches adulthood there and then opts out.

Offering financial incentives for becoming a soldier, as has already been discussed in the comments, seems a fairer idea. Consider also that the more objectively evil the Evil Barbarians are, the more people will independently decide that fighting is the better decision. If not enough people support your war, maybe that in itself is a sign that it's not a good idea. If most of the rationalists would rather lose than fight, that tells you something.

It's quite difficult to know the right tone of response to take here - the Evil Barbarians are obviously pure thought-experiment, but presumably most of us would view a rationalist country as a good thing. Not if it made decisions like this, though. Sacrificing the individual for the collective isn't always irrational, but it needs to be the individual who makes that choice based on his or her own values, not due to some perceived social contract. Otherwise you might as well be sacrificed to make more paperclips.

If it was intended as pure metaphor, it's a disquieting one.

In response to comment by [deleted] on Bayesians vs. Barbarians
Comment author: robertskmiles 07 August 2012 09:48:18PM *  1 point [-]

One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly

A very sensible value in a heterogenous society, I think. But in this hypothetical nation, everyone is a very good rationalist. So they all, when they shut up and multiply, agree that being a soldier and winning the war is preferable to any outcome involving losing the war, and they all agree that the best thing to do as a group is to have a lottery, and so they all precommit to accepting the results.

No point in giving people the liberty to make their own individual decisions when everyone comes to the same decision anyway. Or more accurately, the society is fully respecting everyone's individual autonomy, but due to the very unlikely nature of the nation, the effect ends up being one of 100% compliance anyway.

Comment author: steven0461 23 March 2009 01:12:23PM 1 point [-]

Checking up on the guest lists for cocktail parties and customer data for salons, we find that these two activities are indeed disproportionately enjoyed by the rich, so that part of the statement also seems true enough.

P implies Q does not imply Q implies P, surely.

Comment author: robertskmiles 07 August 2012 02:30:14PM *  3 points [-]

I was bothered by this as well. The statement wasn't "cocktail parties and salons are patronised by the ultra-rich", but "the ultra-rich ... spend their time at cocktail parties and salons". So, as you say, what you need to look at is not the guest lists for cocktail parties and customer data for salons, but what proportion of a typical ultra-rich person's time is spent at cocktail parties and salons. I don't have the data, but I'd anticipate that the mean ultra-rich person spends more time managing their business concerns than attending cocktail parties.
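The direction-of-conditioning point here can be made concrete with a toy calculation (every number below is invented for illustration): guest lists estimate P(rich | at party), while the original claim is about P(at party | rich), and the two can differ wildly.

```python
# Made-up numbers: a party can be dominated by the rich even though
# the typical rich person almost never attends parties.
n_rich, n_other = 1_000, 1_000_000          # population sizes
p_party_given_rich  = 0.05                   # 5% of rich people out partying
p_party_given_other = 0.00001                # 0.001% of everyone else

rich_at_party  = n_rich  * p_party_given_rich   # expected rich attendees
other_at_party = n_other * p_party_given_other  # expected other attendees

# What the guest list shows: the share of attendees who are rich.
p_rich_given_party = rich_at_party / (rich_at_party + other_at_party)
```

Here the guest list makes parties look like a rich person's pastime (most attendees are rich), yet only a small fraction of rich people are at a party at all, which is the quantity the claim actually depends on.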

Though this is all Support That Sounds Like Dissent, since none of it really detracts from the central thrust of the post, with which I broadly agree. Still, no point leaving holes in something which is political enough for people to have a good deal of motivated scepticism about it.

Comment author: robertskmiles 04 August 2012 04:10:27PM *  17 points [-]

The difference in optimisation targets between LW and H&B researchers is an important thing to point out, and probably the main thing I'll take away from this post.

Biases can:

  • Be interesting to learn about
  • Serve an academic/political purpose to research
  • Give insight into the workings of human cognition
  • Be fun to talk about
  • Actually help to achieve your goals by understanding them

And the correlations between any 2 of these things need not be strong or positive.

Is it the halo effect if we assume that a more interesting bias will better help us achieve our goals?
