Comment author: BlueSun 20 August 2013 01:59:38PM 13 points [-]

would you seriously, given the choice by Alpha, the Alien superintelligence that always carries out its threats, give up all your work and horribly torture some innocent person, all day for fifty years, in the face of the threat of 3^^^3 sentient beings being barely inconvenienced by insignificant dust specks? Or be tortured for fifty years yourself to avoid the dust specks?

Likewise, if you were faced with your Option 1 (save 400 lives) or Option 2 (save 500 lives with 90% probability), would you seriously take Option 2 if your loved ones were included in the 400? I wouldn't. Faced with statistical people, I'd take Option 2 every time. But make Option 1 "save 3 lives, and those three lives are your kids" versus Option 2 "save 500 statistical lives with 90% probability," and I don't think I'd hesitate to pick my kids.

In some sense, I'm already doing that. For the cost of raising three kids, I could have saved something like 250 statistical lives. So I don't know that our unwillingness to torture a loved one is a good argument against the math of the dust specks.

Comment author: Emile 14 August 2013 12:17:59PM 1 point [-]

Pretty interesting! You can't play real tit-for-tat, since you don't know the history of those you're playing against ... so you have to rely on reputation ... and so you have to be careful about your reputation, since it's pretty likely that nobody will cooperate with those of low reputation...

I'll probably submit something, will anybody else?

Comment author: BlueSun 14 August 2013 06:05:41PM 0 points [-]

Can players coordinate strategies? There's an advantage if two or more submitters can identify themselves (in game) and cooperate.

Comment author: BlueSun 14 August 2013 03:44:30PM 1 point [-]

If there is an 'm' reward, don't you get the same reward whether or not you choose to hunt? I'm confused how this adds incentive to hunt when your goal is to "get more food than other players," not "get food."

Comment author: BlueSun 06 August 2013 03:38:34PM 2 points [-]

Are the sequences still going to be made into a publishable book? If so, how is that process coming along?

Comment author: BlueSun 05 August 2013 05:29:46PM 9 points [-]

Is there a thread somewhere about effective ways to plant the 'rationalist seed' in your children? I'd like to see something other than anecdotes ideally. But just ideas about books to read, shows to watch, or places to visit for different ages of children would be useful to me. For example,

My 2 and 4 year old both love Introductory Calculus For Infants

And a couple of years ago I got the Star Wars ABC, which led to a HUGE love of Star Wars. I'm hoping that turns into a love of Science Fiction...

Comment author: Eliezer_Yudkowsky 02 August 2013 09:01:03PM 23 points [-]

One who possesses a maximum-entropy prior is further from the truth than one who possesses an inductive prior riddled with many specific falsehoods and errors. Or more to the point, someone who endorses knowing nothing as a desirable state for fear of accepting falsehoods is further from the truth than somebody who believes many things, some of them false, but tries to pay attention and go on learning.

Comment author: BlueSun 05 August 2013 05:03:07PM 2 points [-]

Maybe it's just where my mind was when I read it but I interpreted the quote as meaning something more like:

"It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence."

Comment author: BlueSun 29 July 2013 09:28:11PM *  0 points [-]

Tim Harford has some relevant comments:

What does that tell us – that prisoners take care of each other? Or that they fear reprisals?

Probably not reprisals: they were promised anonymity. It’s really not clear what this result tells us. We knew already that people often co-operate, contradicting the theoretical prediction. We also know, for instance, that economics students co-operate more rarely than non-economists – perhaps because they’ve been socialised to be selfish people, or perhaps because they just understand the dilemma better.

You think the prisoners just didn’t understand the nature of the dilemma?

That’s possible. The students were much better educated and most students had played laboratory games before. Maybe the prisoners co-operated because they were too confused to betray each other.

That seems speculative.

It is speculative, but consider this: the researchers also looked at a variant game in which one player has to decide whether to stay silent or confess, and then the other player decides how to respond. If you play first in this game you would be well-advised to stay silent, because people typically reward you for that. In this sequential game, it was the students, not the prisoners, who were more likely to co-operate with each other by staying silent. So the students were just as co-operative as prisoners but their choice of when to co-operate with each other made more logical sense.

Comment author: DanArmak 26 July 2013 05:06:14PM 5 points [-]

That's the rebuttal I thought about too. In particular, the heuristic "if someone is vocal against gays, they are likely to be gay" (whether or not it's true) may arise in practice from the heuristic "if someone is vocal about gays, whether for or against, they are likely to be gay".

Comment author: BlueSun 29 July 2013 03:47:29PM 0 points [-]

This is what I was trying to avoid with my asterisk, i.e., just talking about stealing candy does raise the probability they stole the candy. But once they're talking, confessing raises the probability they did it, so not confessing should lower it.

On reflection, since my original question was meant to help make situations clearer, using an example that I felt I had to asterisk probably wasn't wise.

Comment author: BlueSun 25 July 2013 04:24:36PM 5 points [-]

How would I update my probabilities if I saw the opposite piece of evidence? What I'm trying to get at here is that "A" and "not A" can't both be evidence for the same thing. And often it's more obvious which way "not A" is pointing. A couple of examples:

I saw someone suggesting that maybe a certain Mr. Far Wright was secretly gay because, when the subject was broached, he had publicly expressed his dislike of homosexuality. There was even a wiki page (that I now can't find) laying out the "law" that the more a person sounds like they hate gays, the more likely they are to be gay. At first this sounded appealing*, but then I applied the "not A" test: "if Mr. Far Wright's sexual orientation is unknown and I heard him publicly declare that he loved homosexual behavior, how would I update the probability that he is gay?" In that case, it seems clear that I'd update it towards him being gay. Therefore, it doesn't really make sense that when Mr. Wright does the opposite—publicly declaring that he hates homosexual behavior—I also update towards him being gay.

Or another recent example I had from talking with someone about Mormonism. Someone said that not having the golden plates available for inspection wasn't really evidence against Joseph Smith's story because there were several good reasons why they weren't available. I was about to concede when I realized that a world where the golden plates were observable would be strong evidence for Joseph Smith's story, so a world where they aren't has to be at least weak evidence against his story. If A moves the probability quite a bit one way, not A has to at least minimally move the probability the other way.
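As a sketch, the "not A" test is just conservation of expected evidence: the prior must equal the expectation of the posterior, so if observing A raises the probability of a hypothesis, observing not-A must lower it. All the numbers below are invented purely for illustration.

```python
# Conservation of expected evidence: if observing E raises P(H),
# then observing not-E must lower it. Numbers are illustrative only.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: returns P(H | E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.10                # P(story true), made up
p_obs_if_true = 0.80        # P(plates observable | story true), made up
p_obs_if_false = 0.01       # P(plates observable | story false), made up

p_true_if_observable = posterior(prior, p_obs_if_true, p_obs_if_false)
p_true_if_not = posterior(prior, 1 - p_obs_if_true, 1 - p_obs_if_false)

# The two posteriors must straddle the prior...
assert p_true_if_not < prior < p_true_if_observable

# ...and the prior is exactly the probability-weighted average of them.
p_obs = p_obs_if_true * prior + p_obs_if_false * (1 - prior)
expected = p_true_if_observable * p_obs + p_true_if_not * (1 - p_obs)
assert abs(expected - prior) < 1e-12
```

With these made-up numbers, observable plates would push the probability up to about 0.90, so their absence has to pull it below 0.10 (here to about 0.02). The asymmetry in the sizes of the two updates is fine; what's impossible is both observations pushing the same way.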

*Sometimes, if all I can observe is a denial, it is evidence that the person is guilty. For example, if I walked through the door and the first thing I heard was my toddler denying to my wife that he took the candy, it increases my probability that he did take the candy. But to my wife—who already has the evidence that led her to make the accusation—a denial is evidence against him taking the candy (it increases the relative odds that his brother did it instead).
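The asterisk can be made precise: the bystander and the wife condition on different information, so the same denial moves them in opposite directions. A minimal sketch, with all probabilities invented for illustration:

```python
# Same denial, two observers, opposite updates. Illustrative numbers only.

p_guilty = 0.30              # prior that the toddler took the candy
p_accuse_if_guilty = 0.60    # wife accuses based on her own evidence
p_accuse_if_innocent = 0.10
p_deny_if_guilty = 0.70      # a guilty toddler sometimes confesses
p_deny_if_innocent = 1.00    # an innocent toddler always denies

# Joint probabilities of (accusation happened AND toddler denied)
j_guilty = p_guilty * p_accuse_if_guilty * p_deny_if_guilty
j_innocent = (1 - p_guilty) * p_accuse_if_innocent * p_deny_if_innocent

# Bystander walking in: overhearing a denial also reveals that an
# accusation happened at all, which is itself evidence of guilt,
# so the denial raises the probability.
p_bystander = j_guilty / (j_guilty + j_innocent)
assert p_bystander > p_guilty

# Wife: she already knows there was an accusation. Relative to that
# baseline, a denial (rather than a confession) lowers the probability.
p_after_accusation = (p_guilty * p_accuse_if_guilty) / (
    p_guilty * p_accuse_if_guilty
    + (1 - p_guilty) * p_accuse_if_innocent
)
assert p_bystander < p_after_accusation
```

The point is that "a denial" is not the same evidence for both observers: for the bystander it bundles in the fact that an accusation occurred, while the wife has already updated on that.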

Did I keep all of my reasoning here correct? If not, there might be a better way to express the idea with a Bayesian network.

Comment author: BlueSun 25 July 2013 03:42:11PM 6 points [-]

Some variation of “What is the other person’s actual objective?” Or “Why did they do that?” or “What are they actually asking me?”

I started this habit in chess where it’s always useful to ask ‘why did my opponent make their last move?’ (and then see if there are answers past the obvious one). But I’ve also found it useful in other areas. Several times at work I've gone through iterations of something with someone because I answered exactly what they said instead of what they actually wanted. I now try to stop and ask them what their actual purpose is and it often saves me a bit of work.
