private_messaging comments on How many people here agree with Holden? [Actually, who agrees with Holden?] - Less Wrong

Post author: private_messaging 14 May 2012 11:44AM  4 points


Comment author: private_messaging 14 May 2012 12:41:01PM *  3 points

Well, maybe it is poorly worded; I'd also like to know who here thinks that Holden is essentially correct.

What probability would you give to Holden being essentially correct? Why?

Comment author: TheOtherDave 14 May 2012 12:57:31PM 5 points

I'm going to read between the lines a little, and assume that "Holden is essentially correct" here means roughly that donating money to SI doesn't significantly reduce human existential risk. (Holden says a lot of stuff, some of which I agree with more than others.) I'm >.9 confident that's true. Holden's post hasn't significantly altered my confidence of that.

Why do you want to know?

Comment author: private_messaging 14 May 2012 01:06:55PM *  4 points

Well, he estimated the expected effect on risk as an insignificant increase in risk. That, to me, is the strong point; the 'does not reduce' framing is a weaker version, prone to eliciting a Pascal's-wager-type response.

Comment author: TheOtherDave 14 May 2012 02:14:07PM *  4 points

I am >.9 confident that donating money to SI doesn't significantly increase human existential risk.

(Edit: Which, on second read, I guess means I agree with Holden as you summarize him here. At least, the difference between "A doesn't significantly affect B" and "A insignificantly affects B" seems like a difference I ought not care about.)

I also think Pascal's Wager type arguments are silly. More precisely, given how unreliable human intuition is when dealing with very low probabilities and very large utilities/disutilities, I think lines of reasoning that rely on human intuitions about very large, very-low-probability utility shifts are unlikely to be truth-preserving.
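(A toy illustration, not from the original thread, of why such reasoning is fragile: with a made-up astronomically large payoff, the expected-utility verdict flips on a probability difference far too small for human intuition to resolve. Every number below is invented for the sketch.)

```python
# Toy numbers, purely illustrative: when the payoff is astronomical, the
# act/don't-act verdict hinges on probability differences no gut feeling
# can distinguish.

def expected_utility(p_payoff, payoff, cost):
    """Expected utility of an action that costs `cost` and pays off
    `payoff` with probability `p_payoff`."""
    return p_payoff * payoff - cost

PAYOFF = 1e15  # hypothetical "astronomical" utility
COST = 1.0     # hypothetical unit cost of acting

# Two probability estimates a human could not tell apart by intuition,
# yet the expected-utility verdict flips between them:
for p in (1e-16, 1e-14):
    eu = expected_utility(p, PAYOFF, COST)
    print(f"p = {p:.0e}: EU = {eu:+.2f} -> {'act' if eu > 0 else 'do not act'}")
```

The sketch prints "do not act" at p = 1e-16 and "act" at p = 1e-14; no human intuition reliably separates those two probabilities, which is the sense in which such arguments fail to be truth-preserving.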

Why do you want to know?

Comment author: Luke_A_Somers 14 May 2012 02:00:46PM 1 point

On that point, I'm pretty sure that SI would not rush that way. Consider the parable of the dragon: it isn't the story of someone who's willing to cut corners, but of someone who accepts that delays for checking, even delays that cause people to die, are necessary.

Plus, if they develop a clear enough architecture that one can query what the AI is thinking, then one could spot potential future failures during testing, without those contingencies ever actually having to occur. That will be one of the keys, I think: make the AI's reasons something we can follow, even if we couldn't generate those arguments ourselves on a reasonable time-frame.
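(A minimal sketch of what "query what the AI is thinking" could mean mechanically. This is purely illustrative; every class and method name here is invented for the example, not taken from any SI design. The idea: the agent logs a human-followable justification with every decision, so testers can probe hypothetical failures in simulation instead of waiting for the contingency to occur.)

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    reasons: list[str]  # a human-followable chain of justifications

@dataclass
class TransparentAgent:
    trace: list[Decision] = field(default_factory=list)

    def decide(self, situation: str) -> Decision:
        # Stand-in decision rule; the point of the sketch is that every
        # decision is logged with its reasons, not how it is computed.
        reasons = [
            f"observed: {situation}",
            "constraint: defer when outcome confidence is low",
        ]
        decision = Decision(action="defer to human review", reasons=reasons)
        self.trace.append(decision)
        return decision

    def explain(self) -> list[Decision]:
        """Answer 'what were you thinking?' during testing, pre-deployment."""
        return list(self.trace)

agent = TransparentAgent()
agent.decide("simulated power-grid fault")
for d in agent.explain():
    print(d.action, "<-", "; ".join(d.reasons))
```

The design choice the sketch gestures at is that the transparency lives in the trace rather than in the decision rule: whatever replaces the stand-in rule, testers keep the same explain() handle on the system's reasoning.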