jsalvatier comments on Firewalling the Optimal from the Rational - Less Wrong

86 Post author: Eliezer_Yudkowsky 08 October 2012 08:01AM


Comment author: Wei_Dai 07 October 2012 06:32:22AM 16 points

(a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they're disagreeing with the general consensus.

Do you think Dmytry might be a good case study for this? I thought he had some interesting and novel ideas about processes/algorithms that at least didn't seem obviously wrong as well as some technical understanding of things like Solomonoff Induction, and also had strong disagreements with many of us regarding FAI and AI Risk. Should we have "extended our hands" to him more (at least before he became increasingly trollish), and if so how? (How would you taboo "extend hands" generally and in this specific instance?) If not, do you have someone else in mind who could serve as a concrete example?

Comment author: jsalvatier 08 October 2012 03:33:31PM 1 point

It's my impression that yes, more hand-extension would have been good, but I didn't follow his threads that closely.

Comment author: John_Maxwell_IV 09 October 2012 02:57:00AM 4 points

I wonder if the trivial inconvenience of him not being that great of a communicator might have put people off from following his threads.

Comment author: Eliezer_Yudkowsky 09 October 2012 07:39:48PM 3 points

Does somebody want to post one part of Dmytry that seems new and true? My impression on a quick skim was not favorable.

Comment author: CarlShulman 10 October 2012 03:09:01AM 3 points

This comment, on a drawback of donating primarily to the charities you think are best: lest you make it profitable to invest in being (or appearing) better by your standards, various empirical parameters (availability of honest signals, your ability to distinguish different signals, the quantity of funds allocated by decision rules like yours, the costs of dishonest signals) must fall in a narrow region. I am skeptical that this is a real issue in practice (e.g. GiveWell channels funds to a top charity rather than diversifying), separate from the problem of assessing evidence (which is normally focused on finding signals that are costly to fake in any case), but it's still an interesting theoretical point which I hadn't seen made on Less Wrong before.