jimmy comments on Other Existential Risks - Less Wrong

32 Post author: multifoliaterose 17 August 2010 09:24PM




Comment author: jimmy 18 August 2010 10:03:25PM *  6 points

EY argues: "... your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't."

and you respond by saying that there have been people smarter than Eliezer who have suffered rationality failures when working outside their domain? Isn't that kinda the point?

EY wasn't arguing "My IQ is so damn high that I just have to be right. Look at my ability to generate novel hypotheses! It clearly shows high IQ!", which would indeed be foolish. It is understood here that high innate intelligence is not the same as real-world effectiveness, which requires that one be intelligent about how one uses one's intelligence.

The object of the game here is to evaluate hypotheses that have already been generated (i.e., SIAI claims). EY was showing that there are many very smart people who can't even evaluate the MWI hypothesis when it is handed to them along with slam-dunk evidence.

If you can't even get the right answer on simple questions, how the heck are you supposed to do better on tough problems than those who see the simple problems as, well... simple?

EDIT: It seems like my point did not come off clearly. I am not arguing that it is not an appeal to authority.

I am arguing that high IQ is different from "has lots of knowledge" which is different from "knows the fundamental rules of how to weigh evidence and evaluate claims", and that Eliezer was talking about the last one.

Comment author: ciphergoth 18 August 2010 10:21:29PM *  5 points

More specifically, XiXiDu's whole point was "how do I evaluate this if, instead of addressing the arguments behind it, I talk about who believes it and who doesn't?" If that's the argument, it's fair enough for Eliezer to ask them to assess the rationality of the people whose opinions are being weighed.

Comment author: XiXiDu 19 August 2010 09:41:59AM *  1 point

More specifically, my point regarding other people's beliefs was that there are people who know about the topic of superhuman AI and its related risks but, judging by their lesser or non-existent efforts to prevent those risks, have come to different conclusions.

Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder

In the case of AI researchers like Marvin Minsky, among others, knowledge of the possible risks can reasonably be inferred from their overall familiarity with the topic.

EY wasn't arguing "My IQ is so damn high that I just have to be right."

I disagree based on the following evidence:

The object of the game here is to evaluate hypotheses that have already been generated (i.e., SIAI claims).

Hypotheses based on shaky conclusions, not on prior evidence.

Comment author: wedrifid 19 August 2010 12:19:46PM 3 points

I disagree based on the following evidence:

I actually feel embarrassed just from reading that.

Comment author: Gabriel 19 August 2010 08:30:22PM 7 points

EY wasn't arguing "My IQ is so damn high that I just have to be right."

I disagree based on the following evidence:

http://xixidu.net/lw/05.png "At present I do not know of any other person who could do that." (Reference)

You keep posting screenshots from Roko's deleted post, with the "forbidden" parts blacked out. I agree that the whole matter could have been handled much better, but I don't see how it or the other quoted line bears on the interpretation of the sentence quoted at the top of jimmy's post. Also, people have asked you several times to stop reminding them of the deleted post, and the need for quotes proving that EY thinks highly of his own intelligence can be satisfied without doing that. Seriously, they're everywhere.

Comment author: jimmy 19 August 2010 06:09:11PM 0 points

See the edit to the original comment.

Comment author: multifoliaterose 18 August 2010 10:34:14PM *  0 points

If only there had been detailed critical analysis of claims (1) and (2) on Less Wrong or the SIAI website, I would find your comment compelling. But given that detailed critical analysis of these significant claims has not taken place, I believe that Eliezer's remarks are in fact properly conceptualized as an appeal to authority.

Comment author: jimmy 19 August 2010 05:56:51PM 1 point

I totally agree that it's an appeal to authority. My point was that it's an appeal to a different and more relevant kind of authority.

Comment author: multifoliaterose 19 August 2010 07:15:28PM 0 points

Do you disagree with

Just as Grothendieck's algebro-geometric achievements had no bearing on his ability to conceptualize a good plan to lower existential risk, so too does Eliezer's ability to interpret quantum mechanics have no bearing on his ability to conceptualize a good plan to lower existential risk.

?

If so, why?

Comment author: jimmy 20 August 2010 05:36:07PM *  3 points

Yes, I mostly disagree.

The first part gives an example of high IQ not leading to a good existential risk plan, and the second part says that you expect high ability to weigh evidence won't lead to a good plan either.

The counterexample proves that high IQ isn't everything one needs, but overall, I'd still expect it to help. I think "no bearing" is too strong even for an IQ->IQ comparison of that sort.

If you're going to assume you've been exposed to all the plans that people have come up with, picking the right plan is more of a claim-evaluation job than a novel-hypothesis-generation job. For this, you're going to want someone who can evaluate claims like MWI easily. I think that this is sufficiently close to the case to make your comparison a poor one.

If I were going to make a comparison to make your point (to the degree to which I agree with it), I'd use a range of people with varying strengths of intellect and instead ask "do we really think EY has shown enough to succeed where most talented people fail?". I'd also try to make it clear whether I'm arguing against him having a 'majority' of the probability mass in his favor vs. having a 'plurality' of it. It's a lot easier to argue against the former, but it's the latter that matters more if you have to pick someone to give money to.

Comment author: multifoliaterose 20 August 2010 05:59:18PM 0 points

But how well does the ability to evaluate evidence connected with quantum mechanics correlate with the ability to evaluate evidence connected with existential risk?

See also the thread here