XiXiDu comments on Other Existential Risks - Less Wrong

32 Post author: multifoliaterose 17 August 2010 09:24PM




Comment author: jimmy 18 August 2010 10:03:25PM *  6 points [-]

EY argues: "... your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't."

and you respond by saying that there have been people smarter than Eliezer who have suffered rationality failures when working outside their domain? Isn't that kinda the point?

EY wasn't arguing "My IQ is so damn high that I just have to be right. Look at my ability to generate novel hypotheses! It clearly shows high IQ!", which would indeed be foolish. It is understood here that high innate intelligence is not the same as real-world effectiveness, which requires that one be intelligent about how one uses one's intelligence.

The object of the game here is to evaluate hypotheses that have already been generated (i.e., SIAI claims). EY was showing that there are many very smart people who can't even evaluate the MWI hypothesis when it is handed to them along with slam-dunk evidence.

If you can't even get the right answer on simple questions, how the heck are you supposed to do better on tough problems than those who see the simple problems as, well... simple?

EDIT: It seems like my point did not come off clearly. I am not arguing that it is not an appeal to authority.

I am arguing that high IQ is different from "has lots of knowledge" which is different from "knows the fundamental rules of how to weigh evidence and evaluate claims", and that Eliezer was talking about the last one.

Comment author: XiXiDu 19 August 2010 09:41:59AM *  1 point [-]

More specifically, my point regarding other people's beliefs was that there are people who know about the topic of superhuman AI and related risks but, judging by their lesser or non-existent campaigns to prevent those risks, came to different conclusions.

Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder

In the case of AI researchers like Marvin Minsky, amongst others, knowledge of the possible risks can reasonably be inferred from their overall familiarity with the topic.

EY wasn't arguing "My IQ is so damn high that I just have to be right."

I disagree based on the following evidence:

The object of the game here is to evaluate hypotheses that have already been generated (i.e., SIAI claims).

Hypotheses based on shaky conclusions, not on prior evidence.

Comment author: wedrifid 19 August 2010 12:19:46PM 3 points [-]

I disagree based on the following evidence:

I actually feel embarrassed just from reading that.

Comment author: Gabriel 19 August 2010 08:30:22PM 7 points [-]

EY wasn't arguing "My IQ is so damn high that I just have to be right."

I disagree based on the following evidence:

http://xixidu.net/lw/05.png "At present I do not know of any other person who could do that." (Reference)

You keep posting screenshots from the deleted Roko post, with the "forbidden" parts blacked out. I agree that the whole matter could have been handled much better, but I don't see how it or the other quoted line bears on the interpretation of the sentence quoted at the top of jimmy's comment. Also, people have asked you several times to stop reminding them of the deleted post, and the need for quotes proving that EY thinks highly of his own intelligence can be satisfied without doing that. Seriously, they're everywhere.

Comment author: jimmy 19 August 2010 06:09:11PM 0 points [-]

See the edit to the original comment.