jimmy comments on Other Existential Risks - Less Wrong

32 Post author: multifoliaterose 17 August 2010 09:24PM


Comments (120)


Comment author: XiXiDu 19 August 2010 09:41:59AM *  1 point [-]

More specifically, my point regarding other people's beliefs was that there are people who know about the topic of superhuman AI and its associated risks but who, judging by their minimal or nonexistent campaigns to prevent those risks, have come to different conclusions.

Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder

In the case of AI researchers like Marvin Minsky, among others, it is reasonable to infer knowledge of the possible risks from their overall familiarity with the topic.

EY wasn't arguing "My IQ is so damn high that I just have to be right."

I disagree based on the following evidence:

The object of the game here is to evaluate hypotheses which have already been generated (i.e., SIAI claims).

Hypotheses based on shaky conclusions, not on prior evidence.

Comment author: jimmy 19 August 2010 06:09:11PM 0 points [-]

See the edit to the original comment.