XiXiDu comments on Other Existential Risks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
More specifically, my point regarding other people's beliefs was that there are people who know about the topic of superhuman AI and the related risks but, judging by their weak or nonexistent campaigns to prevent those risks, came to different conclusions.
Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder
In the case of AI researchers like Marvin Minsky, amongst others, it is reasonable to infer knowledge of the possible risks from their overall familiarity with the topic.
I disagree based on the following evidence:
Hypothesis based on shaky conclusions, not on previous evidence.
I actually feel embarrassed just from reading that.
You keep posting screenshots from the deleted Roko post, with the "forbidden" parts blacked out. I agree that the whole matter could have been handled much better, but I don't see how it or the other quoted line bears on the interpretation of the sentence quoted at the top of jimmy's post. Also, people have asked you several times to stop reminding them of the deleted post, and the need for quotes proving that EY thinks highly of his intelligence can be satisfied without doing that. Seriously, they're everywhere.
See the edit to the original comment.