shminux comments on Will AGI surprise the world? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (129)
I have as much credibility as Eliezer Yudkowsky in that regard, and for the same reason. As I mentioned in the post you replied to, my work is private and unpublished. None of my work is accessible on the internet, and that is how it should be. I consider it unethical to publish AGI research given what is at stake.
Eliezer has published a lot of relevant work; I have seen none from you.
Eliezer has publications in the field of artificial intelligence? Where?
Yudkowsky, Eliezer (2001): Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures.
Yudkowsky, Eliezer (2007): Levels of Organization in General Intelligence. In: Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin, 389–501.
Hanson, Robin; Yudkowsky, Eliezer (2013): The Hanson-Yudkowsky AI-Foom Debate.
...
Don't make me figure this stuff out and publish the safe bits just to embarrass you guys.