cousin_it comments on Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips - Less Wrong

7 Post author: Kevin 22 July 2010 10:03AM



Comment author: cousin_it 22 July 2010 06:27:25PM 9 points

Heresy alert: Eliezer seems to be better at writing than he is at AI theory. Maybe he should write a big piece of SF about unfriendly and friendly AI to make these concepts as popular as Skynet or the Matrix. A textbook on rationality won't have as much impact.

Comment author: Will_Newsome 22 July 2010 10:18:00PM 9 points

Or the Da Vinci Code. EMP attacks, rogue AI researchers, counterfactual terrorists, conflicts between FAI coders, sudden breakthroughs in molecular nanotechnology, SL5 decision theory insights, the Bayesian Conspiracy, the Cooperative Conspiracy, bioweapons, mad scientists trying to make utility monsters to hack CEV, governmental restrictions on AI research, quantum immortality (to be used as a plot device), and maybe even a glimpse of fun theory. Add in a gratuitous romantic interest to teach the readers about the importance of humanity and the thousand shards of desire.

Oh, and the main character is Juergen Schmidhuber. YES.

By the way, writing such a book would probably lead to the destruction of the world, which is probably a major reason why Eliezer hasn't done it.

Comment author: cousin_it 22 July 2010 10:50:19PM 3 points

Marcus Hutter and the Prophets of Singularity. Works fine as a band name, too.

Comment author: RobinZ 22 July 2010 06:40:29PM 8 points

I don't know that Eliezer Yudkowsky has spent enough time discussing AI theory in this forum for his competence at it to be obvious. But either way, the decision-theoretic math is not as simple as "do what you are best at".

Comment author: khafra 22 July 2010 06:42:59PM 5 points

It might not even be as simple as comparative advantage, but there are certainly more good writers in the world than good AI theorists.

Comment author: Vladimir_M 22 July 2010 08:53:00PM 3 points

cousin_it:

Maybe he should write a big piece of SF about unfriendly and friendly AI to make these concepts as popular as Skynet or the Matrix.

I don't think this would be a good strategy. Among the general public, including the overwhelming majority of the intelligentsia, SF associations are not exactly apt to induce intellectual respect and serious attention.