Fenty comments on What I Think, If Not Why - Less Wrong

Post author: Eliezer_Yudkowsky 11 December 2008 05:41PM


You are viewing a single comment's thread.

Comment author: Fenty 12 December 2008 04:07:00AM -1 points [-]

I like the argument that true AGI should take massive resources to build, and that people commanding massive resources are often unfriendly, even when they don't know it.

The desired case of FOOM is a Friendly AI, built using deep insight, so that the AI never makes any changes to itself that potentially change its internal values; all such changes are guaranteed using strong techniques that allow for a billion sequential self-modifications without losing the guarantee. The guarantee is written over the AI's internal search criterion for actions, rather than external consequences.

This is blather. A self-modifying machine that fooms, yet obeys limits on how it can modify itself? A superintelligent machine that can't get around human-made restraints?

You can't predict the future, except you can predict it won't happen the way you predict it will.