I like the argument that true AGI should take massive resources to build, and that people with massive resources are often unfriendly, even if they don't know it.

The desired case of FOOM is a Friendly AI, built using deep insight, so that the AI never makes any changes to itself that potentially change its internal values; all such changes are guaranteed using strong techniques that allow for a billion sequential self-modifications without losing the guarantee. The guarantee is written over the AI's internal search criterion for actions, rather than external consequences.

This is blather. A self-modifying machine that fooms, yet has limits on how it can modify itself? A superintelligent machine that can't get around human-made constraints?

You can't predict the future, except you can predict it won't happen the way you predict it will.