David3 comments on My Bayesian Enlightenment - Less Wrong

Post author: Eliezer_Yudkowsky 05 October 2008 04:45PM


Comment author: David3 06 October 2008 10:25:22AM

Eliezer,

Have you considered in detail the idea of AGI throttling? That is, given a metric of intelligence, and assuming a correlation between existential risk and that intelligence, AGI throttling would be the explicit control of the AGI's intelligence level (or optimization power, if you like), which would indirectly also bound existential risk.

In other words, what methods, if any, are there for bounding an AGI's intelligence level? Is it possible to build an AGI and explicitly cap it at human level?
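
To make the throttling notion concrete, here is a purely illustrative toy sketch of my own (not anything proposed in the post): a hill-climbing optimizer whose optimization power is bounded both by a hard cap on search steps and by a hypothetical capability metric that halts search once a threshold is reached. The names capability_metric and throttled_optimize, and the metric itself, are invented stand-ins for illustration only.

    # Illustrative sketch of "throttling": a hill-climber bounded by a hard
    # step budget and by a (hypothetical) capability metric that halts the
    # loop once a capability threshold is reached.
    import random

    def capability_metric(candidate):
        # Hypothetical stand-in for "a metric of intelligence": here, just
        # the negative distance of a guess from a fixed target value.
        target = 42.0
        return -abs(candidate - target)

    def throttled_optimize(max_steps=1000, capability_cap=-0.5):
        """Hill-climb, but stop when either the step budget or the
        capability cap is hit -- the 'throttle' on optimization power."""
        best = random.uniform(-100, 100)
        for _ in range(max_steps):               # hard bound on search effort
            if capability_metric(best) >= capability_cap:
                break                            # hard bound on measured capability
            candidate = best + random.gauss(0, 1.0)
            if capability_metric(candidate) > capability_metric(best):
                best = candidate
        return best, capability_metric(best)

    if __name__ == "__main__":
        solution, score = throttled_optimize()
        print(f"solution={solution:.2f}, capability={score:.3f}")

Of course, the hard part of the question is whether any such cap could be made meaningful for a real AGI, where the "capability metric" is not a toy scalar and the system may have incentives to route around the throttle.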