Peterdjones comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM


Comment author: Peterdjones 18 January 2013 02:07:34PM -1 points

It isn't clear whether AGI would be as powerful as SI's views imply.

Yes. There's something weird going on there. EY seems to want to constrain AI in various ways -- to be Friendly, to be Bayesian, and so on -- but how, then, is the "G" justified? Human intelligence is general enough to consider and formulate multiple theories of probability. Why should we regard something as at least as smart and at least as general as us, when we can think things it can't think?

Comment author: ArisKatsaris 18 January 2013 02:29:48PM 2 points

"Friendliness" is (the way I understand it) a constraint on the purposes and desired consequences of the AI's actions, not on what it is allowed to think. It would be able to think of non-Friendly actions, if only for the purposes of e.g. averting them when necessary.

As for Bayesianism, my guess is that even a Seed AI has to start somehow. There's no necessary constraint on it remaining Bayesian if it manages to figure out some even better theory of probability (or if it judges that a theory humans have developed is better). If an AI predicts that it would perform better by its own criteria using some different theory, it will ideally self-modify to use that theory...
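The self-modification criterion in that last sentence can be sketched as a toy decision rule. This is only an illustration, not anything from the comment: the rule names, the scoring function, and the accuracy numbers are all invented for the example.

```python
# Hypothetical sketch of "adopt a different theory if it scores better by
# the agent's own criteria". All names and numbers here are invented.

def choose_rule(current, candidates, score):
    """Return the inference rule the agent should adopt: keep the current
    rule unless some candidate scores strictly higher by `score`."""
    best = current
    for rule in candidates:
        if score(rule) > score(best):
            best = rule
    return best

# Toy usage: pretend each rule's score is its predictive accuracy.
accuracy = {"bayesian": 0.90, "alternative_theory": 0.95}
adopted = choose_rule("bayesian", ["alternative_theory"], accuracy.get)
print(adopted)  # the agent "self-modifies" to the higher-scoring rule
```

The point the sketch makes is the same as the comment's: "Bayesian" is a starting point evaluated by the agent's own criteria, not a permanent constraint.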