wedrifid comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky 11 May 2012 04:31AM (256 points)


Comment author: wedrifid 26 May 2012 03:03:34AM 3 points

Absent a theory of mind, how would it occur to the AI that those would be profitable things to do?

Should the lack of a theory of mind here also be taken to imply a lack of ability to apply either knowledge of physics or Bayesian inference to lumps of matter that we may describe as 'minds'?

Comment author: Strange7 26 May 2012 05:09:27AM 0 points

Yes. More generally, when talking about "lack of X" as a design constraint, "inability to trivially create X from scratch" is assumed.

Comment author: wedrifid 26 May 2012 05:26:28AM 0 points

Yes. More generally, when talking about "lack of X" as a design constraint, "inability to trivially create X from scratch" is assumed.

I try not to make general assumptions that would render the entire counterfactual in question untenable or ridiculous; this verges on such an instance. Making Bayesian inferences about observable features of the environment is one of the most basic capabilities we can expect of a functioning agent.
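
The claim is that ordinary Bayesian updating over observations is enough to start modelling a human as just another physical process, without any built-in theory of mind. A minimal sketch of that kind of inference in Python; the hypothesis labels, probabilities, and the "request granted" framing are illustrative assumptions, not anything stated in the thread:

```python
# Sketch: an observer with no theory of mind maintains hypotheses about how an
# observed system (a human) responds to requests, and updates them with Bayes'
# rule from behaviour alone. Labels and numbers are illustrative assumptions.

hypotheses = {
    "complies_often": 0.8,   # P(request granted | this hypothesis)
    "complies_rarely": 0.2,
}

# Start from a uniform prior over the hypotheses.
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def update(posterior, granted):
    """Bayesian update after observing one interaction (granted: bool)."""
    unnormalized = {}
    for h, p_grant in hypotheses.items():
        likelihood = p_grant if granted else 1.0 - p_grant
        unnormalized[h] = posterior[h] * likelihood
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Observe three interactions: granted, granted, refused.
for outcome in (True, True, False):
    posterior = update(posterior, outcome)

print(posterior)  # probability mass shifts toward "complies_often"
```

Nothing in this loop requires the observer to represent beliefs or intentions; it only needs observable behaviour and a hypothesis space, which is the sense in which such inference is "basic."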

Comment author: Strange7 26 May 2012 05:41:22AM 0 points

Note the "trivially." An AI with unlimited computational resources and the ability to run experiments could eventually figure out how humans think. The question is how long it would take, how obvious the experiments would be, and how much it already knew.