cousin_it comments on The Friendly AI Game - Less Wrong

Post author: bentarm 15 March 2011 04:45PM




Comment author: cousin_it 15 March 2011 07:33:25PM 7 points

I have no idea how to encode a prior saying "the universe I observe is all that exists", which is what you seem to assume. My proposed prior, which we do know how to encode, says "this mathematical structure is all that exists", assigning an a priori probability of zero to any weird properties.

Comment author: Kaj_Sotala 16 March 2011 08:53:25AM 5 points

If the AI is only used to solve certain formally specified questions without any knowledge of an external world, then that sounds much more like a theorem-prover than a strong AI. How could this proposed AI be useful for any of the tasks we'd like an AGI to solve?

Comment author: cousin_it 16 March 2011 12:31:30PM 3 points

An AI living in a simulated universe can be just as intelligent as one living in the real world. You can't ask it directly to feed African kids but you have many other options, see the discussion at Asking Precise Questions.

Comment author: Kaj_Sotala 16 March 2011 04:34:35PM 5 points

"An AI living in a simulated universe can be just as intelligent as one living in the real world."

It can be a very good theorem prover, sure. But without access to information about the world, it can't answer questions like "what is the CEV of humanity like" or "what's the best way I can make a lot of money" or "translate this book from English to Finnish so that a native speaker will consider it a good translation". It's narrow AI, even if it could be broad AI if it were given more information.

Comment author: Wei_Dai 16 March 2011 08:06:40PM 1 point

The questions you wanted to ask in that thread were for a polynomial-time algorithm for SAT and for short proofs of math theorems. For those, why do you need to instantiate an AI in a simulated universe (which allows it to potentially create what we'd consider negative utility within the simulated universe) instead of just running a (relatively simple, sure to lack consciousness) theorem prover?

Is it because you think that being "embodied" helps with ability to do math? Why? And does the reason carry through even if the AI has a prior that assigns probability 1 to a particular universe? (It seems plausible that having experience dealing with empirical uncertainty might be helpful for handling mathematical uncertainty, but that doesn't apply if you have no empirical uncertainty...)

Comment author: cousin_it 16 March 2011 09:21:04PM 3 points

An AI in a simulated universe can self-improve, which would make it more powerful than the theorem provers of today. I'm not convinced that AI-ish behavior, like self-improvement, requires empirical uncertainty about the universe.

Comment author: Wei_Dai 16 March 2011 10:18:41PM 2 points

But self-improvement doesn't require interacting with an outside environment (unless "improvement" means increasing computational resources, but the outside being simulated nullifies that). For example, a theorem prover designed to self-improve can do so by writing a provably better theorem prover and then transferring control to (i.e., calling) it. Why bother with a simulated universe?

Comment author: cousin_it 17 March 2011 11:37:37AM 2 points

A simulated universe gives precise meaning to "actions" and "utility functions", as I explained some time ago. It seems more elegant to give the agent a quined description of itself within the simulated universe, and a utility function over states of that same universe, instead of allowing only actions like "output a provably better version of myself and then call it".

Comment author: Alexandros 17 March 2011 10:20:31AM 1 point

From the FAI Wikipedia page:

One example Yudkowsky provides is that of an AI initially designed to solve the Riemann hypothesis, which, upon being upgraded or upgrading itself with superhuman intelligence, tries to develop molecular nanotechnology because it wants to convert all matter in the Solar System into computing material to solve the problem, killing the humans who asked the question.

Cousin_it's approach may be enough to avoid that.