Will_Sawin comments on Notion of Preference in Ambient Control - Less Wrong

Post author: Vladimir_Nesov 07 October 2010 09:21PM


Comment author: Jonathan_Graehl 07 October 2010 11:09:31PM 1 point

Consider a first-order language and a theory in that language (defining the way the agent reasons, the kinds of concepts it can understand and the kinds of statements it can prove). This could be a set theory such as ZFC or a theory of arithmetic such as PA. The theory should provide sufficient tools to define recursive functions and/or other necessary concepts.

This is unclear to me, and I've read and understood Enderton. I would have thought that ZFC and PA were sets of axioms and would say nothing about how an agent reasons.

Also,

In first-order logic, all valid statements are also provable by a formal syntactic argument.

Do you mean in the context of some axioms? (Of course, you can always ask whether the statement "PA implies X" is valid, so it doesn't really matter.)
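For reference, the relationship being discussed is Gödel's completeness theorem, which holds relative to an arbitrary theory; a standard statement (not from the comment) is:

```latex
% Goedel's completeness theorem, stated relative to a theory T:
% a sentence \varphi is a semantic consequence of T iff it is
% syntactically derivable from T.
T \models \varphi \iff T \vdash \varphi
% The special case T = \emptyset recovers "all valid statements
% are provable". For an infinite theory such as PA, compactness
% gives a finite subset \Gamma \subseteq T with \Gamma \vdash \varphi,
% so the single sentence \bigwedge \Gamma \rightarrow \varphi
% is itself valid.
```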

I haven't read the rest yet. I'm confident that you have a very precise and formally defined idea in mind, but I'd appreciate it if you could spell out your definitions, or link to them (MathWorld, Wikipedia, or even some textbook).

Comment author: Will_Sawin 08 October 2010 01:26:06AM 0 points

An agent that reasoned by proving things in ZFC could exist.

Stupid argument: "This program, computed with this data, produces this result" is a statement in ZFC and is provable or disprovable as appropriate.

Obviously, a real ZFC-based AI would be more efficient than that.
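A minimal runnable sketch of the "stupid argument" above (the names `check_claim` and `double` are hypothetical, purely for illustration): a sentence of the form "this program, run on this data, produces this result" describes a terminating computation, so it can be settled by direct simulation, and ZFC can formalize that simulation step by step.

```python
def check_claim(program, data, claimed_result):
    """Settle 'program(data) == claimed_result' by direct simulation.

    Illustration only: for a terminating computation this claim is a
    concrete finitary statement, so a theory like ZFC (or PA) proves
    it or its negation by formalizing exactly this simulation.
    """
    return program(data) == claimed_result


# Toy "program": doubles its input.
def double(n):
    return 2 * n


print(check_claim(double, 21, 42))  # True: the theory proves the claim
print(check_claim(double, 21, 43))  # False: the theory refutes it
```

This only covers claims about computations that halt; an agent that decided arbitrary ZFC sentences this way would need unbounded proof search, which is where the inefficiency noted above comes in.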

ZFC is nice because Newton's laws, for example, can be formulated in ZFC even though they aren't computable. A computable agent could still reason about those laws using ZFC, for instance by deriving conservation of energy, which would allow it to compute certain things.
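As an example of the kind of derivation meant here, conservation of energy for a particle in a potential follows from Newton's second law in a few lines (the standard textbook argument, sketched here rather than taken from the comment):

```latex
% Newton's second law for a particle of mass m in a potential V(x):
m\ddot{x} = -V'(x)
% Define the total energy:
E = \tfrac{1}{2} m \dot{x}^2 + V(x)
% Differentiate E along a trajectory:
\frac{dE}{dt} = m \dot{x}\ddot{x} + V'(x)\dot{x}
             = \dot{x}\,\bigl(m\ddot{x} + V'(x)\bigr) = 0
% So E is constant in time, even though the underlying dynamics
% (real-valued trajectories) are not themselves computable objects.
```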