Qiaochu_Yuan comments on Causal Universes - Less Wrong

Post author: Eliezer_Yudkowsky 29 November 2012 04:08AM


Comment author: jeremysalwen 04 December 2012 02:28:36AM 1 point

As mentioned below, you'd need to make infinitely many queries to the Turing oracle. But even if you could, it wouldn't make a difference.

Again, even if there were a module that performed infinitely many computations, the code I wrote still couldn't tell the difference between that being the case and the module being a very good computable approximation of one. It all comes back to the fact that I am programming my AI on a Turing-complete computer. Unless I somehow (personally) develop the skill to program trans-Turing-complete computers, whatever I program can only comprehend things that are Turing-computable. I am sitting down to write the AI right now, so regardless of what I discover in the future, I can't program my Turing-complete AI to understand anything beyond that. I would have to program a trans-Turing-complete computer now if I ever hoped for it to understand anything beyond Turing completeness in the future.
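To make the point concrete, here is a minimal sketch (my own illustration; the function names and the generator-per-step model of a "program" are assumptions, not anything from the thread). A step-bounded simulator is a fully computable stand-in for a halting oracle, and any client program that can only make finitely many queries cannot conclusively distinguish it from a true oracle:

```python
def approx_halting_oracle(program, max_steps=10_000):
    """Computable 'approximation' of a halting oracle.

    A program is modeled as a zero-argument generator function, with one
    yield per computation step. We simulate at most max_steps steps and
    report True if the program finished, False otherwise. The False answer
    can be wrong (the program might halt after max_steps steps), but a
    Turing-complete client making finitely many queries can never prove
    this module isn't a genuine oracle.
    """
    gen = program()
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the step budget
    return False          # assumed non-halting (possibly wrong)

def halts_quickly():
    # Halts after three steps.
    for _ in range(3):
        yield

def loops_forever():
    # Never halts.
    while True:
        yield
```

For any fixed finite set of queries, choosing `max_steps` large enough makes the approximation's answers match a true oracle's, which is exactly why the AI's code can't tell which one it is talking to.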

Comment author: Qiaochu_Yuan 04 December 2012 03:21:06AM 1 point

Ah, I see. I think we were answering different questions. (I had this feeling earlier but couldn't pin down why.) I read the original question as being something like "what kind of hypotheses should a hypothetical AI hypothetically entertain" whereas I think you read the original question as being more like "what kind of hypotheses can you currently program an AI to entertain." Does this sound right?

Comment author: jeremysalwen 04 December 2012 07:24:04AM 1 point

Yes, I agree. I can imagine some reasoner conceiving of things that are trans-Turing-complete, but I don't see how I could make an AI do so.

Comment author: jeremysalwen 21 December 2012 04:46:43PM -1 points

I was reading a Less Wrong post and found this paragraph, which lines up with what I was trying to say:

Some boxes you really can't think outside. If our universe really is Turing computable, we will never be able to concretely envision anything that isn't Turing-computable—no matter how many levels of halting oracle hierarchy our mathematicians can talk about, we won't be able to predict what a halting oracle would actually say, in such fashion as to experimentally discriminate it from merely computable reasoning.