Qiaochu_Yuan comments on Causal Universes - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
As mentioned below, you'd need to make infinitely many queries to the Turing oracle. But even if you could, that wouldn't make a difference.
Again, even if there were a module that could do infinitely many computations, the code I wrote still couldn't tell the difference between that being the case and the module being a really good computable approximation of one. It all comes back to the fact that I am programming my AI on a Turing-complete computer. Unless I somehow (personally) develop the skills to program trans-Turing-complete computers, whatever I program can only comprehend things that are Turing-computable. I am sitting down to write the AI right now, so regardless of what I discover in the future, I can't program my Turing-complete AI to understand anything beyond that. I'd have to program a trans-Turing-complete computer now if I ever hoped for it to understand anything beyond Turing completeness in the future.
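To make the point concrete, here is a minimal sketch (my own illustration, not from the thread): a program that consults a "halting oracle" only ever interacts with it through a computable interface, i.e. an ordinary function call, so from the calling code's side a true oracle and a bounded-step approximation look identical. Programs here are modeled as Python generators (one `yield` per computation step), which is an assumption made purely for illustration.

```python
def approx_halting_oracle(program_gen, step_budget=10_000):
    """Computable stand-in for a halting oracle: simulate the program
    (modeled as a generator, one yield per step) for at most step_budget
    steps, and guess 'does not halt' if the budget runs out."""
    gen = program_gen()
    for _ in range(step_budget):
        try:
            next(gen)
        except StopIteration:
            return True   # program halted within the budget
    return False          # guess: program does not halt

def halts_quickly():
    # A program that halts after three steps.
    for _ in range(3):
        yield

def loops_forever():
    # A program that never halts.
    while True:
        yield

# From the querying code's perspective, each query is just a function call.
# A genuine oracle module would expose exactly the same interface, so no
# Turing-computable caller can distinguish the two by its return values alone
# (the approximation only errs on programs that halt after the budget).
print(approx_halting_oracle(halts_quickly))   # True
print(approx_halting_oracle(loops_forever))  # False
```

The asymmetry is the one the comment describes: the approximation is wrong on some inputs (programs that halt only after `step_budget` steps), but the caller, being itself a Turing-computable process, has no way to certify which answers are the wrong ones.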
Ah, I see. I think we were answering different questions. (I had this feeling earlier but couldn't pin down why.) I read the original question as being something like "what kind of hypotheses should a hypothetical AI hypothetically entertain" whereas I think you read the original question as being more like "what kind of hypotheses can you currently program an AI to entertain." Does this sound right?
Yes, I agree. I can imagine some reasoner conceiving of things that are trans-Turing-complete, but I don't see how I could make an AI do so.
I was reading a LessWrong post and found this paragraph, which lines up with what I was trying to say: