Wei_Dai comments on Metaphilosophical Mysteries - Less Wrong

35 points | Post author: Wei_Dai 27 July 2010 12:55AM




Comment author: Wei_Dai 27 July 2010 10:54:25AM *  6 points

Eliezer, that was your position in this thread, and I thought I had convinced you that it was wrong. If that's not the case, can you please re-read my argument (especially the last few posts in the thread) and let me know why you're not convinced?

Comment author: Eliezer_Yudkowsky 28 July 2010 08:33:04AM 2 points

So... the part I found potentially convincing was that if you ran off a logical view of the world instead of a Solomonoff view (i.e., beliefs represented in e.g. higher-order logic instead of Turing machines) and lived in a hypercomputable world then it might be possible to make better decisions, although not better predictions of sensory experience, in some cases where you can infer by reasoning symbolically that EU(A) > EU(B), presuming that your utility function is itself reasoning over models of the world represented symbolically. On the other hand, cousin_it's original example still looks wrong.

Comment author: Wei_Dai 28 July 2010 09:08:53AM *  1 point

not better predictions of sensory experience

You can make better predictions if you're allowed to write down your predictions symbolically, instead of using decimal numbers. (And why shouldn't that be allowed?)

ETA: I made this argument previously in the one-logic thread, in this post.

ETA 2: I think you can also make better (numerical) predictions of the form "this black box is a halting-problem oracle" although technically that isn't a prediction of sensory experience.

Comment author: Vladimir_Nesov 29 July 2010 08:32:26PM 0 points

Why would you want to make any predictions at all? Predictions are not directly about value. It doesn't seem that there is a place for the human concept of prediction in a foundational decision theory.

Comment author: Wei_Dai 29 July 2010 08:41:06PM 1 point

It doesn't seem that there is a place for the human concept of prediction in a foundational decision theory.

I think that's right. I was making the point about prediction because Eliezer still seems to believe that prediction of sensory experience is somehow fundamental, and I wanted to convince him that the universal prior is wrong even given that belief.

Comment author: Vladimir_Nesov 29 July 2010 08:44:59PM *  1 point

Still, the universal prior does seem to be a universal way of eliciting what the human concept of prediction (expectation, probability) is, to the limit of our ability to train such a device, for exactly the reasons Eliezer gives: whatever concept we use, it's in there, among the programs the universal prior weighs.

ETA: On the other hand, the concept thus reconstructed would be limited to talking about observations, and so won't be a fully general concept, while human expectation is probably more general than that; you'd need a general logical language to capture it (and a language of unknown expressive power to capture it faithfully).

ETA2: Predictions might still be a necessary concept for expressing the decisions the agent makes, connecting formal statements with what the agent actually does, and so expressing what the agent actually does as formal statements. We might have to deal with reality because the initial implementation of FAI has to be constructed specifically in reality.

Comment author: Wei_Dai 29 July 2010 09:04:27PM *  1 point

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this? Or in other words, the only reason the standard proofs of Solomonoff prediction's optimality go through is that they assume predictions are represented using numerals?
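[Editor's note: a minimal sketch of the distinction being argued here. The class names and fields are illustrative, not from the thread: the point is that the standard optimality proofs score predictions announced as concrete numerals, whereas a symbolic reasoner may announce an expression that provably denotes a probability it cannot compute.]

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class NumeralPrediction:
    probability: float  # a concrete computable numeral, e.g. 0.5

@dataclass
class SymbolicPrediction:
    expression: str     # an unevaluated expression, e.g. "i-th bit of BB(100)"

# Either kind is a well-defined prediction, but only the first kind is what
# the Solomonoff optimality proofs assume a predictor outputs.
Prediction = Union[NumeralPrediction, SymbolicPrediction]

numeric = NumeralPrediction(probability=0.5)
symbolic = SymbolicPrediction(expression="P(next bit is 1) = i-th bit of BB(100)")
```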

Comment author: timtyler 31 July 2010 09:40:29PM *  1 point

Re: "what about my argument that a human can [adapt its razor a little] and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?"

There are at least two things "Solomonoff predictor" could refer to:

  • An intelligent agent with Solomonoff-based priors;

  • An agent who is wired to use a Solomonoff-based razor on their sense inputs;

A human is more like the first agent. The second agent is not really properly intelligent, and adapts poorly to new environments.

Comment author: ocr-fork 29 July 2010 10:02:50PM 0 points

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?

BB(100) is computable. Am I missing something?

Comment author: Wei_Dai 29 July 2010 10:11:51PM 1 point

BB(100) is computable. Am I missing something?

Maybe... by BB I mean the Busy Beaver function Σ as defined in this Wikipedia entry.

Comment author: ocr-fork 29 July 2010 10:14:28PM 0 points

Right, and...

A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).

So why can't the universal prior use it?
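[Editor's note: a sketch of why any finite prefix of Σ is computable: a program can simply hardcode the values, so such a program exists in the universal prior's mixture. The function name is illustrative; the table holds the proven values Σ(0)–Σ(4).]

```python
# Known Busy Beaver values Sigma(0)..Sigma(4) (maximum number of 1s written
# by a halting n-state, 2-symbol Turing machine). Sigma itself is
# uncomputable, but any finite prefix is trivially computable by lookup.
KNOWN_SIGMA = [0, 1, 4, 6, 13]

def sigma_prefix(n):
    """Return the finite sequence Sigma(0)..Sigma(n) by table lookup."""
    if n >= len(KNOWN_SIGMA):
        raise ValueError("Sigma(n) is not in the hardcoded table")
    return KNOWN_SIGMA[:n + 1]
```

Nothing about this program is oracular: the uncomputability of Σ only bites when one asks for the whole infinite sequence, or for a uniform procedure working for every n.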

Comment author: LucasSloan 29 July 2010 09:21:15PM *  0 points

Humans are (can be represented by) Turing machines. All halting Turing machines are incorporated in AIXI. Therefore, anything that humans can do to more effectively predict something than a "mere machine" is already incorporated into AIXI.

More generally, anything you represent symbolically can be represented using binary strings. That's how that string you wrote got to me in the first place. You converted the Turing operations in your head into a string of symbols, a computer turned that into a string of digits, my computer turned it back into symbols, and my brain used computable algorithms to make sense of them. What makes you think that any of this is impossible for AIXI?
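[Editor's note: a minimal sketch of the encoding step this comment appeals to: any symbolic expression is itself a finite binary string, and so lies in the space of strings a Turing machine (and hence AIXI's hypothesis class) can manipulate.]

```python
# Encode a symbolic prediction as a bit string, byte by byte.
expr = "P(next bit is 1) = i-th bit of BB(100)"
bits = "".join(f"{byte:08b}" for byte in expr.encode("utf-8"))

# The expression survives as a finite binary string: 8 bits per byte.
assert len(bits) == 8 * len(expr.encode("utf-8"))
```

Of course, the disputed question upthread is not whether the string can be encoded, but whether a predictor restricted to outputting numerals can match what the symbolic expression denotes.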

Comment author: Wei_Dai 29 July 2010 09:36:26PM *  3 points

Am I going crazy, or did you just basically repeat what Eliezer, Cyan, and Nesov said without addressing my point?

Do you guys think that you understand my argument and that it's wrong, or that it's too confusing and I need to formulate it better, or what? Everyone just seems to be ignoring it and repeating the standard party line....

ETA: Now reading the second part of your comment, which was added after my response.

ETA2: Clearly I underestimated the inferential distance here, but I thought at least Eliezer and Nesov would get it, since they appear to understand the other part of my argument about the universal prior being wrong for decision making, and this seems to be a short step. I'll try to figure out how to explain it better.

Comment author: LucasSloan 29 July 2010 09:40:54PM 0 points

If 4 people all think you're wrong for the same reason, either you're wrong or you're not explaining yourself. You seem to disbelieve the first, so try harder with the explaining.

Comment author: Unknowns 01 August 2010 01:23:57PM 1 point

The fact that AIXI can predict that a human would predict certain things, does not mean that AIXI can agree with those predictions.

Comment author: LucasSloan 01 August 2010 08:05:03PM -1 points

In the limit, even if that one human is the only hypothesis that AIXI has under consideration, AIXI will be predicting precisely as that human does.

Comment author: timtyler 31 July 2010 09:33:02PM -1 points

Surely predictions of sensory experience are pretty fundamental. To understand the consequences of your actions, you have to be able to make "what-if" predictions.

Comment author: timtyler 31 July 2010 09:30:59PM *  0 points

Re: "It doesn't seem that there is a place for the human concept of prediction in a foundational decision theory."

You can hardly steer yourself effectively into the future if you don't have an understanding of the consequences of your actions.

Comment author: Vladimir_Nesov 01 August 2010 08:01:10AM *  0 points

You can hardly steer yourself effectively into the future if you don't have an understanding of the consequences of your actions.

Yes, it might be necessary exactly for that purpose (though consequences don't reside just in the "future"), but I don't understand this well enough to decide either way.