Vaniver comments on Probability and radical uncertainty - Less Wrong

Post author: David_Chapman 23 November 2013 10:34PM

Comment author: CoffeeStain 24 November 2013 03:29:51AM 3 points

The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.
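
A quick sketch of why, assuming a toy version of the black box that pays off with some fixed unknown frequency p: if f(p) is whatever distribution summarizes your uncertainty about p (the "metaprobability curve"), then the ideal Bayesian's betting probability is just the ordinary marginal

$$P(\text{payout}) = \int_0^1 p \, f(p) \, dp,$$

a single number. The meta-level gets absorbed into an expectation rather than computed by a separate subroutine.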

My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the difference "Meta." By Luke's analogy, information about the black box is unstable, but all that means is that the (yes, single) probability value we get when we query the Bayesian network is conditionally dependent on nodes with a high degree of expected future change (including many nodes referring to your brain). If you maintain discipline and keep yourself (and your future selves) as part of the system, you can calculate your current self's expected probability just as exactly without "metaprobability." If you're looking to (losslessly or otherwise) optimize your brain to calculate probabilities, then "metaprobability" is a useful concept. But then we're no longer playing the game, we're designing minds.
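
To make that concrete, here is a minimal sketch (the Beta prior and the machine's payout frequency p are stand-ins I'm assuming, not anything from the post): two agents report the same single probability when queried now, but their "metaprobability" over p makes their future answers move at very different rates.

    # Two Bayesian agents whose currently-queried probability is identical,
    # but whose distribution over the unknown payout frequency p differs,
    # so their answers change at very different rates under new evidence.

    def posterior_mean(alpha, beta, payouts, plays):
        # Posterior mean of p under a Beta(alpha, beta) prior after
        # observing `payouts` successes in `plays` trials.
        return (alpha + payouts) / (alpha + beta + plays)

    agents = [("vague prior, Beta(1, 1)", 1, 1),
              ("confident prior, Beta(100, 100)", 100, 100)]

    for name, a, b in agents:
        now = posterior_mean(a, b, 0, 0)      # the single queried value
        later = posterior_mean(a, b, 10, 10)  # after 10 payouts in 10 plays
        print(f"{name}: now {now:.3f}, after 10/10 payouts {later:.3f}")

Both agents answer 0.500 up front; after ten straight payouts the vague agent says ~0.917 while the confident one says ~0.524. The extra structure lives in those nodes with a high degree of expected future change, not in separate probability-of-probability machinery.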

Comment author: Vaniver 24 November 2013 05:37:29PM 3 points

But then we're no longer playing the game, we're designing minds.

I find it helpful to think of "the optimal way to play game X" as "designing the mind that is best at playing game X." Does that not seem helpful to you?

Comment author: CoffeeStain 24 November 2013 10:53:29PM 3 points

It is helpful, and it was one of the framings that helped me understand one-boxing on a gut level.

And yet, when the problem space gets harder, when "optimal" becomes uncomputable and wrapped up in the fact that I can't fully introspect, playing certain games doesn't feel like designing a mind. That's probably just because games have time limits while mind-design is unconstrained. If I had an eternity to play any given game, I would spend a lot of it introspecting, changing my mind into the sort that could play iterations of the game in smaller time chunks. There would still always be a part of my brain (that part created in motion) that I can't change, though, and I would still use that part to play the black box game.

As for metaprobabilities, I'm starting to see the point. I don't think the concept alters any theory about how probability "works," but its intuitive value could be evidence that optimal AIs might emulate perfect decision theory more efficiently with CalcMetaProbability implemented. And it's certainly useful to many here.