Comment author: Manfred 24 November 2013 07:43:34AM 7 points [-]

The problem of what to expect from the black box?

I'd think about it like this: suppose that I hand you a box with a slot in it. What do you expect to happen if you put a quarter into the slot?

To answer this, we draw on our large store of human knowledge about boxes and the people who hand them to you. It's very likely that nothing at all will happen, but I've also seen plenty of boxes that emit sound, or gumballs, or temporary tattoos, or sometimes more quarters. But suppose that I have previously handed you a box that sometimes emits more quarters when you put quarters in. Then maybe you raise the probability that this box also emits quarters, et cetera.

Now, within this model you have a probability of some payoff, but only if it's one of the reward-emitting boxes, and it also has some probability of emitting sound, etc. What you call a "meta-probability" is actually the probability of some sub-model being verified or confirmed. Suppose you put one quarter in and two quarters come out: now you've drastically cut down the set of models that can describe the box. This is "updating the meta-probability."
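That model-culling step is just a Bayesian update over hypothetical box models. A toy sketch in Python, where the model names, priors, and likelihoods are all invented for illustration:

```python
# Invented priors over what kind of box this is.
priors = {"inert": 0.90, "sound": 0.05, "gumball": 0.03, "pays_out": 0.02}

# Assumed likelihoods: P(two quarters come out | model).
# Only the payout model can produce this observation at all.
likelihood = {"inert": 0.0, "sound": 0.0, "gumball": 0.0, "pays_out": 0.5}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {m: priors[m] * likelihood[m] for m in priors}
total = sum(unnorm.values())
posterior = {m: p / total for m, p in unnorm.items()}

print(posterior)  # all the mass collapses onto the payout model
```

One observation that is impossible under most sub-models wipes those sub-models out entirely, which is what "drastically cut down the models" means in probability terms.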

Comment author: David_Chapman 24 November 2013 07:01:39PM 1 point [-]

To answer this, we draw on our large store of human knowledge about boxes and the people who hand them to you.

Of comments so far, this comes closest to the answer I have in mind... for whatever that's worth!

Comment author: CoffeeStain 24 November 2013 05:49:23AM 3 points [-]

My LessWrongian answer is that I would ask my mind that was created already in motion what the probability is, then refine it with as many further reflections as I can come up with. Embody an AI long enough in this world, and it too will have priors about black boxes, except that reporting that probability in the form of a number is inherent to its source code rather than strange and otherworldly like it is for us.

The point that was made in that article (and in the Metaethics sequence as a whole) is that the only mind you have to solve a problem is the one that you have, and you will inevitably use it to solve problems suboptimally, where "suboptimally", taken strictly, describes everything anybody has ever done.

The reflection part of this is important, as it's the only thing we have control over, and I suppose it could involve discussions about metaprobabilities. It doesn't really do it for me, though, although I'm only a single point in the mind design space. To me, metaprobability seems isomorphic to a collection of reducible considerations, and so doesn't seem like a useful shortcut or abstraction. My particular strategy for reflection would be something like that in dspeyer's comment: reasoning about the source of the box, and about the possibilities for what could be in the box that I might reasonably expect. Depending on how much time I had, I'd be very systematic about it, listing out possibilities, summing infinite series for expected value, etc.

Comment author: David_Chapman 24 November 2013 06:59:56PM 0 points [-]

Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases. Maybe I ought to have made that clearer! The approach I would take to the black box does not rely on metaprobability, so let's set that aside.

So, your mind is already in motion, and you do have priors about black boxes. What do you think you ought to do in this case? I don't want to waste your time with that... Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating things the boxes could possibly do would be helpful at all. Isn't there an easier approach?

Comment author: Bayeslisk 24 November 2013 06:35:26AM 3 points [-]

I am pattern-matching from fiction on "black box with evil-looking inscriptions on it". Those do not tend to end well for anyone. Also, what do you mean by strong evidence that the box is less harmful than a given random object from Thingspace? I can /barely sort of/ see "not a random object from Thingspace"; I cannot see "EV(U(spoopy creppy black box)) > EV(U(object from Thingspace))".

Comment author: David_Chapman 24 November 2013 06:48:56PM *  1 point [-]

The evidence that I didn't select it at random was my saying “I find this one particularly interesting.”

I also claimed that "I'm probably not that evil." Of course, I might be lying about that! Still, that's a fact that ought to go into your Bayesian evaluation, no?

Comment author: RichardKennaway 24 November 2013 09:43:30AM 5 points [-]

This is now a situation of radical uncertainty.

The Bayesian Universalist answer to this would be that there is no separate meta-probability. You have a universal prior over all possible hypotheses, and mutter a bit about Solomonoff induction and AIXI.

I am putting it this way, distancing myself from the concept, because I don't actually believe it, but it is the standard answer to draw out from the LessWrong meme space, and it has not yet been posted in this thread. Is there anyone who can make a better fist of expounding it?

Comment author: David_Chapman 24 November 2013 06:44:56PM 2 points [-]

Yes, I'm not at all committed to the metaprobability approach. In fact, I concocted the black box example specifically to show its limitations!

Solomonoff induction is extraordinarily unhelpful, I think... that it is uncomputable is only one reason.

I think there's a fairly simple and straightforward strategy to address the black box problem, which has not been mentioned so far...

Comment author: ialdabaoth 24 November 2013 04:53:02AM 4 points [-]

how would you know you hadn't left out important possibilities?

At least one of the top-level headings should be a catch-all "None of the above", which represents your estimated probability that you left something out.

Comment author: David_Chapman 24 November 2013 05:08:20AM 1 point [-]

That's good, yes!

How would you assign a probability to that?

Comment author: Bayeslisk 24 November 2013 04:38:02AM 1 point [-]

IMO the correct response is to run like hell from the box. In Thingspace, most things are very unfriendly, in much the same way that most of Mindspace contains unfriendly AIs.

Comment author: David_Chapman 24 November 2013 05:07:36AM *  1 point [-]

So... you think I am probably evil, then? :-)

I gave you the box (in the thought experiment). I may not have selected it from Thingspace at random!

In fact, there's strong evidence in the text of the OP that I didn't...

Comment author: dspeyer 24 November 2013 02:33:53AM *  7 points [-]

Instead of metaprobabilities, the black box might be better thought of in terms of hierarchically partitioning possibility space.

  • It could dispense money under some conditions
    • It could be a peg-and-wheel box like from the previous post
      • With zero pegs
      • One peg
      • ...
    • Those conditions could be temperature-dependent
    • ...
  • It could be a music box
    • Opera
    • Country
    • Yodeling
    • ...
  • It could be a bomb
  • ...

Each sublist's probabilities should add up to the probability of the heading above it, and the top-level headings should add up to 1. Given how long the list is, all the probabilities are very small, though we might be able to organize them into high-level categories with reasonable probabilities and then tack on a "something else" category. Categories are map, not territory, so we can rewrite them to our convenience.
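The bookkeeping constraint here is easy to check mechanically. A minimal sketch in Python, where the category names echo the list above but every number is an invented placeholder (including the large "something else" catch-all):

```python
# Each entry maps a top-level heading to (its probability, its sub-partition).
# All numbers are illustrative assumptions, not real estimates.
tree = {
    "dispenses money": (0.10, {"peg-and-wheel": 0.06,
                               "temperature-dependent": 0.03,
                               "other mechanism": 0.01}),
    "music box": (0.05, {"opera": 0.02, "country": 0.02, "yodeling": 0.01}),
    "bomb": (0.01, {}),
    "something else": (0.84, {}),  # catch-all for possibilities we left out
}

# Top-level headings must sum to 1.
top_total = sum(p for p, _ in tree.values())
assert abs(top_total - 1.0) < 1e-9

# Each sub-partition must sum to its heading's probability.
for heading, (p, subs) in tree.items():
    if subs:
        assert abs(sum(subs.values()) - p) < 1e-9
```

The assertions are the whole point: a hierarchical partition is only coherent if every level conserves probability mass, and the "something else" row is where ialdabaoth's "None of the above" estimate lives.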

It's useful to call the number of pegs the "probability" which makes the probability of 45 pegs a "meta-probability". It isn't useful to call opera or yodeling a "probability" so calling the probability that a music box is opera a "meta-probability" is really weird, even though it's basically the same sort of thing being discussed.

Comment author: David_Chapman 24 November 2013 04:47:22AM 3 points [-]

This is interesting—it seems like the project here would be to construct a universal, hierarchical ontology of every possible thing a device could do? This seems like a very big job... how would you know you hadn't left out important possibilities? How would you go about assigning probabilities?

(The approach I have in mind is simpler...)

Comment author: CoffeeStain 24 November 2013 03:29:51AM *  3 points [-]

The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.

My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the difference "Meta." By Luke's analogy, information about the black box is unstable, but all that means is that the (yes, single) probability value we get when we query the Bayesian network is conditionally dependent on nodes with a high degree of expected future change (including many nodes referring to your brain). If you maintain discipline and keep yourself (and your future selves) as a part of the system, you can as perfectly calculate your current self's expected probability without "metaprobability." If you're looking to (losslessly or otherwise) optimize your brain to calculate probabilities, then "metaprobability" is a useful concept. But then we're no longer playing the game, we're designing minds.
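The reduction CoffeeStain is gesturing at is just the law of total probability: weighting each sub-model's payout probability by your credence in that sub-model yields one ordinary probability, with no separate "metaprobability" object left over. A toy sketch, with invented weights and per-model probabilities:

```python
# Credences over sub-models of the box (invented numbers).
model_weight = [0.5, 0.3, 0.2]          # P(model_i)
payout_given_model = [0.0, 0.45, 0.9]   # P(payout | model_i), also invented

# Marginalize: P(payout) = sum_i P(model_i) * P(payout | model_i).
p_payout = sum(w * p for w, p in zip(model_weight, payout_given_model))

print(p_payout)  # 0.315: a single number, no CalcMetaProbability subroutine
```

The distribution over sub-models is what the post calls metaprobability; querying it always collapses to one number, which is why perfect Bayesian inference never needs it as a separate primitive.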

Comment author: David_Chapman 24 November 2013 04:24:01AM 0 points [-]

Well, regardless of the value of metaprobability, or its lack of value, in the case of the black box, it doesn't seem to offer any help in finding a decision strategy. (I find it helpful in understanding the problem, but not in formulating an answer.)

How would you go about choosing a strategy for the black box?

Comment author: William_Quixote 24 November 2013 02:00:12AM 1 point [-]

I like this article / post but I find myself wanting more at the end. A payoff or a punch line or at least a lesson to take away.

Comment author: David_Chapman 24 November 2013 02:57:18AM *  1 point [-]

Well, I hope to continue the sequence... I ended this article with a question, or puzzle, or homework problem, though. Any thoughts about it?

Comment author: Manfred 24 November 2013 02:33:14AM 1 point [-]

You need to take advantage of the fact that probability is a consequence of incomplete information, and think about the models of the world people have that encode their information. "Meta-probability" only exists within a certain model of the problem, and if you totally ignore that you get some drastically confusing conclusions.

Comment author: David_Chapman 24 November 2013 02:56:17AM 1 point [-]

So, how would you analyze this problem, more specifically? What do you think the optimal strategy is?
