Comment author: David_Chapman 24 November 2013 06:59:56PM 0 points [-]

Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases. Maybe I ought to have made that clearer! The approach I would take to the black box does not rely on metaprobability, so let's set that aside.

So, your mind is already in motion, and you do have priors about black boxes. What do you think you ought to do in this case? I don't want to waste your time with that... Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating the things the boxes could possibly do would be helpful at all. Isn't there an easier approach?

Comment author: CoffeeStain 24 November 2013 11:24:59PM *  2 points [-]

Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases.

Ah! I didn't quite pick up on that. I'll note that infinite regress problems aren't necessarily defeaters of an approach. Good minds that could fall into that trap implement a "Screw it, I'm going to bed" trigger to keep from wasting cycles even when using an otherwise helpful heuristic.

Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating things the boxes could possibly do would be helpful at all. Isn't there an easier approach?

Maybe, but I can't guarantee you won't get blown up by a black box with a bomb inside! As a friend, I would be furiously lending you my reasoning to help you make the best decision, worrying very little what minds better and faster than both of ours would be able to do.

It is, at the end of the day, just the General AI problem: don't think too hard on brute-force but perfect methods, or else you might skip a heuristic that could have gotten you an answer within the time limit! But how do you know whether the time limit is at that threshold? You could spend cycles on that too, but time is wasting! Time-limit games presume that the participant has already undergone a lot of unintentional design (by evolution, history, past reflections, etc.). This is the "already in motion" part which, frustratingly, can never be optimal unless somebody on the outside designed you for it. Which source code performs best under which game is a formal question. Being a source code means taking the discussion we're having now and applying it as best you can, because that's what your source code does.

Comment author: Gunnar_Zarncke 24 November 2013 09:08:33PM 0 points [-]

[The probability value we get] when we query the Bayesian network is conditionally dependent on nodes with a high degree of expected future change [...].

But the point about metaprobability is that we do not have the nodes. Each meta level corresponds to one nesting of networks within nodes.

If you maintain discipline and keep yourself [...] as a part of the system, you can as perfectly calculate your current self's expected probability without "metaprobability."

Only insofar as you approximate yourself simply, as above. This discards information.

Comment author: CoffeeStain 24 November 2013 11:08:29PM *  2 points [-]

But the point about metaprobability is that we do not have the nodes. Each meta level corresponds to one nesting of networks within nodes.

Think of Bayesian graphs as implicitly complete, with the set of nodes being everything to which you have a referent. If you can even say "this proposition" meaningfully, a perfect Bayesian implemented as a brute-force Bayesian network could assign it a node connected to all other nodes, just with trivial conditional probabilities that give the same results as an unconnected node.

A big part of this discussion has been whether some referents (like black boxes) actually do have such trivial conditional probabilities, which end up returning an inference of 50%. It certainly feels like some referents should have no precedent, and yet it also feels like we still wouldn't say 50%. This is because they actually do have precedent (and conditional probabilities); it's just that our internal reasonings are not always consciously available.
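A minimal sketch (not from the original comment; all numbers are made up) of why an edge whose conditional probability table is "trivial" behaves exactly like no edge at all:

```python
# Two-node network: Parent -> Child, where the child's conditional
# distribution is identical for every parent value ("trivial" CPT).

p_parent = {True: 0.3, False: 0.7}          # hypothetical prior on the parent

# Trivial CPT: P(Child=True | Parent=p) is the same for every p,
# so the edge carries no information.
cpt_trivial = {True: 0.5, False: 0.5}

# Marginal of the child computed through the edge...
p_child = sum(p_parent[p] * cpt_trivial[p] for p in p_parent)

# ...matches the marginal of an unconnected node with P(Child=True) = 0.5.
print(p_child)  # 0.5
```

So a brute-force complete graph and a sparse one can encode the same distribution; the "unconnected node" is just the special case where the CPT rows are all equal.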

Comment author: Vaniver 24 November 2013 05:37:29PM 3 points [-]

But then we're no longer playing the game, we're designing minds.

I find it helpful to think of "the optimal way to play game X" as "design the mind that is best at playing game X." Does that not seem helpful to you?

Comment author: CoffeeStain 24 November 2013 10:53:29PM *  3 points [-]

It is helpful, and was one of the ways that helped me to understand One-boxing on a gut level.

And yet, when the problem space seems harder, when "optimal" becomes uncomputable and wrapped up in the fact that I can't fully introspect, playing certain games doesn't feel like designing a mind. This is probably just because games have time limits, while mind design is unconstrained. If I had an eternity to play any given game, I would spend a lot of time introspecting, changing my mind into the sort that could play iterations of the game in smaller time chunks. Even then, there would always be a part of my brain (the part created already in motion) that I can't change. And I would still use that part to play the black box game.

In regards to metaprobabilities, I'm starting to see the point. I don't think it alters any theory about how probability "works," but its intuitive value could be evidence that optimal AIs might be able to more efficiently emulate perfect decision theory with CalcMetaProbability implemented. And it's certainly useful to many here.

Comment author: David_Chapman 24 November 2013 05:08:20AM 1 point [-]

That's good, yes!

How would you assign a probability to that?

Comment author: CoffeeStain 24 November 2013 05:55:05AM 3 points [-]

"How often do listing sorts of problems with some reasonable considerations result in an answer of 'None of the above' for me?"

If "reasonable considerations" are not available, then we can still ask:

"How often did listing sorts of problems with no other information available result in an answer of 'None of the above' for me?"

Even if we suppose that this problem bears no resemblance to any previously encountered problem, we can still ask (because the fact that it bears no resemblance is itself a signifier):

"How often did problems I'd encountered for the first time have an answer I never thought of?"

Comment author: David_Chapman 24 November 2013 04:24:01AM 0 points [-]

Well, regardless of the value of metaprobability, or its lack of value, in the case of the black box, it doesn't seem to offer any help in finding a decision strategy. (I find it helpful in understanding the problem, but not in formulating an answer.)

How would you go about choosing a strategy for the black box?

Comment author: CoffeeStain 24 November 2013 05:49:23AM 3 points [-]

My LessWrongian answer is that I would ask my mind that was created already in motion what the probability is, then refine it with as many further reflections as I can come up with. Embody an AI long enough in this world, and it too will have priors about black boxes, except that reporting that probability in the form of a number is inherent to its source code rather than strange and otherworldly like it is for us.

The point made in that article (and in the Metaethics sequence as a whole) is that the only mind you have to solve a problem is the one you have, and you will inevitably use it to solve problems unoptimally, where "unoptimally," taken strictly, describes everything anybody has ever done.

The reflection part of this is important, as it's the only thing we have control over, and I suppose it could involve discussions about metaprobabilities. That doesn't really do it for me, though, although I'm only a single point in mind design space. To me, metaprobability seems isomorphic to a collection of reducible considerations, and so doesn't seem like a useful shortcut or abstraction. My particular strategy for reflection would be something like that in dspeyer's comment: reasoning about the source of the box, and about the possibilities for what I might reasonably expect to be inside it. Depending on how much time I had, I'd be very systematic about it, listing out possibilities, summing infinite series on expected value, etc.
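The "listing out possibilities" step might look like the following sketch. The hypotheses, probabilities, and utilities are entirely made up; the thought experiment doesn't specify any of them:

```python
# Hypothetical priors over what the black box might do, with utilities.
# (hypothesis, probability, utility) -- none of these numbers come from
# the original problem; they only illustrate the bookkeeping.
hypotheses = [
    ("contains a bomb", 0.01, -1000.0),
    ("contains $100",   0.10,   100.0),
    ("does nothing",    0.89,     0.0),
]

# Expected value of opening the box under these made-up numbers:
# 0.01 * -1000 + 0.10 * 100 + 0.89 * 0 = -10 + 10 + 0 = 0
expected_value = sum(p * utility for _, p, utility in hypotheses)
print(expected_value)
```

The hard part, of course, is not the arithmetic but where the probabilities come from, which is exactly what the reference-class discussion elsewhere in the thread is about.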

Comment author: CoffeeStain 24 November 2013 03:29:51AM *  3 points [-]

The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.

My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the difference "Meta." By Luke's analogy, information about the black box is unstable, but all that means is that the (yes, single) probability value we get when we query the Bayesian network is conditionally dependent on nodes with a high degree of expected future change (including many nodes referring to your brain). If you maintain discipline and keep yourself (and your future selves) as a part of the system, you can calculate your current self's expected probability just as perfectly without "metaprobability." If you're looking to (losslessly or otherwise) optimize your brain to calculate probabilities, then "metaprobability" is a useful concept. But then we're no longer playing the game, we're designing minds.

Comment author: Yvain 23 November 2013 06:55:52PM 11 points [-]

I just realized I forgot a very important question I really want to know the answer to!

"What is your 90% confidence interval for the percent of people you expect to answer 'cooperate' on the prize question?"

I've added this into the survey so that people who take it after this moment can answer. If you've taken the survey already, feel free to record your guess below (if you haven't taken the survey, don't read responses to this comment).

Comment author: CoffeeStain 23 November 2013 11:22:05PM 2 points [-]

Right down the middle: 25-75

Comment author: Nornagest 23 November 2013 12:21:36AM 2 points [-]

I cooperated, because I'm more or less indifferent to monetary prizes of less than twenty dollars or so, and more substantial prizes imply widespread cooperation. I view it as unlikely that I can get away with putting myself into a separate reference class, so I might as well contribute to that.

Comment author: CoffeeStain 23 November 2013 12:30:02AM 0 points [-]

Hmm, come to think of it, deciding the size of the cash prize (for it being interesting) is probably worth more to me as well. I'll just have to settle for boring old cash.

Comment author: CoffeeStain 23 November 2013 12:04:36AM 12 points [-]

I defected, because I'm indifferent to whether the prize-giver or prize-winner has 60 * X dollars, unless the prize-winner is me.

Comment author: timujin 21 October 2013 08:33:57PM *  1 point [-]

Maybe that's just my personal quirk (is it?), but my pride is a good motivator for me to become stronger. If I think I am more able in some area than I actually am, then when evidence to the contrary comes knocking, I try as much as I can to defend the 'truth' I believe in by actually training myself in that area until I match that belief. And since I can't keep my mouth shut, and thus tell and demonstrate to everyone how awesome I am when I am not actually that good, there is really no way out but to make myself match what other people think of me. Maybe that's not a very good rationality habit, but I am fully mindful of the process, and if I ever need to know my actual level at the expense of that motivational factor, it is no trouble to sit down with a pencil and figure out the truth. It can hurt (because my real level is almost always way less than my expectations of it), but it is probably worth it.

Manipulating people just out of pride and a sense of domination was actually the factor that developed my social skills more than anything else. I became more polite, and started to watch my appearance, posture, and facial expressions (because it's easier to trick those who like me), and became better at detecting lies and other people's attempts to manipulate me. Also, I believe, it helped me to avoid conformity (when you see people making dumb mistakes on a regular basis just because you told them something, the belief in their sanity vanishes quickly). And I am safe from losing friends' trust, because I strive to never trick or deceive close people (in a very broad sense) and maintain something close to (but not quite) a Radical Honesty policy with those whom I value.

Am I walking the wrong path?

In response to comment by timujin on The best 15 words
Comment author: CoffeeStain 31 October 2013 08:33:51AM 0 points [-]

Am I walking the wrong path?

Eh, probably not. Heuristically, I shy away from modes of thought that involve intentional self-deception, but that's because I haven't been mindful of myself long enough to know ways I can do this systematically without breaking down. I would also caution against letting small-scale pride translate into larger domains where there is less available evidence for how good you really are. "I am successful" has a much higher chance of becoming a cached self than "I am good at math." The latter is testable with fewer bits of evidence, and the former might cause you to think you don't need to keep trying.

As for other-manipulation, it seems the confidence terminology can apply to social dominance as well. I don't think desiring superior charisma necessitates an actual belief in your awesomeness compared to others, just the belief that you are awesome. The latter to me is more what it feels like to be good at being social, and has the benefit of not entrenching a distance from others or the cached belief that others are useful manipulation targets rather than useful collaborators.

People vary in how they can use internal representations to produce results. It's really hard to use probabilistic distributions over outcomes as the sole motivator for behavior, so we do need to cache beliefs in the language of conventional social advice sometimes. The good news is that good people who are non-rationalists are a treasure trove for this sort of insight.
