In the previous article in this sequence, I conducted a thought experiment in which simple probability was not sufficient to choose how to act. Rationality required reasoning about meta-probabilities, the probabilities of probabilities.

Relatedly, lukeprog has a brief post that explains how this matters; a long article by HoldenKarnofsky makes meta-probability central to utilitarian estimates of the effectiveness of charitable giving; and Jonathan_Lee, in a reply to that, has used the same framework I presented.

In my previous article, I ran thought experiments that presented you with various colored boxes you could put coins in, gambling with uncertain odds.

The last box I showed you was blue. I explained that it had a fixed but unknown probability of a twofold payout, uniformly distributed between 0 and 0.9. The overall probability of a payout was 0.45, so the expected return on each $1 coin was only $0.90: a bad bet on average. Yet your optimal strategy was to gamble a bit, to find out whether this particular box’s odds were good or bad.
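To make the blue-box arithmetic concrete, here is a minimal Python sketch of the “gamble a bit to learn the odds” idea. It is not from the original article, and the exploration budget, quit threshold, and coin budget are hypothetical choices rather than the optimal policy:

```python
import random

def blue_box_trial(p):
    """One coin in the blue box: pays $2 with probability p, else nothing."""
    return 2 if random.random() < p else 0

def explore_then_decide(p, explore_coins=20, keep_threshold=0.55, total_coins=100):
    """Spend a few coins estimating the payout rate, then keep playing only if
    the estimate looks better than even. Returns net profit in dollars."""
    net = 0
    wins = 0
    for _ in range(explore_coins):
        payout = blue_box_trial(p)
        net += payout - 1
        wins += payout // 2
    if wins / explore_coins >= keep_threshold:   # looks like a good box: keep going
        for _ in range(total_coins - explore_coins):
            net += blue_box_trial(p) - 1
    return net

# The box's payout probability is fixed but unknown, uniform on [0, 0.9].
random.seed(0)
results = [explore_then_decide(random.uniform(0.0, 0.9)) for _ in range(10_000)]
print("average net profit per box:", sum(results) / len(results))
# The naive expectation says each coin loses $0.10 on average (0.45 * $2 - $1),
# but exploring first and abandoning bad boxes comes out ahead on average.
```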

Let’s continue the experiment. I hand you a black box, shaped rather differently from the others. Its sealed faceplate is carved with runic inscriptions and eldritch figures. “I find this one particularly interesting,” I say.

What is the payout probability? What is your optimal strategy?

In the framework of the previous article, you have no knowledge about the insides of the box. So, as with the “sportsball” case I analyzed there, your meta-probability curve is flat from 0 to 1.

The blue box also has a flat meta-probability curve; but these two cases are very different. For the blue box, you know that the curve really is flat. For the black box, you have no clue what the shape of even the meta-probability curve is.

The relationship between the blue and black boxes is the same as that between the coin flip and sportsball—except at the meta level!

So if we’re going on in this style, we need to look at the distribution of probabilities of probabilities of probabilities. The blue box has a sharp peak in its meta-meta-probability (around flatness), whereas the black box has a flat meta-meta-probability.

You ought now to be a little uneasy. We are putting epicycles on epicycles. An infinite regress threatens.

Maybe at this point you suddenly reconsider the blue box… I told you that its meta-probability was uniform. But perhaps I was lying! How reliable do you think I am?

Let’s say you think there’s a 0.8 probability that I told the truth. That’s the meta-meta-probability of a flat meta-probability. In the worst case, if I lied, the actual payout probability is 0, so the average just-plain probability is 0.8 × 0.45 + 0.2 × 0 = 0.36. You can feed that worst case into your decision analysis. It won’t drastically change the optimal policy; you’ll just quit a bit earlier than if you were entirely confident that the meta-probability distribution was uniform.
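A minimal worked version of that calculation, using only the figures assumed above (0.8 trust, a uniform meta-probability if I told the truth, zero payout if I lied):

```python
# Mixture over what I told you about the blue box (numbers from the text above):
p_truth = 0.8        # probability I told the truth: payout probability uniform on [0, 0.9]
p_lie = 0.2          # worst case if I lied: payout probability is 0

mean_if_truth = 0.45     # mean of the uniform [0, 0.9] distribution
mean_if_lie = 0.0

overall_payout_prob = p_truth * mean_if_truth + p_lie * mean_if_lie
print(overall_payout_prob)               # 0.36
print(overall_payout_prob * 2 - 1)       # -0.28: expected loss per coin, before learning anything
```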

To get this really right, you ought to make a best guess at the meta-meta-probability curve. It’s not just 0.8 of a uniform probability distribution, and 0.2 of zero payout. That’s the worst case. Even if I’m lying, I might give you better than zero odds. How much better? What’s your confidence in your meta-meta-probability curve? Ought you to draw a meta-meta-meta-probability curve? Yikes!

Meanwhile… that black box is rather sinister. Seeing it makes you wonder. What if I rigged the blue box so there is a small probability that when you put a coin in, it jabs you with a poison dart, and you die horribly?

Apparently a zero payout is not the worst case, after all! On the other hand, this seems paranoid. I’m odd, but probably not that evil.

Still, what about the black box? You realize now that it could do anything.

  • It might spring open to reveal a collection of fossil trilobites.
  • It might play Corvus Corax’s Vitium in Opere at ear-splitting volume.
  • It might analyze the trace DNA you left on the coin and use it to write you a personalized love poem.
  • It might emit a strip of paper with a recipe for dundun noodles written in Chinese.
  • It might sprout six mechanical legs and jump into your lap.

What is the probability of its giving you $2?

That no longer seems quite so relevant. In fact… it might be utterly meaningless! This is now a situation of radical uncertainty.

What is your optimal strategy?

I’ll answer that later in this sequence. You might like to figure it out for yourself now, though.

Further reading

The black box is an instance of Knightian uncertainty. That’s a catch-all category for any type of uncertainty that can’t usefully be modeled in terms of probability (or meta-probability!), because you can’t make meaningful probability estimates. Calling it “Knightian” doesn’t help solve the problem, because there are lots of sources of non-probabilistic uncertainty. However, it’s useful to know that there’s a literature on this.

The blue box is closely related to Ellsberg’s paradox, which combines probability with Knightian uncertainty. Interestingly, it was invented by the same Daniel Ellsberg who released the Pentagon Papers in 1971. I wonder how his work in decision theory might have affected his decision to leak the Papers?

Comments (71)

Instead of metaprobabilities, the black box might be better thought of in terms of hierarchically partitioning possibility space.

  • It could dispense money under some conditions
    • It could be a peg-and-wheel box like from the previous post
      • With zero pegs
      • One peg
      • ...
    • Those conditions could be temperature-dependent
    • ...
  • It could be a music box
    • Opera
    • Country
    • Yodeling
    • ...
  • It could be a bomb
  • ...

Each sublist's probabilities should add up to the probability of the heading above, and the top-level headings should add up to 1. Given how long the list is, all the probabilities are very small, though we might be able to organize them into high-level categories with reasonable probabilities and then tack on a "something else" category. Categories are map, not territory, so we can rewrite them to our convenience.

It's useful to call the number of pegs the "probability" which makes the probability of 45 pegs a "meta-probability". It isn't useful to call opera or yodeling a "probability" so calling the probability that a music box is opera a "meta-probability" is really weird, even though it's basically the same sort of thing b... (read more)
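A minimal sketch of the hierarchical partition described in this comment, with purely made-up numbers, just to show the consistency constraint (each sublist summing to its heading, the top level summing to 1):

```python
# Each node: (name, probability, children). The numbers are purely illustrative.
hypothesis_tree = ("black box", 1.00, [
    ("dispenses money under some conditions", 0.20, [
        ("peg-and-wheel box from the previous post", 0.15, []),
        ("temperature-dependent payout",             0.03, []),
        ("other money-dispensing mechanisms",        0.02, []),
    ]),
    ("music box", 0.05, [
        ("opera",       0.01, []),
        ("yodeling",    0.01, []),
        ("other music", 0.03, []),
    ]),
    ("bomb", 0.01, []),
    ("something else entirely", 0.74, []),   # the catch-all heading
])

def check(node, tolerance=1e-9):
    """Every node's children must sum to the node's own probability."""
    name, prob, children = node
    if children:
        total = sum(child[1] for child in children)
        assert abs(total - prob) < tolerance, f"{name}: children sum to {total}, not {prob}"
        for child in children:
            check(child)

check(hypothesis_tree)   # top level sums to 1.00, each sublist sums to its heading
```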

David_Chapman · 10y · 5 points
This is interesting—it seems like the project here would be to construct a universal, hierarchical ontology of every possible thing a device could do? This seems like a very big job... how would you know you hadn't left out important possibilities? How would you go about assigning probabilities? (The approach I have in mind is simpler...)
ialdabaoth · 10y · 6 points
At least one of the top-level headings should be a catch-all "None of the above", which represents your estimated probability that you left something out.
David_Chapman · 10y · 2 points
That's good, yes! How would you assign a probability to that?
ialdabaoth · 10y · 6 points
Ideally, by looking at the number of times that I've experienced out-of-context problems in the past. You can optimize further by creating models that predict the base amount of novelty in your current environment: if you have reason to believe that your current environment is more unusual or novel than normal, increase your assigned "none of the above" proportionally.

(And conversely, whenever evidence triggers the creation of a new top-level heading, that top-level heading's probability should get sliced out of the "none of the above"; but the fact that you had to create a top-level heading should be used as evidence that you're in a novel environment, thus slightly increasing ALL "none of the above" categories. If you're using hard-coded heuristics instead of actually computing probability tables, this might come out as a form of hypervigilance and/or curiosity triggered by novel stimulus.)
CoffeeStain · 10y · 4 points
"How often do listing sorts of problems with some reasonable considerations result in an answer of 'None of the above' for me?" If "reasonable considerations" are not available, then we can still: "How often did listing sorts of problems with no other information available result in an answer of 'None of the above' for me?" Even if we suppose that maybe this problem bears no resemblance to any previously encountered problem, we can still (because the fact that it bears no resemblance is itself a signifier): "How often did problems I'd encountered for the first time have an answer I never thought of?"
philh · 10y · 0 points
The probability assigned to "none of the above" should be smaller than your probability that you left something out, since "none of the above is true" is a strict subset of "I left out a possibility". (It's possible I misinterpreted you, so apologies if I'm stating the obvious.)
[anonymous] · 10y · 3 points
I'm currently mostly wondering how I get the black box to do anything at all, and particularly how I can protect myself against the dangerous things it might be feasible for an eldritch box to do.
dspeyer · 10y · 2 points
A universal ontology is intractable, no argument there. As is a tree of (meta)*-probabilities. My point was about how to regard the problem. As for an actual solution, we start with propositions like "this box has a nontrivial potential to kill, injure or madden me." I can find a probability for that based on my knowledge of you and on what you've said. If the probability is small enough, I can subdivide that by considering another proposition.
David_Chapman · 10y · 0 points
One aspect of what I consider the correct solution is that the only question that needs to be answered is "do I think putting a coin in the box has positive or negative utility", and one can answer that without any guess about what it is actually going to do. What is your base rate for boxes being able to drive you mad if you put a coin in them? Can you imagine any mechanism whereby a box would drive you mad if you put a coin in it? (I can't.)
dspeyer · 10y · 1 point
Given that I'm inside a hypothetical situation proposed on lesswrong, the likelihood of being inside a Lovecraft crossover or something similar is about .001. Assuming a Lovecraft crossover, the likelihood of a box marked in eldritch runes containing some form of Far Realm portal is around .05. So say .0005 from that method, which is what was on my mind when I wrote that.
Vaniver · 10y · 0 points
Perhaps sticking a coin in it triggers the release of some psychoactive gas or aerosol?
David_Chapman · 10y · 0 points
Are there any psychoactive gases or aerosols that drive you mad? I suppose a psychedelic might push someone over the edge if they were sufficiently psychologically fragile. I don't know of any substances that specifically make people mad, though.
Vaniver · 10y · 0 points
I'm not a psychiatrist. Maybe? It looks like airborne transmission of prions might be possible, and along an unrelated path the box could go the Phineas Gage route.
Bayeslisk · 10y · 0 points
Alternatively, aerosolized agonium, for adequate values of sufficiently long-lived and finely-tuned agonium.

This is now a situation of radical uncertainty.

The Bayesian Universalist answer to this would be that there is no separate meta-probability. You have a universal prior over all possible hypotheses, and mutter a bit about Solomonoff induction and AIXI.

I am putting it this way, distancing myself from the concept, because I don't actually believe it, but it is the standard answer to draw out from the LessWrong meme space, and it has not yet been posted in this thread. Is there anyone who can make a better fist of expounding it?

David_Chapman · 10y · 3 points
Yes, I'm not at all committed to the metaprobability approach. In fact, I concocted the black box example specifically to show its limitations! Solomonoff induction is extraordinarily unhelpful, I think... that it is uncomputable is only one reason. I think there's a fairly simple and straightforward strategy to address the black box problem, which has not been mentioned so far...
[anonymous] · 10y · 4 points
Because its output is not human-readable being the other? I mean, even if I've got a TARDIS to use as a halting oracle, an Inductive Turing Machine isn't going to output something I can actually use to make predictions about specific events such as "The black box gives you money under X, Y, and Z circumstances."
David_Chapman · 10y · 3 points
Well, the problem I was thinking of is "the universe is not a bit string." And any unbiased representation we can make of the universe as a bit string is going to be extremely large—much too large to do even sane sorts of computation with, never mind Solomonoff. Maybe that's saying the same thing you did? I'm not sure...
torekp · 10y · 5 points
Can you please give us a top level post at some point, be it in Discussion or Main, arguing that "the universe is not a bit string"? I find that very interesting, relevant, and plausible.
David_Chapman · 10y · 2 points
Thanks for the encouragement! I have way too many half-completed writing projects, but this does seem an important point.
Richard_Kennaway · 10y · 3 points
Going back to the basic question about the black box: Too small to be worth considering. I might as well ask, what's the probability that I'll find $2 hidden half way up the nearest tree? Nothing has been claimed about the black box to specifically draw "it will pay you $2 for $1" out of hypothesis space.
David_Chapman · 10y · 5 points
Hmm... given that the previous several boxes have either paid $2 or done nothing, it seems like that primes the hypothesis that the next in the series also pays $2 or does nothing. (I'm not actually disagreeing, but doesn't that argument seem reasonable?)
Richard_Kennaway · 10y · 0 points
Priming a hypothesis merely draws it to attention; it does not make it more likely. Every piece of spam, every con game, "primes the hypothesis" that it is genuine. It also "primes the hypothesis" that it is not. "Priming the hypothesis" is no more evidence than a purple giraffe is evidence of the blackness of crows. Explicitly avoiding saying that it does pay $2, and saying instead that it is "interesting", well, that pretty much stomps the "priming" into a stain on the sidewalk.
linkhyrule5 · 10y · 1 point
.... purple giraffes are evidence of the blackness of crows, though. Just, really really terrible evidence.
Richard_Kennaway · 10y · 1 point
Well, yes. As is the mere presence of the idea of $2 for $1 terrible evidence that the black box will do any such thing. Eliezer speaks in the Twelve Virtues of letting oneself be as light as a leaf, blown unresistingly by the wind of evidence, but evidence of this sort is on the level of the individual molecules and Brownian motion of that leaf.
Watercressed · 10y · 0 points
It depends on your priors
DanielLC · 10y · 2 points
You can give a meta-probability if you want. However, this makes no difference in your final result. If you are 50% certain that a box has a diamond in it with 20% probability, and you are 50% certain that it has a diamond with 30% probability, then you are 50% sure that it has an expected value of 0.2 diamonds and 50% sure that it has an expected value of 0.3 diamonds, so it has an expected expected value of 0.25 diamonds. Why not just be 25% sure from the beginning?

Supposedly, David gave an example of meta-probability being necessary in the earlier post he references. However, using conditional probabilities gives you the right answer. There is a difference between a gambling machine having independent 50% chances of giving out two coins when you put in one, and one that has a 50% chance the first time, but a 100% chance of giving out two coins on the nth trial given that it did the first time, and a 0% chance given that it did not. Since there are times when you need conditional probabilities and meta-probabilities won't suffice, you need conditional probabilities anyway, so why bother with meta-probabilities?

That's not to say that meta-probabilities can't be useful. If the probability of A depends on B, and all you care about is A, meta-probabilities will model this perfectly, and will be much simpler to use than conditional probabilities. A good example of a successful use of meta-probabilities is Student's t-distribution, which can be thought of as a distribution of normal distributions, in which the standard deviation itself has a probability distribution.
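A minimal simulation of the two gambling machines described in this comment. The 50% figures are from the comment; the simulation itself is just an illustration of why the conditional structure matters:

```python
import random

def independent_machine():
    """Each coin has an independent 50% chance of paying out two coins."""
    return lambda: random.random() < 0.5

def correlated_machine():
    """50% chance of a machine that always pays, 50% chance of one that never does."""
    always_pays = random.random() < 0.5
    return lambda: always_pays

def p_second_pays_given_first(make_machine, trials=100_000):
    """Estimate P(second coin pays | first coin paid) by simulation."""
    paid_first = paid_both = 0
    for _ in range(trials):
        machine = make_machine()
        if machine():
            paid_first += 1
            if machine():
                paid_both += 1
    return paid_both / paid_first

random.seed(0)
print(p_second_pays_given_first(independent_machine))   # ~0.5
print(p_second_pays_given_first(correlated_machine))    # ~1.0
# A single play can't distinguish the machines (both pay half the time overall),
# but their conditional behavior, which the meta-probability tracks, differs sharply.
```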

The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.

My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the differ... (read more)

Vaniver · 10y · 4 points
I find it helpful to think of "the optimal way to play game X" as "design the mind that is best at playing game X." Does that not seem helpful to you?
CoffeeStain · 10y · 5 points
It is helpful, and was one of the ways that helped me to understand One-boxing on a gut level.

And yet, when the problem space seems harder, when "optimal" becomes uncomputable and wrapped up in the fact that I can't fully introspect, playing certain games doesn't feel like designing a mind. Although, this is probably just due to the fact that games have time limits, while mind-design is unconstrained. If I had an eternity to play any given game, I would spend a lot of time introspecting, changing my mind into the sort that could play iterations of the game in smaller time chunks. Although there would still always be a part of my brain (that part created in motion) that I can't change. And I would still use that part to play the black box game.

In regards to metaprobabilities, I'm starting to see the point. I don't think it alters any theory about how probability "works," but its intuitive value could be evidence that optimal AIs might be able to more efficiently emulate perfect decision theory with CalcMetaProbability implemented. And it's certainly useful to many here.
Gunnar_Zarncke · 10y · 0 points
But the point about meta probability is that we do not have the nodes. Each meta level corresponds to one nesting of networks in nodes. Only in so far as you approximate yourself simply as per above. This discards information.
CoffeeStain · 10y · 3 points
Think of Bayesian graphs as implicitly complete, with the set of nodes being every thing to which you have a referent. If you can even say "this proposition" meaningfully, a perfect Bayesian implemented as a brute-force Bayesian network could assign it a node connected to all other nodes, just with trivial conditional probabilities that give the same results as an unconnected node. A big part of this discussion has been whether some referents (like black boxes) actually do have such trivial conditional probabilities which end up returning an inference of 50%. It certainly feels like some referents should have no precedent, and yet it also feels like we still don't say 50%. This is because they actually do have precedent (and conditional probabilities), it's just that our internal reasonings are not always consciously available.
Gunnar_Zarncke · 10y · 1 point
Sure, you can always use the total net of all possible propositions. But the set of all propositions is intractable. It may not even be sensibly enumerable. For nested nets at least you can construct the net of the powerset of the nodes and that will do the job - in theory. In practice even that is horribly inefficient. And even though our brain is massively parallel, it surely doesn't do that.
David_Chapman · 10y · 0 points
Well, regardless of the value of metaprobability, or its lack of value, in the case of the black box, it doesn't seem to offer any help in finding a decision strategy. (I find it helpful in understanding the problem, but not in formulating an answer.) How would you go about choosing a strategy for the black box?
CoffeeStain · 10y · 5 points
My LessWrongian answer is that I would ask my mind that was created already in motion what the probability is, then refine it with as many further reflections as I can come up with. Embody an AI long enough in this world, and it too will have priors about black boxes, except that reporting that probability in the form of a number is inherent to its source code rather than strange and otherworldly like it is for us.

The point that was made in that article (and in the Metaethics sequence as a whole) is that the only mind you have to solve a problem is the one that you have, and you will inevitably use it to solve problems unoptimally, where "unoptimal", if taken strictly, means everything anybody has ever done. The reflection part of this is important, as it's the only thing we have control over, and I suppose could involve discussions about metaprobabilities.

It doesn't really do it for me though, although I'm only just a single point in the mind design space. To me, metaprobability seems isomorphic to a collection of reducible considerations, and so doesn't seem like a useful shortcut or abstraction.

My particular strategy for reflection would be something like that in dspeyer's comment: reasoning about the source of the box, and about possibilities for what could be in the box that I might reasonably expect to be there. Depending on how much time I have, I'd be very systematic about it, listing out possibilities, solving infinite series on expected value, etc.
David_Chapman · 10y · 0 points
Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases. Maybe I ought to have made that clearer! The approach I would take to the black box does not rely on metaprobability, so let's set that aside.

So, your mind is already in motion, and you do have priors about black boxes. What do you think you ought to do in this case?

I don't want to waste your time with that... Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating things the box could possibly do would be helpful at all. Isn't there an easier approach?
CoffeeStain · 10y · 2 points
Ah! I didn't quite pick up on that. I'll note that infinite regress problems aren't necessarily defeaters of an approach. Good minds that could fall into that trap implement a "Screw it, I'm going to bed" trigger to keep from wasting cycles even when using an otherwise helpful heuristic.

Maybe, but I can't guarantee you won't get blown up by a black box with a bomb inside! As a friend, I would be furiously lending you my reasoning to help you make the best decision, worrying very little about what minds better and faster than both of ours would be able to do.

It is, at the end of the day, just the General AI problem: don't think too hard on brute-force but perfect methods, or else you might skip a heuristic that could have gotten you an answer within the time limit! But how do you know whether the time limit is at that threshold? You could spend cycles on that too, but time is wasting!

Time-limit games presume that the participant has already undergone a lot of unintentional design (by evolution, history, past reflections, etc.). This is the "already in motion" part which, frustratingly, cannot ever be optimal unless somebody on the outside designed you for it. Which source code performs best under which game is a formal problem. Being a source code involves taking the discussion we're having now and applying it the best you can, because that's what your source code does.
David_Chapman · 10y · 2 points
Yes—this is part of what I'm driving at in this post! The kinds of problems that probability and decision theory work well for have a well-defined set of hypotheses, actions, and outcomes. Often the real world isn't like that.

One point of the black box is that the hypothesis and outcome spaces are effectively unbounded. Trying to enumerate everything it could do isn't really feasible. That's one reason the uncertainty here is "Knightian" or "radical." In fact, in the real world, "and then you get eaten by a black hole incoming near the speed of light" is always a possibility. Life comes with no guarantees at all. Often in Knightian problems you are just screwed and there's nothing rational you can do.

But in this case, again, I think there's a straightforward, simple, sensible approach (which so far no one has suggested...)
CoffeeStain · 10y · 3 points
As you know, this attitude isn't particularly common 'round these parts, and while I fall mostly in the "Decision theory can account for everything" camp, there may still be a point there. "Rational" isn't really a category so much as a degree. Formally, it's a function on actions that somehow measures how much that action corresponds to the perfect decision-theoretic action.

My impression is that somewhere there's a Gödelian consideration lurking, which is where the "Omega fines you exorbitantly for using TDT" thought experiment comes into play. That thought experiment never bothered me much, as it just is what it is: a problem where you are just screwed, and there's nothing rational you can do to improve your situation. You've already rightly programmed yourself to use TDT, and even your decision to stop using TDT would be made using TDT, and unless Omega is making exceptions for that particular choice (in which case you should self-modify to non-TDT), Omega is just a jerk that goes around fining rational people. In such situations, the words "rational" and "irrational" are less useful descriptors than just observing source code being executed. If you're formal about it using metric R, then you would be more R, but its correlation to "rational" wouldn't really be the point.

So, I don't think the black box is really one of the situations I've described. It seems to me a decision theorist training herself to be more generally rational is in fact improving her odds at winning the black box game. All the approaches outlined so far do seem to also improve her odds. I don't think a better solution exists, and she will often lose if she lacks time to reflect. But the more rational she is, the more often she will win.

I don't have a full strategy, but I have an idea for a data-gathering experiment:

I hand you a coin and try to get you to put it in the box for me. If you refuse, I update in the direction of the box harming people who put coins in it. If you comply, I watch and see what happens.

David_Chapman · 10y · 0 points
Excellent! This is very much pointing in the direction of what I consider the correct general approach. I hadn't thought of what you suggest specifically, but it's an instance of the general category I had in mind.

Meta-probability seems like something that is reducible to expected outcomes and regular probability. I mean, what kind of box the black box is, is nothing more than what you expect it to do conditional on what you might have seen it do. If it gives you three dollars the next three times you play it, you'd then expect the fourth time to also give you three dollars (4/5ths of the time, via Bayes' Theorem, via Laplace's Rule of Succession).

Meta-probability may be a nifty shortcut, but it's reducible to expected outcomes and conditional probability.
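For reference, a minimal version of the Rule of Succession calculation mentioned above:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's Rule of Succession: probability of success on the next trial,
    starting from a uniform prior over the unknown success rate."""
    return Fraction(successes + 1, trials + 2)

print(rule_of_succession(3, 3))   # 4/5, matching the figure in the comment above
```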

V_V · 10y · 2 points
Laplace's Rule of Succession can only be used once you have identified a set of possible outcomes and made certain assumptions on the underlying probability distribution. That's not the case at hand. Applying Bayesian reasoning to such cases requires a universal prior. It could be that humans do a form of approximate Bayesian reasoning with something like a universal prior when reasoning informally, but we know of no satisfactory way of formalizing that reasoning in mathematical terms.

I wonder how his work in decision theory might have affected his decision to leak the Papers?

Obviously he was a rational thinker. And that seems to have implied thinking outside of the rules and customs. For him, leaking the papers was just one nontrivial option among lots.

A few terminological headaches in this post. Sorry for the negative tone.

There is talk of a "fixed but unknown probability," which should always set alarm bells ringing.

More generally, I propose that whenever one assigns a probability to some parameter, that parameter is guaranteed not to be a probability.

I am also disturbed by the mention of Knightian uncertainty, described as "uncertainty that can't be usefully modeled in terms of probability." Now there's a charitable interpretation of that phrase, and I can see that there may be a ps... (read more)

I throw the box into the corner of the room with a high pitched scream of terror. Then I run away to try to find thermite.

Edit: then I throw the ashes into a black hole, and trigger a True Vacuum collapse just in case.

ygert · 10y · 4 points
This raises the very important point that the overwhelming majority of worldstates are bad, bad, bad, and so when presented with a box that could give literally any outcome, running might be a good idea. (Metaphorically, that is. I doubt it would do you much good.)
Zvi · 10y · 2 points
I think backing away slowly and quietly is the better play. The box might feast off your screams or sense your fear.
ialdabaoth · 10y · 7 points
Then again, screams might hurt it. That's the problem with true radical uncertainty - if you're sufficiently uncertain that you can't even conjecture about meta-probabilities, how do you know if ANY action (or lack of action) might have a net positive or negative outcome?

You need to take advantage of the fact that probability is a consequence of incomplete information, and think about the models of the world people have that encode their information. "Meta-probability" only exists within a certain model of the problem, and if you totally ignore that you get some drastically confusing conclusions.

David_Chapman · 10y · 1 point
So, how would you analyze this problem, more specifically? What do you think the optimal strategy is?

The problem of what to expect from the black box?

I'd think about it like this: suppose that I hand you a box with a slot in it. What do you expect to happen if you put a quarter into the slot?

To answer this we engage our big amount of human knowledge about boxes and people who hand them to you. It's very likely that nothing at all will happen, but I've also seen plenty of boxes that also emit sound, or gumballs, or temporary tattoos, or sometimes more quarters. But suppose that I have previously handed you a box that emits more quarters sometimes when you put quarters in. Then maybe you raise the probability that it also emits quarters, et cetera.

Now, within this model you have a probability of some payoff, but only if it's one of the reward-emitting boxes, and it also has some probability of emitting sound etc. What you call a "meta-probability" is actually the probability of some sub-model being verified or confirmed. Suppose I put one quarter in and two quarters come out - now you've drastically cut down the models that can describe the box. This is "updating the meta-probability."
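A minimal sketch of the sub-model updating described here, with made-up priors and likelihoods purely for illustration:

```python
# Crude sub-models of the box, with made-up priors, and each model's probability
# of emitting two quarters when you feed it one.
models = {
    "does nothing":           {"prior": 0.70, "p_two_quarters": 0.00},
    "sometimes pays 2-for-1": {"prior": 0.10, "p_two_quarters": 0.50},
    "makes sound only":       {"prior": 0.15, "p_two_quarters": 0.00},
    "something stranger":     {"prior": 0.05, "p_two_quarters": 0.05},
}

# Observation: one quarter in, two quarters out. Bayes' rule over the sub-models.
evidence = sum(m["prior"] * m["p_two_quarters"] for m in models.values())
posterior = {name: m["prior"] * m["p_two_quarters"] / evidence
             for name, m in models.items()}
print(posterior)
# "sometimes pays 2-for-1" now carries almost all of the probability mass;
# "updating the meta-probability" here is just updating which sub-model is right.
```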

David_Chapman · 10y · 2 points
Of comments so far, this comes closest to the answer I have in mind... for whatever that's worth!
Armok_GoB · 10y · 2 points
It also has eldritch markers, is being used in a decision theory experiment, and was given in association with ominous wording. These indicate it does something nasty.
Manfred · 10y · 8 points
I guess it will raise the probability a little bit, but out of all eldritch-marked things I've ever seen, about 100% have been ornamental. We can't over-weight small probabilities just because they're vivid.
Armok_GoB · 10y · 1 point
... Maybe we're getting different mental images for "eldritch". I assumed things that'd get me banned to even vaguely describe, not tentacles and pentagrams.

I like this article / post but I find myself wanting more at the end. A payoff or a punch line or at least a lesson to take away.

David_Chapman · 10y · 2 points
Well, I hope to continue the sequence... I ended this article with a question, or puzzle, or homework problem, though. Any thoughts about it?
Bayeslisk · 10y · 2 points
IMO the correct response is to run like hell from the box. In Thingspace, most things are very unfriendly, in much the same way that most of Mindspace contains unfriendly AIs.
Armok_GoB · 10y · 6 points
Technically, almost all things in thingspace are high energy plasma. Edit: actually most of them are probably some kind of exotic (anti-, strange-, dark- etc.) matter that'll blow up the planet.
Bayeslisk · 10y · 3 points
The high-energy exotic plasma not from this universe does not love or hate you. Your universe is simply a false vacuum with respect to its home universe's, which it accidentally collapses.
David_Chapman · 10y · 2 points
So... you think I am probably evil, then? :-) I gave you the box (in the thought experiment). I may not have selected it from Thingspace at random! In fact, there's strong evidence in the text of the OP that I didn't...
Bayeslisk · 10y · 4 points
I am pattern-matching from fiction on "black box with evil-looking inscriptions on it". Those do not tend to end well for anyone. Also, what do you mean by strong evidence that the box is less harmful than a given random object from Thingspace? I can /barely sort of/ see "not a random object from Thingspace"; I cannot see "EV(U(spoopy creppy black box)) > EV(U(object from Thingspace))".
Bayeslisk · 10y · 3 points
EBWOP: On further reflection I find that since most of Thingspace instantaneously destroys the universe, EV(U(spoopy creppy black box)) >>> EV(U(object from Thingspace)). However, what I was trying to get at was that EV(U(spoopy creppy black box)) <= EV(U(representative object from-class: chance-based deal boxes with "normal" outcomes)) <= EV(U(representative object from-class: chance-based deal boxes with Thingspace-like outcomes)) <= EV(U(representative object from-class: chance-based deal boxes with terrifyingly creatively imaginable outcomes))
David_Chapman · 10y · 1 point
The evidence that I didn't select it at random was my saying “I find this one particularly interesting.” I also claimed that "I'm probably not that evil." Of course, I might be lying about that! Still, that's a fact that ought to go into your Bayesian evaluation, no?
Bayeslisk · 10y · 3 points
"Interesting" tends to mean "whatever it would be, it does that more" in the context of possibly psuedo-Faustian bargains and signals of probable deceit. From what I know, I do not start with reason to trust you, and the evidence found in the OP suggests that I should update the probability that you are concealing information updating on which would lead me not to use the black box to "much higher".
David_Chapman · 10y · 4 points
Oh, goodness, interesting, you do think I'm evil! I'm not sure whether to be flattered or upset or what. It's kinda cool, anyway!
Bayeslisk · 10y · 2 points
I think that avatar-of-you-in-this-presented-scenario does not remotely have avatar-of-me-in-this-scenario's best interests at heart, yes.
Transfuturist · 9y · 0 points
I hope you continue the sequence as well. :V