David_Chapman comments on Probability and radical uncertainty - Less Wrong

11 points · Post author: David_Chapman 23 November 2013 10:34PM


Comment author: David_Chapman 24 November 2013 04:47:22AM 3 points [-]

This is interesting—it seems like the project here would be to construct a universal, hierarchical ontology of every possible thing a device could do? This seems like a very big job... how would you know you hadn't left out important possibilities? How would you go about assigning probabilities?

(The approach I have in mind is simpler...)

Comment author: ialdabaoth 24 November 2013 04:53:02AM 4 points [-]

how would you know you hadn't left out important possibilities?

At least one of the top-level headings should be a catch-all "None of the above", which represents your estimated probability that you left something out.

Comment author: David_Chapman 24 November 2013 05:08:20AM 1 point [-]

That's good, yes!

How would you assign a probability to that?

Comment author: CoffeeStain 24 November 2013 05:55:05AM 3 points [-]

"How often do listing sorts of problems with some reasonable considerations result in an answer of 'None of the above' for me?"

If "reasonable considerations" are not available, then we can still:

"How often did listing sorts of problems with no other information available result in an answer of 'None of the above' for me?"

Even if we suppose that maybe this problem bears no resemblance to any previously encountered problem, we can still ask (because the fact that it bears no resemblance is itself a signifier):

"How often did problems I'd encountered for the first time have an answer I never thought of?"

Comment author: ialdabaoth 24 November 2013 05:33:08AM 3 points [-]

Ideally, by looking at the number of times that I've experienced out-of-context problems in the past. You can optimize further by creating models that predict the base amount of novelty in your current environment - if you have reason to believe that your current environment is more unusual / novel than normal, increase your assigned "none of the above" proportionally.

(And conversely, whenever evidence triggers the creation of a new top-level heading, that heading's probability should get sliced out of the "none of the above"; but the fact that you had to create a new top-level heading should itself be used as evidence that you're in a novel environment, thus slightly increasing ALL "none of the above" categories. If you're using hard-coded heuristics instead of actually computing probability tables, this might come out as a form of hypervigilance and/or curiosity triggered by novel stimuli.)
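The two moves described above - carving a new heading's probability out of the catch-all, then treating the very need to do so as evidence of novelty - can be sketched in a few lines. The probability dict, the 50/50 split rule, and the size of the novelty bump are all illustrative assumptions, not part of the comment.

```python
def add_heading(probs, name, split=0.5, novelty_bump=0.02):
    """Add a new top-level heading to a dict of probabilities that
    includes a "none of the above" catch-all.

    The new heading's mass is sliced out of the catch-all; then a small
    fraction of every named heading's mass is shifted back into the
    catch-all, reflecting that the environment is more novel than assumed.
    Total probability is conserved throughout."""
    # Slice the new heading's probability out of "none of the above".
    carved = probs["none of the above"] * split
    probs[name] = carved
    probs["none of the above"] -= carved
    # Creating a heading is itself evidence of novelty: tax each named
    # heading slightly and return that mass to the catch-all.
    for k in list(probs):
        if k != "none of the above":
            moved = probs[k] * novelty_bump
            probs[k] -= moved
            probs["none of the above"] += moved
    return probs
```

Note that the catch-all first shrinks (the carve) and then grows a little (the novelty bump), matching the "conversely" clause above.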

Comment author: philh 25 November 2013 02:27:42PM 0 points [-]

which represents your estimated probability that you left something out.

The probability assigned to "none of the above" should be smaller than your probability that you left something out, since "none of the above is true" is a strict subset of "I left out a possibility".

(It's possible I misinterpreted you, so apologies if I'm stating the obvious.)

Comment author: dspeyer 30 November 2013 03:21:38PM 2 points [-]

A universal ontology is intractable, no argument there. As is a tree of (meta)*-probabilities. My point was about how to regard the problem.

As for an actual solution, we start with propositions like "this box has a nontrivial potential to kill, injure, or madden me." I can find a probability for that based on my knowledge of you and on what you've said. If the probability is small enough, I can subdivide it by considering another proposition.

Comment author: David_Chapman 01 December 2013 01:26:19AM 0 points [-]

One aspect of what I consider the correct solution is that the only question that needs to be answered is "do I think putting a coin in the box has positive or negative utility", and one can answer that without any guess about what it is actually going to do.

What is your base rate for boxes being able to drive you mad if you put a coin in them?

Can you imagine any mechanism whereby a box would drive you mad if you put a coin in it? (I can't.)
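The decision rule described above - that only the sign of the expected utility matters, not a model of the box's mechanism - can be sketched as a plain expected-utility sum. The outcome list and the numbers in it are purely illustrative assumptions; the point is that a small catch-all probability on a catastrophic outcome can dominate a modest expected gain.

```python
def coin_in_box_eu(outcomes):
    """Expected utility of putting a coin in the box.
    outcomes: list of (probability, utility) pairs, including a
    'none of the above' catch-all; probabilities should sum to 1."""
    return sum(p * u for p, u in outcomes)

# Illustrative numbers only: even at probability 0.0005, a
# sufficiently bad catch-all outcome flips the sign of the decision.
outcomes = [
    (0.90,   1.0),     # box does something mildly rewarding
    (0.0995, 0.0),     # box does nothing
    (0.0005, -10000),  # catch-all: kills, injures, or maddens me
]
```

Here the expected utility is 0.9 - 5 = -4.1, so the coin stays in your pocket - without ever guessing what the box actually does.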

Comment author: dspeyer 01 December 2013 05:34:24AM 0 points [-]

Given that I'm inside a hypothetical situation proposed on lesswrong, the likelihood of being inside a Lovecraft crossover or something similar is about .001. Assuming a Lovecraft crossover, the likelihood of a box marked in eldritch runes containing some form of Far Realm portal is around .05. So say .00005 from that method, which is what was on my mind when I wrote that.
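The estimate above is a chained conditional probability, P(portal) = P(crossover) x P(portal | crossover); checking the arithmetic (with the comment's own figures):

```python
# The two estimates from the comment above, multiplied out.
p_crossover = 0.001    # in a Lovecraft crossover, given a lesswrong hypothetical
p_portal_given = 0.05  # runed box holds a Far Realm portal, given the crossover
p_portal = p_crossover * p_portal_given  # ~5e-05
```

Note the product is 5 x 10^-5, an order of magnitude smaller than a naive reading of ".0005" would suggest.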

Comment author: Vaniver 01 December 2013 02:37:19AM *  0 points [-]

Can you imagine any mechanism whereby a box would drive you mad if you put a coin in it? (I can't.)

Perhaps sticking a coin in it triggers the release of some psychoactive gas or aerosol?

Comment author: David_Chapman 01 December 2013 06:12:42PM 0 points [-]

Are there any psychoactive gases or aerosols that drive you mad?

I suppose a psychedelic might push someone over the edge if they were sufficiently psychologically fragile. I don't know of any substances that specifically make people mad, though.

Comment author: Vaniver 01 December 2013 06:52:44PM *  0 points [-]

I'm not a psychiatrist. Maybe? It looks like airborne transmission of prions might be possible, and along an unrelated path the box could go the Phineas Gage route.

Comment author: Bayeslisk 10 December 2013 09:43:39AM 0 points [-]

Alternatively, aerosolized agonium, for adequate values of sufficiently long-lived and finely-tuned agonium.

Comment author: [deleted] 24 November 2013 11:28:21PM 2 points [-]

I'm currently mostly wondering how I get the black box to do anything at all, and particularly how I can protect myself against the dangerous things it might be feasible for an eldritch box to do.