
Comment author: lackofcheese 01 October 2014 11:57:46PM *  0 points [-]

Yes, but the programs that AIXI maintains internally in its model ensemble are defined as input-less programs that generate all the possible histories.

Please clarify this and/or give a reference. Every time I've seen the equation, AIXI's actions are inputs to the environment program.

How can they be simpler, given that you have explained to AIXI what Newcomb's problem is and provided it with enough evidence so that it really believes that it is going to face it?

The point of Newcomb's problem is that the contents of the box are already predetermined; it's stipulated that as part of the problem setup you are given enough evidence of this. In general, any explanation that involves AIXI's action directly affecting the contents of the box will be more complex because it bypasses the physics-like explanation that AIXI would have for everything else.

When I am facing Newcomb's problem I don't believe that the box magically changes contents as the result of my action---that would be stupid. I believe that the box already has the million dollars because I'm predictably a one-boxer, and then I one-box.

Similarly, if AIXI is facing Newcomb's problem then it should, without a particularly large amount of evidence, also narrow its environment programs down to ones that already contain the million and ones that already do not.

EDIT: Wait, perhaps we agree re. the environment programs.

AIXI filters them for the one observed history and then evaluates the expected (discounted) reward over the future histories, for each possible choice of its next action.

Yes, for each possible choice. As such, if AIXI has an environment program "q" in which Omega already predicted one-boxing and put the million dollars in, AIXI will check the outcome of OneBox as well as the outcome of TwoBox with that same "q".
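
A minimal Haskell sketch of this evaluation, under the toy assumption that each surviving environment program is just a reward function of the next action, weighted by a 2^-length prior; Env, runEnv, the description lengths and the payoffs are all illustrative assumptions, not the actual AIXI formalism:

```haskell
-- Toy sketch (not the real AIXI definition) of evaluating each candidate
-- action against the same surviving environment programs.

data Action = OneBox | TwoBox deriving (Eq, Show)

-- An environment program: a reward function of the next action, plus a
-- description length used for a 2^-length Solomonoff-style prior weight.
data Env = Env { envName :: String, envLength :: Int, runEnv :: Action -> Double }

-- Two environments consistent with the observed history: in one Omega has
-- already put the million in the opaque box, in the other it has not.
millionIn, millionOut :: Env
millionIn = Env "million-already-in" 10 r
  where
    r OneBox = 1000000
    r TwoBox = 1001000
millionOut = Env "box-already-empty" 12 r
  where
    r OneBox = 0
    r TwoBox = 1000

-- 2^-length prior weight over the surviving environments.
weight :: Env -> Double
weight q = 2 ** negate (fromIntegral (envLength q))

-- Expected reward of an action: note that the *same* q is consulted for
-- OneBox and for TwoBox, which is the point made above.
expectedReward :: [Env] -> Action -> Double
expectedReward qs a = sum [weight q * runEnv q a | q <- qs] / sum (map weight qs)

main :: IO ()
main = mapM_ (\a -> print (a, expectedReward [millionIn, millionOut] a)) [OneBox, TwoBox]
```

In this toy setup TwoBox beats OneBox under both environments, which is exactly the same-q comparison being described: whether or not the million is already there is fixed before the action is chosen.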

Comment author: lackofcheese 01 October 2014 11:51:09PM 0 points [-]

I figured that would be the case (it was, after all, the top entry when Googling for it), but since you never changed your post to reflect this fact I decided it would be best to bring it up just in case.

Comment author: eli_sennesh 01 October 2014 11:44:04PM 0 points [-]

If this means what I think it means, then yes, this was my interpretation too - plus assuming the truth or falsity of propositions that are undecided by the axioms.

Or assigning degrees of belief to the axioms themselves! Any mutually-consistent set of statements will allow some probability assignment in which each statement has a nonzero degree of belief.

If you assign probability 1 to the axioms, there is only one correct distribution for anything that follows from the axioms, like a digit of pi (or as the kids are calling it these days, $\pi$): probability 1 for the right answer, 0 for the wrong answers. If you want logical probabilities to deviate from this (like by being ignorant about some digit of pi), then logical probabilities cannot follow the same rules as probabilities.

Which is actually an issue I did bring up, in proof-theoretic language. Let's start by setting the term "certain distribution" to mean "1.0 to the right answer, 0.0 to everything else", and then set "uncertain distribution" to be "all distributions wider than this".

Except that actually means we're talking about something like a relationship between head-normal-form proof objects and non-normalized proof objects,

A non-normalized proof object (read: the result of a computation we haven't done yet, a lazily evaluated piece of data) has an uncertain distribution over values. To head-normalize a proof object means to evaluate enough of it to decide (compute the identity of, with certainty) the outermost introduction rule (outermost data constructor), but (by default) this still leaves us uncertain of the premises which were given to that introduction rule (the parameters passed to the data constructor). As we proceed down the levels of the tree of deduction rules, head-normalizing as we go, we eventually arrive at a completely normalized proof-term/piece of data, for which no computation remains undone.
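
As a rough Haskell analogue (my example, identifying head normalization with forcing to weak head normal form):

```haskell
-- Pattern matching forces a lazy value only to weak head normal form,
-- deciding the outermost data constructor while leaving its argument as
-- an unevaluated thunk.  The Peano numeral below is an illustrative
-- stand-in for a "proof object we haven't finished computing".

data Nat = Zero | Succ Nat

-- An expensive, lazily built numeral: fully normalizing it would take a
-- million constructor steps.
expensive :: Nat
expensive = foldr (const Succ) Zero (replicate 1000000 ())

-- Decide only the outermost constructor; the argument of Succ stays
-- unevaluated after the match.
outermost :: Nat -> String
outermost Zero     = "Zero"
outermost (Succ _) = "Succ"

main :: IO ()
main = putStrLn (outermost expensive)  -- prints "Succ" without doing the full million steps
```

Matching on `Succ _` corresponds to knowing the outermost introduction rule with certainty while remaining uncertain about everything beneath it.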

Only a completely normalized proof term has a certain distribution: 1.0 for the right answer, 0.0 for everything else. In all other cases, we have an uncertain distribution, albeit one in which all the probability mass may be allocated in the particular region associated with only one outermost introduction rule (the special case of head-normalized terms).

Remember, these distributions are being assigned over proof terms (i.e., constructions in some kind of lambda calculus), not over syntactic values (well-typed and fully normalized terms). The distributions express our state of knowledge given the limited certainty granted by however many levels of the tree (all terms of inductive types can be written as trees) are already normalized to data constructors -- they express uncertain states of knowledge in situations where the uncertainty derives from having a fraction of the computational power necessary to decide a decidable question.

So when we're talking about "digits of $\pi$" sorts of problems, we should actually speak of distributions over computational terms, not "logical probabilities". This problem has nothing to do with assigning probabilities to sentences in first-order classical logic: by the time we construct and reason with our distributions over computational terms, we have already established the existence of a proof object inhabiting a particular type. If that type happened to be a mere proposition, then we've already established, by the time we construct this distribution, that we believe in the truth of the proposition, and are merely reasoning under discrete uncertainty over how its truth was proof-theoretically asserted.

Whereas I think that establishing probabilities for sentences which are not decided by our given axioms may be more difficult, particularly when those sentences may be decidable by other means.

Comment author: shminux 01 October 2014 09:51:12PM *  0 points [-]

I don't think they are pure speculations. This is not the shipowner's first launch, so the speculations over possible worlds can be approximated by observations over past decisions.

Comment author: Manfred 01 October 2014 09:35:54PM *  1 point [-]

This would give you a uniform prior over digits when asking about the nth digit of $\pi$.

This is very likely a non sequitur. But yes, I agree that in this formalism it's tough to express the probability of a sentence.

free-standing assumptions (unbound variables) being used only for things like empirical hypotheses, as opposed to computations we just haven't managed to do.

If this means what I think it means, then yes, this was my interpretation too - plus assuming the truth or falsity of propositions that are undecided by the axioms.

In light of "that stuff" (probabilities over digits of \pi) being largely separate from a full way of establishing probability values for arbitrary logical formulas

I would like to reiterate that there is a difference between assigning probabilities and the project of logical probability. Probabilities have to follow the product rule, which contains modus ponens in the case of certainty. If you assign probability 1 to the axioms, there is only one correct distribution for anything that follows from the axioms, like a digit of pi (or as the kids are calling it these days, $\pi$): probability 1 for the right answer, 0 for the wrong answers. If you want logical probabilities to deviate from this (like by being ignorant about some digit of pi), then logical probabilities cannot follow the same rules as probabilities.
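
A toy illustration of that last point (the uniform prior and the choice of digit are assumptions made for this sketch, nothing specified above): before the computation is done, the distribution over a decidable answer can be wide; once it is done, the only distribution coherent with probability 1 on the axioms is the point mass.

```haskell
-- Our distribution over a decidable answer -- say, a digit we haven't
-- computed yet -- before and after doing the computation.

import qualified Data.Map as Map

type Dist = Map.Map Int Double

-- Before spending the compute: ignorance, here modelled as uniform over 0..9.
beforeComputing :: Dist
beforeComputing = Map.fromList [(d, 0.1) | d <- [0 .. 9]]

-- After the computation terminates with `answer`: probability 1 for the
-- right answer, 0 for the wrong ones.
afterComputing :: Int -> Dist
afterComputing answer = Map.fromList [(d, if d == answer then 1 else 0) | d <- [0 .. 9]]

main :: IO ()
main = do
  print beforeComputing
  print (afterComputing 3)  -- e.g. if the digit in question turns out to be 3
```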

I will probably go look up Kolmogorov's method for constructing measures over countably infinite sets and write an entire article on using Kolmogorov measures over arbitrary inductive types and equational reasoning to build distributions over non-normalized computational terms.

Cool. If you use small words, then I will be happy :)

Comment author: simplicio 01 October 2014 09:31:36PM 0 points [-]

I think Clifford was wrong to say the shipowner was sincere in his belief. In the situation he describes, the belief is insincere - indeed such situations define what I think "insincere belief" ought to mean.

what are you going to do about, basically, stupid people who quite sincerely do not anticipate the consequences of their actions?

Good question. Ought implies can, so in extreme cases I'd consider that to diminish their culpability. For less extreme cases - heh, I had never thought about it before, but I think the "reasonable man" standard is implicitly IQ-normalized. :)

That would be a posterior, not a prior.

Sure.

Comment author: TheOtherDave 01 October 2014 09:14:14PM 1 point [-]

Well, yes, I agree, but I'm not sure how that helps.

We're now replacing facts about his thoughts (which the story provides us) with speculations about what he might have done in various possible worlds (which seem reasonably easy to infer, either from what we're told about his thoughts, or from our experience with human nature, but are hardly directly observable).

How does this improve matters?

Comment author: TheOtherDave 01 October 2014 09:06:55PM *  1 point [-]

If we have access to the mental processes inside someone's mind

But we don't.

When judging this story, we do.
We know what was going on in this shipowner's mind, because the story tells us.

I'm not generalizing. I'm making a claim about my judgment of this specific case, based on the facts we're given about it, which include facts about the shipowner's thoughts.

What's wrong with that?

As I said initially... I can see arguing that if we allow ourselves to judge this (fictional) situation based on the facts presented, we might then be tempted to judge other (importantly different) situations as if we knew analogous facts, when we don't. And I agree that doing so would be silly.

But to ignore the data we're given in this case because in a similar real-world situation we wouldn't have that data seems equally silly.

Comment author: shminux 01 October 2014 08:51:10PM 1 point [-]

In the absence of applicable regulations, I think a veil of ignorance of sorts can help here. Would the shipowner make the same decision were he or his family one of the emigrants? What if the ship carried some precious, irreplaceable cargo? What if it was regular cargo but not fully insured? If the decision without the veil is significantly different from the one with it, then one can consider him "verily guilty", without worrying about his thoughts overmuch.

Comment author: tetronian2 01 October 2014 08:29:34PM 2 points [-]

Wow, I had no idea that people missed out on the tournament because I posted it to discussion. I'll keep this in mind for next year. Apologies to Sniffnoy and BloodyShrimp and anyone else who missed the opportunity.

Comment author: Cyan 01 October 2014 08:00:14PM *  0 points [-]

I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.

Mental processes inside someone's mind actually happen in physical reality.

Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.

Comment author: Lumifer 01 October 2014 07:53:35PM 2 points [-]

In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory

That's not self-evident to me. First, in this particular case, as you yourself note, "Clifford says the shipowner is sincere in his belief". Second, in general, what are you going to do about, basically, stupid people who quite sincerely do not anticipate the consequences of their actions?

That which would be arrived at by a reasonable person ... updating on the same evidence.

That would be a posterior, not a prior.

Comment author: RichardKennaway 01 October 2014 07:49:04PM 1 point [-]

The author of the quote certainly knew how to say "the ship was not seaworthy" and "the ship sank because it was not seaworthy". The author said no such things.

The author said:

He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs...

and more, which you have already read. This is clear enough to me.

Suppressing your own doubts is not actus reus -- you need an action in physical reality.

In this case, an inaction.

And, legally, there is a LOT of difference between an act and an omission, failing to act.

In general there is, but not when the person has a duty to perform an action, knows it is required, knows the consequences of not doing it, and does not. That is the situation presented.

Comment author: Cyan 01 October 2014 07:41:26PM 0 points [-]

That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.

I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.

Comment author: SilentCal 01 October 2014 07:25:00PM 0 points [-]

I think you're right. At first I was worried (here and previously in the thread) that the proof that AIXI would two-box was circular, but I think it works out if you fill in the language about terminating Turing machines and stuff. I was going to write up my formalization, but once I went through it in my head your proof suddenly looked too obviously correct to be worth expanding.

Comment author: Cyan 01 October 2014 07:08:17PM *  0 points [-]

I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world. I'll make a weaker claim -- when I'm putting conscious effort into figuring out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.

Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.

That's a pretty good example of the Fallacy of Gray right there.

Comment author: simplicio 01 October 2014 06:55:55PM 3 points [-]

completely ignoring the actual outcome seems iffy to me

That's because we live in a world where people's inner states are not apparent, perhaps not even to themselves. So we revert to (a) what would a reasonable person believe, (b) what actually happened. The latter is unfortunate in that it condemns many who are merely morally unlucky and acquits many who are merely morally lucky, but that's life. The actual bad outcomes serve as "blameable moments". What can I say - it's not great, but better than speculating on other people's psychological states.

In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory of the ethics of belief; as it is I think it correct but not entirely actionable.

I don't know what a "genuine extrapolated prior" is.

That which would be arrived at by a reasonable person (not necessarily a Bayesian calculator, but somebody not actually self-deceptive) updating on the same evidence.

A related issue is sincerity; Clifford says the shipowner is sincere in his beliefs, but I tend to think in such cases there is usually a belief/alief mismatch.

I love this passage from Clifford and I can't believe it wasn't posted here before. By the way, William James mounted a critique of Clifford's views in an address you can read here; I encourage you to do so as James presents some cases that are interesting to think about if you (like me) largely agree with Clifford.

Comment author: Lumifer 01 October 2014 05:34:15PM 0 points [-]

facts about physical reality

I read the story as asserting three facts about the physical reality: the ship was old, the ship was not overhauled, the ship sank in the middle of the ocean. I don't think these facts lead to the conclusion of negligence.

If we have access to the mental processes inside someone's mind

But we don't. We're talking about the world in which we live. I would presume that the morality in the world of telepaths would be quite different. Don't do this.

Comment author: eli_sennesh 01 October 2014 05:24:12PM 1 point [-]

Several people have already found it for me, and I've also found a better account of the proof theory of classical logic.

Comment author: eli_sennesh 01 October 2014 05:23:50PM 0 points [-]

To assign 0.99999 to a judgement that a : T is to assign 0.99999 probability to "a proves T", which also states that "T is inhabited", which then means "we believe T".
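
A small Curry-Howard illustration of that reading, in Haskell (the particular propositions are examples chosen for this sketch, not ones from the thread): exhibiting a term of a type exhibits a proof that the type, read as a proposition, is inhabited.

```haskell
-- Each function below is a proof term "a" inhabiting its type "T".

-- Proposition: A and B implies B and A; the function is its proof term.
andCommutes :: (a, b) -> (b, a)
andCommutes (x, y) = (y, x)

-- Proposition: A or B implies B or A.
orCommutes :: Either a b -> Either b a
orCommutes (Left x)  = Right x
orCommutes (Right y) = Left y

main :: IO ()
main = do
  print (andCommutes (1 :: Int, "witness"))
  print (orCommutes (Left True :: Either Bool ()))
```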
