
Comment author: jollybard 16 February 2017 05:18:22AM *  1 point [-]

One thing I've never really seen mentioned in discussion of the planning fallacy is that there is something of a self-defeating prophecy at play.

Let's say I have a report to write, and I need to fit it in my schedule. Now, according to my plans, things should go fine if I take an hour to write it. Great! So, knowing this, I work hard at first, then become bored and dick around for a while, then realise that my self-imposed deadline is approaching, and -- whoosh, I miss it by 30 minutes.

Now, say I go back in time and redo the report, but now I assume it'll take me an hour and a half. Again, I work properly at first, then dick around, and -- whoa, only half an hour left! Quick, let's finish thi--- whoosh.

The point I'm trying to make here is that sometimes the actual length of a task depends directly on your estimate of that task's length, in which case avoiding the planning fallacy simply by giving yourself a larger margin won't work.

But I suppose the standard argument against this is that to properly counteract this kind of planning fallacy, one mustn't just set aside a longer span of time, but must also find what it is that makes one miss the deadline and correct it.
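A minimal toy model of that dynamic (my own sketch, with made-up numbers for the focused-work, slack, and last-minute phases; none of these figures come from the comment above) shows why padding the estimate alone doesn't help:

    # Toy sketch: the slack phase expands to fill whatever estimate you set,
    # so a bigger margin never closes the gap. All parameters are hypothetical.
    def actual_duration(estimate_min, focused_work_min=40, panic_window_min=20, overshoot_min=30):
        """Time actually spent when the task expands to fill the schedule."""
        slack = max(0, estimate_min - focused_work_min - panic_window_min)
        return focused_work_min + slack + panic_window_min + overshoot_min

    for estimate in (60, 90, 120):
        print(estimate, "->", actual_duration(estimate))
    # 60 -> 90, 90 -> 120, 120 -> 150: every estimate is missed by the same 30 minutes.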

Comment author: jollybard 01 December 2016 03:23:50AM 6 points [-]

Personally, I am still eagerly waiting for CFAR to release more of their methods and techniques. A lot of them seem to be already part of the rationalist diaspora's vocabulary -- however, I've been unable to find descriptions of them.

For example, you mention "TAP"s and the "Inner Simulator" at the beginning of this article, yet I haven't had any success googling those terms, and you offer no explanation of them. I would be very interested in what they are!

I suppose the crux of my criticism isn't that there are techniques you haven't released yet, nor that rationalists are talking about them, but that you mention them as though they were common knowledge. This, sadly, gives the impression that LWers are expected to know about them, and reinforces the idea that LW has become a kind of elitist clique. I'm worried that you are using this to push aspiring rationalists, who very much want to belong, into attending CFAR events just to be in the know.

Comment author: SquirrelInHell 24 May 2016 02:26:32AM 5 points [-]
Comment author: jollybard 25 May 2016 03:07:52AM 1 point [-]

This looks great and I can see that it should work, but I can't seem to find a formal proof. Can you explain a bit?

Comment author: Nebu 28 April 2016 03:09:04AM 0 points [-]

suppose that we (or Omega, since we're going to assume nigh omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, then they are killed/tortured/dust-specked into oblivion/whatever.

Okay, but what is the utility function Omega is trying to optimize?

Let's say you walk up to Omega and tell it "Was JFK murdered by Lee Harvey Oswald or not? And by the way, if you get this wrong, I am going to kill you/torture you/dust-speck you."

Unless we've figured out how to build safe oracles, with very high probability Omega is not a safe oracle. Via https://arbital.com/p/instrumental_convergence/, even though Omega may or may not care whether it gets tortured/dust-specked, we can assume it doesn't want to get killed. So what is it going to do?

Do you think it's going to tell you what it thinks is the true answer? Or do you think it's going to tell you the answer that will minimize the risk of it getting killed?

Comment author: jollybard 28 April 2016 02:09:48PM 0 points [-]

That wasn't really my point, but I see what you mean. The point was that it is possible to have a situation where the zero prior does have specific consequences, not that such a situation is likely. You're right that my example was a bit off, though, since obviously the person being interrogated should just lie about it.

Comment author: jollybard 28 April 2016 01:45:39AM *  0 points [-]

I can think of many situations where a zero prior gives rise to tangibly different behavior, and even severe consequences. To take your example, suppose that we (or Omega, since we're going to assume nigh omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, then they are killed/tortured/dust-specked into oblivion/whatever. (Let's also assume that the question is clearly defined enough that the person can't play with definitions and just say that God is in everyone and God killed JFK.)

However, let me steelman this a bit by somewhat moving the goalposts: if we allow a single arbitrary belief to have P=0, then it seems very unlikely that it will have a serious effect. The above scenario would also require that we know the person has P=0 about something (or that Omega exists), and if such a belief has little empirical effect, as we're agreeing, then it is almost impossible to detect. So that's also unlikely.
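For what it's worth, the underlying zero-prior problem is easy to see in a quick sketch (my own illustration, not anything from the thread): under a straight Bayes update, a hypothesis assigned probability 0 stays at 0 no matter how strong the evidence, which is why the person in the scenario above cannot be argued out of it, whatever the stakes.

    # Sketch: repeated Bayes updates on a binary hypothesis H starting from P(H) = 0.
    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        """Posterior P(H|E) via Bayes' rule for a binary hypothesis."""
        numerator = likelihood_if_true * prior
        evidence = numerator + likelihood_if_false * (1 - prior)
        return numerator / evidence if evidence > 0 else 0.0

    p = 0.0  # "Oswald killed JFK" assigned probability zero
    for _ in range(10):
        # Each piece of evidence is 1000x more likely if the hypothesis is true.
        p = bayes_update(p, likelihood_if_true=0.999, likelihood_if_false=0.001)
    print(p)  # still 0.0 -- no finite amount of evidence can move it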

Comment author: jollybard 27 March 2016 02:24:56AM *  1 point [-]

Oh, yes, good old potential UFAI #261: let the AI learn proper human values from the internet.

The point here being, it seems obvious to me that the vast majority of possible intelligent agents are unfriendly, and that it doesn't really matter what we might learn from specific error cases. In other words, we need to deliberately look into what makes an AI friendly, not what makes it unfriendly.

Comment author: turchin 15 February 2016 10:33:34AM 0 points [-]

In the Soviet Union a woman survived a head-on mid-air collision between two planes - her seat rotated together with part of the wing and fell into a forest.

But the main idea here is that the same "me" may exist in different worlds - in one I am in a plane, in the other I am in a plane simulator. I will survive in the second one.

Comment author: jollybard 15 February 2016 08:06:32PM 0 points [-]

My point was that QM is probabilistic only at the smallest level, for example in the Schrödinger's cat thought experiment. I don't think surviving a plane crash is ontologically probabilistic, unless of course the crash depends on some sort of radioactive decay or something! You can't make it so that you survive the plane crash without completely changing the prior causal networks... up until the beginning of your universe. Maybe there could be a way to very slightly change one of the universal constants so that nothing changes except that you survive, but I seriously doubt it.

Comment author: jollybard 14 February 2016 04:29:23PM 0 points [-]

There might also be situations where surviving is not just ridiculously unlikely, but simply mathematically impossible. That is, I assume that not everything is possible through quantum effects? I'm not a physicist. I mean, what quantum effects would it take to have your body live forever? Are they really possible?

And I have serious doubts that surviving a plane crash or not could be due to quantum effects, but I suppose it could simply be incredibly unlikely. I fear that people might be confusing "possible worlds" in the subjective Bayesian sense and in the quantum many-worlds sense.