DISCLAIMER: This topic is related to a potentially harmful memetic hazard that has been rightly banned from Less Wrong. If you don't know what it is, you are more likely to be fine than not, but be advised. If you do know, do not mention it in the comments.


 

Abstract: The fact that humans cannot precommit very well might be one of our defences against acausal trades. If transhumanists figure out how to beat akrasia by some sort of drug or brain tweaks, that might make them much better at precommitment, and thus more vulnerable. That means solving akrasia might be dangerous, at least until we solve blackmail. If the danger is bad enough, even small steps should be considered carefully.



Strong precommitment and the ability to build detailed simulations of other agents are two relevant capabilities humans currently lack. These capabilities have some unusual consequences for games. Most relevant games only arise when there is a chance of monitoring, commitment and repeated interaction. Hence being in a relevant game usually implies cohabiting causally connected regions of space-time with other agents. Nevertheless, being able to build detailed simulations of an agent allows one to vastly increase that agent's subjective probability that his next observational moment will be under one's control, provided the agent has access to the relevant areas of logical game-theoretic space. This does not seem desirable from that agent's perspective: it is extremely asymmetrical, and it allows more advanced agents to enslave less advanced ones even if they do not cohabit causally connected regions of the universe. Being acausally reachable by a powerful agent who can simulate 3^^^3 copies of you, but against which you cannot do much, is extremely undesirable.
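
To give the simulation claim a quantitative shape, here is a minimal sketch in Python. It assumes a naive "count your copies" anthropic rule, which is my own illustrative assumption rather than anything argued in the post: if an agent runs N faithful simulations of you alongside the one original, your subjective probability that your next observational moment is under its control approaches 1 as N grows.

```python
# Minimal sketch under a naive "count your copies" anthropic assumption
# (an assumption of this illustration, not an established result).

def p_next_moment_controlled(num_simulations: int) -> float:
    """Probability of finding yourself inside the simulator's control,
    counting N simulated copies against the single original."""
    return num_simulations / (num_simulations + 1)

print(p_next_moment_controlled(1))      # 0.5
print(p_next_moment_controlled(10**6))  # ~0.999999
```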

However, and more generally, regions of the block universe can only be in a game with non-cohabiting regions if both are agents and if both can strongly precommit. Any acausal trade depends on precommitment: it is the only way an agreement can reach across space-time, since the agreement is made in what I am calling the game-theoretical possibility space. In the case I am discussing, a powerful agent would only have reason even to consider acausal trade with another agent if that agent can precommit; otherwise there is no way of ensuring acausal cooperation. If the other agent cannot understand, beforehand, that the peculiarities of the set of possible strategies make it better to precommit to those strategies with the higher payoff against all the others, then there is no trade to be made. It would be like trying to threaten a spider with a calm verbal sentence. If the other agent cannot precommit, there is no reason for the powerful agent to punish him for anything: he could not have cooperated anyway, he would not understand the game, and, more importantly for my argument, he would not be able to follow through on his precommitment. It would break down eventually, especially since the evidence for it is so abstract and complex. The powerful agent might want to simulate the minor agent suffering anyway, but that would amount to nothing more than sadism. Acausal trades can only reach the strongly precommittable areas of the universe.
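
To make this concrete, here is a minimal sketch in Python. It is my own toy model, not anything from the post, and all the payoff numbers and the commitment-reliability parameter are invented for illustration. The point it encodes: if a target's precommitments reliably break down, issuing a threat has negative expected value for the blackmailer, so there is nothing to trade on and no reason to punish.

```python
# Toy model of why a blackmailer only bothers with agents whose precommitments
# actually hold. All numbers are illustrative assumptions.

def blackmailer_expected_value(p_commitment_holds: float,
                               gain_if_target_complies: float = 10.0,
                               cost_of_punishing: float = 1.0) -> float:
    """Expected payoff to the blackmailer of issuing a threat.

    With probability p_commitment_holds the target's precommitment to comply
    survives and the blackmailer collects the gain; otherwise the commitment
    breaks down and the threat stays credible only by actually punishing,
    which costs resources and yields nothing.
    """
    return (p_commitment_holds * gain_if_target_complies
            - (1 - p_commitment_holds) * cost_of_punishing)

# A human with severe akrasia: commitments almost never hold.
print(blackmailer_expected_value(0.01))   # about -0.89: threatening is a pure loss
# An enhanced human whose akrasia is "cured": commitments are near-perfect.
print(blackmailer_expected_value(0.95))   # 9.45: threatening now pays
```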

Moreover, an agent also needs reasonable epistemic access to the regions of logical space (certain areas of game theory, or TDT if you will) that indicate both the possibility of acausal trades and some estimate of the type-distribution of superintelligences willing to trade with him (most likely, future ones that the agent can help create). Forever deterring the advance of knowledge in that area seems unfeasible, or at best complicated and undesirable for other reasons.

It is clear that we (humans) don't want to be in an enslavable position. I believe we are not. One of the things excluding us from this position is our complete inability to precommit. This is a psychological constraint, a neurochemical constraint. We cannot even maintain stable long-term goals; strong precommitment is neurochemically impossible. However, it seems we could change this with human enhancement: we could develop drugs that cure akrasia, or overcome breakdown of will with some amazing psychological technique discovered by CFAR. However desirable on other grounds, getting rid of akrasia presents severe risks. Even if we only slightly decrease akrasia, this would increase the probability that individuals with access to the relevant regions of logical space could precommit and become slaves. They might then proceed to cure akrasia for the rest of humanity.

Therefore, we should avoid trying to fundamentally fix akrasia for now, at least until we have a better understanding of these matters and perhaps a solution to the blackmail problem, or maybe until after FAI. My point here is merely that no one should endorse technologies (or psychological techniques) that propose to fundamentally fix a problem whose fixing would otherwise seem desirable. It would look like a clear optimization process, but it could actually open the gates of acausal hell and damn humanity to eternal slavery.

 

(Thanks to cousin_it for the abstract. All mistakes are my responsibility.)

(EDIT: Added an explanation to back up the premise that acausal trade entails precommitment.)


but against which you can do much

Presumably a typo for "you cannot do much".

There are probably less elaborate reasons for akrasia in general. It's much easier to come up with something that looks like a good practice than something that actually is a good practice. This is true for self-imposed tasks and even more true for other-imposed tasks -- the latter are more likely to be for someone else's advantage.

A generalized rebelliousness sub-routine is an important safety factor, even though it, like any other subroutine, needs to not be in charge.

Yes, it was a typo. Thanks for the correction. I agree that akrasia can be advantageous. Obsessing over one goal wouldn't have been advantageous in the environment of evolutionary adaptedness, and it might not be advantageous at the present time, on average. However, I think that for those in the intellectual elite, not being able to overcome akrasia is, on average, and modulo my post, bad. We are often confronted with very long-term goals that would pay off if pursued, our lives are quite stable, and we can better trust the information we have (although there is probably more need for fine-tuning). But your being right is one more reason for my conclusion, and it remains a fact that most rationalists are trying to overcome akrasia in general, without paying attention to these specifics.

Here's another possible angle on the question: I'm smart and depressed. I have a lot of smart, depressed friends. If I see someone online and I think "How intelligent! How polite! What a pleasant person to read!", the odds seem awfully high that within three or four months they will write something about serious problems with depression.

When I mention this, the usual reaction is, "The world is so messed up; it's natural for intelligent people to be depressed." I find that hard to believe, though I'm inclined to think that the procession of future disasters (I'm old enough to remember serious fear of nuclear war, followed by overpopulation, ecological disaster, and global warming -- it hasn't let up) probably has some emotional effect.

In any case, I'm wondering if what you're seeing is a symptom of depression which should be addressed from that angle. And I'm also wondering whether I'm selecting for too little aggression.

You just lost me there. I thought I knew what you were talking about before, but I have my doubts now, since I have no idea what you are talking about now. I do not understand the relation to depression. Further, as a data point, I can say that I'm not depressed (happier than 100% of my ZIP code), have never been depressed, don't have depressed friends, don't want depressed friends, and don't think that is a good and justifiable approach to life. But mainly, I do not understand how this relates to the question.

Akrasia might just be a symptom of depression rather than something more complex.

I think that the very existence of "Roko's Basilisk" is a great example of the difference between theories and dogmas.

If your theory about the world postulates something supremely bizarre and of dubious possibility, like an "acausal trade" between a far future supercomputer and a 21st century amateur logician, then you need to sit down and work over your theory. Gather as much evidence as you can, consult with experts in the field, ponder where you might have gone wrong and at which scales your theory is accurate; in other words, generally approach the issue from a skeptical viewpoint. You can, and very well may, confirm that the bizarre postulate is true and the theory is accurate after such a process. But most of the time you'll find you've just run into the limitations of your theory.

If your dogma makes a bizarre and dubiously possible claim... well, I guess it's time to dig out the 'ole checkbook. Or take a page from Dan Brown's Opus Dei and make sure no-one ever uncovers the evil secret that can damn your immortal s... -imulation, yeah simulation.

Is it okay to admit that TDT might just not be capable of providing the correct answer to time-travel related Newcomb-like Pascal's dilemmas? Seriously, that's not exactly the most glaring weakness a decision theory could have; it's like getting bent out of shape that your sedan can't drive along the bottom of the Marianas Trench. If this is, as the name indicates, just an application of game theory devised by clever yet fallible humans, then we really ought to doubt the map before we take its "Here be Dragons" seriously.

I agree. This doesn't mean one shouldn't investigate further the absurd to see what comes out of it.

I'm floating in abstraction. Could you give a concrete story where a society that fixes akrasia suffers? I won't hold you to the particulars of the story, but I'd appreciate a place to plant my feet and generalize from.

Unfortunately, no, what you ask for is not a permissible thing to do on LessWrong.

Thank you Moss_Piglet for PMing me enough specifics to get me grounded. I'll admit I don't understand why the topic is banned, but I respect the importance of local norms and will stop the discussion.

I can give examples without it being the forbidden topic, I think. I will try to improve the post as soon as I have a little free time.


Um, OK. Could you Private Message me a concrete story?

[This comment is no longer endorsed by its author]

Any acausal trade depends on precommitment: it is the only way an agreement can reach across space-time, since the agreement is made in what I am calling the game-theoretical possibility space. In the case I am discussing, a powerful agent would only have reason even to consider acausal trade with another agent if that agent can precommit; otherwise there is no way of ensuring acausal cooperation.

I don't understand what you mean by "precommit" here and in the rest of the article. Taboo "precommit". (Acausal trade is presumably also possible for individual actions, in which case the usual sense of "precommitment" doesn't seem relevant.)

Thanks for the tip. You are right, it is not clear when I am using the word in a game-theoretic framing and when in a psychological framing. That made my argument easier but more likely to be flawed. Mostly I was referring to precommitment as in TDT, but by the end I switched to psychological precommitment; it's fuzzy (and wrong). I will rewrite the post tabooing "precommit" when I have the time, probably tomorrow.

...What's the big deal? Don't precommit to make poor decisions, especially those which leave you vulnerable to acausal (or any) blackmail. And precommit to cancel your precommitment if you learn that it is harmful.

I don't believe it could work that way. If you don't precommit when you could have, your next observational moment will more likely be one of extreme suffering than not. It is rational to precommit if you can; that's the whole issue. You are applying common sense to game theory. You cannot suddenly start choosing which consequences of your model of rationality you accept based on hidden intuitions. If you care to explain your views further in light of TDT or related theories, this could be a fruitful discussion (at least for me).

If my response to the situation you described is to precommit to whatever the blackmailer wants, that is what makes the blackmailer want to blackmail me in the first place. If every simulation of me shrugs and flips my blackmailer the bird, the blackmailer has no incentive to punish me. You can escape punishment if you are prepared to flip the bird and accept it before incentivizing it. There may be blackmailers that enforce their ultimatums whether or not you will respond to them, but in that case akrasia doesn't help.
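
A minimal sketch of that argument in Python, using my own made-up payoffs (nothing here is from the thread): a blackmailer who simulates you first only issues the threat when the simulated policy complies, so a fixed policy of refusal removes the incentive, while a blackmailer who enforces ultimatums regardless hurts you either way, and akrasia would not save you from that kind.

```python
# Toy model of the refuse-the-blackmailer argument; payoffs are illustrative
# assumptions, not derived from any decision theory.

def play(target_policy: str, blackmailer_always_enforces: bool = False):
    """target_policy: 'comply' or 'refuse', the target's fixed response to a threat.
    Returns (target_payoff, blackmailer_payoff)."""
    predicted_compliance = (target_policy == "comply")   # result of simulating the target
    threatens = predicted_compliance or blackmailer_always_enforces
    if not threatens:
        return 0, 0                      # no incentive, so no threat is ever made
    if target_policy == "comply":
        return -10, 10                   # target pays up, blackmailer profits
    return -100, -1                      # punishment carried out; costly for both

print(play("comply"))                                    # (-10, 10): compliance invites the threat
print(play("refuse"))                                    # (0, 0): refusal removes the incentive
print(play("refuse", blackmailer_always_enforces=True))  # (-100, -1): akrasia wouldn't help here
```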


If you don't precommit when you could have, your next observational moment will more likely be one of extreme suffering than not.

And yet we do not experience this. Something is wrong with this thesis.

Yes, because we can't precommit. That's one of the points of my post. But there might be other reasons; I would assume so. Nevertheless, it seems to me it is still the case that precommitment would make this scenario more likely, and that this is undesirable.

regions of the block universe can only be in a game with non-cohabiting regions if both are agents and if both can strongly precommit

I see this as an assertion, but I do not see justification or explanation for why. Since your entire post relies on it, please justify and/or explain.

Sure. Thanks for pointing that out. Any acausal trade depends on precommitment: it is the only way an agreement can reach across space-time, since the agreement is made in what I am calling the game-theoretical possibility space. In the case I am discussing, a powerful agent would only have reason even to consider acausal trade with another agent if that agent can precommit; otherwise there is no way of ensuring acausal cooperation. If the other agent cannot understand, beforehand, that the peculiarities of the set of possible strategies make it better to precommit to those strategies with the higher payoff against all the others, then there is no trade to be made. It would be like trying to threaten a spider with a calm verbal sentence. If the other agent cannot precommit, there is no reason for the powerful agent to punish him for anything: he could not have cooperated anyway, he would not understand the game, and, more importantly for my argument, he would not be able to follow through on his precommitment. It would break down eventually, especially since the evidence for it is so abstract and complex. The powerful agent might want to simulate the minor agent suffering anyway, but that would amount to nothing more than sadism. You might want to consider taking a look at the acausal trade wiki entry, and maybe the TDT entry; they can probably explain things better than I can: http://wiki.lesswrong.com/wiki/Acausal_trade http://wiki.lesswrong.com/wiki/TDT


This is not a justification, this is several repetitions of the disputed claim in various wordings.

[This comment is no longer endorsed by its author]

Adding that to the post.


If you stop wasting all your time reading stuff on the internet, you might be tortured forever. Beware!

I didn't quite catch what you intended to convey here. If anything, I am pretty sure I argued against the view you may have hinted at there.


It's a reference to the (I think) undeniable fact that the entire acausal blackmail idea and in particular the utterly hilarious 'basilisk' we are not allowed to mention here lest our comments get deleted are nothing more than an extremely nerdy and overly-taken-seriously theological equivalent of Christian hell.

And why does this invalidate it? Do you choose theories based on their distance from Christianity, or based on arguments? I didn't assume that hilarious thing you said either; on the contrary.