The Ten Commandments of Rationality
(Disclaimer/TL;DR: This article, much like Camelot, is a silly place/post. Nonetheless I think it presents a pretty solid list of 10 rationality lessons to take away from Less Wrong which must not be forgotten upon pain of eternal damnation/irrationality.)
In a realm not far from here, somewhere within a bustling metropolis, there lies an old and dusty book. It is placed in a most conspicuous location; in the middle of a busy street where countless citizens walk by it every day. Yet none pick it up, for it is placed on a pedestal just high enough that it cannot be reached or seen easily, and the slight inconvenience of standing on one’s toes to reach for it is sufficient to deter most. Yet if a traveller were sufficiently aware to look up and see the book, and curious enough to reach for it, and willing to suffer the slight discomfort of having to touch its muddy cover to open and read its ancient pages, that one would find within a wealth of wisdom and rationality that would transform the reader’s life forever. For this is the most holy Book of Bayes, and its first and last pages both read thusly:
The Ten Commandments of Rationality
1) Thou shalt never conflate the truth or falsehood of a proposition with any other characteristic, be it the consequences of the proposition if it be true, or the consequences of believing it for thyself personally, or the pleasing or unpleasant aesthetics of the belief itself. Furthermore, thou shalt never let thy feelings regarding the matter overrule what thy critical faculties tell thee, or in any other way act as if reality might adjust itself in accordance with thine own wishes.
2) Thou shalt not accept any imperfect situation if it may be optimized, nor shalt thou abstain from improving upon a situation by imagining ever better options without acting on any of them, nor shalt thou allow thyself to be paralyzed with fear or apathy or indecision when any action is still superior to doing nothing at all. Thus let it be said: Thou shalt not allow thyself to be beaten by a random number generator.
3) Thou shalt not declare any matter to be unscientific, or inherently irrational, or a false question, or with any other excuse wilfully close thine own eyes and expel all curiosity regarding the matter before thou hast even asked thyself whether the question is worth answering. To transgress thusly is to forfeit any chance to update thine own beliefs on a matter that is truly unusual to thee.
4) Thou shalt not hold goals or beliefs which conflict with each other, in such a manner as to violate most divine transitivity, and thereby set thyself up for most ignominious defeat, and rest easy in knowing this fact. Rather shalt thou engage in mindfulness and self-reflection, and in doing so find thy own true priorities, and solve any inconsistencies in a utility maximising manner so that thou may not fall prey to the wrath of the most holy Dutch Book, which is merciless but just.
5) Thou shalt never engage in defeatism, nor wallow in ennui or existential angst, or in any other way declare that thy efforts are pointless and that exerting thyself is entirely without merit. For just as it is true that matters may never get to the point where they cannot possibly get any worse, so is it true that no situation is impossible to improve upon.
6) Thou shalt never judge a real or proposed action by any metric other than this: The expected consequences of the action, both direct and indirect, be they subtle or blatant, taking into account all relevant information available at the time of deciding and no more or less than this.
7) Thou shalt never sit back on thy lazy laurels and wait for rationality to come to thee, nor shalt thou declare that thy beliefs must be correct as all others have failed to convince thee of the contrary: The cultivation of thy rationality and the falsification of thy beliefs is thine own most sacred task, which is eternal and never finished, and to leave it to others is to invite doom upon the validity of thine own beliefs and actions, for in this case others will never serve thee as well as thou might serve thyself.
8) Thou shalt never let argumentation stand in the way of knowledge, nor let knowledge stand in the way of wisdom, nor let wisdom stand in the way of victory, no matter how wise or clever it makes thee feel. Also shalt thou never mistake exceptions for rules or rules for exceptions when arguing any issue, nor bring up minutiae as if they were crucial issues, nor allow thyself to be swept away in arguing for the sake of argumentation, nor act to score cheap and yea also easy points, nor present thy learnings in a needlessly ambiguous manner such as this if it can be helped, or in any other way allow thyself to lose sight of thy most sacred goal, which is victory.
9) Thou shalt never assign a probability exactly equal to 0 or 1 to any proposition, nor declare to the skies that thy certainty regarding any matter is absolute, nor any derivation of such, for to do so is to declare thyself infallible and to place thyself above thy most holy lord, Bayes.
10) Thou shalt never curse thy rationality, and wish for thy immediate satisfaction over thy eventual victory, all for the sake of base emotion, which is transient whereas victory is transcendent. Let it be known that it is an unspoken truth amongst rationalists (indeed it is the first and most elementary rule of rationality, and yet oft forgotten by those practiced in the art) that base impulse and most holy reason are as a general rule incompatible, as there cannot be two skies.
Such are the Ten Commandments of Rationality. And Lo! If one abides by these rules, then let it be said that they act virtuously, and the heavens shall reward them with the splendour of higher expected utility relative to the counterfactual wherein they did not act virtuously. But to those who do not act virtuously, but rather act with irrationality in their minds and biases in their thinking, and who in doing so break any of the Commandments of Rationality, to them let it be said that they have transgressed against their lord Bayes, and they shall be smitten by the twin gods of Cause and yea also Effect as surely as if they had smitten themselves. For let it be said: The gods of causality may be blind, but their aim be excellent regardless.
(All silliness aside, what do you all think? Is this a good list of 10 things to take away from Less Wrong? Do you have a better list? Are posts like these a waste of time? Or, Bayes forbid, did I get my thees and thous wrong somewhere? Let me know in the comments.)
Fixing akrasia: damnation to acausal hell
DISCLAIMER: This topic is related to a potentially harmful memetic hazard that has been rightly banned from Less Wrong. If you don't know what it is, you will more likely than not be fine, but be advised. If you do know, do not mention it in the comments.
Abstract: The fact that humans cannot precommit very well might be one of our defences against acausal trades. If transhumanists figure out how to beat akrasia by some sort of drug or brain tweaks, that might make them much better at precommitment, and thus more vulnerable. That means solving akrasia might be dangerous, at least until we solve blackmail. If the danger is bad enough, even small steps should be considered carefully.
Strong precommitment and building detailed simulations of other agents are two relevant capabilities humans currently lack. These capabilities have some unusual consequences for games. Most relevant games only arise when there is a chance of monitoring, commitment, and multiple interactions; hence being in a relevant game usually implies cohabiting causally connected space-time regions with other agents. Nevertheless, an agent able to build detailed simulations can vastly increase another agent's subjective probability that his next observational moment will be under the simulator's control, provided the simulated agent has access to the relevant areas of logical game-theoretic space. This does not seem desirable from that agent's perspective: it is extremely asymmetrical, and it allows more advanced agents to enslave less advanced ones even if they do not cohabit causally connected regions of the universe. Being acausally reachable by a powerful agent who can simulate 3^^^3 copies of you, but against whom you can do nothing, is extremely undesirable.
However, and more generally, regions of the block universe can only be in a game with non-cohabiting regions if both are agents and both can strongly precommit. Any acausal trade depends on precommitment; this is the only way an agreement can span space-time, since it is made in the game-theoretical possibility space, as I am calling it. In the case I am discussing, a powerful agent would only have reason to consider acausal trade with an agent who can precommit; otherwise there is no way of ensuring acausal cooperation. If the other agent cannot understand, beforehand, that due to the peculiarities of the set of possible strategies it is better to always precommit to those strategies with the higher payoff when considering all other strategies, then there is no trade to be made. It would be like trying to threaten a spider with a calm verbal sentence. If the other agent cannot precommit, the powerful agent has no reason to punish him for anything: he could not cooperate anyway, he would not understand the game, and, more importantly for my argument, he could not keep his precommitment; it would eventually break down, especially since the evidence for it is so abstract and complex. The powerful agent might simulate the minor agent suffering anyway, but that would amount to mere sadism. Acausal trades can only reach regions of the universe capable of strong precommitment.
Moreover, an agent also needs reasonable epistemic access to the regions of logical space (certain areas of game theory, or TDT if you will) that indicate both the possibility of acausal trades and some estimate of the type-distribution of superintelligences willing to trade with him (most likely, future ones that the agent can help create). Forever deterring the advance of knowledge in that area seems unfeasible, or at best complicated and undesirable for other reasons.
It is clear that we (humans) don't want to be in an enslavable position, and I believe we are not. One of the things excluding us from that position is our complete incapacity to precommit. This is a psychological constraint, a neurochemical constraint: we cannot even maintain stable long-term goals, and strong precommitment is neurochemically impossible. However, it seems we could change this with human enhancement: we could develop drugs that cure akrasia, or overcome breakdown of will with some amazing psychological technique discovered by CFAR. However desirable on other grounds, getting rid of akrasia presents severe risks. Even if we only slightly decrease akrasia, this would increase the probability that individuals with access to the relevant regions of logical space could precommit and become slaves. They might then proceed to cure akrasia for the rest of humanity.
Therefore, we should avoid trying to fundamentally fix akrasia for now, until we better understand these matters and perhaps solve the blackmail problem, or maybe only after FAI. My point here is merely that no one should endorse technologies (or psychological techniques) proposing to fundamentally fix a problem that would otherwise seem desirable to fix. It would look like a clear optimization process, but it could actually open the gates of acausal hell and damn humanity to eternal slavery.
(Thanks to cousin_it for the abstract. All mistakes are my responsibility.)
(EDIT: Added an explanation to back up the premise that acausal trade entails precommitment.)
What are you counting?
Eliezer's post How To Convince Me That 2 + 2 = 3 has an interesting consideration - if putting two sheep in a field, and putting two more sheep in a field, resulted in three sheep being in the field, would arithmetic hold that two plus two equals three?
I want to introduce another question. What exactly are you counting?
Imagine one sheep in one field, and another sheep in another. Now put them together. Do you now have two sheep?
"Of course!"
Ah, but is that -all- you have?
"What?"
Two sheep are more than twice as complex as a single sheep. It takes more than twice as many bits to describe two sheep as it takes to describe a single sheep, because, in addition to those two sheep, you now also have to describe their relationship to one another.
Or, to phrase it slightly differently, does 1+1=2?
Well, the answer is, it depends on what you're counting.
If you're counting the number of discrete sheep, 1+1=2. However, why is the number of discrete sheep meaningful?
If you're a hunter counting, not herded sheep, but prey - two sheep is, roughly, twice as much meat as one sheep. 1+1=2. If you're a herder, however, two sheep could be a lot more valuable than one - two sheep can turn into three sheep, if one is female and one is male. The value of two sheep can be more than twice the value of a single sheep. And if you're a hypercomputer running Solomonoff Induction to try to describe sheep positional vectors, two sheep will have a different complexity than twice the complexity of a single sheep.
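The description-length point above can be put in toy numbers. The sketch below is purely illustrative (SHEEP_BITS and REL_BITS are made-up constants, not real complexity measures): per-sheep bits add linearly, but pairwise relationships grow quadratically, so the total description length of n sheep is more than n times the description length of one sheep.

```python
# Hypothetical toy model of description length.
# SHEEP_BITS and REL_BITS are invented figures for illustration only.
SHEEP_BITS = 100  # bits to describe one sheep in isolation
REL_BITS = 10     # bits to describe one pairwise relationship


def description_bits(n):
    """Bits to describe n sheep plus their pairwise relationships."""
    pairs = n * (n - 1) // 2  # one relationship per unordered pair
    return n * SHEEP_BITS + pairs * REL_BITS


# Two sheep cost more than twice one sheep: 210 bits vs 200 bits.
assert description_bits(2) > 2 * description_bits(1)
```

Under this toy model the "excess" over simple doubling is exactly the relationship term, which is the sense in which counting discrete sheep throws information away.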
Which is not to say that one plus one does not equal two. It is, however, to say that one plus one may not be meaningful as a concept outside a very limited domain.
Would an alien intelligence have arrived at arithmetic? Depends on what it counts. Is arithmetic correct?
Well, does a set of two sheep contain only two sheep, or does it also contain their interactions? Depends on your problem domain; 1+1 might just equal 2+i.
Framing a problem in a foreign language seems to reduce decision biases
The researchers aren't entirely sure why speaking in a less familiar tongue makes people more "rational", in the sense of not being affected by framing effects or loss aversion. But they think it may have to do with creating psychological distance, encouraging systematic rather than automatic thinking, and with reducing the emotional impact of decisions. This would certainly fit with past research that's shown the emotional impact of swear words, expressions of love and adverts is diminished when they're presented in a less familiar language.
Paywalled article (can someone with access throw a PDF up on dropbox or something?): http://pss.sagepub.com/content/early/2012/04/18/0956797611432178
Blog summary: http://bps-research-digest.blogspot.co.uk/2012/06/we-think-more-rationally-in-foreign.html
This post is for sacrificing my credibility!
Thank you for your cooperation and understanding. Don't worry, there won't be future posts like this, so you don't have to delete my LessWrong account, and anyway I could make another, and another.
But since you've dared to read this far:
Credibility. Should you maximize it, or minimize it? Have I made an error?
Discuss.
Don't be shallow, don't just consider the obvious points. Consider that I've thought about this for many, many hours, and that you don't have any privileged information. Whence our disagreement, if one exists?
[Link] A superintelligent solution to the Fermi paradox
Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.
The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll likely introduce the blog and more completely describe it in its own discussion post when more posts are up, hopefully including a few from people besides me, and when the archive will give a more informative indication of what to expect from the blog. Despite theism's suspect reputation here at LessWrong I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere deep in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the url.
I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong then that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong, should probably be put as comments on this discussion post. Thanks all!
I Stand by the Sequences
Edit, May 21, 2012: Read this comment by Yvain.
Forming your own opinion is no more necessary than building your own furniture.
There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely. (This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion. Now that I have admitted this, you don't have to point it out a dozen times in the comments.) Even the controversial things, like:
- I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
- I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs.
- I think mainstream science is too slow and we mere mortals can do better with Bayes.
- I am a utilitarian consequentialist and think that if you allow someone to die through inaction, you're just as culpable as a murderer.
- I completely accept the conclusion that it is worse to put dust specks in 3^^^3 people's eyes than to torture one person for fifty years. I came up with it independently, so maybe it doesn't count; whatever.
- I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?)
- "People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
- Edit, May 27, 2012: You know what? I forgot one: Gödel, Escher, Bach is the best.
There are two tiny notes of discord on which I disagree with Eliezer Yudkowsky. One is that I'm not so sure as he is that a rationalist is only made when a person breaks with the world and starts seeing everybody else as crazy, and two is that I don't share his objection to creating conscious entities in the form of an FAI or within an FAI. I could explain, but no one ever discusses these things, and they don't affect any important conclusions. I also think the sequences are badly-organized and you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and I don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.
I write this because I'm feeling more and more lonely, in this regard. If you also stand by the sequences, feel free to say that. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
Holden Karnofsky said:
I believe I have read the vast majority of the Sequences, including the AI-foom debate, and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.
I can't understand this. How could the sequences not be relevant? Half of them were created when Eliezer was thinking about AI problems.
So I say this, hoping others will as well:
I stand by the sequences.
And with that, I tap out. I have found the answer, so I am leaving the conversation.
Even though I am not important here, I don't want you to interpret my silence from now on as indicating compliance.
After some degree of thought and nearly 200 comment replies on this article, I regret writing it. I was insufficiently careful, didn't think enough about how it might alter the social dynamics here, and didn't spend enough time clarifying, especially regarding the third bullet point. I also dearly hope that I have not entrenched anyone's positions, turning them into allied soldiers to be defended, especially not my own. I'm sorry.
Hypothetical scenario
One day, someone who is not a member of the Singularity Institute (and who has publicly stated that they don't believe in the necessity of ensuring all AI is Friendly) manages to build an AI. It promptly undergoes an intelligence explosion and sends kill-bots to massacre the vast majority of the upper echelons of the US Federal Government, both civilian and military. Or maybe forcibly upload them; it's sort of difficult for untrained meat-bags like the people running the media to tell. It claims, in a press release, that its calculations indicate that the optimal outcome for humanity is achieved by removing corruption from the US Government, and this is the best way to do so.
What do you do?
[LINK] Cryo Comic
This is the obligatory post about the recent xkcd comic: