Dark Arts: Defense in Reputational Warfare
First, the Dark Arts are, as the name implies, an art, not a science. Likewise, defending against them is an art. An artful attacker can use your expected defenses against you; if you can be anticipated, you can be defeated. The rules, therefore, are guidelines. I'm going to stage the rules in narrative form; they don't need to follow one, however, because life doesn't follow a narrative. The narrative exists to give them context, to give the reader a sense of the purpose of each rule.
Rule #0: Never follow the rules if they would result in a worse outcome.
Now, generally, the best defense is to never get attacked in the first place. Security through obscurity is your first line of defense. Translations of Sun Tzu vary somewhat, but your ideal form is to be formless, by which I mean: do not be a single point of attack, or of defense. If there's a mob in your vicinity, the ideal place to be is neither outside it, nor leading it, but a faceless stranger within it. Even better is to be nowhere near a mob. This is the fundamental basis of not being targeted; the other two rules derive from this one.
Rule #1: Do not stand out.
Sometimes you're picked out anyway. There's a balancing art with this next piece: you don't want to stand out, to be a point of attack, but if somebody is picking faces, you want to look slightly more dangerous than your neighbor; you want to look like a hard target. (But not when somebody is looking for hard targets. Obviously.)
Rule #2: Look like an unattractive target.
The third aspect of this is somewhat simpler, and I'll borrow the phrasing from HPMoR:
Rule #3: "I will not go around provoking strong, vicious enemies" - http://hpmor.com/chapter/19
The first triplet of rules is, by and large, about -not- being attacked in the first place. These are starting points; Rule #1, for example, culminates in not existing at all. You can't attack what doesn't exist. Rule #1 is the fundamental strategy of Anonymous. Rule #2 is about encouraging potential attackers to look elsewhere; where Rule #1 is passive, this is its passive-aggressive form. It's the fundamental strategy of home security - why else do you think security companies put signs in the yard saying the house is protected? Rule #3 is obvious. Don't make enemies in the first place, and particularly don't make dangerous enemies. It has critical importance beyond its obvious nature, however - enemies might not care if they get hurt in the process of hurting you. That limits your strategies for dealing with them considerably.
You've messed up the first three rules. You're under attack. What now? Manage the Fight. Your attacker starts with the home field advantage - they attacked you under the terms they are most comfortable with. Change the terms, immediately. Do not concede that advantage. Like Rule #1, Rule #4 is the basis of your First Response, and of Rules #5 and #6. The simplest approach is the least obvious - immediate surrender, but on your terms. If you're accused of something, admit to the weakest and least harmful version of it that is true (be specific, and deny as necessary), and say you're aware of your problem and working on improving. This works regardless of whether there's an audience, but works best if there is one.
Rule #4: Change the terms of the fight to favor yourself, or disfavor your opponent.
Sometimes, the best response to an attack is no response at all. Is anybody (important) going to take it seriously? If not, then the very worst thing you can do is to respond, because that validates the attack. If you do need to respond, respond as lightly as possible; do not respond as if the accusation is serious or matters, because that lends weight to the accusation. If there's no audience, or a limited audience, responding gives your attacker an opportunity to continue the attack. If there's a risk of them physically assaulting you, ignoring them is probably a bad idea; a polite non-response is ideal in that situation. (For crowds that pose a risk of physically assaulting you... you need more rules than I'm going to write here.)
Rule #5: Use the minimum force necessary to respond.
It's tempting to attack back: don't. You're going to escalate the situation, and escalation is going to favor the person who is better at this; worse, in a public Dark Arts battle, even the better person is going to take some hits. Nobody wins. Instead, mine the battlefield, and make sure your opponent sees you mining the battlefield. If you're accused of something, suggest that both you and your opponent know the accused thing isn't as uncommon as generally represented. Hint at shared knowledge. Make it clear you'll take them out with you. If they're actually good at this, they'll get the hint. (This is why it's critically important not to make enemies. You really, really don't want somebody around who doesn't mind going down with you; such a person makes this strategy very difficult to use.)
Rule #6: Make escalation prohibitively costly.
You might recognize some elements of martial arts here. There are similarities, enough that one is useful to the other, but they are not the same.
You're in a fight, and your opponent is persistent, or you messed up and now things are serious. What now? First, continue to Manage the Fight. Your goal now is to end the fight; the total damage you're going to suffer is a function of both the amplitude of escalation and the length of the fight. You've failed to manage the amplitude; manage the length.
Rule #7: End fights fast.
At this point you've been reasonable and defensive, and that hasn't worked. Now you need to go on the offensive. Your defense should be light and easy, continuing to react with the lightest necessary touch, continuing to ignore anything you don't need to react to; your attack should be brutal, and put your opponent on the defensive immediately. Attack them on the basis of their harassment of you, first, and then build up to any personal attacks you've been holding back on - your goal is to impart a tone of somebody who has been put-upon and had enough.
Rule #8: Hit hard.
And immediately stop. If you've pulled off your counterattack right, they'll offer up defenses. Just quit the battle. Do not be tempted by a follow-up attack; you were angry, you vented your anger, you're done. By not following up on the attack, by not attacking their defenses, you're leaving them no reasonable way to respond. Any continuing attacks can be safely ignored; they will look completely pathetic going forward.
Rule #9: Recognize when you've won, and stop.
Defense follows different rules than attack. In defense, you aren't trying to inflict wounds, you're trying to avoid them. Ending the fight quickly is paramount to this.
[link] Pedro Domingos: "The Master Algorithm"
Interesting talk outlining five different approaches to AI.
https://www.youtube.com/watch?v=B8J4uefCQMc
Blurb from the YouTube description:
Machine learning is the automation of discovery, and it is responsible for making our smartphones work, helping Netflix suggest movies for us to watch, and getting presidents elected. But there is a push to use machine learning to do even more—to cure cancer and AIDS and possibly solve every problem humanity has. Domingos is at the very forefront of the search for the Master Algorithm, a universal learner capable of deriving all knowledge—past, present and future—from data. In this book, he lifts the veil on the usually secretive machine learning industry and details the quest for the Master Algorithm, along with the revolutionary implications such a discovery will have on our society.
Pedro Domingos is a Professor of Computer Science and Engineering at the University of Washington, and he is the cofounder of the International Machine Learning Society.
The value of ambiguous speech
This was going to be a reply in a discussion between ChristianKl and MattG in another thread about conlangs, but their discussion seemed to have enough significance, independent of the original topic, to deserve a thread of its own. If I'm doing this correctly (this sentence is an after-the-fact update), then you should be able to find the original comments that inspired this thread here: http://lesswrong.com/r/discussion/lw/n0h/linguistic_mechanisms_for_less_wrong_cognition/cxb2
Is a lack of ambiguity necessary for clear thinking? Are there times when it's better to be ambiguous? This came up in the context of the extent to which a conlang should discourage ambiguity, as a means of encouraging cognitive correctness by its users. It seems to me that something is being taken for granted here, that ambiguity is necessarily an impediment to clear thinking. And I certainly agree that it can be. But if detail or specificity are the opposites of ambiguity, then surely maximal detail or specificity is undesirable when the extra information isn't relevant, so that a conlang would benefit from not requiring users to minimize ambiguity.
Moving away from the concept of conlangs, this opens up some interesting (at least to me) questions. Exactly what does "ambiguity" mean? Is there, for each speech act, an optimal level of ambiguity, and how much can be gained by achieving it? Are there reasons why a certain, minimal degree of ambiguity might be desirable beyond avoiding irrelevant information?
The meaning of words
This article aims to challenge the notion that the meaning of words should and must be understood as their propositional or denotational content, in preference to their implied or connotational content. This is an assumption that I held for most of my life and one which I suspect a great many aspiring rationalists will naturally tend towards. But before I begin, I must first clarify the argument that I am making. When a rationalist is engaged in conversation, it is very likely that they are seeking truth and that they want (or would at least claim to want) to know the truth regardless of the emotions it might stir up. Emotions are seen as something that must be overcome and subjected to logic. The person who would object to a statement due to its phrasing, rather than its propositional content, is seen as acting irrationally. And these beliefs are indeed true to a large extent. Those who hide from emotions are often coddling themselves, and those who object due to phrasing are often subverting the rules of fair play. But there are also situations where using particular words necessarily implies more than the strict denotational content, and trying to ignore these connotations is foolhardy. For many people, this last sentence alone may be all that needs to be said on this topic, but I believe that there is still some value in breaking down precisely what words actually mean.
So why is there a widespread belief within certain circles that the meaning of a word or sentence is its denotational content? I would answer that this is a result of a desire to enforce norms that result in productive conversation. In general conversation, people will often take offense in a way that derails the conversation into a discussion of what is or is not offensive, instead of the substantive disagreements. One way to address this problem is to create a norm that each person should only be criticised on their denotations, rather than their connotations. In practice, it is considerably more complicated, as particularly blatant connotations will be treated as denotations, but this is a minor point. The larger point is that meaning consisting purely of the denotations is merely a social norm within a particular context and not an absolute truth.
This means that when the social norms are different and people complain about connotations in other social settings, the issue isn't that they don't understand how words work. The issue isn't that they can't tell the difference between a connotation and a denotation. The issue is that they are operating within different social norms. Sometimes people are defecting from these norms, such as when they engage in an excessively motivated reading, but this isn't a given. Instead, it must be recognised that operating within a framework of meaning-as-denotation is merely a social, not an objective, norm, regardless of this norm's considerable merits.
Reason as memetic immune disorder
A prophet is without dishonor in his hometown
I'm reading the book "The Year of Living Biblically," by A.J. Jacobs. He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year. He quickly found that
- a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays; like wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and
- this didn't seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God.
You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion. People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do.
I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases. We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says...
How do we explain the blindness of people to a religion they grew up with?
The Market for Lemons: Quality Uncertainty on Less Wrong
Tl;dr: Articles on LW, unless checked (for now, by you), heavily distort a useful view (yours) of what matters.
[This is, though only in part, a five-year update to Patrissimo's article Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. However, I wrote most of this article before I became aware of its predecessor. Then again, this reinforces both our articles' main critique.]
I claim that rational discussions in person, at conferences, on forums, on social media, and on blogs suffer from adverse selection and promote unwished-for phenomena such as the availability heuristic. Bluntly stated, they (as do all other discussions) have a tendency to support ever worse, unimportant, or wrong opinions and articles. More importantly, articles of high relevance regarding some topics are conspicuously missing. This can also be observed on Less Wrong. It is not the purpose of this article to determine the exact extent of this problem. It shall merely bring to attention that "what you get is not what you should see." However, I am afraid this effect is largely undervalued.
This result is by design and therefore to be expected. A rational agent will, by definition, post incorrect or incomplete information, or not post at all, in the following instances:
- Cost-benefit analysis: A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by costing him effort, unless some gain (status, money, happiness, …) offsets the former effect. Example: Have you seen articles by Mark Zuckerberg? But I also argue that for a random John Doe, the personal cost-benefit analysis of posting an article is negative. Even more, the value of your time should approach infinity if you really drink the LW Kool-Aid; however, this shall be the topic of a subsequent article. I suspect the theme of this article may also be restated as a free-riding problem, as it postulates the non-production or under-production of valuable articles and other contributions.
- Conflicting with law: Topics like drugs (in the western world), and maybe politics or sexuality in other parts of the world, are biased due to the risk of persecution, punishment, extortion, etc. And many topics, such as those in the spheres of rationality, transhumanism, and effective altruism, are at least highly sensitive, especially when you continue arguing until you reach their moral extremes.
- Inconvenience of disagreement: Due to the effort of posting truly anonymously (which currently requires a truly anonymous e-mail address and so forth), disagreeing posts will be avoided, particularly when the original poster is of high status and the risk of it rubbing off on one's other articles is thus increased. This is obviously even truer for personal interactions. Side note: The reverse situation may also apply: more agreement (likes) with high status.
- Dark knowledge: Even if I know how to acquire a sniper rifle that cannot be traced, I will not share this knowledge (as for all other reasons, there are substantially better examples, but I do not want to make spreading dark knowledge a focus of this article).
- Signaling: Seriously, would you discuss your affiliation with LW in a job interview?! Or tell your friends that you are afraid we live in a simulation? (If you don't see my point, your rationality is totally off base; see the next point.) LW user "Timtyler" commented before: "I also found myself wondering why people remained puzzled about the high observed levels of disagreement. It seems obvious to me that people are poor approximations of truth-seeking agents—and instead promote their own interests. If you understand that, then the existence of many real-world disagreements is explained: people disagree in order to manipulate the opinions and actions of others for their own benefit."
- WEIRD-M-LW: It is a known problem that articles on LW are going to be written by authors who are in the overwhelming majority western, educated, industrialized, rich, democratic, and male. The LW surveys show distinctly that there are most likely many further attributes in which the population on LW differs from the rest of the world. LW user "Jpet" argued very nicely in a comment: "But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board." LW could certainly use more diversity. Personal anecdote: I was dumbfounded by the current discussion around LW T-shirts sporting slogans such as "Growing Mentally Stronger," which seemed to me intuitively highly counterproductive. I then asked my wife, who is far more into fashion and not at all into LW. Her comment (Crocker's warning): "They are great! You should definitely buy one for your son if you want him to go to high school and be all by himself for the next couple of years; that is, except for the bullying, maybe."
- Genes, minds, hormones & personal history: (Even) rational agents are highly influenced by these factors. This fact seems underappreciated. Think of SSC's "What universal human experiences are you missing without realizing it?" Think of inferential distances and the typical mind fallacy. Think of slight changes in beliefs after drinking coffee, having worked out, being deeply in love for the first time or seeing your child born, being extremely hungry, or wanting to stand on, and then standing on, the top of a mountain (especially Mt. Everest). Russell pointed out the interesting and strong effect of Schopenhauer's and Nietzsche's personal histories on their misogyny. However, it would be a stretch to simply call them irrational. In every discussion, you have to start somewhere, but finding a starting point is a lot more difficult when the discussion partners are more diverse. These factors may not result in direct misinformation on LW, but they certainly shape the conversation (see also the next point).
- Priorities: Specific "darlings" of the LW sphere, such as Newcomb's paradox or MWI, are regularly discussed. Just one moment of not paying attention to this bias, and you may assume they are really relevant. For those of us not currently programming FAI, they aren't, and they steal attention from more important issues.
- Other beliefs/goals: Close to selfishness, but not quite the same. If an agent's beliefs and goals differ from most others', the discussion would benefit from their post. Even so, that by itself may not be a sufficient reason for the agent to post. Example: Imagine somebody like Ben Goertzel. His beliefs on AI, for instance, differed from the mainstream on LW. This did not necessarily result in him posting an article on LW. And to my knowledge, he won't, at least not directly. Plus, LW may try to slow him down, as he seems less concerned about the F of FAI.
- Vanity: Considering the amount of self-help threads, nerdiness, and the like on LW, it may be suspected that some refrain from posting out of self-respect, e.g., "I do not want to signal to myself that I belong to this tribe." This may sound outlandish, but then again, have a look at the Facebook groups of LW and other rationalists, where people frequently ask how they can be more interesting, or how they "can train how to pause for two seconds before they speak to increase their charisma." Again, if this sounds perfectly fine to you, that may be bad news.
- Barriers to entry: Your first post requires creating an account. Karma that signals the quality of your posts is still absent. An aspiring author may question the relative importance of his opinion (especially for highly complex topics), his understanding of the problem, the quality of his writing, and whether his research on the chosen topic is sufficient.
- Nothing new under the sun: Writing an article requires the bold assumption that its marginal utility is significantly above zero. The likelihood of this probably decreases with the number of existing posts, which is, as of now, quite impressive. Patrissimo's article (footnote [10]) addresses the same point; others mention being afraid of "reinventing the wheel."
- Error: I should point out that most of the reasons brought forward in this list concern deliberate misinformation. In many cases, an article will simply be wrong without the author realizing it. Examples: facts (the earth is flat), predictions (planes cannot fly), and, seriously underestimated, horizon effects (with more information, the rational agent realizes that his action did not yield the desired outcome, e.g. a ban on plastic bags).
- Protection of the group: Opinions, though important, may not be discussed in order to protect the group or its image to outsiders. See "is LW a c***" and Roko's ***. This argument can also be brought forward much more subtly: an agent may, for example, hold the opinion that rationality concepts are information hazards by nature if they reduce the happiness of the otherwise blissfully unaware.
- Topicality: This is a problem specific to LW. Many of the great posts, as well as the sequences, originated about five to ten years ago. While interest in AI has now reached mainstream awareness, the solid intellectual basis (centered around a few individuals) which LW offered seems to be gradually breaking away, and rationality topics are experiencing their diaspora. What remains is a less balanced account of important topics in the sphere of rationality, and new authors are discouraged from entering the conversation.
- Russell’s antinomy: Is the contribution that states its futility ever expressed? Random example article title: “Writing articles on LW is useless because only nerds will read them."
- +Redundancy: If any of the above reasons apply, I may choose not to post. However, I also expect a rational agent with sufficiently close knowledge to attain the same knowledge himself, so it is at the same time not absolutely necessary to post. An article will "only" speed up the time required to understand a new concept and reduce the likelihood of rationalists diverging due to disagreement (if Aumann is ignored) or faulty argumentation.
This list is not exhaustive. If you do not find a factor in this list that you expect to account for much of the effect, I will appreciate a hint in the comments.
There are a few outstanding examples pointing in the opposite direction. They appear to provide uncensored accounts of their way of thinking and take arguments to their logical extremes when necessary. Most notably Bostrom and Gwern, but then again, feel free to read the latter’s posts on endured extortion attempts.
A somewhat flippant conclusion (more in a FB than LW voice): After reading the article from 2010, I cannot expect this article (or the ones possibly following that have already been written) to have a serious impact. It could thus be concluded that it should not have been written. Then again, observing our own thinking patterns, we can identify the influence of many thinkers who may have suspected the same (hubris not intended). And step by step, we will be standing on the shoulders of giants. At the same time, keep in mind that articles from LW won't get you there. They represent only a small piece of the jigsaw. You may want to read some, observe how instrumental rationality works in the "real world," and, finally, you have to draw the critical conclusions for yourself. Nobody truly rational will lay them out for you. LW is great if you have an IQ of 140 and are tired of superficial discussions with the hairstylist in your village X. But keep in mind that the instrumental rationality of your hairstylist may still surpass yours, and I don't even need to say much about that of your president, business leader, or club Casanova. And yet, they may be literally dead wrong, because they have overlooked AI and SENS.
A final personal note: Kudos to the giants for building this great website and starting point for rationalists and the real-life progress in the last couple of years! This is a rather skeptical article to start with, but it does have its specific purpose of laying out why I, and I suspect many others, almost refrained from posting.
Non-communicable Evidence
In this video, Douglas Crockford (JavaScript MASTER) says:
So I think programming would not be possible without System I; without the gut. Now, I have absolutely no evidence to support that statement, but my gut tells me it's true, so I believe it.
1
I don't think he has "absolutely no evidence". In worlds where DOUGLAS CROCKFORD has a gut feeling about something related to programming, how often does that gut feeling end up being correct? Probably a lot more than 50% of the time. So according to Bayes, his gut feeling is definitely evidence.
The problem isn't that he lacks evidence. It's that he lacks communicable evidence. He can't say "I believe A because X, Y and Z." The best he could do is say, "just trust me, I have a feeling about this".
Well, "just trust me, I have a feeling about this" does qualify as evidence if you have a good track record, but my point is that he can't communicate the rest of the evidence his brain used to produce the resulting belief.
2
How do you handle a situation where you're having a conversation with someone and they say, "I can't explain why I believe X; I just do."
Well, as far as updating beliefs, I think the best you could do is update on the track record of the person. I don't see any way around it. For example, you should update your beliefs when you hear Douglas Crockford say that he has a gut feeling about something related to programming. But I don't see how you could do any further updating of your beliefs. You can't actually see the evidence he used, so you can't use it to update your beliefs. If you do, the Bayes Police will come find you.
Perhaps it's also worth trying to dig the evidence out of the other person's subconscious.
- If the person has a good track record, maybe you could say, "Hmm, you have a good track record so I'm sad to hear that you're struggling to recall why it is you believe what you do. I'd be happy to wait for you to spend some time trying to dig it up."
- Maybe there are some techniques that can be used to "dig evidence out of one's subconscious". I don't know of any, but maybe they exist.
3
Ok, now let's talk about what you shouldn't do. You shouldn't say, "Well if you can't provide any evidence, you shouldn't believe what you do." The problem with that statement is that it assumes that the person has "no evidence". This was addressed in Section 1. It's akin to saying, "Well Douglas Crockford, you're telling me that you believe X and you have a fantastic track record, but I don't know anything about why you believe it, so I'm not going to update my beliefs at all, and you shouldn't either."
Brains are weird and fantastic thingies. They process information and produce outputs in the form of beliefs (amongst other things). Sometimes they're nice and they say, "Ok Adam - here is what you believe, and here is why you believe it". Other times they're not so nice and the conversation goes like this:
Brain: Ok Adam, here is what you think.
Adam: Awesome, thanks! But wait - why do I think that?
Brain: Fuck you, I'm not telling.
Adam: Fuck me? Fuck you!
Just because brains can be mean doesn't mean they should be discounted.
Reflexive self-processing is literally infinitely simpler than the many-worlds interpretation
I recently stumbled upon the concept of "reflexive self-processing", which is Chris Langan's "Reality Theory".
I am not a physicist, so if I'm wrong or someone can better explain this, or if someone wants to break out the math here, that would be great.
The idea of reflexive self-processing is that in the double slit experiment for example, which path the photon takes is calculated by taking into account the entire state of the universe when it solves the wave function.
1. Isn't this already implied by the math of how we know the wave function works? Are there any alternate theories that are even consistent with the evidence?
2. Don't we already know that the entire state of the universe is used to calculate the behavior of particles? For example, doesn't every body produce a gravitational field which acts, with some magnitude of force, at any distance, such that in order to calculate the trajectory of a particle to the nth decimal place, you would need to know about every other body in the universe? (A small numerical sketch of this point follows below.)
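As a rough back-of-the-envelope sketch of point 2 (illustrative numbers only, not part of the original post), Newtonian gravity gives a nonzero pull from every body at any distance, but the pull from distant bodies only shows up many decimal places down:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the Sun, kg
LIGHT_YEAR = 9.461e15  # meters

def accel(mass_kg, distance_m):
    """Newtonian gravitational acceleration: a = G * M / r**2."""
    return G * mass_kg / distance_m ** 2

a_from_sun = accel(M_SUN, 1.496e11)             # pull of the Sun at Earth's orbit
a_from_far_star = accel(M_SUN, 4 * LIGHT_YEAR)  # pull of a Sun-mass star ~4 light years away

print(a_from_sun)       # ~5.9e-3 m/s^2
print(a_from_far_star)  # ~9.3e-14 m/s^2: nonzero at any distance, but it only
                        # matters at the "nth decimal place" mentioned above
```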
This is, literally, infinitely more parsimonious than the many-worlds theory, which posits that an infinite number of entire universes of complexity are created at the juncture of every little physical event where multiple paths are possible. Supporting MWI because of its simplicity was always a really horrible argument for this reason, and it seems like we do have a sensible, consistent theory in this reflexive self-processing idea, which is infinitely simpler, and therefore should be infinitely preferred by a rationalist to MWI.
Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife
This is a bit rough, but I think that it is an interesting and potentially compelling idea. To keep this short, and accordingly increase the number of eyes over it, I have only sketched the bare bones of the idea.
1) Empirically, people have varying intuitions and beliefs about causality, particularly in Newcomb-like problems (http://wiki.lesswrong.com/wiki/Newcomb's_problem, http://philpapers.org/surveys/results.pl, and https://en.wikipedia.org/wiki/Irresistible_grace).
2) Also, as an empirical matter, some people believe in taking actions after the fact, such as one-boxing, or Calvinist “irresistible grace”, to try to ensure or conform with a seemingly already determined outcome. This might be out of a sense of retrocausality, performance, moral honesty, etc. What matters is that we know that they will act it out, despite it violating common sense causality. There has been some great work on decision theory on LW about trying to thread this needle well.
3) The second disjunct of the simulation argument (http://wiki.lesswrong.com/wiki/Simulation_argument) shows that the decision making of humanity is evidentially relevant to what our subjective credence should be that we are in a simulation. That is to say, if we are actively headed toward making simulations, we should increase our credence of being in a simulation; if we are actively headed away from making simulations, through either existential risk or law/policy against it, we should decrease our credence.
4) Many, if not most, people would like for there to be a pleasant afterlife after death, especially if we could be reunited with loved ones.
5) There is no reason to believe that simulations which are otherwise nearly identical copies of our world could not contain, after the simulated bodily death of the participants, an extremely long-duration, though finite, "heaven"-like afterlife shared by simulation participants.
6) Our heading towards creating such simulations, especially if they were capable of nesting simulations, should increase our credence that we exist in such a simulation and that we should perhaps expect a heaven-like afterlife of long, though finite, duration.
7) Those who believe in alternative causality, or retrocausality, in Newcomb-like situations should be especially excited about the opportunity to push the world towards surviving, allowing these types of simulations, and creating them, as it would potentially suggest, analogously, that if they work towards creating simulations with heaven-like afterlives, they might in some sense be "causing" such a heaven to exist for themselves, and even for friends and family who have already died. Such an idea of life after death, and especially of being reunited with loved ones, can be extremely compelling.
8) I believe that people matching the above description - that is, holding both an intuition of alternative causality and finding such a heaven-like afterlife compelling - exist. Further, the existence of such people, and their associated motivation to try to create such simulations, should increase the credence even of two-boxing types that we already live in such a world with a heaven-like afterlife. This is because knowledge of a motivated minority desiring simulations should increase credence in the likely success of simulations. This is essentially showing that "this probably happened before, one level up" from the two-box perspective.
9) As an empirical matter, I also think that there are people who would find the idea of creating simulations with heaven-like afterlives compelling, even if they are not one-boxers, from a simply altruistic perspective, both since it is a nice thing to do for the future sim people, who can, for example, probabilistically have a much better existence than biological children on earth can, and as it is a nice thing to do to increase the credence (and emotional comfort) of both one-boxers and two-boxers in our world thinking that there might be a life after death.
10) This creates the opportunity for a secular movement in which people work towards creating these simulations, and use this work and potential success in order to derive comfort and meaning from their life. For example, making donations to a simulation-creating or promoting, or existential threat avoiding, think-tank after a loved one’s death, partially symbolically, partially hopefully.
11) There is at least some room for Pascalian considerations, even for two-boxers who allow for some humility in their beliefs. Nozick believed one-boxers would become two-boxers if Box A were raised to $900,000, and two-boxers would become one-boxers if Box A were lowered to $1. Similarly, trying to work towards these simulations, even if you do not find it altruistically compelling, and even if you think that the odds of alternative or retrocausality are infinitesimally small, might make sense in that the reward could be extremely large, including potentially trillions of lifetimes worth of time spent in an afterlife "heaven" with friends and family.
Finally, this idea might be one worth filling in (I have been, in my private notes, for over a year, but am a bit shy to debut all that just yet; even working up the courage to post this was difficult), if only because it is interesting and could be used as a hook to get more people interested in existential risk, including the AI control problem. This is because existential catastrophe is probably the greatest enemy of credence in the future of such simulations, and accordingly of our reasonable credence in thinking that we have such a heaven awaiting us after death now. A short hook headline like "avoiding existential risk is key to the afterlife" can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so creating publicity which would help in finding more similarly minded folks to get involved in the work of MIRI, FHI, CEA, etc. There are also some really interesting ideas about acausal trade, and game theory between higher and lower worlds, as a form of "compulsion" in which they punish worlds for not creating heaven-containing simulations (thereby affecting their credence as observers of the simulation), in order to reach an equilibrium in which simulations with heaven-like afterlives are universal, or nearly universal. More on that later if this is received well.
Also, if anyone would like to join with me in researching, bull sessioning, or writing about this stuff, please feel free to IM me. Also, if anyone has a really good, non-obvious pin with which to pop my balloon, preferably in a gentle way, it would be really appreciated, as I am spending a lot of energy and time on this and would rather not continue if it is fundamentally flawed in some way.
Thank you.
*******************************
November 11 Updates and Edits for Clarification
1) There seems to be confusion about what I mean by self-location and credence. A good way to think of this is the Sleeping Beauty Problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem)
If I imagine myself as Sleeping Beauty (and who doesn't?), and I am asked on Sunday what my credence is that the coin will be tails, I will say 1/2. If I am awakened during the experiment without being told which day it is and am asked what my credence is that the coin was tails, I will say 2/3. If I am then told it is Monday, I will update my credence to 1/2. If I am told it is Tuesday, I update my credence to 1. If someone asks me two days after the experiment about my credence of it being tails, and I somehow still do not know the day of the week, I will say 1/2. Credence changes with where you are and with what information you have. As we might be in a simulation, we are somewhere in the "experiment days," and information can help orient our credence. As humanity potentially has some say in whether or not we are in a simulation, information about how humans make decisions about these types of things can and should affect our credence. (A small sketch of this credence arithmetic follows at the end of this point.)
Imagine Sleeping Beauty is a lesswrong reader. If Sleeping Beauty is unfamiliar with the simulation argument, and someone asks her about her credence of being in a simulation, she probably answers something like 0.0000000001% (all numbers for illustrative purposes only). If someone shows her the simulation argument, she increases to 1%. If she stumbles across this blog entry, she increases her credence to 2%, and adds some credence to the additional hypothesis that it may be a simulation with an afterlife. If she sees that a ton of people get really interested in this idea, and start raising funds to build simulations in the future and to lobby governments both for great AI safeguards and for regulation of future simulations, she raises her credence to 4%. If she lives through the AI superintelligence explosion and simulations are being built, but not yet turned on, her credence increases to 20%. If humanity turns them on, it increases to 50%. If there are trillions of them, she increases her credence to 60%. If 99% of simulations survive their own run-ins with artificial superintelligence and produce their own simulations, she increases her credence to 95%.
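Here is a minimal sketch of the Sleeping Beauty arithmetic used above, under the "thirder" reading in which the three possible awakenings are treated as equally likely once you know only that you have been awakened (that assumption, not the code, is doing the work):

```python
from fractions import Fraction

# The three possible awakenings during the experiment, as (coin outcome, day).
awakenings = [
    ("heads", "Monday"),
    ("tails", "Monday"),
    ("tails", "Tuesday"),
]

def credence_tails(known_day=None):
    """Credence that the coin was tails, given that you have been awakened
    and, optionally, that you have been told which day it is."""
    relevant = [a for a in awakenings if known_day is None or a[1] == known_day]
    tails = [a for a in relevant if a[0] == "tails"]
    return Fraction(len(tails), len(relevant))

print(credence_tails())           # 2/3 -- awakened, day unknown
print(credence_tails("Monday"))   # 1/2 -- told it is Monday
print(credence_tails("Tuesday"))  # 1   -- told it is Tuesday
```

The simulation version in the paragraph above works the same way: each new piece of information (the argument exists, simulations are being built, they are turned on, most observers like you are simulated) changes which possibilities remain relevant and thereby shifts the credence.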
2) This set of simulations does not need to recreate the current world or any specific people in it. That is a different idea that is not necessary to this argument. As written, the argument is premised on the idea of creating fully unique people. The point would be to increase our credence that we are functionally identical in type to the unique individuals in the simulation. This is done by creating ignorance or uncertainty in simulations, so that the majority of people similarly situated, in a world which may or may not be in a simulation, are in fact in a simulation. This should, in our ignorance, increase our credence that we are in a simulation. The point is about how we self-locate, as discussed in the original article by Bostrom. It is a short 12-page read, and if you have not read it yet, I would encourage it: http://simulation-argument.com/simulation.html. The point about past loved ones I was making was to bring up the possibility that the simulations could be designed to transfer people to a separate afterlife simulation where they could be reunited after dying in the first part of the simulation. This was not about trying to create something for us to upload ourselves into, along with attempted replicas of dead loved ones. This staying in one simulation through two phases, a short life and a relatively long afterlife, also has the advantage of circumventing the teletransportation paradox, as "all of the person" can be moved into the afterlife part of the simulation.
Survey Articles: A justification
There seems to be a growing consensus among the community that while Less Wrong is great at improving epistemic rationality, it is rather lacking when it comes to resources for instrumental rationality. I've been thinking about how to address this. This can be very hard, because many of the questions most important to instrumental rationality lack an objective answer and depend heavily on individual circumstance. Consider, for example, the question "How do I become a more interesting person?", which is the subject of the first survey article I've published. One person might easily have the resources to go travelling and gain new experiences, while another person might be prevented by their financial situation. One person may enjoy the process of broadening their experience by reading, while another may simply detest books. Ignoring these individual circumstances will lead to much of the advice being unsuitable.
It therefore seems that in a general resource, which is forced by its very nature to ignore individual circumstances, the best response is to gather together as many ideas as possible. It is hoped that each rationalist has the capacity to critically examine each suggestion that is proposed and reject those that would be counterproductive. This differs from a standard list article in that, instead of limiting itself to an arbitrary number of ideas, or only using ideas thought of by the author, I have made a comprehensive list and taken ideas from different sources. Taking ideas from different sources is extremely important - a single person can only possess so much creativity. It also decreases the influence of the author's subjective point of view - I might never have said something myself, but I might be willing to include it in a list of ideas. Another problem with lists is that if they are wordy, they take a long time to read through, while if they are concise, they may be misunderstood. Summarising whilst linking to a source means that extra detail is available for those who need it.
One flaw is that the production of these lists will always be greatly subjective. I really like Mark Manson and am probably going to quote him a lot in these lists, but another person might love The Secret and quote it everywhere instead. Regardless of this subjectivity, if you think that a particular source lacks value, then you can choose to ignore that source and just read the rest of the article. If there is a noticeable omission, that can be addressed in the comments, or, in extreme cases, by producing a rival list. So I think that these articles can work well regardless of subjectivity.
What problem is this designed to solve?
This has already been discussed above, but I want to go into more detail about the current process when someone has one of these subjective questions. The current process probably looks like Googling the question or searching for it on a trusted source (e.g. Quora or Reddit). There are many good answers and good ideas, but they are spread out all over the Internet. It is very possible for someone to fail to find a suggestion that would have helped them. Gathering together a large number of different resources helps to minimise this. It also helps people discover new sources that they might not have thought to look at.
What feedback am I after?
As well as general support or criticisms of the idea, I'd also like to see some suggestions on which questions you'd love to see a survey for.