Seven Apocalypses
0: Recoverable Catastrophe
An apocalypse is an event that permanently damages the world. This scale is for scenarios that are much worse than any normal disaster. Even if 100 million people die in a war, the rest of the world can eventually rebuild and keep going.
1: Economic Apocalypse
The human carrying capacity of the planet depends on the world's systems of industry, shipping, agriculture, and organizations. If the planet's economic and infrastructural systems were destroyed, then we would have to rely on more local farming, and we could not support as high a population or standard of living. In addition, rebuilding the world economy could be very difficult if the Earth's mineral and fossil fuel resources are already depleted.
2: Communications Apocalypse
If large regions of the Earth become depopulated, or if sufficiently many humans die in the catastrophe, it's possible that regions and continents could be isolated from one another. In this scenario, globalization is reversed by obstacles to long-distance communication and travel. Telecommunications, the internet, and air travel are no longer common. Humans are reduced to multiple, isolated communities.
3: Knowledge Apocalypse
If the loss of human population and institutions is so extreme that a large portion of human cultural or technological knowledge is lost, it could reverse one of the most reliable trends in modern history. Some innovations and scientific models can take millennia to develop from scratch.
4: Human Apocalypse
Even if the human population were to be violently reduced by 90%, it's easy to imagine the survivors slowly resettling the planet, given the resources and opportunity. But a sufficiently extreme transformation of the Earth could drive the human species completely extinct. To many people, this is the worst possible outcome, and any further developments are irrelevant next to the end of human history.
5: Biosphere Apocalypse
In some scenarios (such as the physical destruction of the Earth), one can imagine the extinction not just of humans, but of all known life. Only astrophysical and geological phenomena would be left in this region of the universe. In this timeline we are unlikely to be succeeded by any familiar life forms.
6: Galactic Apocalypse
A rare few scenarios have the potential to wipe out not just Earth, but also all nearby space. This usually comes up in discussions of hostile artificial superintelligence, or very destructive chain reactions of exotic matter. However, the nature of cosmic inflation and extraterrestrial intelligence is still unknown, so it's possible that some phenomenon will ultimately interfere with the destruction.
7: Universal Apocalypse
This form of destruction is thankfully exotic. People discuss the loss of all of existence as an effect of topics like false vacuum bubbles, simulationist termination, solipsistic or anthropic observer effects, Boltzmann brain fluctuations, time travel, or religious eschatology.
The goal of this scale is to give a little more resolution to a speculative, unfamiliar space, in the same sense that the Kardashev Scale provides a little terminology for talking about the distant topic of interstellar civilizations. It can be important in x-risk conversations to distinguish between disasters and truly worst-case scenarios. Even if some of these scenarios are unlikely or impossible, they are nevertheless discussed, and terminology can be useful to facilitate conversation.
A Weird Trick To Manage Your Identity
I’ve always been uncomfortable being labeled “American.” Though I’m a citizen of the United States, the term feels restrictive and confining. It obliges me to identify with aspects of the United States that I’m not thrilled about. I have similar feelings of limitation with respect to other labels I assume. Some of these labels don’t feel completely true to who I am, or impose perspectives on me that diverge from my own.
These concerns are why it's useful to keep your identity small, use identity carefully, and be strategic in choosing it.
Yet these pieces speak more to System 2 than to System 1. I recently came up with a weird trick that has made me more comfortable identifying with groups or movements that resonate with me, by giving me a visceral, System 1 identity-management strategy. The trick is simply to put the word “weird” before any identity category I think about.
I’m not an “American,” but a “weird American.” Once I started thinking about myself as a “weird American,” I was able to think calmly through which aspects of being American I identified with and which I did not, setting the latter aside from my identity. For example, I used the term “weird American” to describe myself when meeting a group of foreigners, and we had great conversations about what I meant and why I used the term. This subtle change enables my desire to identify with the label “American,” but allows me to separate myself from any aspects of the label I don’t support.
Beyond nationality, I’ve started using the term “weird” in front of other identity categories. For example, I'm a professor at Ohio State. I used to become deeply frustrated when students didn’t prepare adequately for their classes with me. No matter how hard I tried, or whatever clever tactics I deployed, some students simply didn’t care. Instead of allowing that situation to keep bothering me, I started to think of myself as a “weird professor” - one who set up an environment that helped students succeed, but didn’t feel upset and frustrated by those who failed to make the most of it.
I’ve been applying the weird trick in my personal life, too. Thinking of myself as a “weird son” makes me feel more at ease when my mother and I don’t see eye-to-eye; thinking of myself as a “weird nice guy,” rather than just a nice guy, has helped me feel confident about my decisions to be firm when the occasion calls for it.
So, why does this weird trick work? It’s rooted in reframing and distancing, two research-based methods for changing our thought frameworks. Reframing involves changing one’s framework of thinking about a topic in order to create more beneficial modes of thinking. For instance, in reframing myself as a weird nice guy, I have been able to say “no” to requests people make of me, even though my intuitive nice-guy tendency tells me I should say “yes.” Distancing is a method of emotional management: separating oneself from an emotionally tense situation and observing it from a third-person, external perspective. Thus, if I think of myself as a weird son, I don’t have nearly as many negative emotions during conflicts with my mom, which gives me space for calm and sound decision-making.
Thinking of myself as "weird" also applies to the context of rationality and effective altruism for me. Thinking of myself as a "weird" aspiring rationalist and EA helps me be more calm and at ease when I encounter criticisms of my approach to promoting rational thinking and effective giving. I can distance myself from the criticism better, and see what I can learn from the useful points in the criticism to update and be stronger going forward.
Overall, using the term “weird” before any identity category has freed me from confinements and restrictions associated with socially-imposed identity labels and allowed me to pick and choose which aspects of these labels best serve my own interests and needs. I hope being “weird” can help you manage your identity better as well!
Open thread, Sep. 19 - Sep. 25, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Weekly LW Meetups
This summary was posted to LW Main on September 16th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Baltimore Area / UMBC Weekly Meetup: 18 September 2016 07:00PM
- Munich Meetup in September: 17 September 2016 04:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- [Moscow] Role playing game based on HPMOR in Moscow: 17 September 2016 03:00PM
- Moscow: keynote for the first Kocherga year, text analysis, rationality applications discussion: 18 September 2016 02:00PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- Washington, D.C.: Steelmanning: 18 September 2016 03:30PM
- Vienna: 24 September 2016 03:00PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Why we may elect our new AI overlords
In which I examine some of the latest developments in automated fact checking and prediction markets for policies, and propose that we get rich voting for robot politicians.
http://pirate.london/2016/09/why-we-may-elect-our-new-ai-overlords/
Rationality Quotes September 2016
Another month, another rationality quotes thread. The rules are:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
- Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
New Pascal's Mugging idea for potential solution
I'll keep this quick:
In general, the problem presented by the Mugging is this: as we examine the utility of a given act for each possible world we could be in, in order from most probable to least probable, the utilities can grow much faster than the probabilities shrink. Thus it seems that the standard maxim "Maximize expected utility" is impossible to carry out, since there is no such maximum. When we go down the list of hypotheses, multiplying the utility of the act on each hypothesis by the probability of that hypothesis, the result does not converge to anything.
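To make the divergence concrete, here is a minimal sketch with toy numbers of my own choosing (not anything from the Mugging literature): give hypothesis n a complexity-style probability of 2^-n, and let the utility at stake on hypothesis n grow like 3^(2^n). Working in log space to avoid astronomically large numbers, each term of the expected-utility sum just keeps growing:

```python
import math

# Toy model: P(hypothesis n) = 2**-n (a complexity penalty), while the
# utility at stake on hypothesis n is U(n) = 3**(2**n).  If utility grows
# faster than probability shrinks, the expected-utility series diverges.
for n in range(1, 11):
    log2_prob = -n                       # log2 of 2**-n
    log2_util = (2 ** n) * math.log2(3)  # log2 of 3**(2**n)
    print(f"n={n:2d}  log2(P*U) = {log2_prob + log2_util:,.1f}")
# Each summand is larger than the last, without bound, so the partial
# sums never converge: there is no expected utility to maximize.
```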
Here's an idea that may fix this:
For every possible world W of complexity N, there's another possible world of complexity N+c that's just like W, except that it has two parallel, identical universes instead of just one. (If it matters, suppose that they are connected by an extra dimension.) (If this isn't obvious, say so and I can explain.)
Moreover, there's another possible world of complexity N+c+1 that's just like W except that it has four such parallel identical universes.
And a world of complexity N+c+X that has R parallel identical universes, where R is the largest number that can be specified in X bits of information.
So, take any given extreme mugger hypothesis like "I'm a matrix lord who will kill 3^^^^3 people if you don't give me $5." Uncontroversially, the probability of this hypothesis will be something much smaller than the probability of the default hypothesis. Let's be conservative and say the ratio is 1 in a billion.
(Here's the part I'm not so confident in)
Translating that into hypotheses with complexity values, that means that the mugger hypothesis has about 30 more bits of information in it than the default hypothesis.
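As a quick sanity check on that conversion (assuming the usual identification of each factor-of-2 probability penalty with one bit of description length):

```python
import math

# One bit of extra description length halves the prior probability, so a
# hypothesis a billion times less likely costs about log2(10**9) extra bits.
print(math.log2(10 ** 9))  # ~29.9, i.e. roughly 30 bits
```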
So, assuming c is small (and actually I think this assumption can be done away with) there's another hypothesis, equally likely to the Mugger hypothesis, which is that you are in a duplicate universe that is exactly like the universe in the default hypothesis, except with R duplicates, where R is the largest number we can specify in 30 bits.
That number is very large indeed. (See the Busy Beaver function.) My guess is that it's going to be way way way larger than 3^^^^3. (It takes less than 30 bits to specify 3^^^^3, no?)
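For a sense of how compact such numbers are, here is a sketch of Knuth's up-arrow notation (only tiny inputs are actually evaluable, and the true "largest number specifiable in X bits" is the uncomputable Busy Beaver bound, which grows faster still):

```python
def up(a, n, b):
    """Knuth's up-arrow: up(a, 1, b) = a**b; higher n iterates level n-1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3  = 27
print(up(3, 2, 3))  # 3^^3 = 3**27 = 7625597484987
# 3^^^^3 is up(3, 4, 3): the definition above fits in a few short lines,
# which is exactly the point - the description is tiny even though
# actually evaluating the number is hopeless.
```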
So this isn't exactly a formal solution yet, but it seems like it might be on to something. Perhaps our expected utility converges after all.
Thoughts?
(I'm very confused about all this which is why I'm posting it in the first place.)
Rationality Quotes August 2016
Another month, another rationality quotes thread. The rules are:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
- Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Clean work gets dirty with time
Edited for clarity (hopefully) with thanks to Squirrell_in_Hell.
Lately, I find myself more and more interested in how the concept of "systematized winning" can be applied to large groups of people who have one thing in common - and that thing is not even a shared time, but a hobby or a general interest in a specific discipline. It doesn't seem (to me) to much trouble people working on their own individual qualities - performers, martial artists, managers (those who would self-identify as belonging to these sets) - but I am basing this on general impressions and will be glad to be corrected. It does seem to be the norm for some other sets, like sailors, who keep correcting their maps every voyage.
The field in which I have been for some years (botany) does have something similar to what sailors do, which lets us see how floras change over time, etc. However, different questions arise when novel sub-disciplines branch off the main trunk, and naturally, the people asking these new questions keep reaching back for pre-existing observations. And often they don't check how much weight can be assigned to these observations - a bad habit, I think, that won't lead to "winning".
It is not "industrial rationality" per se, but a distantly related thing, and I think we might have to recognize it somehow. Or at least, recognize that it requires different assumptions... no set victory condition, for example... Still, it probably matters to more living people than pure "industrial rationality" does, and ignoring it won't make it go away.
[CORE] Concepts for Understanding the World
Background:
I've recently been doing a big project to increase my scholarship and modeling power for both rationality and traditional "serious" topics. One thing I've found very useful is taking notes with a clear structure.
The structure I'm using currently is as follows:
- write down useful concepts,
- write down (as a separate category) useful heuristics & things to do in various situations,
- do not write facts, opinions or anything else (I rely on unaided memory to get more filtering).
Heuristic: learn concepts before facts!
Note that you can be mistaken about facts, but you can't harm your epistemology by learning concepts. Even if a concept turns out to be useless or misleading, you are better off knowing about it, understanding how it's misleading, and being able to avoid the trap when you see it.
Let's share concepts!
Please give (at a minimum) a name and a reference (link). A short description in plain language is also welcome.
New LW Meetup: Boise ID, Bay City MI
This summary was posted to LW Main on July 8th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
- Bay City Meetup: 19 August 2016 01:25PM
- Boise, ID Meetup: 24 July 2016 02:30PM
- Dallas Meetup: 09 July 2016 01:30PM
Irregularly scheduled Less Wrong meetups are taking place in:
- Ann Arbor Area Amalgam of Rationalist-Adjacent Anthropoids: Assemblage at Adam's: 09 July 2016 07:00PM
- Australian-ish Online Hangout July: 15 July 2016 07:30PM
- Baltimore Weekly Meetup: 10 July 2016 08:00PM
- European Community Weekend: 02 September 2016 03:35PM
- San Antonio Meetup: 10 July 2016 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Austin, TX - Quack's: 09 July 2016 01:30PM
- [Melbourne] Soldier Mindset and Scout Mindset: 09 July 2016 03:30PM
- [Moscow] Role playing game based on HPMOR in Moscow: 16 July 2016 03:00PM
- San Francisco Meetup: Cooking: 11 July 2016 06:15PM
- San Jose Meetup: Park Day (V): 10 July 2016 03:00PM
- Sydney Rationality Dojo - August 2016: 07 August 2016 04:00PM
- Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- Washington, D.C.: Webcomics: 10 July 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
July 2016 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
Two kinds of Expectations, *one* of which is helpful for rational thinking
Expectation is often used to refer to two totally distinct things: entitlement and anticipation. My basic opinion is that entitlement is a rather counterproductive mental stance to have, while anticipations are really helpful for improving your model of the world.
Here are some quick examples to whet your appetite…
1. Consider a parent who says to their teenager: “I expect you to be home by midnight.” The parent may or may not anticipate the teen being home on time (even after this remark). Instead, they’re staking out a right to be annoyed if the teen isn’t back on time.
Contrast this with someone telling the person they’re meeting for lunch “I expect I’ll be there by 12:10” as a way to let them know that they’re running a little late, so that the recipient of the message knows not to worry that maybe they’re not in the correct meeting spot, or that the other person has forgotten.
2. A slightly more involved example: I have a particular kind of chocolate bar that I buy every week at the grocery store. Or at least I used to, until a few weeks ago when they stopped stocking it. They still stock the Dark version, but not the Extra Dark version I’ve been buying for 3 years. So the last few weeks I’ve been disappointed when I go to look. (Eventually I’ll conclude that it’s gone forever, but for now I remain hopeful.)
There’s a temptation to feel indignant at the absence of this chocolate bar. I had an expectation that it would be there, and it wasn’t! How dare they not stock it? I’m a loyal customer, who shops there every week, and who even tells others about their points card program! I deserve to have my favorite chocolate bar in stock!
…says this voice. This is the voice of entitlement.
The entitlement also wants to not just politely ask a shelf stocker if they have any out back, but to do things like walk up to the customer service desk and demand that they give me a discount on the Dark ones because they’ve been out of the Extra Dark ones for three weeks now. To make a fuss.
Entitlement is the feeling that you have a right to something. That you deserve it. That it’s owed to you.
(Relevant aside: the word “ought” used to be a synonym for “owed”, i.e. the past tense of “to owe”.)
A brief history of entitlement
That’s not what the term “entitlement” used to mean though. It used to refer not to the feeling but simply to the fact: that you were owed something. Everyone deserved different things, according to their titles: kings and queens an enormous amount, lords and landowners a lesser though still large amount, and so on down the line. In some cases, people at the bottom of the hierarchy may in fact have been considered deserving of scarcity and suffering.
What changed?
Western culture shifted from exalting rule by one (monarchy) or few (oligarchy) or the rich (plutocracy) to being broadly more democratic, meritocratic, and then ultimately relatively egalitarian, in terms of ideals. What this means is that in modern times, it may be the case that being rich or white does in fact grant someone certain privileges, in the sense that they may in fact be less likely to get arrested, or more likely to get promoted…
…but broadly speaking, mainstream culture will no longer agree that they deserve these privileges. They are no longer entitled to them.
More broadly, nobody is really considered to be entitled to much of anything anymore—oh, except for a bunch of very basic, universal rights. The U.S. Bill of Rights lays out the rights the state grants Americans. The U.N. Declaration of Human Rights lays out the rights that U.N. countries grant everyone. In theory, anyway.
And since we no longer think that people deserve special privileges, anyone who acts like they do is called “entitled”. But now we’re talking about the feeling of entitlement, not actually having the right to some benefit.
Also, note that this isn’t just about class anymore: given the meritocratic context and a few other factors, people sometimes find themselves feeling like they deserve something because they worked hard for it. This isn’t a totally unreasonable way to feel, but the world doesn’t automagically reward people who work hard.
This principle is at play when older generations criticize millennials as being entitled, and then the millennials retort “well you said that if we just got a degree, then we’d have decent careers.” What the millennials are saying is that they had an expectation that they’d have prosperity, if they did a thing.
But are they actually feeling entitled to that thing? Are they relating to it in an entitled way? It’s hard to say, and probably depends on the individual. Let’s take an easier example.
Meet James Altucher
In his article How To Break All The Rules And Get Everything You Want, Altucher describes a multipart story in which he breaks some rules to get what he wants.
We arrived at the “Boy Meets Girl” fashion show and the woman with the clipboard said, “You are not on the list.”
WHAT!?
I had been telling my daughter Mollie all week we would go to this show.
Mollie was very excited.
“Don’t worry,” Nathan had told me earlier in the day, “you will be on the list.” I am extremely grateful he got us invited to the show.
Two more times in the article, James has that “WHAT!?” reaction.
This reaction seems to me to be practically the epitome of an entitlement response: outrage. Particularly when he’s like: WHAT!? You let us in even though we weren’t on the list, but we’re at the back!? Note that the feeling of entitlement is usually not so obvious, even internally.
But note also that it’s possible to act entitled, even if you don’t feel entitled. I posit that we might call this something like “entitled to ask” or “entitled to try”.
To illustrate this, let’s look at a response to James’ article, called When “Life Hacking” Is Really White Privilege, in which Jen Dziura writes:
I have often had encounters with men who take something that’s not theirs, and when they encounter no outright resistance — there’s no loud talking, no playground-style tussle — they assume everything is fine.
It is not fine.
Sometimes, you take the best desk for yourself in the new office. Sometimes, you take credit for someone else’s work or ideas. Sometimes, you’re on a team, and someone from the client company assumes that you — the tallest, whitest member — are in charge, and you do not correct them. Sometimes, it’s just that someone baked cookies to congratulate their team on a job well-done, and you’re not on that team but you wanted a cookie, and no one seemed to mind.
I have been the cookie guy. Probably with literal cookies, although probably a different situation—not that I would know, since I was just paying attention to the cookies.
And if someone had refused me the cookies, I wouldn’t have been like “WHAT!?”. I would have said something polite and moved on. But if someone had suggested I was rude for asking, I might have been a bit indignant: “I was just asking…”
But in order to be “just asking”, I also had to be assuming that the person would feel comfortable saying no if my request didn’t make sense. Assuming that giving me a “no” isn’t a costly action. Which is often not a safe assumption, for a myriad of reasons that are outside the scope of this post. But the effect is that even without having a subjective feeling of entitlement to anything in particular, I can be relating to a situation in an entitled way.
But I’m a Nice Guy!
There’s a concept that’s been around for awhile, known as the Nice Guy phenomenon. The basic notion is of a person (canonically male, though not always) becoming frustrated when their attempts to transform a platonic friendship into a romantic and/or sexual relationship fall through, leading to rejection. Feminist circles have sometimes criticized these men as objectifying women, but as Dan Fincke points out, in many cases the men are trying to relate to them deeply.
Still, Dan writes:
They want to earn love with their moral virtues, with their genuine friendship, and with their woman-honoring priorities that put knowing women as people over trying to just bed them.
Uh oh. Trying to earn love is a recipe for the meritocratic flavour of entitlement. Dan again, a little further down:
So at this point we come to the actual entitlement issue. It’s not that they feel entitled to sex—it’s much deeper and less superficial than that and these men deserve the respect of having that acknowledged. What they really feel entitled to is love.
At any rate, there usually is a sense of entitlement here, and it makes for unpleasant interactions when the guy finally shares his feelings for his friend. He has his hopes all up and expects her to reciprocate. (Here we probably have both kinds of expectation going on—entitlement and anticipation.)
Miri at Brute Reason clarifies that the problem isn’t feeling sad when you’re rejected. That’s natural and can make lots of sense. Same with:
- Wishing the person would change their mind
- Thinking that you would’ve made a good partner for this person
- Thinking that you would’ve made a better partner for this person than whoever they’re interested in
- Feeling embarrassed that you were rejected
- Feeling like you don’t want to see them or talk to them anymore
Miri distinguishes these from the feeling “I deserve sex/romance from this person because I was their friend.” and goes on to name some actions which follow from this feeling of entitlement. These include:
- Pressuring the person to change their mind (which isn’t the same as saying “Well, let me know if you ever change your mind” and then stepping back)
- Guilt-tripping them for rejecting you (which isn’t the same as being honest about your feelings about the rejection)
- Becoming cruel to the person to get back at them (i.e. “Whatever, I never liked you anyway, you [gendered slur]”)
I think that what Miri has highlighted here is a really solid application of the two channels model: the idea that you can have multiple interpretations of something at the same time, that can be alike in valence (in this case, both negative/hurting) but different in structure and implication—and potentially leading to different actions.
The difference in action can be stark—”Whatever, I never liked you anyway” vs “I still think you’re cool, even if I feel pretty burned.”—or quite subtle… what, you might ask, is the difference between “guilt-tripping someone for rejecting you”, and “being honest about your feelings about the rejection”?
Without the two channels model, we might say that the former is when you’re entitled, and the latter is when you’re not. But the two channels model suggests that it’s more like, guilt-tripping is what happens when your entitlements own you, instead of you owning them.
So you feel entitled? Okay, accept that. Not in the sense of endorsing it, but in the sense of accepting reality as it is. The reality is that you feel entitled. One way to do this while staying outside of the frame is to say something like “so it seems that a bunch of what I’m feeling right now is entitlement”. Either to yourself, or if it makes sense, to share that with the person you’re talking with.
If the guy in this situation talks honestly about his feelings of rejection and loneliness, that could be experienced as guilt-tripping or as making the person take care of him:
I feel really rejected now. It’s so frustrating, like, I’m so unlovable. Forever alone, right here.
But maybe if he’s able to get outside of just being the feelings, and talk about the overarching structure of what’s going on:
“It seems I’m feeling both a sense of rejection, but also like I’ve been setting myself up to feel entitled to your love and affection… and I guess that doesn’t make sense. I’m feeling frustrated and lonely, and at the same time… wanting to not relate to you from there.”
If I try, I can imagine that that phrasing might sound over-the-top to some people, but it’s actually how many of my friends and I talk… and it allows us to navigate tense situations while remaining on the “same side”. We stay on the same side by putting the feelings in the center where they can be talked about, and being clear that the relating doesn’t need to be run by those feelings. I go into more detail about the value of this kind of language here.
I realize that it might not be possible to talk at this level in a given relationship. First of all, it requires the capacity to think thoughts like that when you’re in an emotional state (hint: practice when you’re calm!) Even more challengingly, it requires a certain kind of trust and shared assumptions in the relationship, which may not be available.
With those shared assumptions, much less verbose expressions can still have that same page feeling. Without them, even the most clear articulation can nonetheless be experienced as an attempt at manipulation.
Without a good segue, we now turn to the final section: expectations, entitlements, anticipations, and desire.
Anticipations and Desire
When I was maybe 15, a friend and I had a principle we used for navigating relationships with our romantic interests. We would go into a situation with “no intentions and no expectations”. One framing is that this was to protect against disappointment, but I think it could also be understood as a defense against the whole entitlement debacle: if I had an “expectation” that my crush and I were going to kiss, but she didn’t want to, well… then what? I wouldn’t kiss her without her consent, but… was it okay to even expect that, if I didn’t know what she wanted?
And so we come back to the breakdown I introduced at the start: expectations as including both anticipations and entitlements. I seriously salute my 15-year-old self for managing to avoid the entitlement-related issues (well, at least in the situations when I remembered to use this principle).
The problem was, in turning off expectations, I had shut off not only entitlements but anticipations as well. And anticipations are important!
First of all, denotationally: from an epistemic perspective, you want to be able to predict what’s going to happen. Not just so that you could remember to bring condoms, but also to have a sense of being prepared psychologically for what sort of situation you might be navigating. Projecting what will happen in the future is important.
Then there’s the second, more connotational part of the term “anticipation”, which is the emotional quality: the pleasure of considering a longed-for event. The book Rekindling Desire contains quotations like:
Anticipation is the central ingredient in sexual desire.
[…] sex has a major cognitive component — the most important element for desire is positive anticipation.
What this means is that if you try to avoid having anticipations, you can end up with a reduced sense of desire. Hormones and curiosity being what they were, this wasn’t an issue for my teenage self on a physical level, but even now I notice a subtle effect that I think has the same roots…
I’ve sometimes found it hard to tap into my sense of what it is that I want in relationships or in physically intimate contexts. I know what feels good in the moment—pleasure gradients aren’t hard—but it’s been challenging to cultivate a sense of taste for the kinds of intimacy I want, and I think that a large part of that is the resistance I have for letting myself cultivate desire through anticipation.
An article published just a few days ago (but after I’d drafted this whole post) touches on how this may be a common phenomenon:
“I want more men to get to know their own bodies and desires. […]
“Feminist men often fall into the trap of thinking that the opposite of male sexual entitlement–the opposite of men using other people’s bodies to get themselves off without any concern for that person’s consent or desire–is to focus entirely on their partner’s pleasure and deny any preferences of their own. No. The opposite of male sexual entitlement is two (or more) people working together–playing together, rather–to create the experiences they want.”
So one conclusion I’m making as part of breaking down expectations into entitlements and anticipations is that I can start doing more anticipating of things, as long as I don’t let myself get trapped in having entitlements as well. As long as I don’t hinge my sense of self-worth on having my expectations fulfilled and on never experiencing rejection. As long as I can remember that having no preferences unsatisfied by way of having no preferences isn’t actually satisfying.
“The gap between vision and current reality is also a source of energy. If there were no gap, there would be no need for any action to move towards the vision. We call this gap creative tension.”
— Peter Senge, The Fifth Discipline
The Two Kinds of Expectations + Rationality
I’ve spent a lot of time talking about how this affects interpersonal dynamics, but I want to briefly note that this distinction matters a lot for thinking quality as well:
Having entitlement-based relationships to people or systems is kind of like writing the bottom line before you know what the argument will be. It’s assuming you know what makes sense or know what will work, even though you don’t have all of the information, and then precommitting to be reluctant to change your mind.
Having anticipations, on the contrary, is fundamental to making your beliefs pay rent: in order for your beliefs to be entangled with the real world, they necessarily must suggest which events to anticipate—and importantly, which events to not anticipate.
There’s a question, too, of how expectations show up when trying to coordinate a team (or a vague network of people with a shared goal). I think a sports analogy is actually valuable here: if we’re on a soccer team, it’s critical that I can expect that if I pass you the ball in a certain way, you’ll be able to kick it directly at the goal. I need to know this so that I know when to do it, because it’s an effective technique when performed well. But if that expectation is about entitlement rather than anticipation, then it will cause me to be less focused on whether my pass made sense in this situation and more focused on whether I can blame you for missing the shot.
My money’s on the team with anticipation, not the one with entitlement.
This article crossposted from malcolmocean.com.
Skills training for dating anxiety
A half-baked literature review: Skills training for dating anxiety
To infer whether sociosexual skills training is a useful adjunct to the standard treatment of anxiety, the first page of Google Scholar was systematically reviewed for unique interventional studies that include any measure of anxiety as an outcome; studies commenting on methodological issues, or otherwise theorising in ways that bear on the interpretation of the empirical evidence, were also collected. The search terms were: (1) social skills training for anxiety, (2) heterosexual social skills, (3) dating anxiety, (4) behavioural replication training, and (5) sensitivity training. The search space was expanded from (1) to searches (2) through (5) based on keywords found in potentially relevant studies. Ten studies were found, each very dated.
Studies that did not contextualise anxiety in terms of sexual motivations (e.g. dating) were excluded (namely: Social skills training augments the effectiveness of cognitive behavioral group therapy for social anxiety disorder: www.sciencedirect.com/science/article/pii/S0005789405800619).
The studies found were (struck-out entries excluded):
- Social skills training and systematic desensitization in reducing dating anxiety: www.sciencedirect.com/science/article/pii/0005796775900546
- Treatment strategies for dating anxiety in college men based on real-life practice.: psycnet.apa.org/psycinfo/1979-31475-001
- Evaluation of three dating-specific treatment approaches for heterosexual dating anxiety.: psycnet.apa.org/journals/ccp/43/2/259/
- A comparison between behavioral replication training and sensitivity training approaches to heterosexual dating anxiety.: psycnet.apa.org/journals/cou/23/3/190/
- Social skills training augments the effectiveness of cognitive behavioral group therapy for social anxiety disorder : www.sciencedirect.com/science/article/pii/S0005789405800619
- Skills training as an approach to the treatment of heterosexual-social anxiety: A review.: psycnet.apa.org/journals/bul/84/1/140/
- Self-ratings and judges' ratings of heterosexual social anxiety and skill: A generalizability study.: psycnet.apa.org/journals/ccp/47/1/164/
- Heterosexual social skills in a population of rapists and child molesters.: psycnet.apa.org/journals/ccp/53/1/55/
- The importance of behavioral and cognitive factors in heterosexual-social anxiety1: onlinelibrary.wiley.com/doi/10.1111/j.1467-6494.1980.tb00834.x/abstract
The search was halted prematurely due to the discovery of a systematic review (see: Skills training as an approach to the treatment of heterosexual-social anxiety: A review: psycnet.apa.org/journals/bul/84/1/140/), although other studies did emerge after that review. In any case, the review's conclusions are likely to still hold: they suggest that sociosexual skills training shows promise, but that methodological issues will hold back good empirical research. It is therefore not expected to be productive to continue this review.
It is hypothesised that the evidence is so dated because of changes in terminology: the literature approximates exposure treatments for social phobia or social anxiety. However, searches of the first page of Google Scholar (exposure therapy and social anxiety; exposure therapy and social phobia) yield no results except where pharmacotherapies are used as an adjunct to the therapy, which is inappropriate for our purposes.
Tl;dr. See: Skills training as an approach to the treatment of heterosexual-social anxiety: A review.: psycnet.apa.org/journals/bul/84/1/140/
Research translation idea
I have an idea for teaching certain vulnerable young people the skills needed to socialise without intoxication, and I was wondering if you have any feedback on my proposal so that I can revise it. Many students report that they drink or get high for the disinhibiting effects that help them socialise with the other sex. It is hypothesised that this is because of latent anxieties and improper self-medication. Given the unresponsiveness of the target population at universities to demand-reduction programs and health promotion, the inflexibility of university institutions in delivering supply-reduction campaigns, and the relative resource intensity of harm-minimisation programs, alternative, innovative interventions are sought. One innovative strategy is to treat the underlying anxiety that motivates substance use in young people. The purpose of this social skills training program is to train groups of young people to socialise romantically and sexually with the opposite sex, replacing substance-assisted romantic and sexual initiatory behaviour. The initial step will be surveying the evidence base, followed by the design, implementation, and evaluation of a pilot program. This will be disseminated for critique by the broader scientific and clinical community before scaling, if and as appropriate. The success of the program will be evaluated by structured interviews eliciting psychological distress.
Background reading
Gender differences in social anxiety disorder: results from the national epidemiologic sample on alcohol and related conditions. - www.ncbi.nlm.nih.gov/pubmed/21903358
Examining Sex and Gender Differences in Anxiety Disorders - www.intechopen.com/books/a-fresh-look-at-anxiety-disorders/examining-sex-and-gender-differences-in-anxiety-disorders
not academic but interesting: https://www.youtube.com/watch?v=YSZky8dk7OE
Secret Rationality Base in Europe
In short, I'm wondering what place/group/organisation/activity could do for rationality in Europe what Berkeley does for rationality in the US?
Soon, we'll have LWCW in Berlin, which I hope will be an occasion to do some networking among people who think seriously about developing rationality communities. But in the meantime, let's do some brainstorming.
Important note: in comments to this post, please use only consequentialist language. For example, say "If we decided for the base to be on Malta, then X would happen" instead of "I think it should be in Malta, because..."
- What would happen if the rationality base was located in [insert specific city/country]?
- What could such a place offer to you now, that would make you consider a temporary/permanent move?
- What would happen if the European rationality community efforts were centered around some particular research topic (e.g. AI)?
- Is there something you can think of that would speed up community-building in Europe?
Of course, share anything else that you think is relevant to the topic.
Also, see you all in Berlin :)
Open thread, Jun. 13 - Jun. 19, 2016
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Rationality Quotes June 2016
Another month, another rationality quotes thread. The rules are:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
- Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
June 2016 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
Suggest best book as an introduction to computational neuroscience
I'm trying to find the best place to start learning the field. I have no special math background. I'm very eager to learn. Thanks a lot!
Rationality Reading Group: Part Y: Challenging the Difficult
This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.
Welcome to the Rationality reading group. This fortnight we discuss Part Y: Challenging the Difficult (pp. 1605-1647). This post summarizes each article of the sequence, linking to the original LessWrong post where available.
Y. Challenging the Difficult
304. Tsuyoku Naritai! (I Want to Become Stronger) - Don't be satisfied knowing you are biased; instead, aspire to become stronger, studying your flaws so as to remove them. There is a temptation to take pride in confessions, which can impede progress.
305. Tsuyoku vs. the Egalitarian Instinct - There may be evolutionary psychological factors that encourage modesty and mediocrity, at least in appearance; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.
306. Trying to Try - As a human, if you try to try something, you will put much less work into it than if you try to do it.
307. Use the Try Harder, Luke - A fictional exchange between Mark Hamill and George Lucas over the scene in The Empire Strikes Back where Luke Skywalker attempts to lift his X-wing with the Force.
308. On Doing the Impossible - A lot of projects seem impossible, meaning that we don't immediately see a way to do them. But after working on them for a long time, they start to look merely extremely difficult.
309. Make an Extraordinary Effort - It takes an extraordinary amount of rationality before you stop making stupid mistakes. Doing better requires making extraordinary efforts.
310. Shut Up and Do the Impossible! - The ultimate level of attacking a problem is the point at which you simply shut up and solve the impossible problem.
311. Final Words - The conclusion of the Beisutsukai series.
This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
The next reading will cover Part Z: The Craft and the Community (pp. 1651-1750). The discussion will go live on Wednesday, 4 May 2016, right here on the discussion forum of LessWrong.
Open thread, Apr. 18 - Apr. 24, 2016
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Weekly LW Meetups
This summary was posted to LW Main on April 15th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Ann Arbor Meetup: 16 April 2016 07:00PM
- Baltimore Weekly Meetup: How To Actually Change Your Mind: 17 April 2016 03:00PM
- European Community Weekend: 02 September 2016 03:35PM
- Rochester (NY) Rationalists: Utopia discussion meetup: 17 April 2016 01:00PM
- San Francisco Meetup: Board Games: 18 April 2016 06:15PM
- Sao Paulo: 16 April 2016 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Moscow social meetup: Exercises and games for rationality skills improvement: 17 April 2016 02:00PM
- [Moscow] Role playing game based on HPMOR in Moscow: 23 April 2016 03:00AM
- [Moscow] Games in Kocherga club: FallacyMania, Tower of Chaos, Training game: 27 April 2016 07:40PM
- Sydney Rationality Dojo - May: 01 May 2016 04:00PM
- Vienna Meetup: 16 April 2016 02:00PM
- Vienna Meetup: 23 April 2016 03:00PM
- Washington, D.C.: What If: 17 April 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Many Worlds against Simulation?
Let's assume a few things:
1. Many Worlds is real.
2. All identical consciousnesses measure as 1 in anthropics. So if we have the set of consciousnesses 1×A, 1×B, and 1,000,000×C, there is still a 1/3 chance of perceiving being C (see the sketch below).
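A minimal sketch of assumption 2 (the labels and counts are just the ones from the example above):

```python
# Assumption 2: a million bit-identical copies of C still count as one
# distinct experience, so each distinct experience gets equal weight.
observers = ["A", "B"] + ["C"] * 1_000_000
distinct = set(observers)        # {"A", "B", "C"}
print(1 / len(distinct))         # 1/3, not 1000000/1000002
```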
Now say some intelligent being (e.g. a human) starts another human brain simulation on a silicon chip. The operations it performs are all discrete, so despite the chip splitting into many chips in many worlds, the simulated consciousness itself remains just one (because of assumption #2).
But that is not true for the human who started the simulation, since he differs somehow in every Everett branch and quickly branches into billions of different consciousnesses.
Given these assumptions, is there some mistake in the reasoning that real persons should heavily outweigh simulations, no matter how many simulations are running?
Black box knowledge
When we want to censor an image, we put a black box over it - over the area we want to censor. In a similar sense, we can purposely censor our knowledge. This comes in particularly handy when thinking about things that might be complicated but that we don't need to know.
A deliberate black box around how toasters work would look like this:
bread -> black box -> toast
Not all processes need knowing; for now, a black box can be a placeholder for the future.
With the power provided to us by a black box, we can identify what we don't know. We can say: Hey! I don't know how a toaster works, but it would take about two hours to work it out. If I ever did want to work it out, I could just spend the two hours and do it. Until then, I've saved myself two hours. This works even better for more time-burdensome fields. Take tax.
Need to file tax -> black box accountant -> don't need to file my tax because I got the accountant to do it for me.
I know I can file my own tax, but that might mean 100-200 hours of learning everything an accountant knows about tax. (It might also be 10 hours, depending on your country and its tax system.) For now I can assume that hiring an accountant saved me a number of hours over doing it myself. So - Winning!
Take car repairs. On the one hand; you could do it yourself and unpack the black box, or you could trade your existing currency $$ (which you already traded your time to earn) for someone else's skills and time to repair the car. The system looks like this:
Broken car -> black box mechanic -> working car
By deliberately not knowing how it works, we can tap out of even trying to figure it out for now. The other advantage is that we can look at not just what we know in terms of black boxes, but more importantly what we don't know. We can build better maps by knowing what we don't know.
Computers:
Logic gates -> Black box computeryness -> www.lesswrong.com
Or maybe it's like this: (for more advanced users)
Computers:
Logic gates -> flip flops -> Black box CPU -> black box GPU -> www.lesswrong.com
The black-box system happens to also have a meme about it:
Step 1. Get out of bed
Step 2. Build AGI
Step 3. ?????
Step 4. Profit
Only now we have a name for deliberately skipping finding out how step 3 works.
Another useful system:
Dieting
Food in (weight goes up) -> black box human body -> energy out (weight goes down)
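Here is a playful sketch of the idea in code (the boxes and hour estimates are made up for illustration): each black box is a step you can use without understanding, plus a note of what unpacking it would cost.

```python
# Each entry maps an input to an output through a box we deliberately
# leave opaque, with a (made-up) estimate of the hours unpacking would cost.
black_boxes = {
    "toaster":    ("bread", "toast", 2),
    "accountant": ("need to file tax", "tax filed", 150),
    "mechanic":   ("broken car", "working car", 300),
}

for name, (src, dst, hours) in black_boxes.items():
    print(f"{src} -> [black box: {name}] -> {dst}  (~{hours}h of unpacking deferred)")

print("total hours saved for now:", sum(h for (_, _, h) in black_boxes.values()))
```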
Make your own black box systems in the comments.
Meta: short post, 1.5 hours to write, edit, and publish. Felt it was an idea that provides useful ways to talk about things. Needed it to explain something to someone; now all can enjoy!
My Table of contents has my other writings in it.
All suggestions and improvements welcome!
Rationality 101 - how would you introduce a person to the rationalist concepts? What are the best topics to learn/explain first?
What do you think the curriculum of Rationality 101 should look like? I want to make a brief course (a series of short animated YouTube videos), ideally at a level accessible to a normal 14-17 year old. Can you help me make the list of concepts I should start with?
The Talos Principle
Dear members of Less Wrong, this is my very first contribution to your society and I hope that you might help me to get out of my confusion.
A few months ago, I tried for the first time a video game created by Croteam Studio called 'The Talos Principle'.
At the time, I was astonished by all the philosophical questions that the game was raising. It has kind of changed the way I see the world now, and also the way I see myself.
I wanted to share my thoughts with you on the subject of 'What does being a human mean?'
First, I'd like to introduce you to this principle.
In Greek mythology, Talos was a giant automaton made of bronze which protected Europa in Crete from pirates and invaders.
He was known to be a gift given to Europa by Zeus himself.
He was so strong that he could crush a man's skull using only one hand, and so tall that he could circle the island's shores three times daily.
He was able to talk, think, and act as he wanted to (except that he had to obey Europa's will).
Even though his body was not organic, a liquid metal flowed through his veins and behaved like blood.
And here is where the principle begins. What is the fundamental difference between Talos and us humans?
Considering that, like us, he can think for himself, move by his own will, and communicate as everybody does, is he really different from us? Doesn't sharing our culture, history, and language make him human as well?
I'm pretty sure your first thought might be: 'No way! We are a biological species. We have nothing in common with a synthetic being.'
But does our body really define us as human beings?
From a strictly biological point of view, Darwin would say yes, of course, and we can't argue with that.
But suppose you take a human being, for instance Plato, cut his leg off, and replace it with a synthetic prosthesis.
Would this person still be Plato?
It appears that the answer is yes, according to the people who have lost a part of their body in an accident.
They were still the same. Of course they suffered from phantom pains and other psychological damage, but in the end they remained the same as before.
Let's get back to our example. Now imagine that this synthetic-leg-equipped Plato has an accident that makes him lose his right arm. Full of empathy, you agree to give him a prosthetic one.
Now, would this person still be Plato?
Again, the answer is yes. These accidents do not leave a man without any kind of trauma, but he is still able to think and act like a normal human. Thus we assume that he's still one of us, and that he's still himself.
So how many times can we repeat the process before we touch something that cannot be exchanged for anything synthetic while preserving Plato's humanity (and sanity)?
The answer appears to be the brain.
Deleting the brain is the same as deleting our being. We can live with an artificial heart, lungs, stomach, etc., but we can't live without our natural brain.
The brain is one of the biggest unknowns in the human body. Doctors claim that we understand less than half of how the brain works, which mystifies it at the same time.
Still, we can reduce the brain to its physical material. It is estimated to contain tens of billions of neurons (15-33 billion in the cerebral cortex alone), each connected by synapses to several thousand other neurons, which communicate with one another by means of long protoplasmic fibers called axons, carrying trains of signal pulses called action potentials to distant parts of the brain or body, targeting specific recipient cells.
Indeed, even if we do not know for sure how every cell interacts with the others, we know that everything is bound by chemistry. Every kind of information transfer can be reduced to a chemical reaction, something physical.
Every thought of our being starts and ends with a chemical reaction. And we know how to replace one chemical reaction with another. We know how to simulate the transfer of an action potential, and thus we are today able to simulate a very simple brain on a computer.
(You may want to check out the Blue Brain Project, which illustrates everything I'm writing about. Its simulation does not consist simply of an artificial neural network, but involves a biologically realistic model of neurons.)
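As a toy illustration of what 'simulating a neuron' can mean, here is a minimal leaky integrate-and-fire model in Python; this is my own sketch, vastly simpler than the biologically realistic neurons of the Blue Brain Project:

# Toy leaky integrate-and-fire neuron: the membrane voltage integrates
# input current, leaks back toward rest, and 'spikes' at a threshold.
V_REST, V_THRESHOLD, V_RESET = -70.0, -55.0, -75.0  # millivolts
TAU = 10.0  # membrane time constant, ms
DT = 1.0    # integration time step, ms

v = V_REST
for t in range(100):
    input_current = 20.0 if 10 <= t < 60 else 0.0  # injected stimulus, mV
    v += DT * ((V_REST - v + input_current) / TAU)  # Euler step of dv/dt
    if v >= V_THRESHOLD:
        print(f"action potential at t={t} ms")
        v = V_RESET  # reset after the spike

Run it and the 'neuron' stays silent, then fires a train of spikes while the stimulus is on: a caricature, but the same kind of object that brain simulations scale up by the billions.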
So if, in the near future, we become able to correctly simulate a human brain, and therefore a whole human body as well, can we consider it a human being?
Being aware of the material reality of the brain might make you think twice about yourself and your species in general.
How do you describe a human being now? Would you describe Talos as a human being as well? Or just call him a being, refusing to grant the title of 'human' because of the biological difference between you and him? And can a man entirely simulated in a computer still be called human?
Also, do not forget how the body influences the brain. Just look back on what happened to you during puberty, when sexual desire overwhelmed you and made it impossible to remain calm. This happened because of chemicals, and it's very interesting to see how a single chemical can have a huge influence on your consciousness.
For now I'm in a haze, so instead of lying on my bed thinking, I'd rather ask for your point of view. I'm very curious; would you kindly share it?
Thanks for reading it all; I'll see your reactions in the comment section below.
[By the way, I'm a 19-year-old French engineering student; I beg your pardon for my English.]
The map of quantum (big world) immortality
The main idea of quantum immortality (the name "big world immortality" may be better) is that if I die, I will continue to exist in another branch of the world, where I do not die in the same situation.
This map is not intended to cover all known topics about QI, so I need to clarify my position.
I think that QI may work, but I treat it as Plan D for achieving immortality, after life extension (A), cryonics (B) and digital immortality (C). All the plans are here.
I also think that it may be proved experimentally: if I turn 120, or turn out to be the only survivor of a plane crash, I will assign a higher probability to it. (But you should not try to prove it deliberately, as you will get this information for free over the next 100 years.)
There is also nothing specifically quantum about quantum immortality, because it may work in a very large non-quantum world, if that world is large enough to contain my copies. It was also discussed here: Shock level 5: Big worlds and modal realism.
There is also nothing good about it, because most of my surviving branches will be very old and ill. But we could make QI work for us if we combine it with cryonics. Just sign up for cryonics (or even form the intention to sign up), and most likely you will find yourself in a surviving branch in which you are resurrected after cryostasis. (The same is true for digital immortality: record more about yourself, and a future FAI will resurrect you; QI raises the chances of this.)
I do not buy the "measure" objection. It says that one should care only about one's "measure of existence", that is, the number of branches in which one exists, and that if this number diminishes, one is almost dead. But take the example of a book: it still exists as long as at least one copy of it exists. We also can't measure the measure, because it is not clear how to count branches in an infinite universe.
I also don't buy the ethical objection that QI may lead an unstable person to suicide, and that we should therefore claim QI is false. A rational understanding of QI is that it either does not work or will result in severe injuries. The idea of a soul may create a much stronger temptation to suicide, as it at least promises another, better world, yet I have never heard of it being suppressed because it might cause suicide. Religions try to stop suicide (which is logical given their premises) by adding an explicit rule against it. So QI itself does not promote suicide; personal instability may be the main cause of suicidal ideation.
I also think there is nothing extraordinary in the QI idea, and that it adds up to normality (in one's immediate surroundings). We have all already witnessed examples of similar ideas: the anthropic principle and the fact that we find ourselves on a habitable planet while most planets are dead, or the fact that I was born while my billions of potential siblings were not. Survivorship bias can explain finding oneself in very improbable conditions, and QI is the same idea projected into the future.
The possibility of big world immortality depends on the size of the world and on the nature of "I", that is, on the solution to the personal identity problem. The table below shows how big world immortality depends on these two variables: YES means that big world immortality will work, NO means that it will not.
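(The original post presents the table as an image; reconstructed from the summary that follows, the gist is:)

World \ Identity     Identity is just information    Identity is fragile
Small world          NO                              NO
Very big world       YES                             NO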
Both variables are currently unknown to us. Simply speaking, QI will not work if the (actually existing) world is small, or if personal identity is very fragile.
My a priori position is that the quantum multiverse and a very big universe are both real, and that information is all you need for personal identity. This is the most scientific position, as it fits current common knowledge about the universe and the mind. If I had to bet on theories, I would put 50 per cent on this combination, and 50 per cent on all other combinations of theories.
Even in this case, QI may not work. It may work technically but become immaterial, if my mind suffers so much damage that it is unable to understand that it works. In that case it would be completely useless, in the same way that the survival of the atoms of which my body is composed is meaningless. But this may be countered by saying that only those of my copies that remember being me should be counted (and such copies will surely exist).
From a practical point of view, QI may help if everything else fails, but we can't count on it, as it is completely unpredictable. QI should be considered only in the context of other world-changing ideas: the simulation argument, the doomsday argument, and future strong AI.

Weekly LW Meetups
This summary was posted to LW Main on January 15th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Baltimore Area: Epistemology of Disagreement: 24 January 2016 03:00PM
- Palo Alto Meetup: Introduction to Causal Inference: 19 January 2016 06:30PM
- San Francisco Meetup: Short Talks: 18 January 2016 06:15PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- [Seattle] Discussion of AI as a positive and negative factor in global risk: 17 January 2016 01:00PM
- Vienna: 16 January 2016 03:00PM
- Washington, D.C.: Economics Discussion: 17 January 2016 03:00PM
- [West LA] Finding Effective Altruism with Biased Inputs on Options - LA Rationality Weekly Meetup: 20 January 2016 07:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Stupid Questions, 2nd half of December
The most recent post in December's Stupid Questions article is from the 11th.
I suppose that as the article has been pushed further down the list of new articles, it's had less exposure, so here's another one for the rest of December.
Plus I have a few questions, so I'll get it kicked off.
It was said in the last one, and it's good advice, I think:
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
Open thread, Dec. 21 - Dec. 27, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Engineering Religion
This topic is vague and open-ended. I'm leaving it that way deliberately. Perhaps some interesting, better defined topics will grow out of it. Or perhaps it's too far afield from the concept of less wrong cognition to be of interest here. So I view this topic as exploratory rather than as an attempt to solve a specific problem.
What useful purposes does religion serve? Are any of these purposes non-supernaturalistic in nature? What counts as success for a religion, and what elements of a religion tend to make it successful? How would you design a "rational religion", if such an entity is possible? How and why would a religion with that design become successful and serve a useful purpose? What are the relationships between aspects of a religion and outcomes involving that religion? For example, Catholicism discourages birth control. Lack of birth control encourages higher birthrates among Catholics. This encourages there to be a larger number of Catholics in the next generation than would otherwise be the case. Surely there are other relationships like this? How do such aspects cause religions to evolve differently over time?
timeless quantum immortality
Weekly LW Meetups
This summary was posted to LW Main on November 27th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Cologne meetup: 28 November 2015 05:00PM
- Prague Less Wrong Meetup: 02 December 2015 07:00PM
- San Antonio Meetup!: 29 November 2015 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- NYC Solstice: 19 December 2015 05:30PM
- Seattle Solstice: 19 December 2015 05:00PM
- [Vienna] Five Worlds Collide - Vienna: 04 December 2015 08:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
[link] Pedro Domingos: "The Master Algorithm"
An interesting talk outlining five different approaches to machine learning.
https://www.youtube.com/watch?v=B8J4uefCQMc
Blurb from the YouTube description:
Machine learning is the automation of discovery, and it is responsible for making our smartphones work, helping Netflix suggest movies for us to watch, and getting presidents elected. But there is a push to use machine learning to do even more—to cure cancer and AIDS and possibly solve every problem humanity has. Domingos is at the very forefront of the search for the Master Algorithm, a universal learner capable of deriving all knowledge—past, present and future—from data. In this book, he lifts the veil on the usually secretive machine learning industry and details the quest for the Master Algorithm, along with the revolutionary implications such a discovery will have on our society.
Pedro Domingos is a Professor of Computer Science and Engineering at the University of Washington, and he is the cofounder of the International Machine Learning Society.
Creating lists
Suppose you are trying to create a list. It may be of the "best" popular science books, the most controversial movies of the last twenty years, tips for getting over a breakup, or the most interesting cat gifs posted in the last few days.
There are many reasons for wanting to create one of these lists, but only a few main simple methods:
- Voting model - This is the simplest model, but popularity doesn't always equal quality. It is also particularly problematic for regularly updated lists (like Reddit), where a constantly changing audience can result in large amounts of duplicate content and where easily consumable content has an advantage.
- Curator model - A single expert can often do an admirable job of collecting high-quality content, but the result is subject to their own personal biases. It is also effort-intensive to evaluate different curators to see whether they have done a good job.
- Voting model with (content) rules - This can cut out the irrelevant or sugary content that is often upvoted, but creating good rules is hard. Often there is no objective line between high and low-quality content. These rules can often result in conflict.
- Voting model with sections - This addresses some of the limitations of the first and third models. Instead of declaring some things off-topic outright, they can be given their own section. This is the optimal solution, but it is usually neglected.
- Voting model with selection - This covers any model where only certain people are allowed to vote. Sometimes the selection is extraordinarily rigorous; however, the model can be very effective even when it isn't. As an example, Metafilter charges a one-time $5 fee, and that is sufficient to keep the quality high.
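For the curious, here is a minimal sketch combining the last three models (content rules, sections, and voting); all the data and names are invented for illustration:

from collections import defaultdict

# Invented example data: each item has a section, votes, and a rule flag.
items = [
    {"title": "Great popular-science book", "section": "books", "votes": 12},
    {"title": "Cute cat gif", "section": "gifs", "votes": 40},
    {"title": "Spam link", "section": "books", "votes": 99, "spam": True},
]

def allowed(item):
    # Content rule: some things are simply declared off-limits.
    return not item.get("spam", False)

def rank_by_section(items):
    # Sections: rather than banning off-topic content, give it its own list.
    sections = defaultdict(list)
    for item in filter(allowed, items):
        sections[item["section"]].append(item)
    # Voting: within each section, popularity decides the order.
    return {name: sorted(entries, key=lambda i: i["votes"], reverse=True)
            for name, entries in sections.items()}

for section, ranked in rank_by_section(items).items():
    print(section, [i["title"] for i in ranked])

Selection would be one more filter, applied to voters rather than items, before their votes are counted.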
New LW Meetup: Cambridge UK
This summary was posted to LW Main on November 13th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
- Cambridge UK: 15 November 2015 05:00PM
Irregularly scheduled Less Wrong meetups are taking place in:
- Prague Less Wrong Meetup: 02 December 2015 07:00PM
- San Francisco Meetup: Projects: 16 November 2015 06:15PM
- Warsaw November Meetup: 14 November 2015 04:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- [Moscow] FallacyMania game in Kocherga club: 25 November 2015 07:30PM
- NYC Solstice: 19 November 2015 06:30PM
- Seattle Solstice: 19 December 2015 05:00PM
- Tel Aviv: Black Holes after Jacob Bekenstein: 24 November 2015 08:00AM
- Vienna: 21 November 2015 04:00PM
- [Vienna] Five Worlds Collide - Vienna: 04 December 2015 08:00PM
- [West LA] Scrum: A Philosophy of Life?: 18 November 2015 07:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Goal setting journal (November)
Inspired by the group rationality diary and open thread, this is the second goal setting journal (GSJ) thread.
If you have a goal worth setting then it goes here.
Notes for future GSJ posters:
1. Please add the 'gsj' tag.
2. Check if there is an active GSJ thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. GSJ Threads should be posted in Discussion, and not Main.
4. GSJ Threads should run for no longer than 1 week, but you may set goals, subgoals and tasks as far into the future as you please.
5. No one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it.
Meetup: Cambridge UK
(Apparently just posting a new meetup doesn't provide much visibility, so I'm posting a discussion article too.)
WHEN: 15 November 2015 05:00:00PM (+0000)
WHERE: JCR Trinity College, Cambridge, UK
First Cambridge meetup in a long time! Hopefully the first of many. Come to Trinity's JCR at 5pm next Sunday, get to know all the other aspiring rationalists around, and have a good time! (The place and time are provisional; they might change depending on availability, so comment here to see how we can arrange it properly.)
New LW Meetup: Zurich
This summary was posted to LW Main on October 30th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
- Zurich
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Austin, TX - Caffe Medici - Holidays: 31 October 2015 01:30PM
- Moscow meetup: science research issues, global risks, fallacymania: 01 November 2015 02:00PM
- Sydney Rationality Dojo - November: 01 November 2015 04:00PM
- Tel Aviv: Black Holes after Jacob Bekenstein: 24 November 2015 08:00AM
- Vienna: 21 November 2015 04:00PM
- [Vienna] Five Worlds Collide - Vienna: 04 December 2015 08:00PM
- Washington, D.C.: Fun & Games: 01 November 2015 04:00PM
- West LA: Futarchy: 04 November 2015 08:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Using the Copernican mediocrity principle to estimate the timing of AI arrival
Gott famously estimated the future time duration of the Berlin wall's existence:
“Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race.” (https://en.wikipedia.org/wiki/J._Richard_Gott)
The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task. So it is reasonable to apply Gott’s method.
AI research began in 1950, and so is now 65 years old. If we are currently at a random moment in the history of AI research, then we can estimate a 50% probability of AI being created in the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say that the probability of its creation within the next ~1300 years is 95 per cent. So we get a rather vague prediction that AI will almost certainly be created within the next thousand years or so, and few people would disagree with that.
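The arithmetic behind these bounds, as a small Python sketch (my illustration, not code from the post):

# Gott's Copernican estimate: observed at a random moment, a fraction f of
# a process's total lifetime has elapsed, with f uniform on (0, 1).
# Then future/past = (1 - f)/f, so P(future < k * past) = P(f > 1/(k + 1)).
past = 65  # years of AI research as of 2015

print("50% bound:", 2015 + 1 * past)   # f > 1/2  -> future <= past: 2080
print("95% bound:", 2015 + 19 * past)  # f > 1/20 -> future <= 19x past,
                                       # i.e. ~1300 more years (year 3250)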
But if we include the exponential growth of AI research in this reasoning (the same way as in the Doomsday argument, where we use birth rank instead of time and thus account for the changing density of population), we get a much earlier predicted date.
We can get data on AI research growth from Luke’s post:
“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.”
From this we could conclude that the doubling time in AI research is five to ten years (updating on the recent boom in neural networks suggests it is again about five years).
This means that during the next five years more AI research will be conducted than in all the previous years combined.
If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020), and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035.
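The same sketch, measured in volume of research rather than calendar time (again my illustration; the exact 95% horizon depends on the assumed doubling time):

import math

d = 5.0  # assumed doubling time of AI research, in years

def years_until(multiple, d):
    # Years until future research equals `multiple` times all research so
    # far, given doubling every d years: 2**(t/d) - 1 = multiple.
    return d * math.log2(multiple + 1)

print("50% bound:", 2015 + years_until(1, d))   # ~2020: one more doubling
print("95% bound:", 2015 + years_until(19, d))  # ~2037 with d = 5

With a 5-year doubling time the 95% bound comes out nearer 22 years than 15-20, so the tighter range above presumably reflects the faster growth of recent years.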
This conclusion itself depends on several assumptions:
• AI is possible
• The exponential growth of AI research will continue
• The Copernican principle has been applied correctly.
Interestingly, this coincides with other methods of predicting AI timelines:
• Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)
• Surveys of experts in the field
• Predictions of the Singularity based on extrapolating the acceleration of history (von Foerster – 2026, Panov-Snooks – 2015-2020)
• Brain emulation roadmap
• Computer power brain equivalence predictions
• Plans of major companies
It is clear that this implementation of the Copernican principle may have many flaws:
1. One possible counterargument here is something akin to Murphy's law, specifically the observation that any particular complex project requires much more time and money than expected before it can be completed. It is not clear how this applies to many competing projects, but the field of AI is known to be more difficult than it seems to researchers.
2. Also, the moment at which I am observing AI research is not really random in the way it was for the Doomsday argument Gott formulated in 1993, and I could not have applied the argument at a time before the field became known to me.
3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself it would be simpler, but I do not do any actual work on AI.
Perhaps this method of future prediction should be tested on simpler tasks. Gott successfully tested his method by predicting the running times of Broadway shows. But now we need something more meaningful, yet testable within a one-year timeframe. Any ideas?