
Omega's Idiot Brother, Epsilon

0 OrphanWilde 25 November 2015 07:57PM

Epsilon walks up to you with two boxes, A and B, labeled in rather childish-looking crayon handwriting.

"In box A," he intones, sounding like he's trying to be foreboding, which might work better when he hits puberty, "I may or may not have placed a million of your human dollars."  He pauses for a moment, then nods.  "Yes.  I may or may not have placed a million dollars in this box.  If I expect you to open Box B, the million dollars won't be there.  Box B will contain, regardless of what you do, one thousand dollars.  You may choose to take one box, or both; I will leave with any boxes you do not take."

You've been anticipating this.  He's appeared to around twelve thousand people so far.  Out of eight thousand people who accepted both boxes, eighty found the million dollars missing, and walked away with $1,000; the other seven thousand nine hundred and twenty walked away with $1,001,000.  Out of the four thousand people who opened only box A, only four found it empty.

The agreement is unanimous: Epsilon is really quite bad at this.  So, do you one-box, or two-box?

There are some important differences here from the original problem.  First, Epsilon won't let you open either box until you've decided whether to open one or both, and he will leave with any box you don't take.  Second, while Epsilon's false positive rate in identifying two-boxers is quite impressive - he mistakenly flags one-boxers only 0.1% of the time - his false negative rate is quite unimpressive: he catches only 1% of those who two-box.  Whatever heuristic he's using, he clearly prefers to let two-boxers slide rather than accidentally punish one-boxers.
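For concreteness, here is a minimal sketch (mine, not the post's) of the naive expected-value arithmetic, treating the historical frequencies above as your own conditional probabilities - which is exactly the step a causal decision theorist would dispute:

```python
# A minimal sketch (not from the post) of the naive expected-value arithmetic,
# treating Epsilon's historical frequencies as your own conditional probabilities.
p_million_if_two_box = 7920 / 8000   # 99.0% of two-boxers still found the million
p_million_if_one_box = 3996 / 4000   # 99.9% of one-boxers found the million

ev_two_box = p_million_if_two_box * 1_001_000 + (1 - p_million_if_two_box) * 1_000
ev_one_box = p_million_if_one_box * 1_000_000

print(f"EV(two-box) = ${ev_two_box:,.0f}")  # $991,000
print(f"EV(one-box) = ${ev_one_box:,.0f}")  # $999,000
```

On these numbers, one-boxing comes out roughly $8,000 ahead in naive expectation.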

I'm curious to know whether anybody would two-box in this scenario and why, and I'm particularly curious about the reasoning of anybody whose answer differs between the original Newcomb problem and this one.

HPMOR and the Power of Consciousness

-1 Algernoq 25 November 2015 07:00AM

Throughout HPMOR, the author has included many fascinating details about how the real world works, and how to gain power. The Mirror of CEV seems like a lesson in what a true Friendly AI could look like and do.

I've got a weirder theory. (Roll for sanity...)

The entire story is plausible-deniability cover for explaining how to get the Law of Intention to work reliably.

(All quoted text is from HPMOR.)

This Mirror reflects itself perfectly and therefore its existence is absolutely stable. 

"This Mirror" is the Mind, or consciousness. The only thing a Mind can be sure of is that it is a Mind.

The Mirror's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mirror

A Mind's most characteristic power is to create alternate realms of existence, though these realms are only as large in size as what can be seen within the Mind.

Showing any person who steps before it an illusion of a world in which one of their desires has been fulfilled.

The final property upon which most tales agree, is that whatever the unknown means of commanding the Mirror - of that Key there are no plausible accounts - the Mirror's instructions cannot be shaped to react to individual people...the legends are unclear on what rules can be given, but I think it must have something to do with the Mirror's original intended use - it must have something to do with the deep desires and wishes arising from within the person.

More specifically, the Mirror shows a universe that obeys a consistent set of physical laws. From the set of all wish-fulfillment fantasies, it shows a universe that could actually plausibly exist.

It is known that people and other objects can be stored therein

Actors store other minds within their own Mind. Engineers store physical items within their Mind. The Mirror is a Mind.

the Mirror alone of all magics possesses a true moral orientation

The Mind alone of all the stuff that exists possesses a true moral orientation.

If that device had been completed, the story claimed, it would have become an absolutely stable existence that could withstand the channeling of unlimited magic in order to grant wishes. And also - this was said to be the vastly harder task - the device would somehow avert the inevitable catastrophes any sane person would expect to follow from that premise. 

An ideal Mind would grant wishes without creating catastrophes. Unfortunately, we're not quite ideal minds, even though we're pretty good.

Professor Quirrell made to walk away from the Mirror, and seemed to halt just before reaching the point where the Mirror would no longer have reflected him, if it had been reflecting him.

My self-image can only go where it is reflected in my Mind. In other words, I can't imagine what it would be like to be a philosophical zombie.

Most powers of the Mirror are double-sided, according to legend. So you could banish what is on the other side of the Mirror instead. Send yourself, instead of me, into that frozen instant. If you wanted to, that is.

Let's interpret this scene: We've got a Mind/consciousness (the Mirror), we've got a self-image (Riddle) as well as the same spirit in a different self-image (Harry), and we've got a specific Extrapolated Volition instance in the mind (Dumbledore shown in the Mirror). This Extrapolated Volition instance is a consistent universe that could actually exist.

It sounds like the Process of the Timeless trap causes some Timeless Observer to choose one side of the Mirror as the real Universe, trapping the universe on the other side of the mirror in a frozen instant from the Timeless Observer's perspective.

The implication: the Mind has the power to choose which Universes it experiences from the set of all possible Universes extending from the current point.

All right, screw this nineteenth-century garbage. Reality wasn't atoms, it wasn't a set of tiny billiard balls bopping around. That was just another lie. The notion of atoms as little dots was just another convenient hallucination that people clung to because they didn't want to confront the inhumanly alien shape of the underlying reality. No wonder, then, that his attempts to Transfigure based on that hadn't worked. If he wanted power, he had to abandon his humanity, and force his thoughts to conform to the true math of quantum mechanics.

There were no particles, there were just clouds of amplitude in a multiparticle configuration space and what his brain fondly imagined to be an eraser was nothing except a gigantic factor in a wavefunction that happened to factorize, it didn't have a separate existence any more than there was a particular solid factor of 3 hidden inside the number 6, if his wand was capable of altering factors in an approximately factorizable wavefunction then it should damn well be able to alter the slightly smaller factor that Harry's brain visualized as a patch of material on the eraser -

Had to see the wand as enforcing a relation between separate past and future realities, instead of changing anything over time - but I did it, Hermione, I saw past the illusion of objects, and I bet there's not a single other wizard in the world who could have. 

This seems like another giant hint about magical powers.

"I had wondered if perhaps the Words of False Comprehension might be understandable to a student of Muggle science. Apparently not."

The author is disappointed that we don't get his hints. 

If the conscious mind was in reality a wish-granting machine, then how could I test this without going insane?

The Mirror of Perfect Reflection has power over what is reflected within it, and that power is said to be unchallengeable. But since the True Cloak of Invisibility produces a perfect absence of image, it should evade this principle rather than challenging it.

A method to test this seems to be to become aware of one's own ego-image (stand in front of the Mirror), vividly imagine a different ego-image without identifying with it (bring in a different personality containing the same Self under an Invisibility Cloak), suddenly switch ego-identification to the other personality (swap the Invisibility Cloak in less than a second), and then become distracted so the ego-switch becomes permanent (Dumbledore traps himself in the Mirror).

I can't think of a way to test this without sanity damage. Comments?

Creating lists

1 casebash 25 November 2015 04:41AM

Suppose you are trying to create a list. It may be of the "best" popular science books, the most controversial movies of the last twenty years, tips for getting over a breakup, or the most interesting cat gifs posted in the last few days.

There are many reasons for wanting to create one of these lists, but only a few main methods:


  1. Voting model - This is the simplest model, but popularity doesn't always equal quality. It is also particularly problematic for regularly updated lists (like Reddit), where a constantly changing audience can result in large amounts of duplicate content and where easily consumable content has an advantage.
  2. Curator model - A single expert can often do an admirable job of collecting high-quality content, but this is subject to their own personal biases. It is also effort-intensive to evaluate different curators to see if they have done a good job.
  3. Voting model with (content) rules - This can cut out the irrelevant or sugary content that is often upvoted, but creating good rules is hard. Often there is no objective line between high and low-quality content. These rules can often result in conflict.
  4. Voting model with sections - This is a solution to some of the limitations of 1 and 3. Instead of declaring some things off-topic outright, they can be thrown into their own section. This is the optimal solution, but is usually neglected.
  5. Voting model with selection - This covers any model where only certain people are allowed to vote. Sometimes selection is extraordinarily rigorous; however, the model can still be very effective when it isn't. As an example, Metafilter charges a one-time $5 fee, and that is sufficient to keep the quality high.
The main point is that model 1 shouldn't automatically be selected. The other models have advantages too.
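As a rough illustration (my own sketch, not part of the post), model 4 amounts to running an ordinary voting model separately within each section, so that easily consumable content can only crowd out its own section:

```python
from collections import defaultdict

# Hypothetical items: (title, section, votes). Sections replace outright
# "off-topic" rulings from model 3.
items = [
    ("Best popular science books", "books", 42),
    ("Most controversial movies of the last twenty years", "movies", 35),
    ("Interesting cat gif", "gifs", 120),
    ("Tips for getting over a breakup", "advice", 18),
    ("Another cat gif", "gifs", 95),
]

def rank_by_section(items):
    """Voting model with sections: rank by votes, but only within each section."""
    by_section = defaultdict(list)
    for title, section, votes in items:
        by_section[section].append((votes, title))
    return {section: [title for _, title in sorted(entries, reverse=True)]
            for section, entries in by_section.items()}

for section, ranked in rank_by_section(items).items():
    print(f"{section}: {ranked}")
```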


Mark Manson and Rationality

4 casebash 25 November 2015 03:34AM

As those of you on the Less Wrong chat may know, Mark Manson is my favourite personal development author. I thought I'd share those of his articles most related to rationality, as I figured they would have the greatest chance of being appreciated.


Immediately after writing this article, I realised that I left one thing unclear, so I'll explain it now. Why have I included articles discussing the terms "life purpose" and "finding yourself"? The reason is that I think that it is very important to provide linguistic bridges between some of the vague everyday language that people often use and the more precise language expected by rationalists.


Why I’m wrong about everything (and so are you):

“When looked at from this perspective, personal development can actually be quite scientific. The hypotheses are our beliefs. Our actions and behaviors are the experiments. The resulting internal emotions and thought patterns are our data. We can then take those and compare them to our original beliefs and then integrate them into our overall understanding of our needs and emotional make-up for the future.”

“You test those beliefs out in the real world and get real-world feedback and emotional data from them. You may find that you, in fact, don’t enjoy writing every day as much as you thought you would. You may discover that you actually have a lot of trouble expressing some of your more exquisite thoughts than you first assumed. You realize that there’s a lot of failure and rejection involved in writing and that kind of takes the fun out of it. You also find that you spend more time on your site’s design and presentation than you do on the writing itself, that that is what you actually seem to be enjoying. And so you integrate that new information and adjust your goals and behaviors accordingly.”


7 strange questions that can help you find your life purpose:


Mark Manson deconstructs the notion of “life purpose”, replacing it with a question that is much more tractable:


“Part of the problem is the concept of “life purpose” itself. The idea that we were each born for some higher purpose and it’s now our cosmic mission to find it. This is the same kind of shitty logic used to justify things like spirit crystals or that your lucky number is 34 (but only on Tuesdays or during full moons).

Here’s the truth. We exist on this earth for some undetermined period of time. During that time we do things. Some of these things are important. Some of them are unimportant. And those important things give our lives meaning and happiness. The unimportant ones basically just kill time.

So when people say, “What should I do with my life?” or “What is my life purpose?” what they’re actually asking is: “What can I do with my time that is important?””

5 lessons from 5 years travelling the world:

While this isn’t the only way that the cliché of “finding yourself” can be broken down into something more understandable, it is quite a good attempt:

“Many people embark on journeys around the world in order to “find themselves.” In fact, it’s sort of cliché, the type of thing that sounds deep and important but doesn’t actually mean anything.

Whenever somebody claims they want to travel to “find themselves,” this is what I think they mean: They want to remove all of the major external influences from their lives, put themselves into a random and neutral environment, and then see what person they turn out to be.

By removing their external influences — the overbearing boss at work, the nagging mother, the pressure of a few unsavory friends — they’re then able to see how they actually feel about their life back home.

So perhaps a better way to put it is that you don’t travel to “find yourself,” you travel in order to get a more accurate perception of who you were back home, and whether you actually like that person or not.”

Love is not enough:

Mark Manson attacks one of the biggest myths in our society:

“In our culture, many of us idealize love. We see it as some lofty cure-all for all of life’s problems. Our movies and our stories and our history all celebrate it as life’s ultimate goal, the final solution for all of our pain and struggle. And because we idealize love, we overestimate it. As a result, our relationships pay a price.

When we believe that “all we need is love,” then like Lennon, we’re more likely to ignore fundamental values such as respect, humility and commitment towards the people we care about. After all, if love solves everything, then why bother with all the other stuff — all of the hard stuff?

But if, like Reznor, we believe that “love is not enough,” then we understand that healthy relationships require more than pure emotion or lofty passions. We understand that there are things more important in our lives and our relationships than simply being in love. And the success of our relationships hinges on these deeper and more important values.”


6 Healthy Relationship Habits Most People Think Are Toxic:


Edit: Read the warning in the comments


I included this article because of the discussion of the first habit.


"There’s this guy. His name is John Gottman. And he is like the Michael Jordan of relationship research. Not only has he been studying intimate relationships for more than 40 years, but he practically invented the field.

His “thin-slicing” process boasts a staggering 91% success rate in predicting whether newly-wed couples will divorce within 10 years — a staggeringly high result for any psychological research.


Gottman devised the process of “thin-slicing” relationships, a technique where he hooks couples up to all sorts of biometric devices and then records them having short conversations about their problems. Gottman then goes back and analyzes the conversation frame by frame looking at biometric data, body language, tonality and specific words chosen. He then combines all of this data together to predict whether your marriage sucks or not.

And the first thing Gottman says in almost all of his books is this: The idea that couples must communicate and resolve all of their problems is a myth."


I highly recommend these articles. They are based on research to an extent, but also on his experiences, so they are not completely research-based. If that's what you want, you should look for a review article instead.

Attachment theory

The guide to happiness

The guide to self-discipline

Is it sensible for an ambitious nonsmoker to use e-cigarettes?

1 hg00 24 November 2015 10:48PM

Many of you have already seen Gwern's page on the topic of nicotine use. Nicotine is interesting because it's a stimulant, it may increase intelligence (I believe Daniel Kahneman said he was smarter back when he used to smoke), and it may be useful for habit formation.

However, the Cleveland Clinic thinks e-cigarettes put your heart at risk. This site covers some of the same research and offers a counterpoint:

Elaine Keller, president of the CASAA, pointed to other recently published research that she said shows outcomes in the “real world” as opposed to a laboratory. One study showed that smokers put on nicotine replacement therapy after suffering an acute coronary event like a heart attack or stroke had no greater risk of a second incident within one year than those who were not.

I managed to get ahold of the study in question, and it seems to me that it damns e-cigarettes with faint praise. Based on a quick skim, researchers studied smokers who had recently suffered an acute coronary syndrome (ACS). The treatment group was given e-cigarettes for nicotine replacement therapy, while the control group was left alone. Given that baseline success rates in quitting smoking are on the order of 10-20%, it seems safe to say that the control group mostly continued smoking as they had previously. (The study authors say "tobacco use during follow-up could not be accurately assessed because of the variability in documentation and, therefore, was not included in the present analysis", so we are left guessing.)

29% of the nicotine replacement group suffered an adverse event in the year following the intervention, and 31% of the control group did--similar numbers. So one interpretation of this study is that if you are a smoker in your fifties and you have already experienced an acute coronary syndrome, switching from cigarettes to e-cigs will do little to help you avoid further health issues in the next year. Doesn't exactly inspire confidence.
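As a rough sanity check on how little a 29% vs. 31% gap can mean, here is a back-of-the-envelope sketch; the per-arm sample size of 200 is my assumption for illustration, since the post doesn't give the study's actual numbers:

```python
import math

# Adverse-event rates quoted above; n_per_arm is an assumed figure, not from the study.
p_treatment, p_control = 0.29, 0.31
n_per_arm = 200  # assumption for illustration

diff = p_control - p_treatment
se = math.sqrt(p_treatment * (1 - p_treatment) / n_per_arm
               + p_control * (1 - p_control) / n_per_arm)
low, high = diff - 1.96 * se, diff + 1.96 * se

print(f"risk difference = {diff:.1%}, approx. 95% CI ({low:.1%}, {high:.1%})")
# With n = 200 per arm, the interval runs from roughly -7% to +11%:
# the data are consistent with no benefit at all.
```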

Another more recent article states that older smokers should see health gains from quitting cigarettes, which hammers the nail in further for e-cigarettes. It also states:

More conclusive answers about how e-cigarettes affect the body long-term are forthcoming, Rose said. Millions in research dollars are being funneled toward this topic.

“There is some poor science,” Rose said. “Everybody is trying to get something out quick in order to get funding.”

So based on this very cursory analysis I'm inclined to hold off until more research comes in. But these are just a few data points--I haven't read this government review which claims "e-cigarettes are about 95% less harmful than tobacco cigarettes", for example.

The broad issue I see is that most e-cigarette literature is focused on whether switching from cigarettes to e-cigarettes is a good idea, not whether using e-cigarettes as a nonsmoker is a good idea. I'm inclined to believe the first is true, but I'd hesitate to use research that supports the first to support the second (as the study I looked at exemplifies).

Anyway, if you're in the US and you want to buy e-cigarette products it may be best to do it soon before they're regulated out of existence.


The Winding Path

7 OrphanWilde 24 November 2015 09:23PM

The First Step

The first step on the path to truth is superstition.  We all start there, and should acknowledge that we start there.

Superstition is, contrary to our immediate feelings about the word, the first stage of understanding.  Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition.  The "supernatural" component generally included in the definition is superfluous, because "supernatural" merely refers to that which isn't part of nature - which means reality - which is an elaborate way of saying something whose relationship to nature is not yet understood, or else nonexistent.  If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and would merely be natural.

Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause.  If you designed an AI in a game which, after five rounds of being killed whenever it went into rooms with green-colored walls, started avoiding rooms with green-colored walls, you've developed a good AI.  It is engaging in superstition: it has developed an incorrect understanding of the issue.  But it hasn't gone down the wrong path - there is no wrong path in understanding, there is only the mistake of stopping.  Superstition, like all belief, is only useful if you're willing to discard it.

The Next Step

Incorrect understanding is the first - and necessary - step to correct understanding.  It is, indeed, every step towards correct understanding.  Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.

No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge.  You must come up with wrong ideas in order to get at the right ones - which will always be one step further.  You must test your ideas.  And again, the only mistake is stopping - in assuming that you have it right now.

Intelligence is never your bottleneck.  The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones and, more importantly, requires identifying which answers are indeed wrong - which is the slow part of the process.

Better answers are arrived at by the process of invalidating wrong answers.

The Winding Path

The process of becoming Less Wrong is the process of being, in the first place, wrong.  It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct".  It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.

The art of rationality is the art of walking this narrow path.  If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end.  That is the path of the faithful.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking.  If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end.  That is the path of the crank.

The path of rationality is winding and directionless.  It may head towards beauty, then towards ugliness; towards simplicity, then complexity.  The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth.  Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either.  Truth is its own path, found only by discarding what is wrong.  It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty.  It doesn't belong to any one of these things.

The path of rationality is a path without destination.



Written as an experiment in the aesthetic of Less Wrong.  I'd appreciate feedback on the aesthetic interpretation of Less Wrong, rather than on the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).

Open thread, Nov. 23 - Nov. 29, 2015

5 MrMind 23 November 2015 07:59AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Some thoughts on decentralised prediction markets

-4 Clarity 23 November 2015 04:35AM

**Thought experiment 1 – arbitrage opportunities in prediction market**

You’re Mitt Romney, biding your time before riding in on your white horse to win the US republican presidential preselection (bear with me, I’m Australian and don’t know US politics). Anyway, you’ve had your run and you’re not too fussed, but some of the old guard want you back in the fight.

Playing out like an XKCD comic strip ‘Okay’, you scheme. ‘Maybe I can trump Trump at his own game and make a bit of dosh on the election’.

A data-scientist you keep on retainer sometimes talks about LessWrong and other dry things. One day she mentions that decentralised prediction markets are being developed, one of which is Augur. She says one can bet on the outcome of events such as elections.

You’ve made a fair few bucks in your day. You read the odd Investopedia page and a couple of random forum blog posts. And there’s that financial institution you run. Arbitrage opportunity, you think.

You don’t fancy your chances of winning the election. 40% chance, you reckon. So, you bet against yourself. Win the election, lose the bet. Lose the election, win the bet. Losing the election doesn’t mean much to you, losing the bet doesn’t mean much to you, winning the election means a lot to you, and winning the bet doesn’t mean much to you. There ya go.

Let’s turn this into a probability-weighted decision table (game theory):

Not participating in the prediction market:

  Election win (+2 value), probability 0.4

  Election lose (-1 value), probability 0.6

  Cumulative probability-weighted value: (0.4 * 2) + (0.6 * -1) = +0.2

Participating in the prediction market (betting against yourself):

  Election win, bet lose (+2 + 0 value), probability 0.4

  Election lose, bet win (-1 + 0 value), probability 0.6

  Cumulative probability-weighted value: (0.4 * (2 + 0)) + (0.6 * (-1 + 0)) = +0.2

They’re the same outcome!
Looks like my intuitions were wrong. Unless you value a win more than you disvalue a loss, placing an additional bet, even in a different form of capital (cash vs. political capital, for instance), merely takes on additional risk; it isn't an arbitrage opportunity.
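In code, the same arithmetic looks like this (my own sketch; the +2/-1/0 values and the 40% win probability are the ones from the table above):

```python
p_win = 0.4

# Subjective values from the table: the election matters to Romney, the bet payoff does not.
value_election_win, value_election_lose = 2, -1
value_bet_win, value_bet_lose = 0, 0  # "winning the bet doesn't mean much to you"

ev_without_bet = p_win * value_election_win + (1 - p_win) * value_election_lose

# Betting against yourself: win the election -> lose the bet; lose the election -> win the bet.
ev_with_bet = (p_win * (value_election_win + value_bet_lose)
               + (1 - p_win) * (value_election_lose + value_bet_win))

print(f"{ev_without_bet:.2f} vs {ev_with_bet:.2f}")  # 0.20 vs 0.20
```

With the bet's payoff worth nothing to you, it changes nothing in expectation; and if the cash did matter, the bet would merely offset the election outcome - a hedge that reduces variance, not an arbitrage that locks in a profit.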

For the record, Mitt Romney probably wouldn’t make this mistake, but what does this post suggest I know about prediction?


**Thought experiment 2 – insider trading**

Say you’re a C-level executive in a publicly listed enterprise. However, for this example you don’t need to be part of a publicly listed organisation; it just serves to illustrate my intuitions. Say you have just been briefed by your auditors about massive fraud by a mid-level manager that will devastate your company. Ordinarily, you may not be able to safely dump your stock on the stock exchange, for several reasons, one of which is the prohibition on insider trading.

Now, on a prediction market, the executive could retain their stock, thus not signalling distrust of the company (which is itself information the company may be legally obliged to disclose, since it materially influences the share price), but place a bet on a prediction market on impending stock losses, thus hedging (not arbitraging, as demonstrated above) their position.


**Thought experiment 3 – market efficiency**

I’d expect that prediction opportunities will be most popular where individuals, weighted by their capital, believe they have private, market-relevant information. For instance, if a prediction opportunity is that Canada’s prime minister says ‘I’m silly’ on his next TV appearance, many people might believe they know him well enough personally that the otherwise absurd-sounding proposition is more likely than it sounds. They may give it a 0.2% chance rather than a 0.1% chance. However, if you are the prime minister yourself, you may decide to bet on this opportunity and make a quick, easy profit…I’m not sure where I was going with this anymore. But it was something about incentives to misrepresent how much relevant market information one has, and how much competitor bettors (people who bet WITH you) have.

[Link] A rational response to the Paris attacks and ISIS

0 Gleb_Tsipursky 23 November 2015 01:47AM

Here's my op-ed, which uses long-term orientation, probabilistic thinking, numeracy, considering the alternative, focusing on our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer, a major newspaper (the 16th-largest in the US). This is part of my broader project, Intentional Insights, which aims to convey rational thinking, including about politics, to a broad audience and raise the sanity waterline.

[Link] Less Wrong Wiki article with very long summary of Daniel Kahneman's Thinking, Fast and Slow

7 Gleb_Tsipursky 22 November 2015 04:32PM

I've made very extensive notes, along with my assessment, of Daniel Kahneman's Thinking, Fast and Slow, and have passed them around to aspiring rationalist friends who found them very useful. So I thought I would share these with the Less Wrong community by creating a Less Wrong Wiki article with these notes. Feel free to optimize the article based on your own notes as well. Hope this proves as helpful to you as it did to those others whom I shared my notes with.



A map: Causal structure of a global catastrophe

4 turchin 21 November 2015 04:07PM

New LW Meetup: Cambridge UK

2 FrankAdamek 20 November 2015 04:52PM

This summary was posted to LW Main on November 13th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


Goal setting journal (November)

2 Clarity 20 November 2015 07:54AM

Inspired by the group rationality diary and open thread, this is the second goal setting journal (GSJ) thread.

If you have a goal worth setting then it goes here.


Notes for future GSJ posters:

1. Please add the 'gsj' tag.

2. Check if there is an active GSJ thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. GSJ Threads should be posted in Discussion, and not Main.

4. GSJ Threads should run for no longer than 1 week, but you may set goals, subgoals and tasks for as far into the future as you please.

5. No one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it.

Stupid Questions November 2015

4 Tem42 19 November 2015 10:36PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Rationality Reading Group: Part N: A Human's Guide to Words

7 Gram_Stone 18 November 2015 11:50PM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.

Welcome to the Rationality reading group. This fortnight we discuss Part N: A Human's Guide to Words (pp. 677-801) and Interlude: An Intuitive Explanation of Bayes's Theorem (pp. 803-826). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

N. A Human's Guide to Words

153. The Parable of the Dagger - A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?

154. The Parable of Hemlock - Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?

You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth.

You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal.

155. Words as Hidden Inferences - The mere presence of words can influence thinking, sometimes misleading it.

The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."

156. Extensions and Intensions - You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. "What is red?" "Red is a color." "What's a color?" "It's a property of a thing?" "What's a thing? What's a property?" It never occurs to you to point to a stop sign and an apple.

The extension doesn't match the intension. We aren't consciously aware of our identification of a red light in the sky as "Mars", which will probably happen regardless of your attempt to define "Mars" as "The God of War".

157. Similarity Clusters - Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does. When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails".

158. Typicality and Asymmetrical Similarity - You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.

159. The Cluster Structure of Thingspace - A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you'll get enough information that the occasional nine-fingered human won't fool you.

160. Disguised Queries - You ask whether something "is" or "is not" a category member but can't name the question you really want answered. What is a "man"? Is Barney the Baby Boy a "man"? The "correct" answer may depend considerably on whether the query you really want answered is "Would hemlock be a good thing to feed Barney?" or "Will Barney make a good husband?"

161. Neural Categories - You treat intuitively perceived hierarchical categories like the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them. It's much easier for a human to notice whether an object is a "blegg" or "rube" than for a human to notice that red objects never glow in the dark, but red-furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently.

162. How An Algorithm Feels From Inside - You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept".

You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.

163. Disputing Definitions - You allow an argument to slide into being about definitions, even though it isn't what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a "sound", you asked the two soon-to-be arguers whether they thought a "sound" should be defined as "acoustic vibrations" or "auditory experiences", they'd probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged.

164. Feel the Meaning - You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept. When someone shouts, "Yikes! A tiger!", evolution would not favor an organism that thinks, "Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribemembers associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP." So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like "sound".

165. The Argument from Common Usage - You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand. When you each understand what is in the other's mind, you are done.

You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem - if "Mars" is defined as the God of War, or a "dolphin" is defined as a kind of fish, or "Negroes" are defined as a separate category from humans, the dictionary will reflect the standard mistake.

You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether "atheism" is a "religion" or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?

You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.

166. Empty Labels - You use complex renamings to create the illusion of inference. Is a "human" defined as a "mortal featherless biped"? Then write: "All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal." Looks less impressive that way, doesn't it?

167. Taboo Your Words - If Albert and Barry aren't allowed to use the word "sound", then Albert will have to say "A tree falling in a deserted forest generates acoustic vibrations", and Barry will say "A tree falling in a deserted forest generates no auditory experiences". When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.

168. Replace the Symbol with the Substance - The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about. What actually goes on in schools once you stop calling it "education"? What's a degree, once you stop calling it a "degree"? If a coin lands "heads", what's its radial orientation? What is "truth", if you can't say "accurate" or "correct" or "represent" or "reflect" or "semantic" or "believe" or "knowledge" or "map" or "real" or any other simple term?

169. Fallacies of Compression - You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It's part of a detective's ordinary work to observe that Carol wore red last night, or that she has black hair; and it's part of a detective's ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.

170. Categorizing Has Consequences - You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.

171. Sneaking in Connotations - You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations. A "wiggin" is defined in the dictionary as a person with green eyes and black hair. The word "wiggin" also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn't in the dictionary. So you point to someone and say: "Green eyes? Black hair? See, told you he's a wiggin! Watch, next he's going to steal the silverware."

172. Arguing "By Definition" - You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition. You define "human" as a "featherless biped", and point to Socrates and say, "No feathers - two legs - he must be human!" But what you really care about is something else, like mortality. If what was in dispute was Socrates's number of legs, the other fellow would just reply, "Whaddaya mean, Socrates's got two legs? That's what we're arguing about in the first place!"

You claim "Ps, by definition, are Qs!" If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there's no point in arguing "Men, by definition, are mortal!" The main time you feel the need to tighten the vise by insisting that something is true "by definition" is when there's other information that calls the default inference into doubt.

You try to establish membership in an empirical cluster "by definition". You wouldn't feel the need to say, "Hinduism, by definition, is a religion!" because, well, of course Hinduism is a religion. It's not just a religion "by definition", it's, like, an actual religion. Atheism does not resemble the central members of the "religion" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion. That's why you've got to crush all opposition by pointing out that "Atheism is a religion" is true by definition, because it isn't true any other way.

173. Where to Draw the Boundary? - Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.

174. Entropy, and Short Codes - You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often. This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound "simpler". Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?

175. Mutual Information, and Density in Thingspace - You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don't share any other characteristics in common, why have a word for "wiggin"?

176. Superexponential Conceptspace, and Simple Words - You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don't present reasons to draw that particular boundary, trying to create an "arbitrary" word in that location is like a detective saying: "Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... but have we considered John Q. Wiffleheim as a suspect?"

177. Conditional Independence, and Naive Bayes - You use categorization to make inferences about properties that don't have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes. No way am I trying to summarize this one. Just read the blog post.

178. Words as Mental Paintbrush Handles - You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a "triangular lightbulb". What did you see?

179. Variable Question Fallacies - You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. "Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?

180. 37 Ways That Words Can Be Wrong - Contains summaries of the sequence of posts about the proper use of words.

Interlude: An Intuitive Explanation of Bayes's Theorem - Exactly what it says on the tin.


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover The World: An Introduction (pp. 834-839) and Part O: Lawful Truth (pp. 843-883). The discussion will go live on Wednesday, 2 December 2015, right here on the discussion forum of LessWrong.

The Market for Lemons: Quality Uncertainty on Less Wrong

7 signal 18 November 2015 10:06PM

Tl;dr: Articles on LW are, if unchecked (for now by you), heavily distorting a useful view (yours) on what matters.


[This is (though in part only) a five-year update to Patrissimo’s article Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. However, I wrote most of this article before I became aware of its predecessor. Then again, this reinforces both our articles' main critique.]


I claim that rational discussions in person, at conferences, in forums, on social media, and on blogs suffer from adverse selection and promote unwished-for phenomena such as the availability heuristic. Bluntly stated, they do (as all other discussions) have a tendency to support ever worse, unimportant, or wrong opinions and articles. More importantly, highly relevant articles on some topics are conspicuously missing. This can also be observed on Less Wrong. It is not the purpose of this article to determine the exact extent of this problem. It shall merely bring to attention that “what you get is not what you should see.” However, I am afraid this effect is largely undervalued.


This result is by design and therefore to be expected. A rational agent will, by definition, post incorrect or incomplete information, or not post at all, in the following instances:

  • Cost-benefit analysis: A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by costing him effort, unless some gain (status, money, happiness, …) offsets the former effect. Example: Have you seen articles by Mark Zuckerberg? But I also argue that for random John Doe the personal cost-benefit analysis of posting an article is negative. Even more, the value of your time should approach infinity if you really drink the LW Kool-Aid; however, this shall be the topic of a subsequent article. I suspect the theme of this article may also be restated as a free-riding problem, as it postulates the non-production or under-production of valuable articles and other contributions.
  • Conflicting with law: Topics like drugs (in the western world) and maybe politics or sexuality in other parts of the world are biased due to the risk of persecution, punishment, extortion, etc. And many topics, such as those in the spheres of rationality, transhumanism, and effective altruism, are at least highly sensitive, especially when you continue arguing until you reach their moral extremes.
  • Inconvenience of disagreement: Due to the effort of posting truly anonymously (which currently requires a truly anonymous e-mail address and so forth), disagreeing posts will be avoided, particularly when the original poster is of high status and the risk of it rubbing off on one’s other articles is thus increased. This is obviously even truer for personal interactions. Side note: the reverse situation may also apply: more agreement (likes) with high status.
  • Dark knowledge: Even if I know how to acquire a sniper gun that cannot be traced, I will not share this knowledge (as for all other reasons, there are substantially better examples, but I do not want to make spreading dark knowledge a focus of this article).
  • Signaling: Seriously, would you discuss your affiliation to LW in a job interview?! Or tell your friends that you are afraid we live in a simulation? (If you don’t see my point, your rationality is totally off base, see the next point). LW user “Timtyler” commented before: “I also found myself wondering why people remained puzzled about the high observed levels of disagreement. It seems obvious to me that people are poor approximations of truth-seeking agents—and instead promote their own interests. If you understand that, then the existence of many real-world disagreements is explained: people disagree in order to manipulate the opinions and actions of others for their own benefit.”
  • WEIRD-M-LW: It is a known problem that articles on LW are going to be written by authors that are in the overwhelming majority western, educated, industrialized, rich, democratic, and male. The LW surveys show distinctly that there are most likely many further attributes in which the population on LW differs from the rest of the world. LW user “Jpet” argued in a comment very nicely: “But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board.” LW could certainly use more diversity. Personal anecdote: I was dumbfounded by the current discussion around LW T-shirts sporting slogans such as "Growing Mentally Stronger" which seemed to me intuitively highly counterproductive. I then asked my wife who is far more into fashion and not at all into LW. Her comment (Crocker's warning): “They are great! You should definitely buy one for your son if you want him to go to high school and to be all for himself for the next couple of years; that is, except for the mobbing, maybe.”
  • Genes, minds, hormones & personal history: (Even) rational agents are highly influenced by those factors. This fact seems underappreciated. Think of SSC's "What universal human experiences are you missing without realizing it?" Think of inferential distances and the typical mind fallacy. Think of slight changes in beliefs after drinking coffee, after working out, when deeply in love for the first time or having just seen your child born, when extremely hungry, or when wanting to reach, and then standing on, the top of a mountain (especially Mt. Everest). Russell pointed out the interesting and strong effect of Schopenhauer’s and Nietzsche’s personal history on their misogyny. However, it would be a stretch to simply call them irrational. In every discussion, you have to start somewhere, but finding a starting point is a lot more difficult when the discussion partners are more diverse. These factors may not result in direct misinformation on LW, but they certainly shape the conversation (see also the next point).
  • Priorities: Specific “darlings” of the LW sphere, such as Newcomb’s paradox or MW, are regularly discussed. Just one moment of not paying attention to bias, and you may assume they are really relevant. For those of us not currently programming FAI, they aren’t, and they steal attention from more important issues.
  • Other beliefs/goals: Close to selfishness, but not quite the same. If an agent’s beliefs and goals differ from most others, the discussion would benefit from your post. Even so, that by itself may not be a sufficient reason for an agent to post. Example: Imagine somebody like Ben Goertzel. His beliefs on AI, for instance, differed from the mainstream on LW. This did not necessarily result in him posting an article on LW. And to my knowledge, he won’t, at least not directly. Plus, LW may try to slow him down as he seems less concerned about the F of FAI.
  • Vanity: Considering the amount of self-help threads, nerdiness, and alike on LW, it may be suspected that some refrain from posting due to self-respect. E.g. I do not want to signal myself that I belong to this tribe. This may sound outlandish but then again, have a look at the Facebook groups of LW and other rationalists where people ask frequently how they can be more interesting, or how “they can train how to pause for two seconds before they speak to increase their charisma." Again, if this sounds perfectly fine to you, that may be bad news.
  • Barriers to entry: Your first post requires creating an account. Karma that signals the quality of your posts is still absent. An aspiring author may question the relative importance of his opinion (especially for highly complex topics), his understanding of the problem, the quality of his writing, and whether his research on the chosen topic is sufficient.
  • Nothing new under the sun: Writing an article requires the bold assumption that its marginal utility is significantly above zero. The likelihood of this probably decreases with the number of existing posts, which is, as of now, quite impressive. Patrissimo‘s article (footnote [10]) addresses the same point; others mention being afraid of "reinventing the wheel."
  • Error: I should point out that most of the reasons brought forward in this list concern deliberate misinformation. In many cases, an article will simply be wrong without the author realizing it. Examples: facts (the earth is flat), predictions (planes cannot fly), and, seriously underestimated, horizon effects (given more information, the rational agent realizes that his action did not yield the desired outcome, e.g. a ban on plastic bags).
  • Protection of the group: Opinions, though important, may not be discussed in order to protect the group or its image to outsiders. See “is LW a c***” and Roko’s ***. This argument can also be brought forward much more subtly: an agent may, for example, hold the opinion that rationality concepts are information hazards by nature if they reduce the happiness of the otherwise blissfully unaware.
  • Topicality: This is a problem specific to LW. Many of the great posts, as well as the sequences, originated about five to ten years ago. While interest in AI has now reached mainstream awareness, the solid intellectual basis (centered around a few individuals) which LW offered seems to be gradually breaking away, and rationality topics are experiencing their diaspora. What remains is a less balanced account of important topics in the sphere of rationality, and new authors are discouraged from entering the conversation.
  • Russell’s antinomy: Is the contribution that states its futility ever expressed? Random example article title: “Writing articles on LW is useless because only nerds will read them."
  • +Redundancy: If any of the above reasons apply, I may choose not to post. However, I also expect a rational agent with sufficiently similar knowledge to reach the same conclusions himself, so it is at the same time not absolutely necessary to post. An article will “only” reduce the time required to understand a new concept and reduce the likelihood of rationalists diverging due to disagreement (if Aumann is ignored) or faulty argumentation.

This list is not exhaustive. If a factor that you expect to account for much of the effect is missing from this list, I would appreciate a hint in the comments.


There are a few outstanding examples pointing in the opposite direction: authors who appear to provide uncensored accounts of their way of thinking and take arguments to their logical extremes when necessary. Most notably Bostrom and Gwern; but then again, feel free to read the latter’s posts on the extortion attempts he has endured.


A somewhat flippant conclusion (more in a FB than LW voice): After reading the article from 2010, I cannot expect this article (or the ones possibly following, which have already been written) to have a serious impact. It thus can be concluded that it should not have been written. Then again, observing our own thinking patterns, we can identify influences of many thinkers who may have suspected the same (hubris not intended). And step by step, we will be standing on the shoulders of giants. At the same time, keep in mind that articles from LW won’t get you there. They represent only a small piece of the jigsaw. You may want to read some, observe how instrumental rationality works in the “real world," and, finally, draw the critical conclusions for yourself. Nobody truly rational will lay them out for you. LW is great if you have an IQ of 140 and are tired of superficial discussions with the hairstylist in your village X. But keep in mind that the instrumental rationality of your hairstylist may still surpass yours, and I don’t even need to say much about that of your president, business leader, or club Casanova. And yet, they may be literally dead wrong, because they have overlooked AI and SENS.


A final personal note: Kudos to the giants for building this great website, a starting point for rationalists, and for the real-life progress in the last couple of years! This is a rather skeptical article to start with, but it has the specific purpose of laying out why I, and I suspect many others, almost refrained from posting.



[Link] Audio recording of Stephen Wolfram discussing AI and the Singularity

1 RaelwayScot 18 November 2015 09:41PM

Marketing Rationality

25 Viliam 18 November 2015 01:43PM

What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:

continue reading »

Open thread, Nov. 16 - Nov. 22, 2015

7 MrMind 16 November 2015 08:03AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

[Link] Lifehack Article Promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies

10 Gleb_Tsipursky 14 November 2015 08:34PM

Nice to get this list-style article promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies, as part of a series of strategies for growing mentally stronger, published on Lifehack, a very popular self-improvement website. It's part of my broader project of promoting rationality and effective altruism to a broad audience, Intentional Insights.


EDIT: To be clear, based on my exchange with gjm below, the article does not promote these heavily and links more to Intentional Insights. I was excited to be able to get links to LessWrong, Rationality Dojo, and Rationality: From AI to Zombies included in the Lifehack article, as previously editors had cut out such links. I pushed back against them this time, and made a case for including them as a way of growing mentally stronger, and thus was able to get them in.

Weekly LW Meetups

0 FrankAdamek 13 November 2015 04:31PM

Reflexive self-processing is literally infinitely simpler than a many world interpretation

-9 mgin 13 November 2015 02:46PM

I recently stumbled upon the concept of "reflexive self-processing", which is Chris Langan's "Reality Theory".

I am not a physicist, so if I'm wrong or someone can better explain this, or if someone wants to break out the math here, that would be great.

The idea of reflexive self-processing is that in the double slit experiment for example, which path the photon takes is calculated by taking into account the entire state of the universe when it solves the wave function.

1. isn't this already implied by the math of how we know the wave function works? are there any alternate theories that are even consistent with the evidence?

2. don't we already know that the entire state of the universe is used to calculate the behavior of particles? for example, doesn't every body produce a gravitational field which acts, with some magnitude of force, at any distance, such that in order to calculate the trajectory of a particle to the nth decimal place, you would need to know about every other body in the universe?

This is, literally, infinitely more parsimonious than the many worlds theory, which posits that an infinite number of entire universes of complexity are created at the juncture of every little physical event where multiple paths are possible. Supporting MWI because of its simplicity was always a really horrible argument for this reason, and it seems like we do have a sensible, consistent theory in this reflexive self-processing idea, which is infinitely simpler, and therefore should be infinitely preferred by a rationalist to MWI.

Optimizing Rationality T-shirts

4 Gleb_Tsipursky 12 November 2015 10:15PM

Thanks again for all the feedback on the first set of Rationality slogan t-shirts, which Intentional Insights developed as part of our  broader project of promoting rationality to a wide audience. As a reminder, the t-shirts are meant for aspiring rationalists to show their affiliation with rationality, to remind themselves and other aspiring rationalists to improve, and to spread positive memes broadly. All profits go to promoting rationality widely.


For the first set, we went with a clear and minimal style that conveyed the messages clearly and had an institutional affiliation, based on the advice Less Wrongers gave earlier. While some liked and bought these, plenty wanted something more stylish and designed. As an aspiring rationalist, I am glad to update my beliefs. So we are going back to the drawing board, and trying to design something more stylish.


Now, we are facing the limitations of working with a print-on-demand service. We need to go with POD, as we can't afford to buy shirts up front and then sell them; it would cost way too much to do so. We decided on CafePress as the most popular and well-known service with the widest variety of options. It does limit our ability to design things, though.


So for the next step, we got some aspiring rationalist volunteers for Intentional Insights to find a number of t-shirt designs they liked, and we will create t-shirts that use designs of that style, but with rationality slogans. I'd like to poll fellow Less Wrongers for which designs they like most among the ones found by our volunteers. I will list links below associated with numbers, and in comments, please indicate the t-shirt numbers that you liked best, so that we can make those. Also please link to other shirts you like, or make any other comments on t-shirt designs and styles.


1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17


Thanks all for collaborating on optimizing rationality t-shirts!




Post-doctoral Fellowships at METRICS

12 Anders_H 12 November 2015 07:13PM
The Meta-Research Innovation Center at Stanford (METRICS) is hiring post-docs for 2016/2017. The full announcement is available at http://metrics.stanford.edu/education/postdoctoral-fellowships. Feel free to contact me with any questions; I am currently a post-doc in this position.

METRICS is a research center within Stanford Medical School. It was set up to study the conditions under which the scientific process can be expected to generate accurate beliefs, for instance about the validity of evidence for the effect of interventions.

METRICS was founded by Stanford Professors Steve Goodman and John Ioannidis in 2014, after Givewell connected them with the Laura and John Arnold Foundation, who provided the initial funding. See http://blog.givewell.org/2014/04/23/meta-research-innovation-centre-at-stanford-metrics/ for more details.

Meetup: Cambridge UK

2 Salokin 11 November 2015 08:08PM

(Apparently just posting a new meetup doesn't provide much visibility, so I'm posting a discussion article too.)

WHEN: 15 November 2015 05:00:00PM (+0000)

WHERE: JCR Trinity College, Cambridge, UK

First Cambridge meetup in a long time! Hopefully of many. Come to Trinity's JCR at 17:00 next Sunday, get to know all the other aspiring rationalists around, and have a good time! (Place and time are only provisional; they might change depending on your availability, so comment here to see how we can arrange it properly.)

[Link] Mainstreaming Tell Culture

0 Gleb_Tsipursky 11 November 2015 06:06PM

Mainstreaming Tell Culture and other rational relationship strategies in this listicle for Lifehack, a very popular self-improvement website, as part of my broader project, Intentional Insights​, of promoting rationality and science-based thinking to a broad audience. What are your thoughts about this piece?

Link: The Cook and the Chef: Musk's Secret Sauce - Wait But Why

3 taygetea 11 November 2015 05:46AM

This is the fourth of Tim Urban's series on Elon Musk, and this time it's about some reasoning processes that are made explicit, which LW readers should find very familiar. It's a potentially useful explicit model of how to make decisions for yourself.


Utility, probability and false beliefs

1 Stuart_Armstrong 09 November 2015 09:43PM

A putative new idea for AI control; index here.

This is part of the process of making past ideas rigorous and formal.

Paul Christiano recently asked why I used utility changes, rather than probability changes, to have an AI believe (or act as if it believed) false things. While investigating that, I developed several different methods for achieving the belief changes that we desired. This post analyses these methods.


Different models of forced beliefs

Let x and ¬x refer to the future outcome of a binary random variable X (write P(x) as a shorthand for P(X=x), and so on). Assume that we want P(x):P(¬x) to be in the 1:λ ratio for some λ (since the ratio is all that matters, λ=∞ is valid, meaning P(x)=0). Assume that we have an agent, who has utility u, has seen past evidence e, and wishes to assess the expected utility of their action a.

Typically, for expected utility, we sum over the possible worlds. In practice, we almost always sum over sets of possible worlds, the sets determined by some key features of interest. In assessing the quality of health interventions, for instance, we do not carefully and separately treat each possible position of atoms in the sun. Thus let V be the set of variables or values we care about, and v a possible value vector V can take. As usual, we'll write P(v) as a shorthand for P(V=v). The utility function u assigns utilities to possible v's.

One of the advantages of this approach is that it can avoid many issues of conditionals like P(A|B) when P(B)=0.

The first obvious idea is to condition on x and ¬x:

  • (1) Σv u(v)(P(v|x,e,a)+λP(v|¬x,e,a))

The second one is to use intersections rather than conditionals (as in this post):

  • (2) Σv u(v)(P(v,x|e,a)+λP(v,¬x|e,a))

Finally, imagine that we have a set of variables H, that "screen off" the effects of e and a, up until X. Let h be a set of values H can take. Thus P(x|h,e,a)=P(x|h). One could see H as the full set of possible pre-X histories, but it could be much smaller - maybe just the local environment around X. This gives a third definition:

  • (3) Σv Σh u(v)(P(v|h,x,e,a)+λP(v|h,¬x,e,a))P(h|e,a)
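
For readability, here is a restatement of the three definitions above in more conventional notation (nothing new is added; EU_i(a) is just my label for the value the type-(i) agent assigns to the action a):

```latex
% Restatement of definitions (1)-(3) above.
\begin{align*}
EU_1(a) &= \sum_v u(v)\,\bigl(P(v \mid x, e, a) + \lambda\, P(v \mid \neg x, e, a)\bigr) \\
EU_2(a) &= \sum_v u(v)\,\bigl(P(v, x \mid e, a) + \lambda\, P(v, \neg x \mid e, a)\bigr) \\
EU_3(a) &= \sum_v \sum_h u(v)\,\bigl(P(v \mid h, x, e, a) + \lambda\, P(v \mid h, \neg x, e, a)\bigr)\, P(h \mid e, a)
\end{align*}
```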


Changing and unchangeable P(x)

An important thing to note is that all three definitions are equivalent for fixed P(x), up to changes of λ. The equivalence of (2) and (1) derives from the fact that Σv u(v)(P(v,x|e,a)+λP(v,¬x|e,a)) = Σv u(v)(P(x)P(v|x,e,a)+λP(¬x)P(v|¬x,e,a)) (we write P(x) rather than P(x|e,a) since the probability of x is fixed). Factoring out the constant P(x), a type (2) agent with λ is equivalent to a type (1) agent with λ'=λP(¬x)/P(x).

Similarly, P(v|h,x,e,a)=P(v,h,x|e,a)/(P(x|h,e,a)P(h|e,a)). Since P(x|h,e,a)=P(x), equation (3) reduces to Σv Σh u(v)(P(v,h,x|e,a)/P(x)+λP(v,h,¬x|e,a)/P(¬x)). Summing over h, this becomes Σv u(v)(P(v,x|e,a)/P(x)+λP(v,¬x|e,a)/P(¬x)) = Σv u(v)(P(v|x,e,a)+λP(v|¬x,e,a)), i.e. the same as (1).
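
As a sanity check on the algebra above, here is a minimal numeric sketch (toy numbers of my own choosing, with the conditioning on e and a suppressed) verifying that, when P(x|h) is constant, definition (3) agrees with (1) and definition (2) is P(x) times (1) with the rescaled λ' = λP(¬x)/P(x):

```python
# Toy check that (1), (2), (3) coincide for fixed P(x), up to a rescaling of lambda.
# All numbers below are hypothetical; e and a are suppressed for simplicity.

P_h = {"h0": 0.3, "h1": 0.7}                   # P(h)
P_x_given_h = {"h0": 0.4, "h1": 0.4}           # P(x|h) constant, so P(x) is "fixed"
P_v_given_hx = {                               # P(v | h, X)
    ("h0", True):  {"v0": 0.2, "v1": 0.8},
    ("h0", False): {"v0": 0.6, "v1": 0.4},
    ("h1", True):  {"v0": 0.5, "v1": 0.5},
    ("h1", False): {"v0": 0.9, "v1": 0.1},
}
u = {"v0": 1.0, "v1": 5.0}                     # utility over the values we care about

P_x = sum(P_h[h] * P_x_given_h[h] for h in P_h)

def P_hx(h, x):
    return P_h[h] * (P_x_given_h[h] if x else 1 - P_x_given_h[h])

def P_v_and_x(v, x):                           # P(v, x) = sum_h P(v|h,x) P(h, x)
    return sum(P_v_given_hx[(h, x)][v] * P_hx(h, x) for h in P_h)

def P_v_given_x(v, x):                         # P(v|x) = P(v, x) / P(x)
    return P_v_and_x(v, x) / (P_x if x else 1 - P_x)

def EU1(lam):                                  # definition (1)
    return sum(u[v] * (P_v_given_x(v, True) + lam * P_v_given_x(v, False)) for v in u)

def EU2(lam):                                  # definition (2)
    return sum(u[v] * (P_v_and_x(v, True) + lam * P_v_and_x(v, False)) for v in u)

def EU3(lam):                                  # definition (3)
    return sum(u[v] * (P_v_given_hx[(h, True)][v] + lam * P_v_given_hx[(h, False)][v]) * P_h[h]
               for v in u for h in P_h)

lam = 2.0
print(EU3(lam), EU1(lam))                              # equal: (3) reduces to (1)
print(EU2(lam), P_x * EU1(lam * (1 - P_x) / P_x))      # (2) is P(x) times (1) with rescaled lambda
```

If the evidence could shift P(x), these identities would break down and the three agents would come apart, which is what the rest of the post explores.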

What about non-constant x? Let c(x) and c(¬x) be two contracts that pay out under x and ¬x, respectively. If the utility u is defined as 1 if a payout is received (and 0 otherwise), it's clear that both agent (1) and agent (3) assess c(x) as having an expected utility of 1 while c(¬x) has an expected utility of λ. This assessment is unchanging, whatever the probability of x. Therefore agents (1) and (3), in effect, see the odds of x as being a constant ratio 1:λ.

Agent (2), in contrast, gets a one-off artificial 1:λ update to the odds of x and then proceeds to update normally. Suppose that X is a coin toss that the agent believes is fair, having extensively observed the coin. Then it will believe that the odds are 1:λ. Suppose instead that it observes the coin to have a λ:1 odds ratio; then it will believe the true odds are 1:1. It will be accurate, with a 1:λ ratio added on.

The effects of this percolate backwards in time from X. Suppose that X was to be determined by the toss of one of two unfair coins, one with odds ε:1 and one with odds 1:ε. The agent would assess the odds of the first coin being used rather than the second as around 1:λ. This update would extend to the process of choosing the coins, and anything that depended on it. Agent (1) is similar, though its update rule always assumes the odds of x:¬x to be fixed; thus any information about the process of coin selection is interpreted as a change in the probability of that process, not a change in the probability of the outcome.

Agent (3), in contrast, is completely different. It assesses the probability of H=h objectively, but then assumes that the odds of x and ¬x, given any h, are 1:λ. Thus if given updates about the probability of which coin is used, it will assess those updates objectively, but then assume that both coins are "really" giving 1:λ odds. It cuts off the update process at h, thus ensuring that it is "incorrect" only about x and its consequences, not its pre-h causes.


Utility and probability: assessing goal stability

Agents with unstable goals are likely to evolve towards being (equivalent to) expected utility maximisers. The converse is more complicated, but we'll assume here that an agent's goal is stable if it is an expected utility maximiser for some probability distribution.

Which one? I've tended to shy away from changing the probability, preferring to change the utility instead. If we divide the probability in equation (2) by 1+λ, it becomes a u-maximiser with a biased probability distribution. Alternatively, if we defined u'(v,x)=u(v) and u'(v,¬x)=λu(v), then it is a u'-maximiser with an unmodified probability distribution. Since all agents are equivalent for fixed P(x), we can see that in that case, all agents can be seen as expected utility maximisers with the standard probability distribution. 

Paul questioned whether the difference was relevant. I preferred the unmodified probability distribution - maybe the agent uses the distribution for induction, maybe having false probability beliefs will interfere with AI self-improvement, or maybe agents with standard probability distributions are easier to make corrigible - but for agent (2) the difference seems to be arguably a matter of taste.

Note that though agent (2) is stable, its definition is not translation invariant in u. If we add c to u, we add c(P(x|e,a)+λP(¬x|e,a)) to u'. Thus, if the agent can affect the value of P(x) through its actions, different constants c will likely give different behaviours.

Agent (1) is different. Except for the cases λ=0 and λ=∞, the agent cannot be an expected utility maximiser. To see this, just notice that an update about the process that could change the probability of x gets reinterpreted as an update on the probability of that process. If we have the ε:1 and 1:ε coins, then any update about their respective probabilities of being used gets essentially ignored (as long as the evidence that the coins are biased is much stronger than the evidence as to which coin is used).

In the cases λ=0 and λ=∞, though, agent (1) is a u-maximiser that uses the probability distribution that assumes x or ¬x is certain, respectively. This is the main point of agent (1) - providing a simple maximiser for those cases.

What about agent (3)? Define u' by: u'(v,h,x)=u(v)/P(x|h), and u'(v,h,¬x)=λu(v)/P(¬x|h). Then consider the u'-maximiser:

  • (4) Σv Σh u'(v,h,x)P(v,h,x|e,a)+u'(v,h,¬x)P(v,h,¬x|e,a)

Now P(v,h,x|e,a)=P(v|h,x,e,a)P(x|h,e,a)P(h|e,a). Because of the screening-off assumption, the middle term is the constant P(x|h). Multiplying this by u'(v,h,x)=u(v)/P(x|h) gives u(v)P(v|h,x,e,a)P(h|e,a). Similarly, the second term becomes λu(v)P(v|h,¬x,e,a)P(h|e,a). Thus a u'-maximiser with the standard probability distribution is the same as agent (3), which proves the stability of that agent type.
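
Written out, the substitution in the previous paragraph is just the following, using the screening-off assumption P(x|h,e,a) = P(x|h); the ¬x term is analogous, with an extra factor of λ:

```latex
% Expansion of one term of (4), showing it reduces to the corresponding term of (3).
\begin{align*}
u'(v,h,x)\, P(v,h,x \mid e,a)
  &= \frac{u(v)}{P(x \mid h)} \; P(v \mid h,x,e,a)\, P(x \mid h,e,a)\, P(h \mid e,a) \\
  &= u(v)\, P(v \mid h,x,e,a)\, P(h \mid e,a).
\end{align*}
```

Summing over v and h then recovers expression (3) exactly.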


Beyond the future: going crazy or staying sane

What happens after the event X has come to pass? In that case, agent (4), the u'-maximiser, will continue as normal. Its behaviour will not be unusual as long as neither λ nor 1/λ is close to 0. The same goes for agent (2).

In contrast, agent (3) will no longer be stable after X, as H no longer screens off evidence after that point. And agent (1) was never stable in the first place; now it must deny all the evidence it sees in order to conclude that impossible events actually happened. But what of those two agents, or of the stable ones if λ or 1/λ were close to 0? In particular, what if λ falls below the probability that the agent is deluded in its observation of X?

In those cases, it's easy to argue that the agents would effectively go insane, believing wild and random things to justify their delusions.

But maybe not, in the end. Suppose that you, as a human, believe an untrue fact - maybe that Kennedy was killed on the 23rd of November rather than the 22nd. Maybe you construct elaborate conspiracy theories to account for the discrepancy. Maybe you posit an early mistake by some reporter that was then picked up and repeated. After a while, you discover that all the evidence you can find points to the 22nd. Thus, even though you believe with utter conviction that the assassination was on the 23rd, you learn to expect that the next piece of evidence will point to the 22nd. You look for the date-changing conspiracy, and never discover anything about it; and thus learn to expect they have covered their tracks so well they can't be detected.

In the end, the expectations of this "insane" agent could come to resemble those of normal agents, as long as there's some possibility of a general explanation of all the normal observations (eg a well-hidden conspiracy) given the incorrect assumption.

Of course, the safer option is just to correct the agent towards some sensible goal soon after X.

“Be A Superdonor!”: Promoting Effective Altruism by Appealing to the Heart

9 Gleb_Tsipursky 09 November 2015 06:20PM

(Cross-posted on The Life You Can Save blog, the Intentional Insights blog, and the Effective Altruism Forum).


This will be mainly of interest to Effective Altruists

Effective Altruism does a terrific job of appealing to the head. There is no finer example than GiveWell’s meticulously researched and carefully detailed reports laying out the impact per dollar on giving to various charities. As a movement, we are at the cutting edge of what we can currently evaluate about the effectiveness of how we optimize QALYs, although of course much work remains to be done.


However, as seen in Tom Davidson’s recent piece, "EA's Image Problem," and my “Making Effective Altruism More Emotionally Appealing,” we currently do not do a very good job of appealing to the heart. We tend to forget Peter Singer’s famous quote that Effective Altruism “combines both the heart and the head.” When we try to pitch the EA movement to non-EAs, we focus on the head, not the heart.


Now, I can really empathize with this perspective. I am much more analytically oriented than the baseline, and I find this to be the case for EAs in general. Yet if we want to expand the EA movement, we can't fall into the typical mind fallacy and assume that what worked to convince us will convince others who are less analytical and more emotionally oriented thinkers.


Otherwise, we leave huge sums of money on the table that otherwise could have gone to effective charities. For this reason, I and several others have started a nonprofit organization, Intentional Insights, dedicated to spreading rational thinking and effective altruism to a wide audience using effective marketing techniques. Exploring the field of EA organizations, I saw that The Life You Can Save already has some efforts to reach out to a broad audience, through its Charity Impact Calculator and its Giving Games, and actively promoted its efforts.


I was excited when Jon Behar, the COO & Director of Philanthropy Education at TLYCS, reached out to me and suggested collaborating on promoting EA to a broad audience using contemporary marketing methods that appeal to the heart. In a way, this is not surprising, as Peter Singer’s drowning child problem is essentially an effort to appeal to people’s hearts in a classroom setting. Using marketing methods that aim to reach a broad audience is a natural evolution of this insight.


Jon and I problem-solved how to spread Effective Altruism effectively, and came up with the idea of a catchphrase that we thought would appeal well to people’s emotions: “Be a Superdonor!” This catchphrase conveys in a short burst crucial information about Effective Altruism, namely that one’s donations can have the most powerful impact through giving to the charities that optimize QALYs the most.


More importantly, it appeals well to the heart. Superdonor conveys the feeling of power – you can be super in your donations! Superdonor conveys an especially strong degree of generosity. Superdonor conveys a feeling of superiority, as in better than other donors. In other words, even if you donate less, you can still be better than other donors by donating more effectively. This appeals to the “Keeping Up With the Joneses” effect, a powerful force in guiding our spending.


Just as importantly, “Be a Superdonor!” is easily shareable on social media, a vital component of modern marketing in the form of social proof. People get to show their pride and increase their social status by posting on their Facebook or Twitter how they are a Superdonor. This makes their friends curious about what it means to be a Superdonor, since that is an appealing and emotionally resonant phrase. Their friends check out their links, and get to find out about Effective Altruism. Of course, it is important that the link go to a very clear and emotionally exciting description of how one can be a Superdonor through donating.


Likewise, people should get credit for being a Superdonor through getting others to donate: by sharing about it on social media, by talking about it to friends, and by getting their friends to go to their local EA groups. Thus, we get the power of social affiliation, a crucial aspect of motivation, working on behalf of Effective Altruism. A particularly effective strategy for social affiliation here might be to combine “Be A Superdonor” with Giving Games, both the in-person version that TLYCS runs now and perhaps a web app version that helps create a virtual community setting conducive to social affiliation.


Now, some EAs might be concerned that the EA movement would lose its focus on the head through these efforts. I think that is a valid concern, and we need to be aware of the dangers here. We still need to put energy into the excellent efforts of GiveWell and other effective charity evaluators. We still need to be concerned with existential risk, even if it does not present us in the best light to external audiences.


Therefore, as part of the Superdonor efforts, we should develop compassionate strategies to educate emotionally-oriented newcomers about more esoteric aspects of Effective Altruism. For example, EA groups can have people who are specifically assigned as mentors for new members, who can help guide their intellectual and emotional development alike. At the same time, we need to accept that some of those emotionally-oriented thinkers will not be interested in doing so.


This is quite fine, as long as we remember our goal of making the strongest impact on the world by optimizing QALYs through not leaving huge sums of money on the table. Consider the kind of benefit you can bring to the EA movement if you can channel the giving of emotionally-oriented thinkers toward effective charities. Moreover, think of the positive network effect of them getting their friends to donate to effective charities. Think of whether you can make a much bigger difference in doing the most good per energy of effort by focusing more of your own volunteering and giving on EA outreach in comparison to other EA-related activities. This is what inspired my own activities at Intentional Insights, and the recent shifts of the TLYCS toward effective outreach.


What are your thoughts about reaching out to more emotionally-oriented thinkers using these and other modern marketing strategies? If you support doing so, what do you think you can do personally to promote Effective Altruism effectively? Would love to hear your thoughts about it in comments below, and happy to talk to anyone who wants to engage with the Intentional Insights project: my email is gleb@intentionalinsights.org.


Open thread, Nov. 09 - Nov. 15, 2015

3 MrMind 09 November 2015 08:07AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Solstice 2015: What Memes May Come (Part II - Atheism, Rationality and Death)

7 Raemon 07 November 2015 10:23PM

Winter is coming, and so is Solstice season. There'll be large rationality-centric-or-adjacent events in NYC, the Bay Area, and Seattle (and possibly other places - if you're interested in running a Solstice event or learning what that involves, send me a PM). In NYC, there'll be a general megameetup throughout the weekend, for people who want to stay through Sunday afternoon, and if you're interested in shared housing you can fill out this form.

The NYC Solstice isn't running a kickstarter this year, but I'll need to pay for the venue by November 19th ($6125). So if you are planning on coming it's helpful to purchase tickets sooner rather than later. (Or preorder the next album or 2016 Book of Traditions, if you can't attend but want to support the event).


This is the second post in the Solstice 2015 sequence, discussing plans and musings on the potential cultural impact of the Solstice. The first post was here

This explores the Solstice's relationship with Atheism, Rationality, and Death.


Some may be surprised that I don't consider atheism particularly core to the Solstice.

This probably will remain a part of it for the foreseeable future. Atheists happen to be the demographic most hungry for some kind of meaningful winter traditions. And Beyond the Reach of God, a powerful essay that (often) plays an important role in the holiday, happens to frame its argument around the non-existence of God.

But this doesn't actually seem especially inevitable or necessary. Beyond the Reach of God *isn't* about God, per se (at least, I don't see it that way). It's about the absolute, unforgiving neutrality of the laws of physics. It's about all the other sacred things that even atheists believe in, which they may make excuses for.

I think it's *currently* useful for there to be a moment where we acknowledge that there is no God to bail us out, and that this is really important. But this may not always be the case. I would be pretty happy if, in 50 years, all references to God were gone from the Solstice (because the question of God was no longer one that preoccupied our society in the first place), but those crucial points were made in other ways. It can be a holiday for atheists without being about that in any specific way.


It's common throughout the secular world to speak highly of "rationality." But oftentimes, what that means in practice is pointing out the mistakes that other people are making, the fallacies they're committing.

The brand of rationality that spawned the Solstice has a different meaning: a specific dedication to looking at the way your own mind and beliefs are flawed, and actively seeking to correct them. Looking for the sacred cows of your culture (be it liberal, libertarian, academic or otherwise) and figuring out how they have blinded you.

Rationality is... sort of a central theme, but in an understated way. It underlies everything going on in the event, but hasn't really been a central character.

This might be a mistake. In particular because rationality's role is very subtle, and easy to miss. Axial Tilt is the reason for the season, not crazy sun gods. But the reason that's important is a larger principle: that beliefs are entangled, that habits of excuse-making for outdated beliefs can be dangerous -- and that this can apply not just to antiquated beliefs about sun gods but (more importantly) to your current beliefs about politics and finance and love and relationships.

Aesthetically, in a culture of rationalists, I think it's correct for "rationality" to be very understated at the Solstice - there are plenty of other times to dwell upon it. But since Solstice is going to get promoted outside of the culture that spawned it, it may be best to include songs or stories that make its epistemic core more explicit, so as not to be forgotten. It would be very easy for the Solstice to become about making fun of religion, and that is very much not my goal.
This year I have a story planned that will end up putting this front and center, but that won't make for a very good "permanent" feature of the Solstice. I'm interested in people's comments on how to address this in a more long-term way.


I think one of the most valuable elements of the Solstice is the way it addresses death. Atheists or "nones" don't really have a centralized funeral culture, and this can actually be a problem - it means that when someone dies, you suddenly have to scramble to put together an event that feels earnest and true, that helps you grapple with one of life's harshest events, and many people are too overwhelmed to figure out how to do so.

Funerals, more than any kind of secular ceremony, benefit from prior ritualization - a set of clear instructions on what to do that feel familiar and comfortable. It's not the time to experiment with novel, crazy ideas, even genuinely good ones.

So Solstice provides a venue to test out pieces of funeral ritual, and let the good ones become familiar. It also provides a time, in the interim, for people who haven't had the chance to grieve properly because their loved one's funeral was theistic-by-default.

I think for this to work optimally, it needs to be a bit more deliberate. There's a lot of death-centric songs in the Solstice (probably more than there should be), but relatively few that actually feel appropriate for a funeral. I'd like to look for opportunities to do things more directly-funeral-relevant, while still appropriate for the overall Solstice arc.

There's also a deeper issue here: secular folk vary wildly in how they relate to death. Some people are looking for a way to accept it. Other people think the very idea of accepting death is appalling.

Common Ground

I have my own opinions here, and I'll dive a bit more deeply into this in my next post. But for now, I'll just note that I want to help shape a funeral culture that does feel distinctive, with traditions that feel at least a little oddly specific (to avoid a sort of generic, store-brand feel), but which also strike a kind of timeless, universal chord. Funerals are a time when wildly disparate friends and family need to come together and find common ground.

When my grandmother died, I went to a Catholic mass. Two hundred people spoke in unison "our father, who art in heaven, hallowed be thy name." The words themselves meant very little, but the fact that two hundred people could speak them flawlessly together felt very meaningful to me. And I imagine it'd have been even more meaningful, if I believed in them.

In the secular world, not everyone's into chanting things as a group. But it still seems to me that having words that are familiar to you, which you can at least listen to together and know that two hundred other people also find them meaningful, could be very important.

Now, humanity has certainly not lacked for beautiful poetry surrounding death. Nor even beautiful non-supernatural poetry surrounding death. Nor even beautiful poetry-surrounding-death-that-matches-your-(yes-your)-specific-worldview-surrounding-death. But what it does seem to be lacking are well-known cultural artifacts that a wide array of people would feel comforted by, in a very primal way.

There's a particular poem that's meaningful to me. There's another poem (very similar, both relating to the turning of the seasons and our changing relationship with the seasons over time), that's meaningful to my girlfriend. But they're just different enough that neither would feel safe and familiar to both of us, in the event of someone's death.

So something I'd like to do with the Solstice, is to coordinate (across all Solstices, across the nation, and perhaps in other holidays and events) to find words or activities to share, that can become well known enough that everyone at a funeral could feel united.

An actionable question:

In particular, I think I'm looking for a poem (not intended to be the only element-addressing-death in the Solstice, but one that has a shot at widespread adoption),  with a few qualities:

 - Short enough (or with a simple refrain) that people can speak it aloud together.
 - Whether metaphorical or not, hints at a theme of relating to memories and the preserving thereof. (I think this is something most worldviews can relate to)
 - All things being equal, something fairly commonly known.
 - Since everyone's going to want their own favorite poem to be the one adopted, people interested in this problem should try applying some meta-cooperative-considerations - what do you wish other people with their own favorite poems were doing to try and settle on this?

If you have either suggestions for a poetic contender, or disagreements with my thought process here, let me know!


In the next (probably final) post of this mini-sequence, I'll be talking about Humanism, Transhumanism, and the Far Future.

How do you choose areas of scientific research?

5 FrameBenignly 07 November 2015 01:15AM

I've been thinking lately about what is the optimal way to organize scientific research both for individuals and for groups. My first idea: research should have a long-term goal. If you don't have a long-term goal, you will end up wasting a lot of time on useless pursuits. For instance, my rough thought process of the goal of economics is that it should be “how do we maximize the productive output of society and distribute this is in an equitable manner without preventing the individual from being unproductive if they so choose?”, the goal of political science should be “how do we maximize the government's abilities to provide the resources we want while allowing individuals the freedom to pursue their goals without constraint toward other individuals?”, and the goal of psychology should be “how do we maximize the ability of individuals to make the decisions they would choose if their understanding of the problems they encounter was perfect?” These are rough, as I said, but I think they go further than the way most researchers seem to think about such problems.


Political science seems to do the worst in this area in my opinion. Very little research seems to have anything to do with what causes governments to make correct decisions, and when they do research of this type, their evaluation of correct decision making often is based on a very poor metric such as corruption. I think this is a major contributor to why governments are so awful, and yet very few political scientists seem to have well-developed theories grounded in empirical research on ways to significantly improve the government. Yes, they have ideas on how to improve government, but they're frequently not grounded in robust scientific evidence.


Another area I've been considering is the search parameters for moving through research topics. An assumption I have is that the overwhelming majority of possible theories are wrong, such that only a minority of areas of research will result in something other than a null outcome. Another assumption is that correct theories are generally clustered. If you get a correct result in one place, there will be a lot more correct results in a related area than for any randomly chosen theory. There seem to be two major methods for searching through the landscape of possibilities. One method is to choose an area where you have strong reason to believe there might be a cluster nearby that fits with your research goals and then randomly pick isolated areas of that research area until you get to a major breakthrough, then go through the various permutations of that breakthrough until you have a complete understanding of that particular cluster area of knowledge. Another method would be to take out large chunks of research possibilities, and to just throw the book at them, basically. If you come back with nothing, then you can conclude that the entire section is empty. If you get a hit, you can then isolate the many subcomponents and figure out what exactly is going on. Technically I believe the chunking approach should be slightly faster than the random approach, but only by a slight amount unless the random approach is overly isolated. If the cluster of most important ideas is at 10 to the -10th power, and you isolate variables at 10 to the -100th power, then time will be wasted going back up to the correct level. You have to guess what level of isolation will result in the most important insights.


One mistake I think is to isolate variables, and then proceed through the universe of possibilities systematically one at a time. If you get a null result in one place, it's likely true that very similar research will also result in a null result. Another mistake I often see is researchers not bothering to isolate after they get a hit. You'll sometimes see thousands of studies on the exact same thing without any application of reductionism eg the finding that people who eat breakfast are generally healthier. Clinical and business researchers seem to most frequently make this mistake of forgetting reductionism.


I'm also thinking through what types of research are most critical, but haven't gotten too far in that vein yet. It seems like long-term research (40+ years until major breakthrough) should be centered around the singularity, but what about more immediate research?

New LW Meetup: Zurich

2 FrankAdamek 06 November 2015 03:46PM

This summary was posted to LW Main on October 30th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

Updating on hypotheticals

4 casebash 06 November 2015 11:49AM

This post is based on a discussion with ChristianKl on Less Wrong Chat. Thanks!

Many people disagreed with my previous writings on hypotheticals on Less Wrong (link 1, link 2). For those who still aren’t convinced, I’ll provide another argument on why you should take hypotheticals seriously. Suppose, as a way to try to sell someone on utilitarian ethics, you are discussing whether it’d be okay to flick the switch if a train was about to collide with and destroy an entire world (see trolley problem). The other person objects that this is an unrealistic situation and so there is no point wasting time on this discussion.

This may seem unreasonable, but I suppose a person who believes that their time is very valuable may not feel that it is actually worth their time indulging in the hypothetical that A->B unless the other person is willing to explain to them why this result would relate to how we should act in the real world. This is especially likely to be true if they have had similar discussions before and so have a low prior that the other person will be able to relate it to the real world.

However, at this stage, they almost certainly have to update, in the sense that if you are following the rule of updating on new evidence, you have most likely already received new evidence. The argument is as follows: As soon as you have heard A->B (if it would save a world, I would flick a switch), your brain has already performed a surface level evaluation on that argument. Realistically, the thinker in the situation probably knows that it is really tough to make the argument that we should allow an entire world to be destroyed instead of ending one life. Now, the fact that it is tough to argue against something doesn’t mean that it should be accepted. For example, many philosophical proofs or halves of mathematical paradoxes seem very hard to argue against at first, but we may have an intuitive sense that there is a flaw there to be found if we are smart enough and look hard enough.

However, even if we aren’t confident in the logic we still have to update our priors, once we know that there is an argument for it that at least appears to check out. Obviously we will update to a much lesser degree than if we were confident in the logic, but we still have to update to some extent, even if we think the chance of A->B being analogous to the real world is incredibly small, as there will always be *some* chance that it is analogous, assuming the other person isn’t talking nonsense. So even though the analogy hardly seems to fit the real world and even though you’ve perhaps spent only a second thinking about whether A->B checks out, you’ve still got to update. I'll add another quick note: you only have to update on the first instance; when you see the same or a very similar problem again, you don't have to update.

How does this play out? An intellectually honest response would be along the lines of: “Okay, your argument seems to check out on first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it was true, why should the real world be anything like A?”. This is much more honest than simply trying to dismiss the hypothetical by stating that A is nothing like reality.

There’s one objection that I need to answer. Maybe you say that you haven’t considered A->B at all. I would be really skeptical of this. There is a small chance I’m committing the typical mind fallacy, but I’m pretty sure that your mind considered both A->B and “this is analogous with reality”, and you decided to argue about the second because you didn’t find a strong counter-argument against A->B. And if you did actually find a strong counter-argument, but chose to challenge the hypothetical instead, why not use your counter-argument? Why not engage with your opponent directly and take down their argument, as this is more persuasive than dodging the question? There probably are situations where this seems reasonable, such as if the argument against A->B is very long and complicated, but you think it is much easier to convince the other person that the situation isn’t analogous. These situations might exist, but I suspect that they are relatively rare.

LINK: An example of the Pink Flamingo, the obvious-yet-overlooked cousin of the Black Swan

3 polymathwannabe 05 November 2015 04:55PM

India vs. Pakistan: the nuclear option is dangerously close, and nobody seems to want to prevent it


Rationality Reading Group: Part M: Fragile Purposes

5 Gram_Stone 05 November 2015 02:08AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.

Welcome to the Rationality reading group. This fortnight we discuss Part M: Fragile Purposes (pp. 617-674). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

M. Fragile Purposes

143. Belief in Intelligence - What does a belief that an agent is intelligent look like? What predictions does it make?

144. Humans in Funny Suits - It's really hard to imagine aliens that are fundamentally different from human beings.

145. Optimization and the Intelligence Explosion - An introduction to optimization processes and why Yudkowsky thinks that an intelligence explosion would be far more powerful than calculations based on human progress would suggest.

146. Ghosts in the Machine - There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.

147. Artificial Addition - If you imagine a world where people are stuck on the "artificial addition" (i.e. machine calculator) problem, the way people currently are stuck on artificial intelligence, and you saw them trying the same popular approaches taken today toward AI, it would become clear how silly they are. Contrary to popular wisdom (in that world or ours), the solution is not to "evolve" an artificial adder, or invoke the need for special physics, or build a huge database of solutions, etc. -- because all of these methods dodge the crucial task of understanding what addition involves, and instead try to dance around it. Moreover, the history of AI research shows the problems of believing assertions one cannot re-generate from one's own knowledge.

148. Terminal Values and Instrumental Values - Proposes a formalism for a discussion of the relationship between terminal and instrumental values. Terminal values are world states that we assign some sort of positive or negative worth to. Instrumental values are links in a chain of events that lead to desired world states.

149. Leaky Generalizations - The words and statements that we use are inherently "leaky"; they do not precisely convey absolute and perfect information. Most humans have ten fingers, but if you know that someone is a human, you cannot confirm (with probability 1) that they have ten fingers. The same holds with planning and ethical advice.

150. The Hidden Complexity of Wishes - There are a lot of things that humans care about. Therefore, the wishes that we make (as if to a genie) are enormously more complicated than we would intuitively suspect. In order to safely ask a powerful, intelligent being to do something for you, that being must share your entire decision criterion, or else the outcome will likely be horrible.

151. Anthropomorphic Optimism - Don't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.

152. Lost Purposes - On noticing when you're still doing something that has become disconnected from its original purpose.


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part N: A Human's Guide to Words (pp. 677-801) and Interlude: An Intuitive Explanation of Bayes's Theorem (pp. 803-826). The discussion will go live on Wednesday, 18 November 2015, right here on the discussion forum of LessWrong.

Using the Copernican mediocrity principle to estimate the timing of AI arrival

2 turchin 04 November 2015 11:42AM

Gott famously estimated the future time duration of the Berlin wall's existence:

“Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race.” (https://en.wikipedia.org/wiki/J._Richard_Gott)

The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task. So it is reasonable to apply Gott’s method.

AI research began in 1950, and so is now 65 years old. If we are currently at a random moment during AI research, then there is a 50% probability of AI being created in the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say that the probability of its creation within the next 1300 years is 95 per cent. So we get a rather vague prediction that AI will almost certainly be created within the next millennium or so, and few people would disagree with that.

But if we include the exponential growth of AI research in this reasoning (in the same way as we do in the Doomsday argument, where we use birth rank instead of time and thus account for the growth of the population), we get a much earlier predicted date.

We can get data on AI research growth from Luke’s post:

“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.”

From this we could conclude that the doubling time in AI research is five to ten years (updated by the recent boom in neural networks, which again suggests about five years).

This means that during the next five years more AI research will be conducted than in all the previous years combined. 

If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020) and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035.
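
A minimal sketch of the arithmetic behind both estimates (my own illustration; the assumed inputs are that AI research began in 1950, that "now" is 2015, and that research output doubles roughly every five years):

```python
from math import log2

# Gott-style bounds on the remaining duration of AI research: first uniform in
# calendar time, then uniform in cumulative research output (exponential growth).
T_PAST = 2015 - 1950        # 65 years of AI research so far
DOUBLING_TIME = 5           # assumed doubling time of research output

def remaining_years_uniform_time(t_past, credence):
    """With the given credence, the remaining duration is at most
    t_past * credence / (1 - credence) if we are at a random point in time."""
    return t_past * credence / (1 - credence)

def remaining_years_uniform_research(doubling_time, credence):
    """Same bound, but assuming we are at a random point in *cumulative research
    output*: future output is at most credence/(1-credence) times past output,
    which under exponential growth takes log2(1 + ratio) doubling times."""
    ratio = credence / (1 - credence)
    return doubling_time * log2(1 + ratio)

for credence in (0.5, 0.95):
    print(credence,
          round(2015 + remaining_years_uniform_time(T_PAST, credence)),
          round(2015 + remaining_years_uniform_research(DOUBLING_TIME, credence)))
# 0.5  -> 2080 (uniform in time)  vs 2020 (uniform in research output)
# 0.95 -> 3250 (uniform in time)  vs ~2037 (uniform in research output)
```

The 95% bound under uniform calendar time comes out to 65 × 19 = 1235 extra years, which the post rounds to roughly 1300; the exponentially-adjusted 95% bound of about 22 years is close to the post's 15-20 year figure.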

This conclusion itself depends on several assumptions:

•   AI is possible

•   The exponential growth of AI research will continue 

•   The Copernican principle has been applied correctly.


Interestingly, this coincides with other methods of predicting AI timing:

•   Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)

•   Surveys of experts in the field

•   Prediction of Singularity based on extrapolation of history acceleration (Forrester – 2026, Panov-Skuns – 2015-2020)

•   Brain emulation roadmap

•   Computer power brain equivalence predictions

•   Plans of major companies


It is clear that this implementation of the Copernican principle may have many flaws:

1. One possible counterargument here is something akin to Murphy's law, specifically one which claims that any particular complex project requires much more time and money than expected before it can be completed. It is not clear how it could be applied to many competing projects. But the field of AI is known to be more difficult than it seems to researchers.

2. Also, the moment at which I am observing AI research is not really random, as it was in the Doomsday argument created by Gott in 1993, and I probably could not have applied the argument before it became known.

3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself, it would be simpler, but I do not do any actual work on AI.


Perhaps this method of future prediction should be tested on simpler tasks. Gott successfully tested his method by predicting how long Broadway shows would keep running. But now we need something more meaningful, yet still testable within a one-year timeframe. Any ideas?



Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife

6 crmflynn 02 November 2015 11:03PM

This is a bit rough, but I think that it is an interesting and potentially compelling idea. To keep this short, and accordingly increase the number of eyes on it, I have only sketched the bare bones of the idea. 

     1)      Empirically, people have varying intuitions and beliefs about causality, particularly in Newcomb-like problems (http://wiki.lesswrong.com/wiki/Newcomb's_problem, http://philpapers.org/surveys/results.pl, and https://en.wikipedia.org/wiki/Irresistible_grace).

     2)      Also, as an empirical matter, some people believe in taking actions after the fact, such as one-boxing, or Calvinist “irresistible grace”, to try to ensure or conform with a seemingly already determined outcome. This might be out of a sense of retrocausality, performance, moral honesty, etc. What matters is that we know that they will act it out, despite it violating common sense causality. There has been some great work on decision theory on LW about trying to thread this needle well.

     3)      The second disjunct of the simulation argument (http://wiki.lesswrong.com/wiki/Simulation_argument) shows that the decision making of humanity is evidentially relevant to what our subjective credence should be that we are in a simulation. That is to say, if we are actively headed toward making simulations, we should increase our credence that we are in one; if we are actively headed away from making simulations, through either existential risk or law/policy against them, we should decrease that credence.

      4)      Many, if not most, people would like for there to be a pleasant afterlife, especially one in which we could be reunited with loved ones.

     5)      There is no reason to believe that simulations which are otherwise nearly identical copies of our world could not contain, after the simulated bodily death of the participants, an extremely long-duration, though finite, "heaven"-like afterlife shared by simulation participants.

     6)      Our heading towards creating such simulations, especially if they were capable of nesting further simulations, should increase our credence that we exist in such a simulation, and perhaps that we should expect a heaven-like afterlife of long, though finite, duration.

     7)      Those who believe in alternative causality, or retrocausality, in Newcomb-like situations should be especially excited about the opportunity to push the world towards surviving, allowing these types of simulations, and creating them, as it would potentially suggest, analogously, that if they work towards creating simulations with heaven-like afterlives, they might in some sense be “causing” such a heaven to exist for themselves, and even for friends and family who have already died. Such an idea of life after death, and especially of being reunited with loved ones, can be extremely compelling.

     8)      I believe that people matching the above description exist, that is, people who both hold an intuition of alternative causality and find such a heaven-like afterlife compelling. Further, the existence of such people, and their associated motivation to try to create such simulations, should increase the credence, even of two-boxing types, that we already live in such a world with a heaven-like afterlife. This is because knowledge of a motivated minority desiring simulations should increase credence in the likely success of simulations. This is essentially showing that “this probably happened before, one level up” from the two-box perspective.

     9)      As an empirical matter, I also think that there are people who would find the idea of creating simulations with heaven-like afterlives compelling from a simply altruistic perspective, even if they are not one-boxers: it is a nice thing to do for the future sim people, who can, probabilistically, have a much better existence than biological children on earth can, and it is a nice thing to do to increase the credence (and emotional comfort) of both one-boxers and two-boxers in our world who think that there might be a life after death.

     10)   This creates the opportunity for a secular movement in which people work towards creating these simulations, and use this work and its potential success to derive comfort and meaning from their lives. For example, making donations after a loved one’s death, partly symbolically and partly hopefully, to a think tank that creates or promotes such simulations, or that works to avoid existential threats.

     11)   There is at least some room for Pascalian considerations even for two-boxers who allow for some humility in their beliefs. Nozick believed that one-boxers would become two-boxers if the guaranteed box's payout were raised to $900,000, and that two-boxers would become one-boxers if it were lowered to $1. Similarly, working towards these simulations, even if you do not find it altruistically compelling, and even if you think that the odds of alternative or retrocausality are infinitesimally small, might make sense in that the reward could be extremely large, including potentially trillions of lifetimes' worth of time spent in an afterlife “heaven” with friends and family.

Finally, this idea might be one worth filling in (I have been doing so in my private notes for over a year, but am a bit shy to debut all of that just yet; even working up the courage to post this was difficult), if only because it is interesting, and could be used as a hook to get more people interested in existential risk, including the AI control problem. This is because existential catastrophe is probably the greatest enemy of credence in the future existence of such simulations, and accordingly of our reasonable credence in thinking that we have such a heaven awaiting us after death now. A short hook headline like “avoiding existential risk is key to the afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so creating publicity which would help in finding more like-minded folks to get involved in the work of MIRI, FHI, CEA, etc. There are also some really interesting ideas about acausal trade, and game theory between higher and lower worlds, as a form of “compulsion” in which they punish worlds for not creating heaven-containing simulations (thereby affecting their credence as observers of the simulation), in order to reach an equilibrium in which simulations with heaven-like afterlives are universal, or nearly universal. More on that later if this is received well.

Also, if anyone would like to join with me in researching, bull-sessioning, or writing about this stuff, please feel free to IM me. And if anyone has a really good, non-obvious pin with which to pop my balloon, preferably in a gentle way, it would be really appreciated; I am spending a lot of energy and time on this, and would rather not if it is fundamentally flawed in some way.

Thank you.


November 11 Updates and Edits for Clarification

     1)      There seems to be confusion about what I mean by self-location and credence. A good way to think of this is the Sleeping Beauty Problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem)

If I imagine myself as Sleeping Beauty (and who doesn’t?), and I am asked on Sunday what my credence is that the coin will be tails, I will say 1/2. If I am awakened during the experiment without being told which day it is and am asked what my credence is that the coin was tails, I will say 2/3. If I am then told it is Monday, I will update my credence to 1/2. If I am told it is Tuesday, I update my credence to 1. If someone asks me two days after the experiment about my credence that the coin was tails, and I somehow still do not know the days of the week, I will say 1/2. Credence changes with where you are, and with what information you have. As we might be in a simulation, we are somewhere in the “experiment days”, and information can help orient our credence. As humanity potentially has some say in whether or not we are in a simulation, information about how humans make decisions about these types of things can and should affect our credence.
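For concreteness, here is a minimal sketch (mine, not the author's) that reproduces the thirder credences quoted above by treating each possible awakening as an equally weighted observation; it illustrates the bookkeeping rather than arguing for thirdism.

```python
# Thirder bookkeeping for Sleeping Beauty: each possible awakening
# (coin outcome, day) is weighted equally; being told the day simply filters
# the list of possible awakenings before counting.

from fractions import Fraction

AWAKENINGS = [("heads", "Monday"), ("tails", "Monday"), ("tails", "Tuesday")]

def credence_tails(known_day=None):
    pool = [a for a in AWAKENINGS if known_day is None or a[1] == known_day]
    tails = [a for a in pool if a[0] == "tails"]
    return Fraction(len(tails), len(pool))

print(credence_tails())            # 2/3  (awakened, day unknown)
print(credence_tails("Monday"))    # 1/2  (told it is Monday)
print(credence_tails("Tuesday"))   # 1    (told it is Tuesday)
```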

Imagine Sleeping Beauty is a lesswrong reader. If Sleeping Beauty is unfamiliar with the simulation argument, and someone asks her about her credence of being in a simulation, she probably answers something like 0.0000000001% (all numbers for illustrative purposes only). If someone shows her the simulation argument, she increases to 1%. If she stumbles across this blog entry, she increases her credence to 2%, and adds some credence to the additional hypothesis that it may be a simulation with an afterlife. If she sees that a ton of people get really interested in this idea, and start raising funds to build simulations in the future and to lobby governments both for great AI safeguards and for regulation of future simulations, she raises her credence to 4%. If she lives through the AI superintelligence explosion and simulations are being built, but not yet turned on, her credence increases to 20%. If humanity turns them on, it increases to 50%. If there are trillions of them, she increases her credence to 60%. If 99% of simulations survive their own run-ins with artificial superintelligence and produce their own simulations, she increases her credence to 95%. 

2)  This set of simulations does not need to recreate the current world or any specific people in it. That is a different idea, and it is not necessary to this argument. As written, the argument is premised on the idea of creating fully unique people. The point would be to increase our credence that we are functionally identical in type to the unique individuals in the simulation. This is done by creating ignorance or uncertainty in simulations, so that the majority of people similarly situated, in a world which may or may not be in a simulation, are in fact in a simulation. This should, in our ignorance, increase our credence that we are in a simulation. The point is about how we self-locate, as discussed in the original article by Bostrom. It is a short 12-page read, and if you have not read it yet, I would encourage it: http://simulation-argument.com/simulation.html. The point I was making about past loved ones was to bring up the possibility that the simulations could be designed to transfer people to a separate afterlife simulation where they could be reunited after dying in the first part of the simulation. This was not about trying to create something for us to upload ourselves into, along with attempted replicas of dead loved ones. This staying in one simulation through two phases (a short life and a relatively long afterlife) also has the advantage of circumventing the teletransportation paradox, as “all of the person” can be moved into the afterlife part of the simulation.  


Solstice 2015: What Memes May Come? (Part I)

13 Raemon 02 November 2015 05:13PM

Winter is coming, and so is Solstice season. There'll be large rationality-centric-or-adjacent events in NYC, the Bay Area, and Seattle (and possibly other places - if you're interested in running a Solstice event or learning what that involves, send me a PM). In NYC, there'll be a general megameetup throughout the weekend for people who want to stay through Sunday afternoon, and if you're interested in shared housing you can fill out this form.

The NYC Solstice isn't running a kickstarter this year, but I'll need to pay for the venue by November 19th ($6125). So if you are planning on coming it's helpful to purchase tickets sooner rather than later. (Or preorder the next album or 2016 Book of Traditions, if you can't attend but want to support the event).


I've been thinking for the past couple years about the Solstice as a memetic payload.

The Secular Solstice is a (largely Less Wrong inspired) winter holiday, celebrating how humanity faced the darkest season and transformed it into a festival of light. It celebrates science and civilization. It honors the past, revels in the present and promises to carry our torch forward into the future.

For the first 2-3 years, I had a fair amount of influence over the Solstices held in Boston and San Francisco, as well as the one I run in NYC. Even then, the holiday has evolved in ways I didn't quite predict. This has happened both because different communities took it in somewhat different directions, and because (even in the events I run myself) factors come into play that shape it. Which musicians are available to perform, and how does their stage presence affect the event? Which people from which communities will want to attend, and how will their energy affect things? Which jokes will they laugh at? What will they find poignant?

On top of that, I'm deliberately trying to spread the Solstice to a larger audience. Within a couple years, if I succeed, more of the Solstice will be outside of my control than within it. 

Is it possible to steer a cultural artifact into the future, even after you let go of the reins? How? Would you want to?

In this post, I lay out my current thoughts on this matter. I am interested in feedback, collaboration and criticism.

Lessons from History?

(Epistemic status: I have not really fact checked this. I wouldn't be surprised if the example turned out to be false, but I think it illustrates an interesting point regardless of whether it's true)

Last year after Solstice, I was speaking with a rationalist friend with a Jewish background. He made an observation. I lack the historical background to know if it is exactly accurate (feel free to weigh in in the comments), but his notion was as follows:

Judaism has influenced the world in various direct ways. But a huge portion of its influence (perhaps the majority) has been indirectly through Christianity. Christianity began with a few ideas it took from Judaism that were relatively rare. Monotheism is one example. The notion that you can turn to the Bible for historical and theological truth is another.

But buried in that second point is something perhaps more important: religious truth is not found in the words of your tribal leaders and priests. It's found in a book. The book contains the facts-of-the-matter. And while you can argue cleverly about the book's contents, you can't disregard it entirely.

Empiricists may get extremely frustrated with creationists for refusing to look outside their book (at the natural world) for answers. But there was a point where the fact of the matter lay entirely in "what the priests/ruler said" as opposed to "what the book said". 

In this view, Judaism's primary memetic success is in helping to seed the idea of scholarship, and a culture of argument and discussion.

I suspect this story is simplified, but these two points seem meaningful: a memeplex's greatest impact may be indirect, and may not have much to do with the attributes that are most salient at first glance to a layman.



So far, I've deliberately encouraged people to experiment with the Solstice. Real rituals evolve in the wild, and adapt to the needs of their community. And a major risk of ritual is that it becomes ossified, turning either hollow or dangerous. But if a ritual is designed to be mutable, what gives it its identity? What separates a Secular Solstice from a generic humanist winter holiday?

The simplest, most salient and most fun aspects of a ritual will probably spread the fastest and farthest. If I had to sum up the Solstice in nine words, they would be:

Light. Darkness. Light.
Past. Present. Future.
Humanity. Science. Civilization.

I suspect that without any special effort on my part (assuming I keep promoting the event but don't put special effort into steering its direction), those 9 pieces would remain a focus of the event, even if groups I never talk to adopt it for themselves.

The most iconic image of the Solstice is the Candlelit story. At the apex of the event, when all lights but a single candle have been extinguished, somebody tells a story that feels personal, visceral. It reminds us that this world can be unfair, but that we are not alone, and we have each other. And then the candle is blown out, and we stand in the absolute darkness together.

If any piece of the Solstice survives, it'll be that moment.

If that were all that survived, I think that'd be valuable. But it'd also be leaving 90%+ of the potential value of the Solstice on the table.

Complex Value

There are several pieces of the Solstice that are subtle and important. There are also existing pieces that should probably be tapered down, or adjusted to become more useful. Each of them warrants a fairly comprehensive post of its own. A rough overview of topics to explore:

Existential Risk.
The Here and Now.
The Distant Future.

My thoughts about each of these are fairly complex. In the coming weeks I'll dive into each of them. The next post, discussing Atheism, Rationality and Death, is here.

Open thread, Nov. 02 - Nov. 08, 2015

4 MrMind 02 November 2015 10:07AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.
