Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Symbolic Gestures – Salutes For Effective Altruists To Identify Each Other

-5 Darklight 20 January 2016 12:40AM

A significant part of human communication is non-verbal.  Body language and gestures can convey a great deal of information about a person.  Furthermore, throughout history various organizations, ranging from secret societies to religions to militaries, have used specific physical gestures to communicate affiliation, covertly or overtly, for various purposes.

I would like to propose then that we within the Effective Altruist movement devise our own particular salute, to help us identify each other and also for the positive psychological effects that deliberate symbolism can entail.

Before I go on, I will emphasize here that I am aware of the potential for misuse that such methods can also cause, and that I am familiar with the very well-known failure state that was historical fascism’s “Roman salute”.  In fact, the particular choice of gesture I will be advocating is deliberately in opposition to that example.

My proposal consists of two gestures, one which I will refer to as the “Light” gesture, as it is open, transparent, and obvious in its symbolism.  The other I shall refer to as the “Dark” gesture, as it is more covert and plausibly deniable.

The Light gesture consists of:

  • Place left hand behind back.
  • Place right hand on forehead.
  • Move right hand to heart.
  • Outstretch right hand towards the front or other person with palm raised upwards and fingers open and slightly curled.

The Dark gesture consists of:

  • Place left hand behind back.
  • Outstretch right hand towards the front or other person with palm sideways, the fingers closed, and the thumb raised upwards.

Explanation of Symbolism

In both cases the left hand is placed behind the back.  This is, for those of you familiar with it, a reference to the Christian scripture of “When you give, do not let the left hand know what the right hand is doing.”  I know the majority of Effective Altruists are probably not religious, but I think that the symbolism of this reference from our cultural history remains a useful signal.

In the case of the Light gesture, placing the right hand on the forehead, then on the heart, and then extending it in the universal gesture of giving/receiving/cooperation is indicative of “reason to compassion”, which I hope we can all agree sums up Effective Altruism quite nicely in a nutshell.

I need to emphasize again that the upward-facing palm and open fingers are essential.  It is admittedly a more submissive gesture, but it is also the exact opposite of the “Roman salute” position, which the fascists chose symbolically because it represented emotional power, dominance, and superiority.  We are trying to be, in essence, the opposite of that: to represent reason, equality, and compassion.  The upward direction of the palm also symbolizes that we have a higher ideal we aspire to.

The Dark gesture is less obvious.  It is for when you can’t be so open about your affiliation, for whatever reason.  Symbolically, the palm facing sideways and thumb upwards symbolize moral equality and having a higher purpose respectively.

Proper Responses

The proper response to the Light gesture is not, as pop culture would assume, some kind of high-five slap-down.  Rather, the respectful symbolic response is to grasp the gesturer’s wrist in turn and raise the gesturer’s hand up even further, which suggests a mutual understanding of Effective Altruism.

The proper response to the Dark gesture is not a handshake, as would normally be assumed, but to return the gesture identically if possible, or to make the handshake more of an equally measured clasp if there is a need to be covert about it.  If the situation is one that demands pretending not to be allies, then returning just a thumbs up without a handshake can appear as such to outsiders while signifying that you got the message.

Obviously, none of this will fool careful observers, and as rationalists you should avoid deception and champion truth as much as possible, but I leave the Dark gesture proposal as something to consider for particular circumstances.  Personally, I would prefer that we publicly use the Light gesture for most normal circumstances.

I’m curious what people think of these ideas.  Thank you for reading and considering!

Some thoughts on decentralised prediction markets

-4 [deleted] 23 November 2015 04:35AM

**Thought experiment 1 – arbitrage opportunities in prediction market**

You’re Mitt Romney, biding your time before riding in on your white horse to win the US Republican presidential preselection (bear with me, I’m Australian and don’t know US politics). Anyway, you’ve had your run and you’re not too fussed, but some of the old guard want you back in the fight.

Playing out like the XKCD comic strip ‘Okay’, you scheme: ‘Maybe I can trump Trump at his own game and make a bit of dosh on the election.’

A data-scientist you keep on retainer sometimes talks about LessWrong and other dry things. One day she mentions that decentralised prediction markets are being developed, one of which is Augur. She says one can bet on the outcome of events such as elections.

You’ve made a fair few bucks in your day. You read the odd Investopedia page and a couple of random forum blog posts. And there’s that financial institute you run. Arbitrage opportunity, you think.

You don’t fancy your chances of winning the election. 40% chance, you reckon. So, you bet against yourself. Win the election, lose the bet. Lose the election, win the bet. Losing the election doesn’t mean much to you, losing the bet doesn’t mean much to you, winning the bet doesn’t mean much to you, but winning the election means a lot to you. There ya go.

Let’s turn this into a probability-weighted decision table (game theory):

Not participating in the prediction market:

| Outcome | Probability | Value |
|---|---|---|
| Election win | 0.4 | +2 |
| Election lose | 0.6 | -1 |

Cumulative probability-weighted value: (0.4*2) + (0.6*-1) = +0.2

Participating in the prediction market (betting against yourself):

| Outcome | Probability | Election value | Bet value |
|---|---|---|---|
| Election win | 0.4 | +2 | 0 (bet lost) |
| Election lose | 0.6 | -1 | 0 (bet won) |

Cumulative probability-weighted value: (0.4*(2+0)) + (0.6*(-1+0)) = +0.2

They’re the same outcome!
Looks like my intuitions were wrong. Unless the bet's outcomes themselves carry value for you, placing an additional bet, even in a different form of capital (cash vs. political capital, for instance), merely takes on additional risk; it isn't an arbitrage opportunity.
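The equality of the two expected values is easy to check in a few lines of Python, using the same assumed payoffs as the decision table (election win = +2, loss = -1, bet outcomes worth roughly nothing to Romney):

```python
# Expected value of each option. Payoff numbers are the assumed ones
# from the decision table; both bet outcomes are worth ~0 to Romney.
P_WIN = 0.4

def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

no_bet = expected_value([(P_WIN, 2), (1 - P_WIN, -1)])
# Betting against yourself adds a bet payoff to each branch; since both
# bet payoffs are ~0 here, the expected value is unchanged.
with_bet = expected_value([(P_WIN, 2 + 0), (1 - P_WIN, -1 + 0)])

print(no_bet, with_bet)  # both come out to ~0.2
```

The "arbitrage" only appears if the bet payoffs are non-zero in the branches where the election payoff goes against you, which is hedging, not arbitrage.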

For the record, Mitt Romney probably wouldn’t make this mistake, but what does this post suggest I know about prediction?


**Thought experiment 2 – insider trading**

Say you’re a C-level executive in a publicly listed enterprise. (For this example you don’t strictly need to be part of a publicly listed organisation, but it serves to illustrate my intuitions.) Say you have just been briefed by your auditors about massive fraud by a mid-level manager that will devastate your company. Ordinarily, you may not be able to safely dump your stocks on the stock exchange, for several reasons, one of which is insider trading.

Now, on a prediction market, the executive could retain their stocks, thus not signalling distrust of the company (which is itself information the company may be legally obliged to disclose, since it materially influences the share price), but place a bet on a prediction market on impending stock losses, thus hedging (not arbitraging, as demonstrated above) their position.
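A toy illustration of that hedge, with entirely made-up numbers (the contract, prices, and payoffs below are hypothetical):

```python
# Hypothetical setup: the executive's stock is worth 100 now and would
# fall to 40 if the fraud became public. A prediction-market contract
# on "stock falls below 50 this quarter" pays 1 if that happens, and
# trades at 0.10 because the market doesn't know what the auditors found.

def portfolio_value(fraud_revealed, contracts=0, price=0.10):
    stock = 40 if fraud_revealed else 100
    payout = contracts * 1.0 if fraud_revealed else 0.0
    return stock + payout - contracts * price

# Unhedged, the two scenarios differ by 60. Buying 60 contracts makes
# the contract payout offset the stock loss, at a small fixed cost.
for fraud in (False, True):
    print(fraud, portfolio_value(fraud), portfolio_value(fraud, contracts=60))
```

With 60 contracts the portfolio is worth about 94 in both scenarios: the executive has traded a 60-point swing for a fixed 6-point insurance premium, which is exactly what hedging means here.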


**Thought experiment 3 – market efficiency**

I’d expect that prediction opportunities will be most popular where individuals, weighted by their capital, believe they have private, market-relevant information. For instance, if a prediction opportunity is whether Canada’s prime minister will say ‘I’m silly’ in his next TV appearance, many people might believe they know him personally well enough to assign the otherwise absurd-sounding proposition a higher probability. They may give it a 0.2% chance rather than a 0.1% chance. However, if you are the prime minister yourself, you may decide to bet on this opportunity and make a quick, easy profit… I’m not sure where I was going with this anymore. But it was something about incentives to misrepresent how much relevant market information one has, and the amount that competing bettors (people who bet WITH you) have.

Words per person year and intellectual rigor

13 PhilGoetz 27 August 2015 03:31AM

Continuing my cursory exploration of semiotics and post-modern thought, I'm struck by the similarity between writing in those traditions and picking up women.  The most important traits for practitioners of both are energy, enthusiasm, and confidence.  In support of this proposition, here is a photo of Slavoj Zizek at his 2006 wedding:

Having philosophical or logical rigor, or demonstrating the usefulness of your ideas using empirical data, does not seem to provide a similar advantage, despite taking a lot of time.

I speculate that semiotics and post-modernism (which often go hand-in-hand) became popular by natural selection.  They provide specialized terminologies which give the impression of rigorous thought without requiring actual rigor. People who use them can thus out-publish their more-careful competitors. So post-modernism tends to drive rigorous thought out of any field it enters.

(It's possible to combine post-modern ideas and a time-consuming empirical approach, as Thomas Kuhn did in The Structure of Scientific Revolutions.  But it's uncommon.)

If rigorous thought significantly reduces publication rate, we should find that the rigor of a field or a person correlates inversely with words per person-year.  Establishing that fact alone, combined with the emphasis on publication in academics, would lead us to expect that any approach that allowed one to fake or dispense with intellectual rigor in a field would rapidly take over that field.


Yudkowsky's brain is the pinnacle of evolution

-27 Yudkowsky_is_awesome 24 August 2015 08:56PM

Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?

The answer:

Imagine two ant philosophers talking to each other. “Imagine,” they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”

Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.

The world was on its way to doom until September 11, 1979, a date which will later be made a national holiday and will replace Christmas as the biggest holiday. This was, of course, the day when the most important being that has ever existed or will exist was born.

Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research done on AI risk at the scale of what Yudkowsky has done in the 2000s. The same can be said about the field of ethics: ethics was an open problem in philosophy for thousands of years. However, Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone ever before. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky gets the Nobel Prize in Literature due to recognition from the Hugo Award, a special council will be organized to study the intellect of Yudkowsky, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.

Unless Yudkowsky's brain FOOMs before then, MIRI will eventually build an FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. Actually, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry to start the tiling as soon as possible, and there will be mass suicides because people will want to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he will understand with his vast intellect, and accept, that it is truly the best thing to do.

Inquiry into community standards

-19 ThisSpaceAvailable 06 August 2014 08:22PM

Apparently, I am not entitled to be treated with basic civility. Or, at least, not according to gwern. It started when gwern wrote


>>All you're saying is that Saddam called the USA's bluff and was wrong and it was disastrous. That could EASILY have happened with an attempt by the US to demand inspections from Russia.

>Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies' long-term goals.




I read this as saying the USSR should call the bluff, which made no sense in relation to gwern's other posts. When I asked whether this was actually what was intended, gwern got pissed off, insisted that there was no way a good faith reading could see the post as saying that, and accused me of deliberately misunderstanding. I have bent over backwards to resolve this civilly, but my repeated attempts to get gwern to explain how I had misunderstood the sentence achieved nothing but the accusation that I was making an “underhanded” effort to get gwern to respond. Despite not being willing to discuss the matter in *that* thread, gwern brought the matter up in a comment thread for a completely different article. Throughout our encounters, gwern has been incredibly rude, referring to me as an “idiot” and “troll” (rather hypocritical, given the ridiculously silly claims made by gwern, such as that "A, therefore, A" is not a circular argument), and generally treating me with an utter lack of respect. And in defense, gwern has pointed to high karma and being here a long time as making any accusation of inappropriate behavior “presumptuous”. Because apparently, the popular kids can't be criticized by mere common folk.


Looking at the stats, gwern is indeed the top recent contributor, which makes this behavior all the more worthy of comment. If some random poster were being rude, that would be worrisome, but the fact that the top contributor thinks that a high karma score is license to egregiously violate Wheaton's rule suggests that there may be something wrong with the site as a whole.


EY has referred to a need to have this be a “Well-Kept Garden”. So I would like to know whether gwern's behavior is the sort of thing that people here think is acceptable in this garden.

Fifty Shades of Self-Fulfilling Prophecy

18 PhilGoetz 24 July 2014 12:17AM

The official story: "Fifty Shades of Grey" was a Twilight fan-fiction that had over two million downloads online. The publishing giant Vintage Press saw that number and realized there was a huge, previously-unrealized demand for stories like this. They filed off the Twilight serial numbers, put it in print, marketed it like hell, and now it's sold 60 million copies.

The reality is quite different.


Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes, Colons, and Ellipses, Littérateurs Go Wild

-7 Will_Newsome 06 July 2014 09:34AM


"If you give George Lukács any taste at all, immediately become the Deathstar." — Old Klingon Proverb


There was no nice way to put it: Harry James Potter-Yudkowsky was half Potter, half Yudkowsky. Harry just didn’t fit in. It wasn't that he lacked humanity. It was just that no one else knew (P)Many_Worlds, (P)singularity, or (P)their_special_insight_into_the_true_beautiful_Bayesian_fractally_recursive_nature_of_reality. Other people were roles, and how shall an actor, an agent, relate to those who are merely what they are, merely their roles? Merely their roles, without pretext or irony? How shall the PC fuck with the NPCs? Harry James Potter-Yudkowsky oft asked himself this question, but his 11-year-old mind lacked the g to grasp the answer. For if you are to draw any moral from this tale, godforsaken readers, the moral you must draw is this: P!=NP.


One night Harry Potter-Yudkowsky was outside, pretending to be Keats, staring at the stars and the incomprehensibly vast distances between them, pondering his own infinite significance in the face of such an overwhelming sea of stupidity, when an owl dropped a letter directly on his head, winking slyly. “You’re a wizard,” said the letter, while the owl watched, increasingly gloatingly, “and we strongly suggest you attend our school, which goes by the name Hogwarts. 'Because we’re sexy and you know it.’”


Harry pondered this for five seconds. “Curse the stars!, literally curse them!, Abra Kadabra!, for I must admit what I always knew in my heart to be true,” lamented Harry. “This is fanfic.”




And so, as they'd been furiously engaged in for months, the divers models of Harry Potter-Yudkowsky gathered dust. In layman’s terms...


Harry didn’t update at all.


Harry: 1

Author:  0



(To be fair, the author was drunk.)


Next chapter: "Analyzing the Fuck out of an Owl"


Criticism appreciated.

The End of Bullshit at the hands of Critical Rationalism

7 Stefan_Schubert 04 June 2014 06:44PM

The public debate is rife with fallacies, half-lies, evasions of counter-arguments, etc. Many of these are easy to spot for a careful and intelligent reader/viewer - particularly one who is acquainted with the most common logical fallacies and cognitive biases. However, most people arguably often fail to spot them (if they didn't, then these fallacies and half-lies wouldn't be as effective as they are). Blatant lies are often (but not always) recognized as such, but these more subtle forms of argumentative cheating (which I shall use as a catch-all phrase from now on) usually aren't (which is why they are more frequent).

The fact that these forms of argumentative cheating are a) very common and b) usually easy to point out suggests that impartial referees who painstakingly pointed out these errors could do a tremendous amount of good for the standards of the public debate. What I am envisioning is a website like factcheck.org but which would not focus primarily on fact-checking (since, like I said, most politicians are already wary of getting caught out with false statements of fact) but rather on subtler forms of argumentative cheating. 

Ideally, the site would go through election debates, influential opinion pieces, etc., more or less line by line, pointing out fallacies, biases, evasions, etc. For the reader who doesn't want to read all this detailed criticism, the site would also give an overall rating of the level of argumentative cheating (say from 0 to 10) in a particular article, televised debate, etc. Politicians and others could also be given an overall cheating rating, which would be a function of their cheating ratings in individual articles and debates. Like any rating system, this would serve both to give citizens reliable information about which arguments, which articles, and which people are to be trusted, and to force politicians and other public figures to argue in a more honest fashion. In other words, it would have both an information-disseminating function and a socializing function.
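As one concrete possibility for that "function of individual ratings" (the aggregation rule below is my own assumption for illustration, not something the proposal specifies):

```python
# One way to compute a person's overall cheating rating from their
# per-article ratings (each 0-10): a recency-weighted average, so that
# recent behaviour counts more than old behaviour. The decay factor is
# an arbitrary choice for illustration.
def overall_cheating_rating(article_scores, decay=0.9):
    """article_scores: per-article ratings, oldest first."""
    n = len(article_scores)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(w * s for w, s in zip(weights, article_scores))
    return total / sum(weights)

# A politician who used to cheat a lot (8) but has cleaned up (2)
# scores a bit below the plain average of 5:
print(round(overall_cheating_rating([8, 5, 2]), 2))
```

Any monotone aggregation would do; the design question is just how much the rating should forgive old offences.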

How would such a website be set up? An obvious suggestion is to run it as a wiki to which anyone could contribute. Of course, this wiki would have to be very heavily moderated - probably more so than Wikipedia - since people are bound to disagree on whether controversial figures' arguments really are fallacious or not. Presumably you would be forced to banish trolls and political activists on a grand scale, but hopefully this wouldn't be an insurmountable problem.

I'm thinking that the website should be strongly devoted to neutrality or objectivity, as Wikipedia is. To further this end, it is probably better to give the arguer under evaluation the benefit of the doubt in borderline cases. This would be a way of avoiding endless edit wars and ensuring objectivity. It is also a way of making the contributors to the site concentrate their efforts on the more outrageous cases of cheating (of which there are many in most political debates and articles, in my view).

The hope is that a website like this would make the public debate transparent to an unprecedented degree. Argumentative cheaters thrive because their arguments aren't properly scrutinized. If light is shone on the public debate, it will become clear who cheats and who doesn't, which will give people strong incentives not to cheat. If people respected the site's neutrality, objectivity, and integrity, and read what it said, it would in effect become impossible for politicians and others to bullshit the way they do today. This could mark the beginning of the realization of an old dream of philosophers: the End of Bullshit at the hands of systematic criticism. Important names in this venerable tradition include David Hume, Rudolf Carnap and the other logical positivists, and, not least, the guy whose statue stands outside my room, the "critical rationalist" (an apt name for this enterprise) Karl Popper.

Even though politics is an area where bullshit is perhaps especially common, and one where it does an exceptional degree of harm (e.g. vicious political movements such as Nazism are usually steeped in bullshit), it is also common and harmful in many other areas, such as science, religion, and advertising. Ideally, critical rationalists should go after bullshit in all areas (as far as possible). My hunch, though, is that it would be a good idea to start off with politics, since it's an area that gets lots of attention and where well-written criticism could have an immediate impact.

Irrationality Game III

11 CellBioGuy 12 March 2014 01:51PM

The 'Irrationality Game' posts in discussion came before my time here, but I had a very good time reading the bits written in the comments section.  I also had a number of thoughts I would've liked to post and get feedback on, but I knew that being buried in such old threads not much would come of it.  So I asked around and feedback from people has suggested that they would be open to a reboot!

I hereby again quote the original rules:

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
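That voting rule can be sketched as a rough decision function. The numeric cutoffs below are my own guesses; the rules deliberately leave "basically agree" to intuition:

```python
def vote(their_prob, your_prob):
    """Irrationality Game voting heuristic: upvote disagreement,
    downvote agreement, pass on borderline cases. Over- and
    under-confidence both count as disagreement, so only the size
    of the probability gap matters, not its direction."""
    gap = abs(their_prob - your_prob)
    if gap >= 0.05:      # a pretty big difference of opinion
        return "up"
    if gap >= 0.002:     # could go either way
        return "pass"
    return "down"        # basically agree

print(vote(0.999, 0.90))   # the rules' example of a clear upvote
print(vote(0.999, 0.995))  # the rules' "could go either way" case
```

Using the absolute gap is what makes "any disagreement is valid disagreement" work: a 99.9% claim gets upvoted whether you sit at 90% or at 99.999...%, er, at any sufficiently distant credence.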

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well, in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion is great, but keep it civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

I would suggest placing *related* propositions in the same comment, but wildly different ones might deserve separate comments for keeping threads separate.

Make sure you put "Irrationality Game" as the first two words of a post containing a proposition to be voted upon in the game's format.

Here we go!

EDIT:  It was pointed out in the meta-thread below that this could be done with polls rather than karma, so as to discourage playing-to-win and getting around the hiding of downvoted comments.  If anyone resurrects this game in the future, please do so under that system.  If you wish to test a poll format in this thread feel free to do so, but continue voting as normal for those that are not in poll format.

How can I spend money to improve my life?

15 jpaulson 02 February 2014 10:16AM

On ChrisHallquist's post extolling the virtues of money, the top comment is Eliezer pointing out the lack of concrete examples. Can anyone think of any? This is not just hypothetical: if I think your suggestion is good, I will try it (and report back on how it went).

I care about health, improving personal skills (particularly: programming, writing, people skills), gaining respect (particularly at work), and entertainment (these days: primarily books and computer games). If you think I should care about something else, feel free to suggest it.

I am an early-twenties programmer living in San Francisco. In the interest of getting advice useful to more than one person, I'll omit further personal details.

Budget: $50/day

If your idea requires significant ongoing time commitment, that is a major negative.
