Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Weekly LW Meetups: Austin, Berlin, Cambridge UK, London

1 FrankAdamek 15 February 2013 05:00PM

What are your rules of thumb?

19 DataPacRat 15 February 2013 03:59PM

I'm not as smart as I like to think I am. Knowing that, I've gotten into a habit of trying to work out as many general principles as I can ahead of time, so that when I actually need to think of something, I've already done as much of the work as I can.

What are your most useful cached thoughts?

continue reading »

Is protecting yourself from your own biases self-defeating?

0 [deleted] 15 February 2013 02:21PM

I graduated from high school and wish to further my education formally by studying for a bachelor's degree in order to become a medical researcher. I could, for instance, take one of two academic paths:

  1. Study Medicine at undergraduate level and then do a postdoctoral fellowship.

  2. Study Biochemistry at undergraduate level, then study for a PhD at graduate level, and finally do a postdoctoral fellowship.

Since I will do these studies in Europe, they each take approximately the same amount of time, namely 6 to 8 years.

 

Do I want to treat patients? No, I do not. But I am considering Medicine because it can be a buffer against my own mediocrity: if I go the research route and turn out to be a below-average scientist, I will be royally screwed. From my personal job-shadowing experience, Medicine, on the other hand, requires only basic intellectual traits, primarily the ability to memorize heaps of information. And those I think I have. To do world-class research, though, I'd have to be an intellectual heavyweight, and of that I'm not so sure.

 

How do I decide what path to follow?

 

The reason I'm asking you strangers for advice is that I evidently have biases, such as the pessimism/optimism bias or the Dunning–Kruger effect, that impair my ability to reason clearly; and people who know me personally are likewise prone to make errors in advising me because of biases like, say, the halo effect. (Come to think of it, thinking that I can't become an above-average scientist is in itself a self-fulfilling prophecy!)


Do you think that one ought always to seek advice from total strangers in order to be safeguarded from one's own biases?

 

PS: I apologize if I should have written this in a specific thread. I'll delete my article if that's necessary.

Unintentional bayesian

4 [deleted] 15 February 2013 10:46AM

Growing up in a very religious country, I was indoctrinated thoroughly both at home and at school. I used to believe that some Christian beliefs made sense. When I was 14 years old or so, I began contemplating death – I said to myself, “Well, after I die I go to Hell or Heaven; the latter is preferable, so I'd better learn as soon as possible how I can make sure I'll go to Heaven.”

So I went on to read frantically about Christianity. With every iota of information processed, I strayed further from the religion. That is, the more I read, the less anything pertaining to it seemed plausible. “Where the hell is Hell? Can I visit before I die? Why doesn't God answer my prayers to tell me? Why do some people get to talk to God but not me?”, I asked. In retrospect, my greatest strength was genuine curiosity – I wanted to know as much as possible about the truthfulness of my religion.

 

The irony here is that wanting to become more Christian-like led to my abandoning Christianity. But I continued to learn about other religions as well, thinking that one might be truer than the others. Of course, none of them seemed even remotely plausible; I concluded that religions are false. I turned into an atheist without even knowing that the word existed!

 

Eventually I stumbled on some articles regarding non-religion and discovered that my lack of religious belief is called 'atheism'. Since then, I have abandoned more beliefs tied to, say, politics or nutrition, thanks to applying Bayesian probability to my hypotheses.

 

I had been an unintentional Bayesian for my whole life!

 

Have you had any similar experiences? 

 

PS: This is my first article. I am looking forward to hearing feedback on it.

 

Edit #1: I should have used the term 'rationalist' instead of 'Bayesian', because I didn't apply Bayes' theorem explicitly.

The Virtue of Compartmentalization

4 b1shop 15 February 2013 08:53AM

Cross-posted from my blog, Selfish Meme.


I’d like to humbly propose a new virtue to add to Eliezer’s virtues of rationality — the virtue of Compartmentalization. Like the Aristotelian virtues, the virtue of Compartmentalization is a golden mean. Learning the appropriate amount of Compartmentalization, like learning the appropriate amount of bravery, is a life-long challenge.

Learning how to program is both learning the words to which computers listen and training yourself to think about complex problems. Learning to comfortably move between levels of abstraction is an important part of the second challenge.

Large programs are composed of multiple modules. Each module is composed of lines of code. Each line of code is composed of functions manipulating objects. Each function is yet a deeper set of instructions.

For a programmer to truly focus on one element of a program, he or she has to operate at the right level of abstraction and temporarily forget the elements above, below or alongside the current problem.
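To make that concrete, here is a minimal sketch in Python (my own illustration, not from the original post; all the names are invented). The point is only that the top-level function can be read and debugged without thinking about how the bottom-level one talks to the disk:

    # Hypothetical example of working at one level of abstraction at a time.
    def fetch_rows(path):
        # Low level: how the data physically gets read (assumes "name,amount" lines).
        with open(path) as f:
            return [line.strip().split(",") for line in f]

    def total_sales(rows):
        # Middle level: domain arithmetic, ignorant of file formats.
        return sum(float(amount) for _, amount in rows)

    def monthly_report(path):
        # High level: reads like the problem statement itself.
        # While writing this line, you can "forget" everything below it.
        return "Total sales: %.2f" % total_sales(fetch_rows(path))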

Programming is not the only discipline that requires this focus. Economists and mathematicians rely on tools such as regressions and Bayes’ rule without continually re-deriving the math that justifies them. Engineers do not consider wave-particle duality when working on Newtonian-scale problems. When a mechanic is fixing a radiator, the only relevant fact about spark plugs is that they produce heat.

If curiosity killed the cat, it’s only because it distracted her from more urgent matters.

As I became a better programmer I didn’t notice my Compartmentalization skills improving – I was too lost in the problem at hand – but I did notice the skill when I saw its absence in other people. Take, for example, the confused philosophical debate about free will. A typical spiel from an actual philosopher can be found in the movie Waking Life.

Discussions about free will often veer into unproductive digressions about physical facts at the wrong level of abstraction. Perhaps, at its deepest level, reality is a collection of billiard balls. Perhaps reality is, deep down, a pantheon of gods rolling dice. Maybe all matter is composed of cellists balancing on vibrating tightropes. Maybe we’re living in a simulated matrix of 1’s and 0s, or maybe it really is just turtles all the way down.

These are interesting questions that should be pursued by all blessed with sufficient curiosity, but these are questions at a level of abstraction absolutely irrelevant to the questions at hand.

A philosopher with a programmer’s discipline thinking about “free will” will not start by debating the above questions. Instead, he will notice that “free will” is itself a philosophical abstraction that can be broken down into several oft-conflated components. Debating the concept as a whole is too high a level of abstraction. When one asks “do I have free will?” one could actually be asking:

  1. Are the actions of humans predictable?
  2. Are humans perfectly predictable with complete knowledge and infinite computational time?
  3. Will we ever have complete knowledge and infinite computational time necessary to perfectly predict a human?
  4. Can you reliably manipulate humans with advertising/priming?
  5. Are humans capable of thinking about and changing their habits through conscious thought?
  6. Do humans have a non-physical soul that directs our actions and is above physical influences?

I’m sure there are other questions lurking in the conceptual quagmire of “free will,” but that’s a good start. These six are not only significantly narrower in scope than “Do humans have free will?” but are also answerable and actionable. Off the cuff:

  1. Of course.
  2. Probably.
  3. Probably not.
  4. Less than marketers/psychologists would want you to believe but more than the rest of us would like to admit.
  5. More so than most animals, but less so than we might desire.
  6. Brain damage and mind-altering drugs would suggest our “spirits” are not above physical influences.

So, in sum, what would a programmer have to say about the question of free will? Nothing. The problem must be broken into manageable pieces, and each element must be examined in turn. The original question is not clear enough for a single answer. Furthermore, he will ignore all claims about the fundamental nature of the universe. You don’t go digging around in machine code when you’re making a spreadsheet.

If you want your brain to think about problems larger, older and deeper than your brain, then you should be capable of zooming in and out of the problem – sometimes poring over the minutest details and sometimes blurring your vision to see the larger picture. Sometimes you need to alternate between multiple maps of varying detail for the same territory. Far from being a vice, this is the virtue of Compartmentalization.

Your homework assignment: Does the expression “love is just a chemical” change anything about Valentine’s Day?

Three Axes of Prohibitions

30 [deleted] 15 February 2013 07:34AM

The Game of Thrones board game is similar to Diplomacy (so I hear: I've never actually played Diplomacy). You often need to make alliances to survive, but these alliances are weak. It is both expected and required that you will eventually break your alliances; otherwise you will lose. My first time playing this game, I made an alliance with a neighboring House which turned out to be unwise, and severely limited my options. To me, breaking an alliance to win a game (even if it was socially acceptable) didn’t feel right/wasn’t worth the negative feelings, and so I ended up stuck on my island for the whole of the game. 

Instead of adapting by learning to be okay with breaking alliances, which I considered to be a sub-optimal solution, I fixed the problem by targeting my alliance terms. Now, instead of a general alliance, my offers were along the lines of “I won’t attack you across this border for the next four rounds, if you agree to the same.”

This had the effect of actually strengthening my alliances. Limiting the terms of the alliance to something that could easily be complied with meant that defecting was no longer expected. The cost goes down, and the benefits go up. In the previous game, my ally and I would’ve had to leave our mutual border semi-defended, because we knew the alliance wouldn’t hold against a strong enough temptation of conquest; this was facilitated by the fact that the alliance was expected to be eventually broken. In the targeted alliance, my ally and I can leave our mutual border undefended, since we can expect the alliance to hold. This is facilitated by the fact that there would be social sanctions against breaking it (e.g. I wouldn’t form alliances with that person in future games, because I knew they would break them).

This led me to the thought that prohibitions seem to vary along three axes:

  • Specific/Vague Wording: When you ask for a favor, say “please” v. Be polite
  • Targeted/Broad Expectations: Don’t do this one thing v. Don’t do this whole class of things
  • Social expectation to comply/fudge: Don’t have sex with another (wo)man v “Don’t even look at another (wo)man!" 

 

General Examples:

  • Speed limits- specific, targeted, but expected to fudge
  • Terms of use agreements- specific, broad, expected to fudge (no one even reads them)
  • Bribing officials in corrupt societies (there are laws against it, but it is expected as the way to get anything done, or even considered a perk of the office)- vague (giving “gifts” is appropriate?)
  • "Thou shall not kill"- specific, targeted, expected to comply
  • Paper shields- being asked to sign something that is vague and broad. The idea is that it is broad enough to cover anything and everything, and so is expected to be broken. But as long as you don't do anything egregious, they don't enforce it.

 

It seems to me that making an injunction specific and targeted increases the expectation of compliance. This is important to me, because I seem to dislike injunctions that I am expected to fudge. 

Another example of how this plays out in my life: Being enmeshed in a poly network, there is a lot of talking about people (not necessarily in a bad way-- if you ask me about my day though, my answer is going to involve other people). To get around worrying about whether I am ever breaking confidence, I specifically tell people I am close to that I don’t consider any information to be private unless it is specifically stated as such (this goes two ways). This way, I get to "gossip" but also people know they can strongly trust me with any information that is prefaced with "This is not for public consumption..." In this example, like the board game, I am turning a broad, vague injunction that isn't strongly expected to be followed ("don't ever talk about other people") into an injunction that can be trusted to be followed, by making it specific and targeted. 

Relevant to previous discussions on: ask v. guess cultures, and the idea that if it's expected that everyone breaks a specific law, then the government can arrest anyone they want to.

[SEQ RERUN] The Super Happy People (3/8)

1 MinibearRex 15 February 2013 07:06AM

Today's post, The Super Happy People (3/8) was originally published on 01 February 2009. A summary (taken from the LW wiki):

 

Humanity encounters new aliens that see the existence of pain amongst humans as morally unacceptable.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was War and/or Peace (2/8), and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

LW Women: LW Online

29 [deleted] 15 February 2013 01:43AM

 

Standard Intro

The following section will be at the top of all posts in the LW Women series.

Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post.  There is a LOT of material, so I am breaking them down into more manageable-sized themed posts. 

Seven women submitted, totaling about 18 pages. 

Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)

Warning- Submitters were told not to hold back for politeness. You are allowed to disagree, but these are candid comments; if you consider candidness impolite, I suggest you not read this post.

To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.

Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.

(Note from me: I've been procrastinating on posting these. Sorry to everyone who submitted! But I've got them organized decently enough to post now, and will be putting one up once a week or so, until we're through)

 


 

 

Submitter A

I think this is all true. Note that that commenter hasn't commented since 2009.

 

Objectifying remarks about attractive women and sneery remarks about unattractive women are not nice. I worry that guys at less wrong would ignore unattractive women if they came to meetings. Unattractive women can still be smart! I also worry that they would only pay attention to attractive women insofar as they think they might get to sleep with them.

 

I find the "women are aliens" attitude that various commenters  (and even Eliezer in the post I link to) seem to have difficult to deal with: http://lesswrong.com/lw/rp/the_opposite_sex/. I wish these posters would make it clear that they are talking about women on average: presumably they don't think that all men and all women find each other to be like aliens.

 

I find I tend to shy away from saying feminist things in response to PUA/gender posts, since there seems to be a fair amount of knee-jerk down-voting of anything feminist sounding. There also seems to be quite a lot of knee-jerk up-voting of poorly researched armchair ev-psych.

 

Linked to point 3: if people want to make claims about men and women having different innate abilities, that is fine. However, I wish they'd make it clear when they are talking on average, i.e. "women on average are worse at engineering than men," not "women are worse at engineering than men."

 

A bit of me wishes that the "no mindkiller topics" rule was enforced more strictly, and that we didn't discuss sex/gender issues. I do think it is off-putting to smart women - you don't convert people to rationality by talking about such emotive topics. Even if some of the claims like "women on average are less good at engineering than men" are true* they are likely to put smart women off visiting less wrong. Not sure to what extent we should sacrifice looking for truth to attract people. I suspect many LWers would say not at all. I don't know. We already rarely discuss politics, so would it be terrible to also discuss sex/gender issues as little as possible?

 

I agree with Luke here

 

*and I do think some of them are true

 

***

 

Submitter B

 

My experience of LessWrong is that it feels unfriendly. It took me a long time to develop skin thick enough to tolerate an environment where warmth is scarce. I feel pretty certain that I've got a thicker skin than most women and that the environment is putting off other women. You wouldn't find those women writing an LW narrative, though - the type of women I'm speaking of would not have joined. It's good to open a line of communication between the genders, but by asking the women who stayed, you're not finding out much about the women who did not stay. This is why I mention my thinner-skinned self.

 

 What do I mean by unfriendly? It feels like people are ten thousand times more likely to point out my flaws than to appreciate something I said. Also, there's next to no emotional relating to one another. People show appreciation silently in votes, and give verbal criticism, and there are occasionally compliments, but there seems to be a dearth of friendliness. I don't need instant bonding, but the coldness is thick. If I try to tell by the way people are acting, I'm half convinced that most of the people here think I'm a moron. I'm thick skinned enough that it doesn't get to me, but I don't envision this type of environment working to draw women.

 

I've had similar unfriendly experiences in other male-dominated environments, like in a class of mostly boys. They were aggressive - in a selfish way, as opposed to a constructive one. For instance, if the teacher was demonstrating something, they'd crowd around aggressively trying to get the best spots. I was much shorter, which made it harder to see. This forced me to compete for a front spot if I wanted to see at all, and I never did, because I just wasn't like that. So that felt pretty insensitive. Another male-dominated environment was similarly heavy on the criticism and light on niceness.

 

These seem to be themes in male-dominated environments, and they have always had somewhat of a deterring effect on me: selfish competitive behavior (constructive competition for an award or to produce something of quality is one thing, but competing for a privilege in a way that hurts someone at a disadvantage is off-putting), focus on negative reinforcement (acting like tough guys by not giving out compliments and being abrasive), lack of friendliness (there can be no warm fuzzies when you're acting manly), and hostility toward sensitivity.

 

One exception to this is Vladimir_Nesov. He has behaved in a supportive and yet honest way that feels friendly to me. ShannonFriedman does "honest yet friendly" well, too.

 

A lot of guys I've dated in the last year have made the same creepy mistake. I think this is likely to be relevant because they're so much like LW members (most of them are programmers, their personalities are very similar and one of them had even signed up for cryo), and because I've seen some hints of this behavior on the discussions. I don't talk enough about myself here to actually bring out this "creepy" behavior (anticipation of that behavior is inhibiting me as well as not wanting to get too personal in public) so this could give you an insight that might not be possible if I spoke strictly of my experiences on LessWrong.

 

The mistake goes like this:

I'd say something about myself.

They'd disagree with me.

 

For a specific example, I was asked whether I was more of a thinker or feeler and I said I was pretty balanced. He retorted that I was more of a thinker. When I persist in these situations, they actually argue with me. I am the one who has spent millions of minutes in this mind, able to directly experience what's going on inside of it. They have spent, at this point, maybe a few hundred minutes observing it from the outside, yet they act like they're experts. If they said they didn't understand, or even that they didn't believe me, that would be workable. But they try to convince me I'm wrong about myself. I find this deeply disturbing and it's completely dysfunctional. There's no way a person will ever get to know me if he won't even listen to what I say about myself. Having to argue with a person over who I am is intolerable.

 

I've thought about this a lot trying to figure out what they're trying to do. It's never going to be a sexy "negative hit" to argue with me about who I am. Disagreeing with me about myself can't possibly count as showing off their incredible ability to see into me because they're doing the exact opposite: being willfully ignorant. Maybe they have such a need to box me into a category that they insist on doing so immediately. Personalities don't fit nicely in categories, so this is an auto-fail. It comes across as if they're either deluded into believing they're some kind of mind-reading genius or that they don't realize I'm a whole, grown-up human being complete with the ability to know myself. This has happened on the LessWrong forum also.

 

I have had a similar problem that only started to make sense after considering that they may have been making a conscious effort to develop skepticism: I had a lot of experiences where it felt like everything I said about myself was being scrutinized. It makes perfect sense to be skeptical about other conversation topics, but when they're skeptical about things I say about myself, this is grating. This is because it's not likely that either of us will be able to prove or disprove anything about my personality or subjective experiences in a short period of time, and possibly never. Yet saying nothing about ourselves is not an option if we want to get to know each other better. I have to start somewhere.

 

It's almost like they're in such a rush to have definitive answers about me that they're sabotaging their potential to develop a real understanding of me. Getting to know people is complicated - that's why it takes a long time. Tearing apart her self-expressions can't save you from the ambiguity.

 

I need "getting to know me" / "sharing myself" type conversations to be an exploration. I do understand the need to construct one's own perspective on each new person. I don't need all my statements to be accepted at face value. I just want to feel that the person is happily exploring. They should seem like they're having fun checking out something interesting, not interrogating me and expecting to find a pile of errors. Maybe this happens because of having a habit of skeptical thinking - they make people feel scrutinized without knowing it.

Meetup : Future Meetup for Indonesian LWers

7 rv77ax 14 February 2013 10:00PM

Discussion article for the meetup : Future Meetup for Indonesian LWers

WHEN: 01 January 2015 04:00:00PM (+0700)

WHERE: Bandung, Indonesia

I remember the LW survey from some time ago; when the results came out, I saw that only one LWer was from Indonesia. Obviously, that was me.

I tried searching the forum and this site using the keyword "Indonesia", and only found one user: me.

Now, in case someone from and/or living in my country would like to have a meetup, I am putting up this reminder, so that when they search this site they will find it and maybe we can have a meetup.

Discussion article for the meetup : Future Meetup for Indonesian LWers

Meetup : Buffalo Meetup

3 StonesOnCanvas 14 February 2013 06:42PM

Discussion article for the meetup : Buffalo Meetup

WHEN: 17 February 2013 04:00:00PM (-0500)

WHERE: SPOT Coffee Delaware Ave & W Chippewa St, Buffalo, NY

(Apologies for the short notice.) Last meetup we talked about making sure your beliefs "pay rent" by constraining anticipation. This time we'll talk about specific examples of ways some beliefs may not even really count as beliefs at all:

  • Belief in Belief - http://lesswrong.com/lw/i4/belief_in_belief/
  • Professing and Cheering - http://lesswrong.com/lw/i6/professing_and_cheering/
  • Belief as Attire - http://lesswrong.com/lw/i7/belief_as_attire/

The concepts are pretty similar, so I thought I'd just lump them together for this meetup. Read what you can (3 posts is a lot, so don't worry if you don't get around to it). In any case, I'll do a cliff-notes summary for everyone. Anyone can attend. Feel free to invite friends who might be interested. We'll also play some cool games. We're meeting at SPOT this time (I'll have a sign so you can find us easily).

Discussion article for the meetup : Buffalo Meetup

Meetup : Purdue Meetup

2 Sek0M 14 February 2013 04:48PM

Discussion article for the meetup : Purdue Meetup

WHEN: 16 February 2013 07:00:00PM (-0500)

WHERE: Purdue - Hicks Library

A reminder that there will be a Purdue Less Wrong Meetup this Friday. We have reserved a space, but given the short timeline and incomplete email list the plan is to meet in Hicks, group up, and transition to the reserved space. I'm saying the meeting time is 7PM instead of 6:50 based on my own preference for round numbers.

We're planning to bring some minor level of snackage, and in addition to our often very dynamic discussions I will offer some more structured thoughts on, well, structure, and how to grow the organization and offer more utility to our own meetup folks and possibly to the campus and community in general. Please feel free to contact me with any questions or concerns, and we hope to see you there.

Discussion article for the meetup : Purdue Meetup

The Singularity Wars

50 JoshuaFox 14 February 2013 09:44AM

(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)

The good news is that there were no Singularity Wars. 

The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You'd expect to see something like the People's Front of Judea and the Judean People's Front, burning each other's grain supplies as the Romans moved in. 

continue reading »

A Series of Increasingly Perverse and Destructive Games

11 nigerweiss 14 February 2013 09:22AM

Related to: Higher Than the Most High

 

The linked post describes a game in which (I fudge a little) Omega comes to you and two other people, and asks each of you to tell him an integer.  The person who names the largest integer is allowed to leave.  The other two are killed.

This got me thinking about variations on the same concept, and here's what I've come up with, taking that game to be GAME0.  The results are sort of a fun time-waster, and bring up some interesting issues.  For your enjoyment...

 

THE GAMES:

GAME1: Omega takes you and two strangers (all competent programmers), and kidnaps and sedates you.  You each awake in separate rooms with instructions printed on the wall explaining the game, and a computer with an operating system and programming language compiler, but no internet.  Food, water, and toiletries are provided, but no external communication.  The participants are allowed to write programs on the computer in a language that supports arbitrarily large numerical values.  The programs are taken by Omega and run on a hypercomputer in finite time (this hypercomputer can resolve the halting problem and infinite loops, but programs that do not eventually halt return no output).  The person who wrote the program with the largest output is allowed to leave.  The others are instantly and painlessly killed.  In the event of a tie, everyone dies.  If your program returns no output, that is taken to be zero.

GAME2: Identical to GAME1, except that each program you write has to take two inputs, which will be the text of the other players' programs (assume they're all written in the same language).  The rewards for outputting the largest number apply normally.

GAME3: Identical to GAME2, except that while you are sedated, Omega painlessly and imperceptibly uploads you.  Additionally, the instructions on the wall now specify that your program must take four inputs - black-box functions which represent the uploaded minds of all three players, plus a simulation of the room you're in, indistinguishable from the real thing.  We'll assume that players can't modify or interpret the contents of their opponents' brains.  The room function takes a string argument (which controls the text printed on the wall) and outputs whatever number the simulated person's program returns.

 

In each of these games, which program should you write if you wish to survive?  

 

SOME DISCUSSION OF STRATEGY: 

GAME1: Clearly, the trivial strategy (implement the Ackermann function or a similar fast-growing function and generate some large integer) gives no better than random results, because it's the bare minimal strategy anyone will employ; your ranking in the results, without knowledge of your opponents, is entirely up to chance / how long you're willing to sit there typing nines for your Ackermann argument.
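For concreteness, here is what that bare-minimum strategy might look like as a Python sketch (my own illustration; the post does not specify a language, and the hypercomputer is assumed to run whatever it is handed):

    import sys
    sys.setrecursionlimit(100000)  # the naive recursion gets deep quickly

    def ackermann(m, n):
        # Classic fast-growing function; ackermann(4, 2) already has 19,729 digits.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    # The trivial strategy: output ackermann(9, 999...9) with as many nines as
    # you can stand to type. Since everyone else can do the same, this buys
    # you nothing beyond chance.
    print(ackermann(3, 3))  # 61; larger arguments are infeasible on real hardware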

A few alternatives for your consideration:

1: If you are aware of an existence hypothesis (say, a number with some property which is not conclusively known to exist and could be any integer), write a program that brute-force tests all integers until it arrives at an integer which matches the requirements, and use this as the argument for your rapidly-growing function (a rough sketch of this appears after these alternatives).  While it may never return any output, if it does, the output will be an integer, and the expected value goes towards infinity.

2: Write a program that generates all programs shorter than length n, and finds the one with the largest output.  Then make a separate stab at your own non-meta winning strategy.  Take the length of the program you produce, tetrate it for safety, and use that as your length n.  Return the return value of the winning program.
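Here is a hedged sketch of alternative 1 above (the property tested, an odd perfect number, is just a stand-in I picked; any open existence hypothesis would do, and as noted, the search may simply never halt):

    def proper_divisor_sum(n):
        # Sum of the proper divisors of n.
        total, d = 1, 2
        while d * d <= n:
            if n % d == 0:
                total += d
                if d != n // d:
                    total += n // d
            d += 1
        return total

    def first_odd_perfect_number():
        # Brute-force search for a number with a property not known to exist.
        # If this ever returns, the result could be astronomically large.
        n = 3
        while True:
            if proper_divisor_sum(n) == n:
                return n
            n += 2  # odd candidates only

    # The submitted output would then be something like
    # ackermann(first_odd_perfect_number(), 9), reusing the fast-growing
    # function from the earlier sketch.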

On the whole, though, this game is simply not all that interesting in a broader sense.  

GAME2: This game has its own amusing quirks (primarily that it could probably actually be played in real life on a non-hypercomputer), however, most of its salient features are also present in GAME3, so I'm going to defer discussion to that.  I'll only say that the obvious strategy (sum the outputs of the other two players' programs and return that) leads to an infinite recursive trawl and never halts if everyone takes it.  This holds true for any simple strategy for adding or multiplying some constant with the outputs of your opponents' programs.    

 

GAME3: This game is by far the most interesting.  For starters, this game permits acausal negotiation between players (by parties simulating and conversing with one another).  Furthermore, anthropic reasoning plays a huge role, since the player is never sure if they're in the real world, one of their own simulations, or one of the simulations of the other players.  

Players can negotiate, barter, or threaten one another; they can attempt to send signals to their simulated selves (to indicate that they are in their own simulation and not somebody else's).  They can make their choices based on coin flips, to render themselves difficult to simulate.  They can attempt to brute-force the signals their simulated opponents are expecting.  They can simulate copies of their opponents who think they're playing any previous version of the game, and are unaware they've been uploaded.  They can simulate copies of their opponents, observe their meta-strategies, and plan around them.  They can totally ignore the inputs from the other players and play just the level-one game.  It gets very exciting very quickly.  I'd like to see what strategy you folks would employ.

 

And, as a final bonus, I present GAME4: In game 4, there is no Omega, and no hypercomputer.  You simply take a friend, chloroform them, and put them in a concrete room with the instructions for GAME3 on the wall, and a Linux computer not plugged into anything.  You leave them there for a few months working on their program, and watch what happens to their psychology.  You win when they shrink down into a dead-eyed, terminally paranoid, and entirely insane shell of their former selves.  This is the easiest game.

 

Happy playing!   

 

 

Meetup : Cambridge, UK LW Meetup [Reading Group, HAEFB-02]

2 Sohum 14 February 2013 09:02AM

Discussion article for the meetup : Cambridge, UK LW Meetup [Reading Group, HAEFB-02]

WHEN: 17 February 2013 11:00:00AM (+0000)

WHERE: Trinity JCR, Cambridge, UK

Meetup! This week, we'll be continuing our reading group session of the Shiny New Sequence, Highly Advanced Epistemology 101 For Beginners. These are explicitly designed as introductory posts, and this is just our second session, so dive right in with us if you're new!

We only covered The Useful Idea of Truth last week, so this week we'll do Appreciating Cognitive Algorithms and the two minor posts, Skill: The Map is Not The Territory and Firewalling the Optimal From The Rational.

See you there!

Discussion article for the meetup : Cambridge, UK LW Meetup [Reading Group, HAEFB-02]

Rationalist Lent

43 Qiaochu_Yuan 14 February 2013 06:32AM

As I understand it, Lent is a holiday where we celebrate the scientific method by changing exactly one variable in our lives for 40 days. This seems like a convenient Schelling point for rationalists to adopt, so:

What variable are you going to change for the next 40 days?

(I am really annoyed I didn't think of this yesterday.) 

[SEQ RERUN] War and/or Peace (2/8)

4 MinibearRex 14 February 2013 04:50AM

Today's post, War and/or Peace (2/8) was originally published on 31 January 2009. A summary (taken from the LW wiki):

 

The true prisoner's dilemma against aliens. The conference struggles to decide the appropriate course of action.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Baby-Eating Aliens (1/8), and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Meetup : Cincinnati February: Predictions

3 RolfAndreassen 14 February 2013 04:01AM

Discussion article for the meetup : Cincinnati February: Predictions

WHEN: 16 February 2013 02:00:00PM (-0500)

WHERE: 354 Ludlow Avenue

We will meet at the Amol India on Ludlow Avenue at 1400. This month's exercise is to try for calibration. Each of us should give a probability for each of the five events below occurring before April 1st; at the meetup we will discuss our reasoning and perhaps update. Then in April we'll see how we did, as a group and individually.

  1. Will Kim Jong-un cease to be dictator of Best Korea?
  2. Will the sentence of any of the seven scientists convicted of failing to adequately warn of the L'Aquila earthquake be modified?
  3. Will a large (more than 1000 soldiers) foreign force invade Iran?
  4. Will Pope Benedict's resignation be revealed to have been for reasons other than medical ones?
  5. Will Eliezer update HPMOR a) Exactly once b) Exactly twice c) Three or more times?

If you like, you can substitute other questions, or suggest new ones, in the comments. Posting to PredictionBook is optional but encouraged.

Discussion article for the meetup : Cincinnati February: Predictions

Meetup : Vancouver Boredom vs Scope Insensitivity, and life-debugging

2 [deleted] 14 February 2013 03:15AM

Discussion article for the meetup : Vancouver Boredom vs Scope Insensitivity, and life-debugging

WHEN: 16 February 2013 03:00:00PM (-0800)

WHERE: 2505 W Broadway, Vancouver

Meet at Benny's Bagels on West Broadway at 15:00 on Saturday.

We are going to discuss Boredom vs Scope Insensitivity and related issues with utility curves.

To complement such a theoretical topic, we may also discuss the rationality failures in our lives that we'd like to get better at. It should be thoroughly depressing and hopefully useful.

As usual, see us on our mailing list.

Discussion article for the meetup : Vancouver Boredom vs Scope Insensitivity, and life-debugging

[LINK] "Scott and Scurvy": a reminder of the messiness of scientific progress

14 hesperidia 14 February 2013 02:02AM

http://idlewords.com/2010/03/scott_and_scurvy.htm

Now, I had been taught in school that scurvy had been conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. From that point on, we were told, the Royal Navy had required a daily dose of lime juice to be mixed in with sailors’ grog, and scurvy ceased to be a problem on long ocean voyages.

But here was a Royal Navy surgeon in 1911 apparently ignorant of what caused the disease, or how to cure it. Somehow a highly-trained group of scientists at the start of the 20th century knew less about scurvy than the average sea captain in Napoleonic times. Scott left a base abundantly stocked with fresh meat, fruits, apples, and lime juice, and headed out on the ice for five months with no protection against scurvy, all the while confident he was not at risk. What happened?

This article is a vivid illustration of just how nonlinear and downright messy science actually is, and how little the superficial presentation of science as neat "progress" reflects the reality of the field.

Meetup : Vienna Meetup 9th March

5 Ratcourse 13 February 2013 09:35PM

Discussion article for the meetup : Vienna Meetup 9th March

WHEN: 09 March 2013 04:00:00PM (+0100)

WHERE: Schottengasse 2, 1010 Innere Stadt (1.Bez)

Cafe im Schottenstift.

Mainly this is a meeting to kickstart the Vienna LessWrong/rationality group. We will get to know each other and discuss ways to promote a rational lifestyle.

Confirmed attendees: Five attendees so far, 3 of whom are working on creating a rationality education project.

See you there.

Discussion article for the meetup : Vienna Meetup 9th March

Higher than the most high

10 Stuart_Armstrong 13 February 2013 04:10PM

In an earlier post, I talked about how we could deal with variants of the Heaven and Hell problem - situations where you have an infinite number of options, none of which is a maximum. The solution for a (deterministic) agent was to try to implement the strategy that would reach the highest possible number, without risking falling into an infinite loop.

Wei Dai pointed out that in cases where the options are unbounded in utility (i.e. you can get arbitrarily high utility), there are probabilistic strategies that give you infinite expected utility. I suggested you could still do better than this. This started a conversation about choosing between strategies with infinite expectation (would you prefer a strategy with infinite expectation, or the same plus an extra dollar?), which went off into some interesting directions as to what needs to be done when the strategies can't sensibly be compared with each other...
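As a concrete illustration of the unbounded case (my own worked example with an assumed payoff schedule, not taken from that discussion): if option n pays 2^n utility, then the mixed strategy "pick option n with probability 2^{-n}" has infinite expected utility, even though every individual option pays only a finite amount:

    E[U] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty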

Interesting though that may be, it's also helpful to have simple cases where you don't need all these subtleties. So here is one:

Omega approaches you and Mrs X, asking you each to name an integer to him, privately. The person who names the highest integer gets 1 utility; the other gets nothing. In practical terms, Omega will reimburse you all utility lost during the decision process (so you can take as long as you want to decide). The first person to name a number gets 1 utility immediately; they may then lose that 1 depending on the eventual response of the other. Hence if one person responds and the other doesn't, they get the 1 utility and keep it. What should you do?

In this case, a strategy that gives you a number with infinite expectation isn't enough - you have to beat Mrs X, but you also have to eventually say something. Hence there is a duel of (likely probabilistic) strategies, implemented by bounded agents, with no maximum strategy, and each agent trying to compute the maximal strategy they can construct without falling into a loop.

The Fundamental Question - Rationality computer game design

39 Kaj_Sotala 13 February 2013 01:45PM

I sometimes go around saying that the fundamental question of rationality is Why do you believe what you believe?

-- Eliezer in Quantum Non-Realism

I was much impressed when they finally came out with a PC version of DragonBox, and I got around to testing it on some children I knew. Two kids, one of them four and the other eight years old, ended up blazing through several levels of solving first-degree equations while having a lot of fun doing so, even though they didn't know what it was that they were doing. That made me think that there has to be some way of making a computer game that would similarly teach rationality skills at the 5-second level. Some game where you would actually be forced to learn useful skills if you wanted to make progress.

After playing around with some ideas, I hit upon the notion of making a game centered around the Fundamental Question. I'm not sure whether this can be made to work, but it seems to have promise. The basic idea: you are required to figure out the solution to various mysteries by collecting various kinds of evidence. Some of the sources of evidence will be more reliable than others. In order to hit upon the correct solution, you need to consider where each piece of evidence came from, and whether you can rely on it.

Gameplay example

Now, let's go into a little more detail. Let's suppose that the game has a character called Bob. Bob tells you that tomorrow at eight o'clock, there will be an assassination attempt at the Market Square. The fact that Bob has told you this is evidence for the claim being true, so the game automatically records the fact that you have such a piece of evidence, and that it came from Bob.

(Click on the pictures in case you don't see them properly.)

But how does Bob know that? You ask, and it turns out that Alice told him. So next, you go and ask Alice. Alice is confused and says that she never said anything about any assassination attempt: she just said that something big is going to happen at the Market Square at that time, which she heard from the Mayor. The game records two new pieces of evidence: Alice's claim of something big happening at the Market Square tomorrow (which she heard from the Mayor), and her story of what she actually told Bob. Guess Bob isn't a very reliable source of evidence: he has a tendency to come up with fancy invented details.

Or is he? After all, your sole knowledge about Bob being unreliable is that Alice claims she never said what Bob says she said. But maybe Alice has a grudge against Bob, and is intentionally out to make everyone disbelieve him. Maybe it's Alice who's unreliable. The evidence that you have is compatible with both hypotheses. At this point, you don't have enough information to decide between them, but the game lets you experiment with setting either of them as "true" and seeing the implications of this on your belief network. Or maybe they're both true - Bob is generally unreliable, and Alice is out to discredit him. That's another possibility that you might want to consider. In any case, the claim that there will be an assassination tomorrow isn't looking very likely at the moment.

Actually, having the possibility for somebody lying should probably be a pretty late-game thing, as it makes your belief network a lot more complicated, and I'm not sure whether this thing should display numerical probabilities at all. Instead of having to juggle the hypotheses of "Alice lied" and "Bob exaggerates things", the game should probably just record the fact that "Bob exaggerates things". But I spent a bunch of time making these pictures, and they do illustrate some of the general principles involved, so I'll just use them for now.

Game basics

So, to repeat the basic premise of the game, in slightly more words this time around: your task is to figure out something, and in order to do so, you need to collect different pieces of evidence. As you do so, the game generates a belief network showing the origin and history of the various pieces of evidence that you've gathered. That much is done automatically. But often, the evidence that you've gathered is compatible with many different hypotheses. In those situations, you can experiment with different ways of various hypotheses being true or false, and the game will automatically propagate the consequences of that hypothetical through your belief network, helping you decide what angle you should explore next.
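As a very rough sketch of what the evidence bookkeeping might look like under the hood (entirely my own illustration in Python; the post does not commit to any implementation), each claim records where it came from, and toggling a hypothesis propagates down the chain:

    # Hypothetical sketch; class and variable names are invented.
    class Claim:
        def __init__(self, text, source=None):
            self.text = text        # e.g. "assassination at the Market Square tomorrow"
            self.source = source    # the Claim it was derived from, or None if firsthand
            self.assumed = None     # None = open question; True/False = player's hypothetical

        def credible(self):
            # A claim counts as credible unless the player has assumed it false,
            # or the claim it rests on is itself not credible.
            if self.assumed is not None:
                return self.assumed
            return self.source is None or self.source.credible()

    mayor = Claim("something big at the Market Square tomorrow")
    alice = Claim("something big at the Market Square", source=mayor)
    bob = Claim("assassination at the Market Square", source=alice)

    alice.assumed = False      # experiment: suppose Alice's report is false
    print(bob.credible())      # False - the doubt propagates to Bob's claim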

Of course, people don't always remember the source of their knowledge, or they might just appeal to personal experiences. Or they might lie about the sources, though that will only happen at the more advanced levels.

As you proceed in the game, you will also be given access to more advanced tools that you can use for making hypothetical manipulations to the belief network. For example, it may happen that many different characters say that armies of vampire bats tend to move about at full moon. Since you hear that information from many different sources, it seems reliable. But then you find out that they all heard it from a nature documentary on TV that aired a few weeks back. This is reflected in your belief graph, as the game modifies it to show that all of those supposedly independent sources can actually be tracked back to a single one. That considerably reduces the reliability of the information.
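Here is a hedged numerical illustration of how much that can matter (my own made-up numbers, not from the post): if each report is treated as an independent piece of evidence that multiplies the odds of the claim by the same likelihood ratio, three genuinely independent reports move you much further than three reports that all trace back to a single documentary:

    def posterior(prior_odds, likelihood_ratios):
        # Multiply prior odds by each independent report's likelihood ratio,
        # then convert the resulting odds back to a probability.
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    # Claim: "armies of vampire bats move about at full moon." Prior odds 1:9.
    # Each report, taken at face value, is four times likelier if the claim is true.
    print(posterior(1 / 9, [4, 4, 4]))  # ~0.88 - three independent witnesses
    print(posterior(1 / 9, [4]))        # ~0.31 - all three collapse into one documentary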

But maybe you were already suspecting that the sources might not be independent? In that case, it would have been nice if the belief graph interface let you postulate this beforehand, and see how big an effect it would have on the plausibility of the different hypotheses if they were in fact reliant on each other. Once your character learns the right skills, it becomes possible to also add new hypothetical connections to the belief graph, and see how this would influence your beliefs. That will further help you decide what possibilities to explore and verify.

Because you can't explore every possible eventuality. There's a time limit: after a certain amount of moves, a bomb will go off, the aliens will invade, or whatever.

The various characters are also more nuanced than just "reliable" or "not reliable". As you collect information about the various characters, you'll figure out their mindware, motivations, and biases. Somebody might be really reliable most of the time, but have strong biases when it comes to politics, for example. Others are out to defame people, or to add fancy invented details to every story. If you talk to somebody you don't have any knowledge about yet, you can set a prior on the extent to which you rely on their information, based on your experiences with other people.

You also have another source of evidence: your own intuitions and experience. As you get into various situations, a source of evidence that's labeled simply "your brain" will provide various gut feelings and impressions about things. The claim that Alice presented doesn't seem to make sense. Bob feels reliable. You could persuade Carol to help you if you just said this one thing. But in what situations, and for what things, can you rely on your own brain? What are your own biases and problems? If you have a strong sense of having heard something at some point, but can't remember where it was, are you any more reliable than anyone else who can't remember the source of their information? You'll need to figure all of that out.

As the game progresses to higher levels, your own efforts will prove insufficient for analyzing all the necessary information. You'll have to recruit a group of reliable allies, who you can trust to analyze some of the information on their own and report the results to you accurately. Of course, in order to make better decisions, they'll need you to tell them your conclusions as well. Be sure not to report as true things that you aren't really sure about, or they will end up drawing the wrong conclusions and focusing on the wrong possibilities. But you do need to condense your report somewhat: you can't just communicate your entire belief network to them.

Hopefully, all of this should lead to the player learning, on a gut level, things like:

  • Consider the origin of your knowledge: Obvious.
  • Visualizing degrees of uncertainty: In addition to giving you a numerical estimate about the probability of something, the game also color-codes the various probabilities and shows the amount of probability mass associated with your various beliefs.
  • Considering whether different sources really are independent: Some sources which seem independent won't actually be that, and some which seem dependent on each other won't be.
  • Value of information: Given all the evidence you have so far, if you found out X, exactly how much would it change your currently existing beliefs? You can test this and find out, and then decide whether it's worth finding out.
  • Seek disconfirmation: A lot of things that seem true really aren't, and acting on flawed information can cost you.
  • Prefer simpler theories: Complex, detailed hypotheses are more likely to be wrong in this game as well.
  • Common biases: Ideally, the list of biases that various characters have is derived from existing psychological research on the topic. Some biases are really common, others are more rare.
  • Epistemic hygiene: Pass off wrong information to your allies, and it'll cost you.
  • Seek to update your beliefs: The game will automatically update your belief network... to some extent. But it's still possible for you to assign mutually exclusive events probabilities that sum to more than 1, or otherwise have conflicting or incoherent beliefs. The game will mark these with a warning sign, and it's up to you to decide whether this particular inconsistency needs to be resolved or not.
  • Etc etc.

Design considerations

It's not enough for the game to be educational: if somebody downloads the game because it teaches rationality skills, that's great, but we want people to also play it because it's fun. Some principles that help ensure that, as well as its general utility as an educational aid, include:

  • Provide both short- and medium-term feedback: Ideally, there should be plenty of hints for how to find out the truth about something by investigating just one more thing: then the player can find out whether their guess was correct. It's no fun if the player has to work through fifty decisions before finding out whether they made the right move: they should get constant immediate feedback. At the same time, the player's decisions should be building up to a larger goal, with uncertainty about the overall goal keeping them interested.
  • Don't overwhelm the player: In a game like this, it would be easy to throw a million contradictory pieces of evidence at the player, forcing them to go through countless sources of evidence and possible interactions with no clue of what they should be doing. But the game should be manageable. Even if it looks like there is a huge messy network of countless pieces of contradictory evidence, it should be possible to find the connections which reveal the network to be relatively simple after all. (This is not strictly realistic, but necessary for making the game playable.)
  • Introduce new gameplay concepts gradually: Closely related to the previous item. Don't start out with making the player deal with every single gameplay concept at once. Instead, start them out in a trusted and safe environment where everyone is basically reliable, and then begin gradually introducing new things that they need to take into account.
  • No tedium: A game is a series of interesting decisions. The game should never force the player to do anything uninteresting or tedious. Did Alice tell Bob something? No need to write that down, the game keeps automatic track of it. From the evidence that has been gathered so far, is it completely obvious what hypothesis is going to be right? Let the player mark that as something that will be taken for granted and move on.
  • No glued-on tasks: A sign of a bad educational game is that the educational component is glued on to the game (or vice versa). Answer this exam question correctly, and you'll get to play a fun action level! There should be none of that - the educational component should be an indistinguishable part of the game play.
  • Achievement, not fake achievement: Related to the previous point. It would be easy to make a game that wore the attire of rationality, and which used concepts like "probability theory", and then when your character leveled up he would get better probability attacks or whatever. And you'd feel great about your character learning cool stuff, while you yourself learned nothing. The game must genuinely require the player to actually learn new skills in order to get further.
  • Emotionally compelling: The game should not be just an abstract intellectual exercise, but have an emotionally compelling story as well. Your choices should feel like they matter, and characters should be at risk of dying if you make the wrong decisions.
  • Teach true things: Hopefully, the players should take the things that they've learned from the game and apply them to their daily lives. That means that we have a responsibility not to teach them things which aren't actually true.
  • Replayable: Practice makes perfect. At least part of the game world needs to be randomly generated, so that the game can be replayed without a risk of it becoming boring because the player has memorized the whole belief network.

What next?

What you've just read is a very high-level design, and a quite incomplete one at that: I've spoken of the need to have "an emotionally compelling story", but said nothing about the story or the setting. This should probably be something like a spy or detective story, because that's thematically appropriate for a game which is about managing information; and it might be best to have it in a fantasy setting, so that you can question the widely-accepted truths of that setting without needing to step on anyone's toes by questioning widely-accepted truths of our society.

But there's still a lot of work to be done on questions like what exactly the belief network looks like, what kinds of evidence there can be, how to make all of this actually fun, and so on. I mentioned the need to have both short- and medium-term feedback, but I'm not sure how that could be achieved, or whether this design lets you achieve it at all. And I don't even know whether the game should show explicit probabilities.
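Purely as an illustration of those open questions, and not as a claim about how the finished game would work, here is a minimal sketch of one way a single hypothesis and the evidence bearing on it could be tracked with explicit probabilities. The spy-story names and all the numbers are invented for the example.

# Toy sketch: one hypothesis node updated by pieces of evidence via Bayes' rule.
# Everything here (names, priors, likelihoods) is invented for illustration.

class Hypothesis:
    def __init__(self, name, prior):
        self.name = name
        self.probability = prior

    def update(self, p_evidence_if_true, p_evidence_if_false):
        """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)].

        Successive calls treat the pieces of evidence as independent,
        which a real belief network would not need to assume.
        """
        p = self.probability
        numerator = p_evidence_if_true * p
        self.probability = numerator / (numerator + p_evidence_if_false * (1 - p))
        return self.probability


# The player suspects the castle steward is the spy.
steward_is_spy = Hypothesis("The steward is the spy", prior=0.2)

# Alice says she saw the steward near the archives at midnight.
# Fairly likely if he's the spy, but he might have been there innocently.
steward_is_spy.update(p_evidence_if_true=0.7, p_evidence_if_false=0.2)

# A weaker clue: the steward seemed nervous when questioned.
steward_is_spy.update(p_evidence_if_true=0.6, p_evidence_if_false=0.4)

print(f"{steward_is_spy.name}: {steward_is_spy.probability:.2f}")  # roughly 0.57

Whether numbers like these would ever be shown to the player, or only used under the hood to decide which outcomes the player's choices lead to, is exactly the kind of design question that remains open.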

And having a design isn't enough: the whole thing needs to be implemented as well, preferably while it's still being designed, in order to take advantage of agile development techniques. Make a prototype, find some unsuspecting testers, spring it on them, revise. And then there are the graphics and music, things I have no competence to work on.

I'll probably be working on this in my spare time - I've been playing with the idea of going to the field of educational games at some point, and want the design and programming experience. If anyone feels like they could and would want to contribute to the project, let me know.

EDIT: Great to see that there's interest! I've created a mailing list for discussing the game. It's probably easiest to have the initial discussion here, and then shift the discussion to the list.

Realism : Direct or Indirect?

3 kremlin 13 February 2013 09:40AM

Stanford Encyclopedia : Perception
Wikipedia : Direct and Indirect Realism

On various philosophy forums I've participated on, there have been arguments between those who call themselves 'direct realists' and those who call themselves 'indirect realists'. The question is apparently about perception. Do we experience reality directly, or do we experience it indirectly?

When I was first introduced to the conversation, I immediately took the indirect side -- There is a ball, photons bounce off the ball, the frequency of those photons is changed by some properties of the ball, the photons hit my retina activating light-sensitive cells, those cells send signals to my brain communicating that they were activated, the signals make it to the visual cortex and...you know...some stuff happens, and I experience the sight of a ball.

So, my first thought in the conversation about Indirect vs Direct realism was that there was a lot of stuff in between the ball and my experience of it, so, it must be indirect.

But then I found that direct realists don't actually disagree about any part of that sequence of events I described above. For them as well, at least the few who have bothered to respond, photons bounce off a ball, interact with our retinas, send signals to the brain, etc. The physical process is apparently the same for both sides of the debate.

And when two sides vehemently disagree on something, and then when the question is broken down into easy, answerable questions you find that they actually agree on every relevant question, that tends to be a pretty good hint that it's a wrong question.

So, is this a wrong question? Is this just a debate about definitions? Is it a semantic argument, or is there a meaningful difference between Direct and Indirect Realism? In the paraphrased words of Eliezer, "Is there any way-the-world-could-be—any state of affairs—that corresponds to Direct Realism being true, or Indirect Realism being true?"

[LINK] Open Source Software Developer with Terminal Illness Hopes to Opt Out of Death

17 lsparrish 13 February 2013 05:57AM

Aaron Winborn writes:

 

TLDR: http://venturist.info/aaron-winborn-charity.html

So maybe you've heard about my plight, in which I wrestle Lou Gehrig in this losing battle to stay alive. And I use the phrase "staying alive" loosely, as many would shudder at the thought of becoming locked in with ALS, completely paralyzed, unable to move a muscle other than your eyes.

But that's only half the story. Wait for the punchline.

As if the physical challenges of adapting to new and increasingly debilitating disabilities were not enough, my wife and two young daughters are forced to watch helplessly as the man they knew loses the ability to lift a fork or scratch an itch, who just two years ago was able to lift his infant daughter and run with the 7-year-old. The emotional strain on my family is more than any family should have to bear. Not to mention the financial difficulties, which include big purchases such as a wheelchair van and home modifications, and ultimately round the clock nursing care, all of it exacerbated by the fact that we have had to give up my income both because of the illness and to qualify for disability and Medicaid.

Meet me, Aaron Winborn, software developer and author of Drupal Multimedia, champion of the open source software movement.

Years ago, I worked for the lady of death herself, Elisabeth Kübler-Ross, the author of On Death and Dying. Of course, I knew that one day I would need to confront death, but like most people, I assumed it would be when I was old, not in the prime of my life. Not that I'm complaining; I have lived a full life, from living in a Buddhist monastery to living overseas, from marrying the woman of my dreams to having two wonderful daughters, from teaching in a radical school to building websites for progressive organizations, from running a flight simulator for the US Navy to working as a puppeteer.

I accept the fact of my inevitable death. But accepting death does not, I believe, mean simply rolling over and letting that old dog bite you. Regardless of the prevalent mindset in society that says that people die and so should you, get over it, I believe that the reality we experience of people living only to a few decades is about to be turned upside down.

Ray Kurzweil spells out a coming technological singularity, in which accelerating technologies reach a critical mass and we reach a post-human world. He boldly predicts this will happen by the year 2045. I figured that if I could make it to 2035, my late 60s, that I would be able to take advantage of whatever medical advances were available and ride the wave to a radically extended lifespan.

ALS dictates otherwise. 50% of everyone diagnosed will die within 2 to 3 years of the onset of the disease. 80% will be gone in 5 years. And only 10% go on to survive a decade, most of them locked in, paralyzed completely, similar to Stephen Hawking. Sadly, my scores put me on the fast track of the 50%, and I am coming up quickly on 3 years.

Enter Kim Suozzi.

On June 10 of last year, her birthday, which is coincidentally my own, Kim Suozzi asked a question to the Internet, "Today is my 23rd birthday and probably my last. Anything awesome I should try before I die?" The answer that she received and acted on would probably be surprising to many.

On January 17, 2013, Kim Suozzi died, and as per her dying wish, was cryonically preserved.

She was a brave person, and I hope to meet her someday.

So yes, there we have it. The point that I am making with all this rambling. I hope to freeze my body after I die, in the hope of future medical technologies advancing to the point where they will be able to revive me.

The good news is that in the scheme of things, it is not too terribly expensive to have yourself cryonically preserved. You should look at it yourself; most people will fund it with a $35K-200K life insurance policy.

The bad news for me is that a life insurance policy is out of the question for me; a terminal illness precludes that as an option. Likewise, due to the financial hardships in store for us, self-funding is also out of the question.

When I learned about Kim Suozzi's plight, I reached out to the organization that set up the charity that ultimately funded her cryopreservation. The Society for Venturism, a non-profit that has raised funds for the eventual cryopreservation of terminally ill patients, agreed to take on my case.

Many of you reading this post have already helped out in so many ways. From volunteering your time and effort to our family, to donating money towards my Special Needs Trust to help provide a cushion for the difficult times ahead.

I am so grateful for all of this. It means so much to me and my family to know that there is such a large and generous community supporting us. I hate to ask for anything more, especially for something that may seem like an extravagance.

But is it really an extravagance?

If I were to ask for $100,000 for an experimental stem cell treatment, I doubt that we would even be having this conversation. No one in their right mind would even consider a potentially life-saving procedure to be an extravagance.

And what is cryonics, but a potentially life-saving procedure?

People choose from among many options for their bodies after death. Some choose to be buried, some choose cremation. Some choose to donate their bodies to science. That last is precisely what happens with cryonics: in addition to helping answer the obvious question of whether future revival from cold storage will be possible, many developments in cryonics help modern medicine with the development of better preservation for organ transplantation and blood volume expanders.

Yes, I admit that the chances of it working are slim, but have you looked at the state of stem cell research for ALS lately? Consider that the only FDA approved medication to treat ALS, Rilutek, will on average add 3 months to one's lifespan, and you might begin to see my desperation.

But you should be happy with the life you've had. Why do you want to live forever?

The only reasonable response to that is to ask why do you want to die?

I love life. Every morning, even now with my body half paralyzed, I awaken with a new sense of purpose, excited to take on the day. There is so much I have yet to do. There are books to write, games to create, songs to sing. If I can get the use of my arms and hands again, there are gardens to plant, houses to build, space ships to fly. And oh, the people to love.

So please help me to realize this, my dying wish.

http://venturist.info/aaron-winborn-charity.html

"The most beautiful people we have known are those who have known defeat, known suffering, known struggle, known loss, and have found their way out of the depths. These persons have an appreciation, a sensitivity, and an understanding of life that fills them with compassion, gentleness, and a deep loving concern. Beautiful people do not just happen."

- Elisabeth Kübler-Ross

 

Blog post: http://aaronwinborn.com/blogs/aaron/open-source-software-developer-terminal-illness-hopes-opt-out-death

Hacker news discussion: http://news.ycombinator.com/item?id=5211602

Questions for Moral Realists

3 peter_hurford 13 February 2013 05:44AM

My meta-ethics are basically that of Luke's Pluralistic Moral Reductionism.  (UPDATE: Elaborated in my Meta-ethics FAQ.)

However, I was curious as to whether this "Pluralistic Moral Reductionism" counts as moral realism or anti-realism.  Luke's essay says it depends on what I mean by "moral realism".  I see moral realism as broken down into three separate axes:

There's success theory, the part that I accept, which states that moral statements like "murder is wrong" do successfully refer to something real (in this case, a particular moral standard, like utilitarianism -- "murder is wrong" refers to "murder does not maximize happiness").

There's unitary theory, which I reject, that states there is only one "true" moral standard rather than hundreds of possible ones.

And then there's absolutism theory, which I reject, that states that the one true morality is rationally binding.

I don't know how many moral realists are on LessWrong, but I have a few questions for people who accept moral realism, especially unitary theory or absolutism theory.  These are "generally seeking understanding and opposing points of view" kind of questions, not stumper questions designed to disprove or anything.  While I'm doing some more reading on the topic, if you're into moral realism, you could help me out by sharing your perspective.

~

Why is there only one particular morality?

This goes right to the core of unitary theory -- that there is only one true theory of morality.  But I must admit I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires.

So why is there only one particular morality?  And what is the one true theory of morality?  What makes this theory the one true one rather than others?  How do we know there is only one particular theory?  What's inadequate about all the other candidates?

~

Where does morality come from?

This gets me a bit more background knowledge, but what is the ontology of morality?  Some concepts of moral realism have an idea of a "moral realm", while others reject this as needlessly queer and spooky.  But essentially, what is grounding morality?  Are moral facts contingent; could morality have been different?  Is it possible to make it different in the future?

~

Why should we care about (your) morality?

I see rationality as talking about what best satisfies your pre-existing desires.  But it's entirely possible that morality isn't desirable by someone at all.  While I hope that society is prepared to coerce them into moral behavior (either through social or legal force), I don't think that their immoral behavior is necessarily irrational.  And on some accounts, morality is independent of desire but still has rational force.

How does morality get its ability to be rationally binding? If the very definition of "rationality" includes being moral, is that mere wordplay? Why should we accept this definition of rationality and not a different one?

I look forward to engaging in dialogue with some moral realists. Same with moral anti-realists, I guess. After all, if moral realism is true, I want to know.

[SEQ RERUN] The Baby-Eating Aliens (1/8)

5 MinibearRex 13 February 2013 05:29AM

Today's post, The Baby-Eating Aliens (1/8) was originally published on 30 January 2009. A summary (taken from the LW wiki):

 

Future explorers discover an alien civilization, and learn something unpleasant about it.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Value is Fragile, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Meetup : London Meetup, 24th Feb

2 philh 13 February 2013 12:02AM

Discussion article for the meetup : London Meetup, 24th Feb

WHEN: 24 February 2013 02:00:00PM (+0000)

WHERE: Holborn, London

A meetup in the Shakespeare's Head pub by Holborn tube station. We meet every other Sunday at 2pm. Everyone is welcome.

We also have a Google group.

Discussion article for the meetup : London Meetup, 24th Feb

Meetup : Montreal LessWrong Meetup - The Science of Winning at Life

2 Paul_G 12 February 2013 09:19PM

Discussion article for the meetup : Montreal LessWrong Meetup - The Science of Winning at Life

WHEN: 18 February 2013 06:30:00PM (-0500)

WHERE: 655 Avenue du Président Kennedy, Montréal, QC H3A 3H9

Weekly meeting of the Montreal LessWrong Meetup group.

We've decided to look into the Science of Winning at Life. You can read the sequence here (http://wiki.lesswrong.com/wiki/The_Science_of_Winning_at_Life) if you're interested.

See you there!

Discussion article for the meetup : Montreal LessWrong Meetup - The Science of Winning at Life

Is suicide high-status?

9 Stabilizer 12 February 2013 09:41AM

I sometimes have thoughts of suicide. That does not mean I would ever come within a mile of committing the act of suicide. But my brain does simulate it; though I do try to always reduce such thoughts.

But what I have noticed is that 'suicide' is triggered in my mind whenever I think of some embarrassing event, real or imagined. Or an event in which I'm obviously a low-status actor. This leads me to think that suicide might be a high-status move, in the sense that its goal is to recover status after some event which caused a big drop in status. Consider the following instances when suicide is often considered:

  1. One-sided break-ups of romantic relationships. The party who has been 'dumped' (for the lack of a better word), has obviously taken a giant status hit. In this case, suicide is often threatened. 
  2. A samurai committing seppuku. The samurai has lost in battle. Clearly, a huge drop in status (aka 'honor').
  3. PhD student says he/she can't take it anymore. A PhD is a constant hit to status: you aren't smart enough, you don't have much money, and you don't yet have intellectual status.
Further, suicide (or suicidal behavior leading to death) seems to have conferred status on artists. Examples: Kurt Cobain, Amy Winehouse, Jimi Hendrix, Hunter S. Thompson, Ernest Hemingway, David Foster Wallace, and many more. I'm not saying that they committed suicide due to pressure to achieve high status (though that may be the case, I'm not sure). What I am saying is that suicide has been associated with high status.

Further, after a person is dead, he/she is almost always celebrated (at least for a while) and all their faults are forgotten.

My theory: in many low-status situations, an instinctive way to recover status is to say that you are too good for this game and check-out. In fact, children (and adults) will often just leave a game they're not very good at and disparage the rest of the players for playing. And suicide is the ultimate check-out. This theory is motivated by observations of my own brain going through thoughts of suicide. They almost always consist of imagining other people crying about my death and saying what an awesome person he was. And about how he was just too smart to be able to live in this world. 

Do you think this theory has some weight? I'm certain that I'm not the first person to think of this. But a quick Google didn't yield much. Any pointers to literature?

 

[SEQ RERUN] Value is Fragile

3 MinibearRex 12 February 2013 01:32AM

Today's post, Value is Fragile was originally published on 29 January 2009. A summary (taken from the LW wiki):

 

An interesting universe, one that would be incomprehensible to us today, is what the future looks like if things go right. There are a lot of things that humans value such that, if you did everything else right when building an AI but left out that one thing, the future would wind up looking dull, flat, pointless, or empty. Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals will contain almost nothing of worth.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was 31 Laws of Fun, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Meetup : Cambridge, MA third-Sunday meetup

3 jimrandomh 11 February 2013 11:48PM

Discussion article for the meetup : Cambridge, MA third-Sunday meetup

WHEN: 18 February 2013 02:00:00PM (-0500)

WHERE: 25 Ames St Cambridge, MA 02138

Cambridge/Boston-area Less Wrong meetups recur on the first and third Sunday of every month, 2pm at the MIT Landau Building [25 Ames St, Bldg 66], room 148.

Discussion article for the meetup : Cambridge, MA third-Sunday meetup

A confusion about deontology and consequentialism

5 [deleted] 11 February 2013 07:19PM

I think there’s a confusion in our discussions of deontology and consequentialism. I’m writing this post to try to clear up that confusion. First let me say that this post is not about any territorial facts. The issue here is how we use the philosophical terms of art ‘consequentialism’ and ‘deontology’.

The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.” There is of course an equivalently confused, though much less common, complaint about consequentialism.

This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘how do we know that it is wrong to kill?’ is not a normative but a meta-ethical question. Similarly, consequentialism contains in itself no explanation for why pleasure or utility are morally good, or why consequences should matter to morality at all. Nor does consequentialism/deontology make any claims about how we know moral facts (if there are any). That is also a meta-ethical question.

Some consequentialists and deontologists are also moral realists. Some are not. Some believe in divine commands, some are hedonists. Consequentialists and deontologists in practice always also subscribe to some meta-ethical theory which purports to explain the value of consequences or the source of injunctions. But consequentialism and deontology as such do not. In order to avoid strawmanning either the consequentialist or the deontologist, it's important to either discuss the comprehensive views of particular ethicists, or to carefully leave aside meta-ethical issues.

This Stanford Encyclopedia of Philosophy article provides a helpful overview of the issues in the consequentialist-deontologist debate, and is careful to distinguish between ethical and meta-ethical concerns.

SEP article on Deontology

Meetup : Melbourne social meetup

2 toner 11 February 2013 04:01PM

Discussion article for the meetup : Melbourne social meetup

WHEN: 15 February 2013 06:45:00PM (+1100)

WHERE: see mailing list, Carlton VIC 3053

Melbourne's next regular social meetup will be held on Friday 15th February at the usual venue in Carlton. All are welcome from 6:30pm for a 7:00pm official start, but don't stress about being on time.

Our social meetups are informal events held on the third Friday of each month, where we lounge about playing boardgames and chatting, with occasional group parlour games such as Mafia/Werewolf or Resistance if people are interested. If you haven't been to a Melbourne meetup before/recently, the social meetup can be a less intimidating way to meet us as it's very informal.

Some snacks will be provided and we'll probably arrange some form of delivered food for dinner. BYO drinks and games.

For the location/any questions, please see the Melbourne Less Wrong google group, or feel free to SMS Richard on 0421 231 789 or contact me on 0412 996 288.

Thanks again to Richard (Maelin) for being able to host.

Discussion article for the meetup : Melbourne social meetup

Meetup : Durham: Status Quo Bias

4 evand 11 February 2013 04:32AM

Discussion article for the meetup : Durham: Status Quo Bias

WHEN: 14 February 2013 07:00:00PM (-0500)

WHERE: Francesca's, 706 9th Street, Durham, NC

We'll discuss the status quo bias and debiasing techniques. Suggested readings include Bostrom's paper, and other things TBD. Suggestions for additional reading are welcome. Post them here, or on the mailing list.

I'll summarize a bit about the status quo bias and Bostrom's paper, so please don't feel like you can't come if you haven't done the reading, but we do encourage reading in advance!

We'll do introductions and low-key discussion from 7:00-7:30, and status quo bias discussion from 7:30-9:00.

Discussion article for the meetup : Durham: Status Quo Bias

[SEQ RERUN] 31 Laws of Fun

1 MinibearRex 11 February 2013 03:51AM

Today's post, 31 Laws of Fun was originally published on 26 January 2009. A summary (taken from the LW wiki):

 

A brief summary of principles for writing fiction set in a eutopia.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Higher Purpose, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Meetup : Washington DC fun and games meetup

2 rocurley 11 February 2013 02:31AM

Discussion article for the meetup : Washington DC fun and games meetup

WHEN: 17 February 2013 03:00:00PM (-0500)

WHERE: National Portrait Gallery , Washington, DC 20001, USA

We'll be meeting to play games and hang out.

Discussion article for the meetup : Washington DC fun and games meetup

Meetup : Toronto - Rational Debate: Will Rationality Make You Rich? ... and other topics

6 Giles 11 February 2013 01:12AM

Discussion article for the meetup : Rational Debate: Will Rationality Make You Rich? ... and other topics

WHEN: 12 February 2013 08:00:00PM (-0500)

WHERE: 54 Dundas St E, Toronto, ON

Place: Upstairs at The Imperial Public Library 54 Dundas St. E, near Dundas Station. Enter at the door on the right marked "library", go upstairs and look for the paperclip sign.

We'll kick the meeting off with ASK LESS WRONG. Think of something in your everyday life that's bothering you and we'll help you smooth it out. Purpose: increase the fun in each others' lives through the magic of friendship. Secondary purpose: train ourselves to notice things that are suboptimal and view them as problems that can be solved.

The main part of the meeting will be a RATIONAL DEBATE. We'll start with "will rationality make you rich", then move on to "is there intelligent life elsewhere in the universe" and "should you vote". That's probably all we'll have time for before the beer kicks in, but we do have backup topics.

If you want to read up on any of these topics, that's great - but not strictly necessary.

Rational debating is far from a solved problem, so we'll be learning how to do it as we go along. I'll be chairing, so don't worry about keeping track of this vast list of meta stuff - that's my job. It'll go something like this:

  • In a conventional debate, you win by sounding more plausible than the other person. In a rational debate, you win if and only if you end up believing the truth. This makes it a cooperative game - it's possible for everyone to win or for everyone to lose. (Incidentally it also means you don't actually know whether you've won or not).

  • Initially, each person answers the question separately, choosing how they wish to frame their answer. If people come up with very different ways of framing the question, we will take each one in turn and try to approach the question from that direction. (The point of this is to avoid fighting over the framing of the discussion and instead address the issues directly).

  • I'll keep track of structural stuff - different ways of framing the question, agreed subtopics of discussion, and binary chopping to find points of disagreement (which involves listing statements and saying how plausible we each think they are).

  • When arguing against something, construct a steel man first - rephrase the opposing argument in your own words, making it as strong and plausible as you can, before you try and defeat it.

  • Be bold and specific - make sure you're saying something substantial, even if you're not completely sure it's true.

  • The social aspect: make sure we're providing status and rewards for the right things.

  • Leave a line of retreat. What would I do if I was wrong about this?

  • Try to notice when you're replying to somebody's cached thought with a cached thought of your own. I'll try and do the same.

  • Try to find something to change your mind about, even if it's something small.

  • Separate out disagreement about facts from disagreement about values (and disagreement about strategy, which combines both). Separate out semantic confusion. I think we're already reasonably good at these.

  • If possible, identify which of these techniques you're trying to put into practice. I'll do the same. (By drawing attention to this we'll help keep things purposeful, and also hopefully learn which techniques seem particularly useful).

Resources on rational debate:

http://lesswrong.com/lw/85h/better_disagreement/

http://lesswrong.com/lw/o4/leave_a_line_of_retreat/

http://lesswrong.com/lw/gm9/philosophical_landmines/

Hope to see you all at the Imperial Pub on Tuesday! Let me know if you can come.

Discussion article for the meetup : Rational Debate: Will Rationality Make You Rich? ... and other topics

[Link] Detachment

2 John_Maxwell_IV 10 February 2013 11:05PM

http://www.urticator.net/essay/0/48.html

Might be a useful read apropos of Wei Dai's recent question:

Why are we causing [people] to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

[Link] False memories of fabricated political events

16 gjm 10 February 2013 10:25PM

Another one for the memory-is-really-unreliable file. Some researchers at UC Irvine (one of them is Elizabeth Loftus, whose name I've seen attached to other fake-memory studies) asked about 5000 subjects about their recollection of four political events. One of the political events never actually happened. About half the subjects said they remembered the fake event. Subjects were more likely to pseudo-remember events congruent with their political preferences (e.g., Bush or Obama doing something embarrassing).

Link to papers.ssrn.com (paper is freely downloadable).

The subjects were recruited from the readership of Slate, which unsurprisingly means they aren't a very representative sample of the US population (never mind the rest of the world). In particular, about 5% identified as conservative and about 60% as progressive.

Each real event was remembered by 90-98% of subjects. Self-identified conservatives remembered the real events a little less well. Self-identified progressives were much more likely to "remember" a fake event in which G W Bush took a vacation in Texas while Hurricane Katrina was devastating New Orleans. Self-identified conservatives were somewhat more likely to "remember" a fake event in which Barack Obama shook the hand of Mahmoud Ahmadinejad.

About half of the subjects who "remembered" fake events were unable to identify the fake event correctly when they were told that one of the events in the study was fake.

LW Women- Crowdsourced research on Cognitive biases and gender

7 [deleted] 10 February 2013 10:01PM

In the last LW Women post, it was mentioned, and I agree, that a two-way conversation is more productive, and presents varied viewpoints better, than a one-way lecture. To that end, I am making this post an experiment in crowdsourcing research to LW. Instead of writing this topic up myself (more talking AT you), I want to see what happens if I leave a good prompt, along with some paths (search terms, journal articles) to start down for discussion. What information will a collective research project yield?  In other words, instead of reading what I write below as the article, pretend you are helping to collaborate on an article.

The next post in the series will go back to LW Women's submissions.

 

Recommended Rules (because last LW Women post reached 1000+ comments, and we want to keep that as navigable as possible)

When possible, make/use parent comments when you are discussing a specific bias, so that multiple studies or lines of reasoning on the same bias can be grouped together. 

When you post a summary of a study, make sure to read it first and give a decent rundown. If a study says "X sometimes, Y sometimes," do not just say "This study proves X!" 

Put meta discussion HERE (e.g.- What do you think about crowdsourcing research on LW? What do you think about the LW Women series, etc.)


Prompt

What cognitive biases might affect various gender stereotypes and how people think about gender?  Below are some starting points. The links are to the Wikipedia articles. This list isn't the be-all, end-all. It's just somewhere to get started. Use it to get ideas, or not.

Fundamental Attribution Error- aka Correspondence Bias-  Tendency to draw inferences about a person's unique and enduring dispositions from behaviors that can be entirely explained by the situations in which they occur.

Actor-Observer Bias - People are more likely to see their own behavior as affected by the situation they are in, or the sequence of occurrences that have happened to them throughout their day. But, they see other people’s actions as solely a product of their overall personality, and they do not afford them the chance to explain their behavior as exclusively a result of a situational effect.

Just World Fallacy- human actions eventually yield morally fair and fitting consequences

System Justification- People have a motivation to defend and justify the status quo, even when it may be disadvantageous to certain people... they are motivated to see the status quo (or prevailing social, economic, and political norms) as good, legitimate, and desirable.

Availability Heuristic-  people make judgments about the probability of events by how easy it is to think of examples

List of Biases- help yourself to a bias!

Example Response

Below is an example response I wrote about the Ultimate Attribution Error and Availability Heuristic. I didn't use any studies. Do better than me! (Update: I decided I should also include an example of a study write-up, so made a comment with one HERE . Please DON'T just give a link and a single sentence!)

 

The first post on the LW Women series involved trying to minimize the inferential gap by sharing anecdotes of what it's like growing up as a "geek girl". When reading these submissions, I was struck by how it might seem like the Fundamental Attribution Bias (aka Correspondence Bias) is at play, but for whole groups. Turns out this is A Thing, and it's called Ultimate Attribution Error.

For example, say a woman mentions that she's bad with computers. From *her* perspective, she sees the situation as the cause of this: "Of course I'm not as good with computers! When I went to learn in a programming class, it was full of guys who stared at me the whole time and I was too uncomfortable to pay attention!" When women see other women with the same responses, they can empathize with the situational causes.

However, when men see women complaining about new technology, they are more likely to attribute the behavior to the women's personalities: "she's not good at computers."

We don't view *lack* of a negative as a factor in our personalities. For example, one is likely to realize that the reason they did badly in school is because their parents had a low socio-economic status and so they lacked opportunities. One *might* realize that one of the reasons they are good in school is because their parents have a high socio-economic status which gives them certain advantages and opportunities. But one is *unlikely* to realize that NOT having low socioeconomic parents is why you did NOT do badly in school.


Images from: PhD Comics and xkcd
