Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Rationalist fiction: a Slice of Life IN HELL

7 Ritalin 25 March 2014 05:02PM

"If you're sent to Hell for that, you wouldn't have liked it in Heaven anyway." 

This phrase inspired in me the idea of a Slice of Life IN HELL story. Basically, the strictest interpretation of the Abrahamic God turns out to be true, and, after Judgment Day, all the sinners (again, by the strictest standards), the pagans, the atheists, the gays, the heretics and so on end up in Hell, which is to say, most of humanity. Rather than a Fire and Brimstone torture chamber, this Hell is very much like earthly life, except it runs on Murphy's Law turned Up To Eleven ("everything that can go wrong, will go wrong"), you can't die permanently, and it goes on forever. It's basically life as a videogame, set to maximum difficulty, with real pain and suffering.

Our stories would focus on actually decent, sympathetic people who are there for things like following the wrong religion, having sex outside missionary man-on-woman, neglecting the daily little rituals, or even just being lazy. They manage to live more-or-less decently because they're extremely cautious, rational, and methodical. Given that reality is out to get them, this is a constant uphill battle, and even the slightest negligence can have a terrible cost. Thankfully, they have all the time in eternity to learn from their mistakes.

This could be an interesting way to showcase rationalist principles, especially those regarding safety and planning, in a perpetual Worst Case Scenario environment. There's ample potential for constant conflict, and for sympathetic characters who, the audience can feel, really didn't deserve their fate. The central concept also seems classically strong to me: defying the status quo and cruel authorities by striving to be as excellent as one can be, even in the face of certain doom.

What do you guys think? There are lots of little details to specify, and there are many things that I believe should be marked as "must NOT be specified". Any help, ideas, or thoughts are very welcome.

Fiction: Written on the Body as love versus reason

-11 PhilGoetz 08 September 2013 06:13AM

In 1992, Jeanette Winterson, one of the hottest young authors of the early 1990s, published Written on the Body. Critics loved it, but none of them seem to have picked up on what I thought the book was about: the question of whether reason in love is good for you.

continue reading »

Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90

9 palladias 02 July 2013 02:13AM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 90. The previous thread has passed 750 comments.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

The first 5 discussion threads are on the main page under the harry_potter tag.  Threads 6 and on (including this one) are in the discussion section using its separate tag system.  Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89

12 Vaniver 30 June 2013 01:22AM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapters 88-89. The previous thread has passed 500 comments.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

The first 5 discussion threads are on the main page under the harry_potter tag.  Threads 6 and on (including this one) are in the discussion section using its separate tag system.  Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

The Classic Literature Workshop

0 Ritalin 16 June 2013 09:54AM

Two posts from EY's Facebook page got me thinking about fiction, and about how to work it better and make it stronger:

It would have been trivial to fix _Revenge of the Sith_'s inadequate motivation of Anakin's dark turn; have Padme already in the hospital slowly dying as her children come to term, not just some nebulous "visions". (Bonus points if you have Yoda lecture Anakin about the inevitability of death, but I'd understand if they didn't go there.) At the end, Anakin doesn't try to choke Padme; he watches the ship with her fly out of his reach, away from his ability to use his unnatural Sith powers to save her. Now Anakin's motives are 320% more sympathetic and the movie makes 170% more sense. If I'd put some serious work in, I'm pretty sure I could've had the movie audience in tears.

I still feel a sense of genuine puzzlement on how such disastrous writing happens in movies and TV shows. Are the viewers who care about this such a tiny percentage that it's not worth trying to sell to them? Are there really so few writers who could read over the script and see in 30 seconds how to fix something like this? (If option 2 is really the problem and people know it's the problem, I'd happily do it for $10,000 a shot.) Is it Graham's Design Paradox - can Hollywood moguls just not tell the difference between competent writers making such an offer, and fakers who'll take the money and run? Are the producers' egos so grotesque that they can't ask a writer for help? Is there some twisted sense of superiority bound up with believing that the audience is too dumb to care about this kind of thing, even though it looks to me like they do? I don't understand how a >$100M movie ends up with flaws that I could fix at the script stage with 30 seconds of advice.

A helpful key to understanding the art and technique of character in storytelling is to consider the folk-psychological notion from Internal Family Systems of people being composed of different 'parts' embodying different drives or goals. A shallow character is then a character with only one 'part'.

A good rule of thumb is that to create a 3D character, that person must contain at least two different 2D characters who come into conflict. Contrary to the first thought that crosses your mind, three-dimensional good people are constructed by combining at least two different good people with two different ideals, not by combining a good person and a bad person. Deep sympathetic characters have two sympathetic parts in conflict, not a sympathetic part in conflict with an unsympathetic part. Deep smart characters are created by combining at least two different people who are geniuses.

E.g. HPMOR!Hermione contains both a sensible young girl who tries to keep herself and her friends out of trouble, and a starry-eyed heroine, neither of whom are stupid. (Actually, since HPMOR!Hermione is also the one character who I created as close to her canon self as I could manage - she didn't *need* upgrading - I should credit this one to J. K. Rowling.) (Admittedly, I didn't actually follow that rule deliberately to construct Methods, I figured it out afterward when everyone was praising the characterization and I was like, "Wait, people are calling me a character author now? What the hell did I just do right?")

If instead you try to construct a genius character by having an emotionally impoverished 'genius' part in conflict with a warm nongenius part... ugh. Cliche. Don't write the first thing that pops into your head from watching Star Trek. This is not how real geniuses work. HPMOR!Harry, the primary protagonist, contains so many different people he has to give them names, and none of them are stupid, nor does any one of them contain his emotions set aside in a neat jar; they contain different mixtures of emotions and ideals. Combining two cliche characters won't be enough to build a deep character. Combining two different realistic people in that character's situation works much better. Two is not a limit, it's a minimum, but everyone involved still has to be recognizably the same person when combined.

Closely related is Orson Scott Card's observation that a conflict between Good and Evil can be interesting, but it's often not half as interesting as a conflict between Good and Good. All standard rules about cliches still apply, and a conflict between good and good which you've previously read about and to which the reader can already guess your correct approved answer, cannot carry the story. A good rule of thumb is that if you have a conflict between good and good which you feel unsure about yourself, or which you can remember feeling unsure about, or you're not sure where exactly to draw the line, you can build a story around it. I consider the most successful moral conflict in HPMOR to be the argument between Harry and Dumbledore in Ch. 77 because it almost perfectly divided the readers on who was in the right *and* about whose side the author was taking. (*This* was done by deliberately following Orson Scott Card's rule, not by accident. Likewise _Three Worlds Collide_, though it was only afterward that I realized how much of the praise for that story, which I hadn't dreamed would be considered literarily meritful by serious SF writers, stemmed from the sheer rarity of stories built around genuinely open moral arguments. Orson Scott Card: "Propaganda only works when the reader feels like you've been absolutely fair to other side", and writing about a moral dilemma where *you're* still trying to figure out the answer is an excellent way to achieve this.)

Character shallowness can be a symptom of moral shallowness if it reflects a conflict between Good and Evil drawn along lines too clear to bring two good parts of a good character into conflict. This is why it would've been hard for Lord of the Rings to contain conflicted characters without becoming an entirely different story, though as Robin Hanson has just remarked, LotR is a Milieu story, not a Character story. Conflicts between evil and evil are even shallower than conflicts between good and evil, which is why what passes for 'maturity' in some literature is so uninteresting. There's nothing to choose there, no decision to await with bated breath, just an author showing off their disillusionment as a claim of sophistication.

 

I was wondering if we could apply this process to older fiction, Great Literature that is historically praised, and excellent by its own time's standards, but which, if published by a modern author, would seem substandard or inappropriate in one way or another.

Given our community's propensity for challenging sacred cows, and the unique tool-set available to us, I am sure we could take some great works of the past and turn them into awesome works of the present.


Of course, it doesn't have to be a laboratory where we rewrite the whole damn thing. Just properly-grounded suggestions on how to improve this or that work would be great.

 

P.S. This post is itself a work in progress, and will update and improve as comments come in. It's been a long time since I last posted on LW, so advice is quite welcome. Our work is never over.

 

EDIT: Well, I like that this thread has turned out so lively, but I've got finals to prepare for and I can't afford to keep participating in the discussion to my satisfaction. I'll be back in July, and apologize in advance for being such a poor OP. That said, cheers!

Orwell and fictional evidence for dictatorship stability

16 Stuart_Armstrong 24 May 2013 12:19PM

"If you want a picture of the future, imagine a boot stamping on a human face—forever."
George Orwell (Eric Arthur Blair), Nineteen Eighty-Four

Orwell's Nineteen Eighty-Four is brilliant, terrifying and useful. It's been at its best fighting against governmental intrusions, and is often quoted by journalists and even judges. Its cultural impact has been immense. And, hey, it's well written.

But that doesn't mean it's accurate as a source of predictions or counterfactuals. Orwell's belief that "British democracy as it existed before 1939 would not survive the war" was wrong. Nineteen Eighty-Four did not predict the future course of communism. There is no evidence that anything like the world he envisaged could (or will) happen. Which isn't the same as saying that it couldn't, but we do require some evidence before accepting Orwell's world as realistic.

Yet from this book, a lot of implicit assumptions have seeped into our consciousness. The most important one (shared with many other dystopian novels) is that dictatorships are stable forms of government. Note the "forever" in the quote above: the society Orwell warned about would never change, never improve, never transform. In several conversations (about future governments, for instance), I've heard, and made, the argument that a dictatorship is inevitable because it's an absorbing state. Democracies can become dictatorships, but dictatorships (barring revolutions) will endure for good. And so the idea is that if revolutions become impossible (because of ubiquitous surveillance, for instance), then we're stuck with Big Brother for life, and for our children's children's children's lives.

But in the context of history, this doesn't seem credible. The most stable forms of government are democracies and monarchies; nothing else endures that long. And leaving revolutions aside, there have been plenty of examples of even quite nasty governments improving themselves. Robespierre was deposed from within his own government, so the Terror, for all its bloodshed, didn't even last a full year. The worst excesses of Stalinism ended with Stalin. Gorbachev voluntarily opened up his regime (to a certain extent). Mao would excoriate the China of today. Britain's leaders in the 19th and 20th centuries gradually opened up the franchise, without ever coming close to being deposed by force of arms. The dictatorships of Latin America have mostly fallen to democracies (though revolutions played a larger role there). Looking over the course of recent history, I see very little evidence that dictatorships have much staying power at all, or that they are incapable of drastic internal change and even improvement.
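The "absorbing state" argument is really a claim about a Markov chain, and it only holds if the per-period probability of leaving dictatorship is exactly zero. A minimal sketch (the transition probabilities are made-up illustrations, not historical estimates):

```python
# A toy two-state Markov chain for the "absorbing state" claim.
# All transition probabilities here are illustrative assumptions.

def long_run_dictatorship_share(p_dem_to_dic, p_dic_to_dem, steps=10_000):
    """Start with everything in 'democracy', iterate the chain, and
    return the long-run probability of being in 'dictatorship'."""
    dem, dic = 1.0, 0.0
    for _ in range(steps):
        dem, dic = (dem * (1 - p_dem_to_dic) + dic * p_dic_to_dem,
                    dem * p_dem_to_dic + dic * (1 - p_dic_to_dem))
    return dic

# Truly absorbing (zero chance of reform): everything ends up there.
print(long_run_dictatorship_share(0.02, 0.0))    # approaches 1.0
# Even a 1%-per-period chance of internal reform keeps the long run
# mixed, at p_in / (p_in + p_out) = 0.02 / 0.03, i.e. about 2/3.
print(long_run_dictatorship_share(0.02, 0.01))   # about 0.667
```

The historical examples above (the Thermidorian reaction, de-Stalinisation, glasnost) are evidence that the exit probability is not zero, which is all it takes to break the "forever".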

Now, caveats abound. The future won't be like the past: maybe an Orwellian dictatorship will become possible with advanced surveillance technologies. Maybe a world government won't see any neighbouring government doing a better job, and so won't feel compelled to match it by improving the lot of its citizens. Maybe the threat of revolution remains necessary, even if revolts don't actually happen.

Still, we should refrain from assuming that dictatorships, whether party or individual, are somehow the default state, and conduct a much more evidence-based analysis of the matter.

Pascal's wager

-10 duckduckMOO 22 April 2013 04:41AM


I started this as a comment on "Being half wrong about Pascal's wager is even worse", but it's really long, so I'm posting it in discussion instead.

 

Also, I illustrate here using negative examples (hell and equivalents) for the sake of followability, and I am a little worried about inciting some paranoia, so I'm reminding you here that every negative example has an equal and opposite positive partner. For example, Pascal's wager has an opposite where accepting sends you to hell, and another where refusing sends you to heaven. I haven't mentioned any positive equivalents or opposites below. Also, all of these possibilities are effectively 0, so don't worry.

 

"For so long as I can remember, I have rejected Pascal's Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge pay-off is almost certainly doomed in practice.  This kind of clever reasoning never pays off in real life..."

 

Pascal's wager shouldn't be in the reference class of real life. It is a unique situation that would never crop up in real life as you're using it. In the world in which Pascal's wager is correct, you would still see people who plan out their lives on a 1 in 10,000 chance of a huge pay-off fail 9,999 times out of 10,000. Also, this doesn't work for actually excluding Pascal's wager: if Pascal's wager starts off excluded from the category "real life", you've already made up your mind, so this cannot quite be the actual order of events.

 

In this case, 9,999 times out of 10,000 you waste your Christianity, and 1 time in 10,000 you avoid an eternity in hell, and hell is, by a vast understatement, more than 10,000 times as bad as worshipping God, even counting the sanity it costs to force a change in belief, the damage it does to your psyche to live as a victim of self-inflicted Stockholm syndrome, and any other non-obvious cost. With these premises, choosing to believe in God produces infinitely better consequences on average.

 

Luckily, the premises are wrong. 1/10,000 is about 1/10,000 too high for the relevant probability, which is:

the probability that the wager or an equivalent (anything whose acceptance would prevent you going to hell is equivalent) is true

MINUS

the probability that its opposite or an equivalent (anything which would send you to hell for accepting is equivalent) is true
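The net-probability arithmetic can be sketched numerically. This is a toy model under loud assumptions: the probabilities are illustrative, and a large finite number stands in for hell's disutility, since a literal infinity makes the subtraction ill-defined:

```python
# Toy expected-value sketch of the argument above. All numbers are
# illustrative assumptions, not claims about real probabilities.

HELL = -1e12           # stand-in disutility of hell (assumed finite)
COST_OF_BELIEF = -1.0  # finite cost of forcing yourself to believe

def net_value_of_belief(p_wager_true, p_opposite_true):
    """Expected value of accepting the wager minus the expected value
    of refusing it. Only the *difference* in probabilities matters."""
    # Accepting: pay the cost; avoid hell if the wager is true, but
    # land in hell if the opposite (anti-Pascal) god is the real one.
    accept = COST_OF_BELIEF + p_opposite_true * HELL
    # Refusing: no cost; hell only if the wager itself is true.
    refuse = p_wager_true * HELL
    return accept - refuse

# Symmetric priors: the wager and its opposite cancel exactly, leaving
# only the finite cost of belief, no matter how huge HELL is.
print(net_value_of_belief(1e-4, 1e-4))   # -1.0
```

With symmetric priors the huge stakes drop out entirely, which is the point of the MINUS above: the wager only has force if the wager is more probable than its opposite.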

 

Equivalence here refers to which behaviours are punished or rewarded. I used hell because it appears in the most popular wager, but this applies to all wagers. To illustrate: if it's true that there is one god, ANTIPASCAL GOD, who sends you to hell for accepting any Pascal's wager, then that's equivalent to any Pascal's wager you hear having a true opposite (no more "or equivalent"s will be typed, but they still apply), because if you accept any Pascal's wager you go to hell. Conversely, if PASCAL GOD is the only god and he sends you to hell unless you accept any Pascal's wager, that's equivalent to any Pascal's wager you hear being true.

 

The real trick of Pascal's wager is the idea that such wagers are generally no more likely than their opposites. For example, there are lots of good, fun reasons to assign the Christian Pascal's wager a lower probability than its opposite, even engaging on a Christian level:

 

Hell is a medieval invention/translation error: the eternal-torture thing isn't even in modern bibles.

The belief-or-hell rule is hella evil, and it gains credibility from the same source (Christians, not the bible) who also claim, as a more fundamental belief, that God is good, which directly contradicts the belief-or-hell rule.

The bible claims that God hates people eating shellfish, taking his name in vain, and jealousy. Apparently taking his name in vain is the only unforgivable sin. So if they're right about the evil stuff, you're probably going to hell anyway.

It makes no sense that god would care enough about your belief and worship to consign people to eternal torture but not enough to show up once in a while.

It makes no sense to reward people for dishonesty.

The evilness really can't be overstated. Eternal torture as a response to a mistake which is at its worst due to stupidity (but actually not even that: just a stacked-deck scenario) outdoes pretty much everyone in terms of evilness: worse than pretty much every fucked-up thing every other god is reputed to have done, put together. The psychopath in the bible doesn't come close to coming close to this.

 

The problem with the general case of religious Pascal's wagers is that people make stuff up (usually unintentionally), and which made-up stuff gains traction has nothing to do with what is true. When both Christianity and Hinduism are taken seriously by millions (as were the Roman/Greek gods, the Viking gods, the Aztec gods, and all sorts of other gods at different times, by large percentages of people), mass religious belief is zero evidence. At most one religion set (e.g. Greek/Roman, Christian/Muslim/Jewish, etc.) is even close to right, so at least the rest are popular independently of truth.

 

The existence of a religion does not elevate the possibility that the god they describe exists above the possibility that the opposite exists because there is no evidence that religion has any accuracy in determining the features of a god, should one exist.

 

You might intuitively lean towards religions having better-than-zero accuracy if a god exists, but remember there's a lot of fictional evidence out there to generalise from. It is a matter of judgement here: there's no logical proof of zero-or-worse accuracy (other than it being the default, and the lack of evidence), but negative accuracy is a possibility, and you've probably played priest classes in video games, or just seen how respected religions are, and been primed to overestimate religion's accuracy in that hypothetical. Also, if there is a god, it has not shown itself publicly in a very long time, or ever; it seems to have a preference for not being revealed. Also, humans tend to be somewhat evil and read into others what they see in themselves, and I assume any high-tier god (one with the power to create and maintain a hell, detect disbelief, preserve immortal souls, and put people in hell) would not be evil. I think without bad peers or parents there's no reason to be evil; people are mostly evil in relation to other people. Being evil or totally unscrupulous has benefits among humans which a god would not get. So I give religions a slight positive accuracy in the scenario where there is a god, but it does not exceed the priors against Pascal's wager (another being that such wagers are pettily human), or perhaps even the god's desire to stay hidden.

 

Even if God itself whispered Pascal's wager in your ear, there is no incentive for it to actually carry out the threat:

 

There is only one iteration.

AND

These threats aren't being made in person by the deity. They are either second hand or independently discovered so:

The deity has no use for making the threat true in order to claim it more believably, as it might if it were an imperfect liar (at a level detectable by humans) making the threats in person.

The deity has total plausible deniability.

Which adds up to all of the benefits of the threat having already been extracted by the time the punishment is due, and no possibility of a reputation hit (which wouldn't matter anyway).

 

So, all else being equal, i.e. unless the god is the god of threats or of Pascal's wagers (whose opposites are equally likely):

 

If God is good (+EV on human happiness, -EV on human sadness, that sort of thing), actually carrying out the threats has negative value.

If God is scarily-doesn't-give-a-shit-neutral to humans, it still has no incentive to actually carry out the threat, which has a non-zero energy cost.

If God gives the tiniest, most infinitesimal shit about humans, its incentive to actually carry out the threat is negative.

 

If God is evil, you're fucked anyway:

The threat gains no power by being true, so the only incentive a god can have for following through is that it values human suffering. If it does, why would it not send you to hell anyway, even if you believed in it? (Remember that the god of commitments is as likely as the god of breaking commitments.)

 

Despite the increased complexity of a human mind, I think the most likely motivational system for a god which would make it honour the wager (not saying it's at all likely, just that all the others are obviously wrong) is that God thinks like a human, and therefore would keep its commitment out of spite or gratitude or some other human reason. Here's why I think that one is wrong. It's generalizing from fictional evidence: humans aren't that homogeneous (and one without peers would be less so), and if a god gains likelihood-to-keep-a-commitment from humanness, it also gains not-designed-to-be-evil-ness that would make it less likely to make evil wagers. It also has no source for spite or gratitude, having no peers. Finally, could you ever feel spite towards a bug? Or gratitude? We are not just ants compared to a god; we're ant-ant-ant-etc-ants.

 

Also, there are reasons that refusing can actually get you in trouble: bullies don't get nicer when their demands are met. It's often not the suffering they're after but the dominance, at which point the suffering becomes an enjoyable illustration of that dominance. As we are ant-ant-etc-ants, this probability is lower, but the fact that we aren't all already in hell suggests that if God is evil, it is not raw suffering that it values. Hostages are often executed even when the ransom is paid. Even if it is evil, it could be any kind of evil: its preferences cannot have been homogenised by memes and consensus.

 

There's also the rather cool possibility that if a human-like god is sending people to hell, maybe it's for lack of understanding. If it wants belief, it can take it more effectively than this. If it wants to hurt you, it will hurt you anyway. Perhaps, peerless, it was never prompted to think through the consequences of making others suffer. Maybe a god, in the absence of peers, just needs someone to explain that it's not nice to let people burn in hell for eternity. I, for one, remember suddenly realising that those other fleshbags hosted people. I figured it out for myself, but if I had grown up alone as the master of the universe, maybe I would have needed someone to explain it to me.

 

[LINK] The power of fiction for moral instruction

11 David_Gerard 24 March 2013 09:19PM

From Medical Daily: Psychologists Discover How People Subconsciously Become Their Favorite Fictional Characters

Psychologists have discovered that while reading a book or story, people are prone to subconsciously adopt their behavior, thoughts, beliefs and internal responses to that of fictional characters as if they were their own.

Experts have dubbed this subconscious phenomenon ‘experience-taking,’ where people actually change their own behaviors and thoughts to match those of a fictional character that they can identify with.

Researchers from Ohio State University conducted a series of six experiments on about 500 participants and, reporting in the Journal of Personality and Social Psychology, found that in the right situations 'experience-taking' may lead to temporary real-world changes in the lives of readers.

They found that stories written in the first-person can temporarily transform the way readers view the world, themselves and other social groups. 

I always wondered at how Christopher Hitchens (who, when he wasn't being a columnist, was a professor of English literature) went on and on about the power of fiction for revealing moral truths. This gives me a better idea of how people could imprint on well-written fiction. More so than, say, logically-reasoned philosophical tracts.

This article is, of course, a popularisation. Anyone have links to the original paper?

Edit: Gwern delivers (PDF): Kaufman, G. F., & Libby, L. K. (2012, March 26). "Changing Beliefs and Behavior Through Experience-Taking." Journal of Personality and Social Psychology. Advance online publication. doi: 10.1037/a0027525

Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87

4 Alsadius 22 December 2012 07:55AM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 87. The previous thread has passed 500 comments.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

The first 5 discussion threads are on the main page under the harry_potter tag.  Threads 6 and on (including this one) are in the discussion section using its separate tag system.  Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86

9 Alsadius 17 December 2012 07:19AM

Edit: New thread posted here

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 86. The previous thread has long passed 500 comments.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

The first 5 discussion threads are on the main page under the harry_potter tag.  Threads 6 and on (including this one) are in the discussion section using its separate tag system.  Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16.

As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

 

NKCDT: The Big Bang Theory

-12 [deleted] 10 November 2012 01:15PM

Hi, Welcome to the first Non-Karmic-Casual-Discussion-Thread.

This is a place for [purpose of thread goes here].

In order to create a casual, non-karmic environment for everyone, we ask that you:

-Do not upvote or downvote any zero-karma posts

-If you see a post with positive karma, downvote it toward zero, even if it's a good post

-If you see a post with negative karma, upvote it toward zero, even if it's a weak post

-Please be polite and respectful to other users

-Have fun!

This is my first attempt at starting a casual conversation on LW where people don't have to worry about winning or losing points, and can just relax and have social fun together.

 

So, Big Bang Theory. That series got me wondering. It seems to be about "geeks", and not the basement-dwelling variety either; they're highly successful and accomplished professionals, each in their own field. One of them has been an astronaut, even. And yet, everything they ever accomplish amounts to absolutely nothing in terms of social recognition or even in terms of personal happiness. And the thing is, it doesn't even get better for their "normal" counterparts, who are just as miserable and petty.

 

Consider, then: how would being rationalists affect the characters on this show? The writing of the show relies a lot on laughing at people rather than with them; would rationalist characters subvert that? And how would that rationalist outlook express itself given their personalities? (After all, notice how amazingly different from each other Yudkowsky, Hanson, and Alicorn are, just to name a few; they emphasize rather different things, and take different approaches to both truth-testing and problem-solving.)

Note: this discussion does not need to be about rationalism. It can be a casual, normal discussion about the series. Relax and enjoy yourselves.

 

But the reason I brought up that series is that its characters are excellent examples of high intelligence hampered by immense irrationality. The apex of this is represented by Dr. Sheldon Cooper, who is, essentially, a complete fundamentalist over every single thing in his life; he applies this attitude to everything, right down to people's favorite flavor of pudding: Raj is "axiomatically wrong" to prefer tapioca, because the best pudding is chocolate. Period. This attitude makes him a far, far worse scientist than he thinks, as he refuses to even consider any criticism of his methods or results. 

 

A My Little Pony fanfic allegedly but not mainly about immortality

7 PhilGoetz 10 September 2012 01:02AM

My Little Pony (generation 4) has 2 immortal characters, who get a lot of sympathy from the bronies.  "How sad!  Poor Celestia and Luna must see everyone they know grow old and die.  How much better to die yourself!"

I tried to write a fanfic saying that death was bad.  But I had to make it a story, and it ended up having other themes.  I don't know whether I like it or not, but it was very popular (now approaching 7000 views in 3 days on fimfiction).

I was pretty sure the message "death is bad" was still in there, because Celestia says things like "Death is bad" and "I'm afraid of dying."  So imagine my surprise when comment after comment said, "Yes, immortality is such a curse!"

continue reading »

fimfiction.net LessWrong group

5 PhilGoetz 09 September 2012 04:10PM

There is now a LessWrong group on fimfiction, to let LWers on fimfiction find each other and collect stories that might be of interest to them.  (That?  Which?  Grammar Nazis, help!)

Dragon Ball's Hyperbolic Time Chamber

35 gwern 02 September 2012 11:49PM

A time dilation tool from an anime is discussed for its practical use on Earth; there seem to be surprisingly few uses, and none that will change the world, due to the severe penalties humans would incur while using it, and basic constraints like Amdahl's law limit the scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except most of the time dilation disadvantages do not apply or can be ameliorated, and hence any speedups could be quite effectively exploited. I suggest that skeptics of the idea that speedups give advantages are implicitly working off the crippled time dilation tool and are not making allowance for the disanalogies.
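Amdahl's law, mentioned in the summary above, bounds how much an accelerated sub-task can speed up a whole workflow. A minimal sketch of the formula, with made-up numbers rather than figures from gwern's essay:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the total work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical: even an effectively infinite-speed time chamber barely doubles
# throughput if only half the work (say, solo theorizing) can happen inside it.
print(amdahl_speedup(0.5, 1e9))  # ~2.0
```

The takeaway matches the summary's claim: unless nearly all of a project can go inside the chamber, the serial remainder outside it dominates.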

Master version on gwern.net

[Link] Short story by Yvain

31 CronoDAS 31 August 2012 04:33AM

Yvain isn't a big enough self-promoter to link to this, but I liked it a lot and I think you will too.

"The Last Temptation of Christ"

The Fiction Genome Project

12 [deleted] 29 June 2012 11:19AM

The Music Genome Project is what powers Pandora. According to Wikipedia:

 

The Music Genome Project was first conceived by Will Glaser and Tim Westergren in late 1999. In January 2000, they joined forces with Jon Kraft to found Pandora Media to bring their idea to market.[1] The Music Genome Project was an effort to "capture the essence of music at the fundamental level" using almost 400 attributes to describe songs and a complex mathematical algorithm to organize them. Under the direction of Nolan Gasser, the musical structure and implementation of the Music Genome Project, made up of 5 Genomes (Pop/Rock, Hip-Hop/Electronica, Jazz, World Music, and Classical), was advanced and codified.

 

A given song is represented by a vector (a list of attributes) containing approximately 400 "genes" (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400. Other genres of music, such as world and classical music, have 300–500 genes. The system depends on a sufficient number of genes to render useful results. Each gene is assigned a number between 1 and 5, in half-integer increments.[2]

 

Given the vector of one or more songs, a list of other similar songs is constructed using a distance function. Each song is analyzed by a musician in a process that takes 20 to 30 minutes per song.[3] Ten percent of songs are analyzed by more than one technician to ensure conformity with the in-house standards and statistical reliability. The technology is currently used by Pandora to play music for Internet users based on their preferences. Because of licensing restrictions, Pandora is available only to users whose location is reported to be in the USA by Pandora's geolocation software.[4]
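The distance-function lookup described in the excerpt can be sketched as a simple nearest-neighbor search over attribute vectors. This is a toy illustration with invented songs and gene values, not Pandora's actual algorithm:

```python
import math

# Toy "genomes": each song is a vector of gene values (1-5, in half-step increments).
songs = {
    "Song A": [3.0, 4.5, 2.0, 1.5],
    "Song B": [3.0, 4.0, 2.5, 1.5],
    "Song C": [1.0, 1.5, 5.0, 4.0],
}

def distance(u, v):
    """Euclidean distance between two gene vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def most_similar(query, catalog):
    """Rank every other song in the catalog by distance to the query song's vector."""
    q = catalog[query]
    others = [(name, distance(q, vec)) for name, vec in catalog.items() if name != query]
    return sorted(others, key=lambda pair: pair[1])

print(most_similar("Song A", songs))  # Song B ranks closer than Song C
```

A "Fiction Genome" would work the same way once trope or plot attributes were encoded as numbers; the hard part, as with Pandora's 20-30 minutes of human analysis per song, is producing the vectors in the first place.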

Eminent lesswronger, strategist, and blogger Sebastian Marshall wonders:

 

Personally, I was thinking of doing a sort of “DNA analysis” of successful writing. Have you heard of the Music Genome Project? It powers Pandora.com.

 

So I was thinking, you could probably do something like that for writing, and then try to craft a written work with elements known to appeal to people. For instance, if you wished to write a best selling detective novel, you might do an analysis of when the antagonist(s) appear in the plot for the first time. You might find that 15% of bestsellers open with the primary antagonist committing their crime, 10% have the antagonist mixed in quickly into the plot, and 75% keep the primary antagonist a vague and shadowy figure until shortly before the climax.

 

I don’t know if the pattern fits that – I don’t read many detective novels – but it would be a bit of a surprise if it did. You might think, well, hey, I better either introduce the antagonist right away having them commit their crime, or keep him shadowy for a while.

Or, to use an easier example – perhaps you could wholesale adopt the use of engineering checklists into your chosen discipline? It seems to me like lots of fields don’t use checklists that could benefit tremendously from them. I run this through my mind again and again – what kind of checklist could be built here? I first came across the concept of checklists being adopted in surgery from engineering, and then having surgical accidents and mistakes go way down.

 

Some people at TV Tropes came across that article, and thought that their wiki's database might be a good starting point to make this project a reality. I came here to look for the savvy, intelligence, and level of technical expertise in all things AI and NIT that I've come to expect of this site's user-base, hoping that some of you might be interested in having a look at the discussion, and, perhaps, would feel like joining in, or at least sharing some good advice.

Thank you. (Also, should I make this post "Discussion" or "Top Level"?)

"Where Am I?", by Daniel Dennett

8 [deleted] 04 June 2012 09:45AM

"Where Am I?" is a short story by Daniel Dennett from his book Brainstorms: Philosophical Essays on Mind and Psychology. Some of you might already be familiar with it.

The story is a humorous, semi-science-fiction one, in which Dennett gets a job offer from the Pentagon that entails moving his brain into a vat without actually moving his point of view. Later on it brings up questions about uploading and what it would mean in terms of diverging perspectives and so on. Aside from being a joy to read, it offers solutions to a few hurdles about the nature of consciousness and personal identity.

Suppose, I argued to myself, I were now to fly to California, rob a bank, and be apprehended.  In which state would I be tried:  in California, where the robbery took place, or in Texas, where the brains of the outfit were located?  Would I be a California felon with an out-of-state brain, or a Texas felon remotely controlling an accomplice of sorts in California? It seemed possible that I might beat such a rap just on the undecidability of that jurisdictional question, though perhaps it would be deemed an interstate, and hence Federal, offense.

 

[Book Suggestions] Summer Reading for Younglings.

8 Karmakaiser 12 May 2012 04:57PM

I bought my niece a Kindle, which just arrived, and I'm about to load it up with books before giving it to her tomorrow for her birthday. I've decided to be a sneaky uncle and include good books that can teach her to think better, or at least to consider science cool and interesting. She is currently in the 4th grade, with 5th coming after the summer.

She reads basically at her own grade level, so while I'm open to stuffing the Kindle with books to be read when she's ready, I'd like to focus on giving her books she can read now. Ender's Game will most likely be on there. Game of Thrones will not.

What books would you give a youngling? Her interests currently trend toward the young mystery section, Hardy Boys and the like, but in my experience she is very open to trying new books with particular interest in YA fantasy but not much interest in Sci Fi (if I'm doing any other optimizing this year, I'll try to change her opinion on Sci Fi).

Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85

9 FAWS 18 April 2012 02:30AM

The next discussion thread is here.

 

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 85. The previous thread has long passed 500 comments. Comment in the 15th thread until you read chapter 85.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15.

As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Mental Clarity; or How to Read Reality Accurately

-10 Hicquodiam 12 April 2012 06:18AM

 

Hey all - I typed this out to help me understand, well... how to understand things:

 

Mental clarity is the ability to read reality accurately. 

 

I don't mean being able to look at the complete objective picture of an event, as you don't have any direct access to that. I'm talking about the ability to read the data presented by your subjective experience: thoughts, sights, sounds, etc. Once you get a clear picture of what that data is, you can then go on to use it to build or falsify your ideas about the world.


This post will focus on the "getting a clear picture" part.


I use the word "read" because it's no different than reading from a book, or from these words. When you read a book, you are actually curious as to what the words are saying. You wouldn't read anything into it that's not there, which would be counterproductive to your understanding.

 

You just look at the words plainly, and through this your mind automatically recognizes and presents the patterns: the meaning of the sentences, their relation to the topic, the visual imagery associated with them, all of that. If you want to know a truth about reality, just look at it and read what's there.


Want to know what the weather's like? Look outside - read what's going on.


Want to know if the Earth revolves around the Sun, or vice versa? Look at the movement of the planets, read what they're doing, see which theory fits better.


Want to check if your beliefs about the world are correct? Take one, read the reality that the belief tries to correspond to, and see how well they compare.


This is the root of all science and all epiphanies.


But if it's so simple and obvious, why am I talking about it?


It's not something that we as a species often do. For trivial matters, sure, for science too, but not for our strongly-held opinions. Not for the beliefs and positions that shape our self-image, make us feel good/comfortable, or get us approval. Not for our political opinions, religious ideas, moral judgements, and little white lies.


If you were utterly convinced that your wife was faithful, and more so if you liked to think of her in that way, and your friend came along and said she was cheating on you, you'd be reluctant to read reality and check whether that's true. Doing so would challenge your comfort and throw you into an unknown world with some potentially massive changes. It would be much more comforting to rationalize why she might still be faithful than to take one easy look at the true information. It would also be more damaging.


Delusion is reading into reality things which aren't there. Telling yourself that everything's fine when it obviously isn't, for example. It's the equivalent of looking at a book about vampires and jumping to the conclusion that it's about wizards.


Sounds insane? You do it all the time. You'll catch yourself if you're willing to read the book of your own thoughts: flowing through your head, in plain view, is a whole mess of opinions and ideas about people, places, and positions you've never even encountered. Crikey!


That mess is incredibly dangerous to have. Being a host to unchecked or false beliefs about the world is like having a faulty map of a terrain: you're bound to get lost or fall off a cliff. Reading the terrain and re-drawing the map accordingly is the only way to accurately know where you're going. Having an accurate map is the only way to achieve your goals.



So you want to develop mental clarity? Be less confused, or more successful? Have a better understanding of the world, the structure of reality, or the accuracy of your ideas? 


Just practice the accurate reading of what's going on. Surrender the content of your beliefs to the data gathered by your reading of reality. It's that simple.

 

It can also be scary, especially when it comes to challenging your "personal" beliefs. It's well worth the fear, however, as a life built on truth won't crumble like one built on fiction.

 

Truth doesn't crumble.

 

Stay true.



Further reading:


Stepvhen from Burning true on truth vs. fantasy.


Kevin from Truth Strike on why this skill is important to develop.

 

Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

3 FAWS 11 April 2012 03:39AM

The next discussion thread is here.

 

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 84. The previous thread has passed 500 comments. Comment in the 14th thread until you read chapter 84.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.

As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Harry Potter and the Methods of Rationality predictions

6 gwern 09 April 2012 09:49PM

The recent spate of updates has reminded me that while each chapter is enjoyable, the approaching end of MoR, as awesome as it no doubt will be, also means the end of our ability to learn from predicting the truth of the MoR-verse and its future.

With that in mind, I have compiled a page of predictions on sundry topics, much like my other page on predictions for Neon Genesis Evangelion; I encourage people to suggest plausible predictions that I've omitted, register their probabilities on PredictionBook.com, and come up with their own predictions. Then we can all look back when MoR finishes and reflect on what we (or Eliezer) did poorly or well.  

The page is currently up to >182 predictions.

Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82

7 FAWS 04 April 2012 02:53AM

The new discussion thread (part 15) is here


This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 82. The previous thread passed 1000 comments as of the time of this writing, and so has long passed 500. Comment in the 13th thread until you read chapter 82.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.

As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Harry Potter and the Methods of Rationality discussion thread, part 12

5 Xachariah 25 March 2012 11:01AM

The new thread, discussion 13, is here.

 

This is a new thread to discuss Eliezer Yudkowsky's Harry Potter and the Methods of Rationality and anything related to it. With three chapters posted recently, the previous thread has very quickly reached 1000 comments. The latest chapter as of 25th March 2012 is Ch. 80.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)


The first 5 discussion threads are on the main page under the harry_potter tag.  Threads 6 and on (including this one) are in the discussion section using its separate tag system.  Also: one, two, three, four, five, six, seven, eight, nine, ten, eleven.

As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning:  this thread is full of spoilers.  With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13.  More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.

 

Harry Potter and the Methods of Rationality discussion thread, part 11

6 Oscar_Cunningham 17 March 2012 09:41AM

EDIT: New discussion thread here.

 

This is a new thread to discuss Eliezer Yudkowsky's Harry Potter and the Methods of Rationality and anything related to it. With two chapters posted recently, the previous thread has very quickly reached 500 comments. The latest chapter as of 17th March 2012 is Ch. 79.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)


The first 5 discussion threads are on the main page under the harry_potter tag.  Threads 6 and on (including this one) are in the discussion section using its separate tag system.  Also: one, two, three, four, five, six, seven, eight, nine, ten.

As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning:  this thread is full of spoilers.  With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13.  More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.

Harry Potter and the Methods of Rationality discussion thread, part 10

11 Oscar_Cunningham 07 March 2012 04:46PM

(The HPMOR discussion thread after this one is here.)

This is a new thread to discuss Eliezer Yudkowsky's Harry Potter and the Methods of Rationality and anything related to it. There haven't been any chapters recently, but it looks like there are a bunch in the pipeline and the old thread is nearing 700 comments. The latest chapter as of 7th March 2012 is Ch. 77.

There is now a site dedicated to the story at hpmor.com, which is the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes.


The first 5 discussion threads are on the main page under the harry_potter tag.  Threads 6 and on (including this one) are in the discussion section using its separate tag system.  Also: one, two, three, four, five, six, seven, eight, nine.

As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning:  this thread is full of spoilers.  With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13.  More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.

Writing about Singularity: needing help with references and bibliography

4 [deleted] 05 March 2012 01:27AM

 

It was Yudkowsky's Fun Theory sequence that inspired me to undertake the work of writing a novel on a singularitarian society... however, there are gaps I need to fill, and I need all the help I can get. It's mostly book recommendations that I'm asking for.

 

One of the things I'd like to tackle in it is the interaction between the modern, geeky Singularitarianisms and Marxism, which I hold to be somewhat prototypical in that sense, as well as other utopianisms, and to contrast them with more down-to-earth ideologies and attitudes by examining the seriously dangerous bumps of the technological transition between "baseline" and "singularity". But I need to do a lot of research before I'm able to write anything good: if I'm not going to have any original ideas, at least I'd like to serve my readers a collection of well-researched, solid ones.

 

So I'd like to have everything that is worth reading about the Singularity, specifically the Revolution it entails (in one way or another) and the social aftermath. I'm particularly interested in the consequences of the lag in the spread of the technology from the wealthy to the baselines, and the potential for baseline oppression and other continuations of current social imbalances, as well as suboptimal distribution of wealth. After all, according to many authors, we've had the means to end war, poverty, famine, and most infectious diseases since the sixties, and it's just our irrational methods of wealth distribution that have stopped us. That is, supposing the commonly alleged ideal of total lifespan and material welfare maximization for all humanity is what actually drives the way things are done. But even under other premises and axioms, there's much that can be improved and isn't, thanks to basic human irrationality, which is what we combat here.

 

Also, yes, this post makes my political leanings fairly clear, but I'm open to alternative viewpoints and actively seek them. I also don't intend to write any propaganda, as such. Just to examine ideas, and scenarios, for the sake of writing a compelling story, with wide audience appeal. The idea is to raise awareness of the Singularity as something rather imminent ("Summer's Coming"), and cause (or at least help prepare) normal people to question the wonders and dangers thereof, rationally.

 

It's a frighteningly ambitious, long-term challenge; I am terribly aware of that. And the first thing I'll need to read is a style book, to correct my horrendous grasp of standard acceptable writing (and not to seem arrogant by doing anything else), so please feel free to recommend as many books and blog articles and other materials as you like. I'll take my time going through it all.

 

Utopia in Manna

9 Konkvistador 25 February 2012 09:53PM

Manna is the title of a science fiction story that describes a near-future transition to an automated society where humans are uneconomical. In the later chapters it describes in some detail a post-scarcity society. There are several problems with it, however; the greatest by far is that the author seems to have assumed that "want" and "envy" are primarily tied to material needs. This is simply not true.

I would love to live in a society with material equality at a sufficiently high standard; I'd hate, however, to live in a society with enforced social equality, simply because that would override my preferences and my freedom to interact or not interact with whomever I wish.

Also, since things like the willpower to work out (to stay in top athletic condition, even!) or the lack of resources to fulfil even basic plans are made irrelevant, things like genetic inequality, how comfortable you are messing with your own hardware to upgrade your capabilities, and how much time you dedicate to self-improvement would matter more than ever.

I predict social inequality would be pretty high in this society, and mostly involuntary. Even for a decision about something like how much of your time you devote to self-improvement, which you could presumably change later, there wouldn't be a good way to catch up with anyone (think opportunity cost and compound interest), unless technological progress hit diminishing returns and slowed down. Social inequality would, I'd guess, still be more limited than pure financial inequality, because of things like Dunbar's number. There would still be tragedy (that may be a feature rather than a bug of utopia). I imagine people would be comfortable with gods above and beasts below them who don't really register in the "my social status compared to others" part of the brain, but even in the narrow band where you do care, inequality would grow rapidly. Eventually you might find yourself alone in your specific spot.

To get back to my previous point about probable (to me) unacceptable limitations on freedom: it may seem silly that a society with material equality would legislate intrusive and micromanaging rules to force social equality, but the hunter-gatherer instincts in us are strong. We demand equality. We enjoy bringing about "equality". We look good demanding equality. Once material needs are met, this powerful urge will still be there and will set off signalling races, along with ever-new ways to evade the edicts such races produce (because our desire to be personally unequal, to be superior to someone, to distinguish and discriminate in our personal lives, is also strong in us). This would play out in interesting and potentially dystopian ways.

I'm pretty sure the vast majority of people in the Australia Project would end up wireheading. Why bother going to the Moon when you can have a perfect virtual-reality replica of it; why bother with the status of building a real fusion reactor when you can play a gamified, simplified version and simulate the same social reward; why bother with a real relationship, etc. Dedicating resources to something like a real-life space elevator simply wouldn't cross their minds. People, I think, systematically overestimate how much something being "real" matters to them. "Better and better" also means better and better virtual super-stimuli. Among the tiny remaining faction of "peas" (those choosing to spend most of their time in physical existence), very few would choose to have children, but those few would dominate the future. Also, I see no reason why the US couldn't buy technology from the Australia Project to use on its own welfare-dependent citizens: instead of the cheap mega-shelters, just hook them up to virtual reality, with no choice in the matter. That would make a tiny fraction of them deeply unhappy (if they knew about it).

I maintain that the human brain's default response to unlimited control over its own sensory input, plus reasonable security of continued existence, is solipsism. And the default for a society of human brains with such technology is first social fragmentation, then value fragmentation, and eventually a return to living under the yoke of essentially Darwinian processes. Speaking of which, the society of the US as described in the story would probably outpace Australia, since it would have machines doing its research and development.

It would take some time for the value this creates to run out, though. Much as Robin Hanson finds a future with a utopian "dream time" followed by trillions of slaves glorious, I still find a few subjective millennia of a golden age followed by non-human and inhuman minds to be worth it.

It is not as if we have to choose between infinity and something finite; the universe seems to have an expiration date as it is. A few thousand or million years isn't something fleas on an insignificant speck should sneer at.

HPMOR: What could've been done better?

5 Anubhav 28 January 2012 01:31PM

Warning: As per the official spoiler policy, the following discussion may contain unmarked spoilers for up to the current chapter of the Methods of Rationality. Proceed at your own risk.

Assume HPMOR was written by a super-intelligence implementing the CEV of Eliezer Yudkowsky and assorted literary critics. What would it have written differently?

... is what I want to know, but that's hard to answer. So here's an easier question:

In what ways do you think Eliezer's characterisations/world-building/plot-fu are sub-optimal? <optional> How could they be made less sub-optimal? </optional>

(My own ideas are in the comments.)

To put it another way... Assume a group of intrepid fanfic writers in the late 2020s are planning to write a reboot. What parts of Eliezer's story do you think they should tweak?

And just to make sure we're all on the same page: Eliezer isn't going to go back and change anything he's written to bring it in line with anything suggested here. This is purely an "Ah, just consider the possibilities!" thread.

... which means that we can safely suggest drastic rewrites encompassing 30 chapters or something. Or change fundamental facts about the world.

(Exercise due restraint on this one. Getting rid of the Ministry/the Noble Houses/blood purism would probably turn the story into something completely different; this isn't what we're trying to do here.)

With that, let the nit-picking begin!! 

Fiction: LW-inspired scenelet

22 DataPacRat 27 January 2012 06:43AM

A short science-fictional scene I just wrote, after reading about some real and actual scientific research. I'd love to turn this, or something like it, into an actual scene in Dee's life story, but I can't think of a good enough story to insert it into, so I present it on its own for your amusement, even if it does mean I'm likely to lose more karma than I gained from my last post...

Not your grandfather's science fiction.
A scene from Dee's life



We join our heroine, Dee, and her plucky-yet-sarcastic sidekick holed up in a hotel room.

"Well, this is another fine mess you've gotten us into. Got any great ideas for getting us out of it?"

"No - but I know how to have one. Since I lost my visor and vest, including my nootropics and transcranial stimulator... I'm going to need a syringe, sixty millilitres of icewater, a barf bag, and a video camera."

"I don't know what you're planning, but I'm not sure I want to have any part in it."

"Start MacGyvering as much as we can now from the mini-bar; I'll explain as we go. Without a camera, and with our time pressure, I'm going to need your help to get this to work, and you need to understand some of this or else you'll be really confused later. Physically, all I'm going to do is squirt water into my left ear."

"... and this will help us, how exactly?"

"By shocking my vestibular system, which causes all sorts of interesting effects. One of the unfortunate ones is that when done right, it induces immediate vomiting."

"Ew."

"Yes, well, that's just a side-effect. The main point is... well, really complicated. In layman's terms, there's a part of the brain that's responsible for triggering the creation of profound, revolutionary ideas, and another part that makes you create rationalizations to explain away just about anything, and usually, these two parts of the brain kind of balance each other out. This vestibular trick happens to hyper-stimulate the revolutionary part for about ten minutes, allowing me to realize things I normally wouldn't, and to see them as being obvious that I don't know why I didn't think of them before."

"Well... okay, even if that's so, why haven't I seen you do it before?"

"For one, I don't want to risk some sort of long-term adaptation which might reduce its effect. But there's more complications to it than that."

"Of course there are."

"The thing is, after it's been hyper-stimulated, the revolutionary part gets tuckered out, and then the rationalizing part effectively kicks into overdrive - and I pretty much forget everything I thought of during those ten minutes, and even crazier-sounding, I won't be able to accept the idea that I said any of what I said. I literally won't believe that those ideas came from my mouth."

"'Crazier-sounding' sounds right."

"Which is why I'm going to need you to remember whatever it is I come up with - and then tell me what the best ideas were, but not tell me that I came up with them. At least until my brain's gotten back into balance again. I'm now precommitting myself to do whatever it is you tell me to do - even if I don't understand it, even if I think it's a bad or stupid or useless idea. Do you think you can handle that level of responsibility?"

"I... think so. And this really works? How the cuss did you ever come up with this, anyway?"

"I once noticed that when I was in a certain state of mind, my head kept twitching to the left every time I thought of something, showing there was a link between idea-generation and the vestibular system. Later I read up about some experiments with people with anosognosia, people who aren't aware of being paralyzed or blind... are you done with that straw yet?"

"As much as I'll ever be, I guess."

"Alright. Hand me the bucket, and squirt the water in my ear - my left ear. It only works in the left ear. Except for left-handed people."

"I'm beginning to wonder if it's just the idea that's crazy."

"We'll soon find out. Remember, being the only right person in the room doesn't mean you feel like the cool guy wearing black, it feels like you're the only one wearing a clown suit. I did that once, just to try. Now, here we <hralph!>"


Non-theist cinema?

8 Jay_Schweikert 08 January 2012 07:54AM

There isn't much in the way of explicitly atheist cinema* -- that is, movies that contain the explicit or implicit message that religion is nothing but superstition, and where this point itself is a central part of the story. The only popular films that jump to mind here are The Invention of Lying, and to a lesser extent The Man from Earth (overall a phenomenal movie, but far less well known). Sure, there are lots of popular movies that make fun of organized religion, or what some people might call religious "fanaticism" (e.g., Dogma, Saved, The Life of Brian, Jesus Camp). But pretty much all of these come away with the message that it's fine to be "spiritual" or whatever, so long as you don't hurt other people, and don't get too crazy about what you believe. As much as some "conservative" pundits love to accuse Hollywood "liberals" of being godless, there sure aren't many movies where godlessness is really taken seriously.

And that's unfortunate, in my view, as movies are probably the most prevalent and influential art form for the general public, and because many people will form their views on abstract concepts based on the percepts that movies provide (related to the issue of generalizing from fictional evidence). One need only glance over the examples on the tvtropes page "Hollywood Atheist" to see that movies and television aren't exactly putting the best foot forward for our kind.

But perhaps there's a bit more hope in the way of non-theist cinema, as opposed to overt atheist cinema. Of course, any story without gods is a non-theist story, and there are plenty of movies that don't touch on gods or religion at all. But what I'm talking about are movies where one would normally expect to find religion, but where no religion is to be found -- in other words, movies that seem to be depicting the alternate world where humanity never fell prey to this particular superstition, and where the concepts of god and religion simply don't exist.

The movie that inspired this particular thought was 50/50, the recent comedy-drama where Joseph Gordon-Levitt plays a man dealing with potentially fatal cancer. It's a great movie, but what struck me afterwards is how completely absent any mention of god, religion, the afterlife, etc. was in a movie about a man, along with his friends and family, potentially facing his own death. There are lots of characters, lots of conflicts, lots of different perspectives on what he's going through, but nothing at all from anyone amounting to a "spiritual" response to the situation (at least that I recall).

And it got me thinking: what other sorts of issues are there where we would normally expect religion to pop up, such that a story without it would be decidedly non-theist, as opposed to incidentally non-theist? And are there other major movies that you think tell such a story? I ask both because I'm always eager to hear about new movies I might enjoy (or old movies I might appreciate more), and because I think this sort of non-theist cinema might be a good bridge to people who would instinctively rebel against anything openly atheist. In other words, show people that a "godless" world really isn't all that crazy, and that people get by just fine and find ways to face conflicts. Anyway, just thought I'd poll the membership and see what people think about this idea. Looking forward to the responses!

*I'm well aware that there's quite a bit of atheist and non-theist art in other media -- sf literature most prominently. But I'm focusing on movies (and perhaps, to a lesser extent, television) because those are the main forms of "public art" in our culture, and the media most likely to influence how the public at large views these concepts.

HPMoR.com

18 lukeprog 04 January 2012 11:15AM

Josh's mirror of Harry Potter and the Methods of Rationality has been redesigned by Lightwave (who also did IntelligenceExplosion.com, Friendly-AI.com, and lukeprog.com), and it is now located at a simpler URL: HPMoR.com. Thanks also to Louie who put together this "facelift" project.

Scooby Doo and Secular Humanism [link]

25 Dreaded_Anomaly 03 December 2011 04:58AM

A great column by Chris Sims at the Comics Alliance.

Excerpt:

Because that's the thing about Scooby-Doo: The bad guys in every episode aren't monsters, they're liars.

I can't imagine how scandalized those critics who were relieved to have something that was mild enough to not excite their kids would've been if they'd stopped for a second and realized what was actually going on. The very first rule of Scooby-Doo, the single premise that sits at the heart of their adventures, is that the world is full of grown-ups who lie to kids, and that it's up to those kids to figure out what those lies are and call them on it, even if there are other adults who believe those lies with every fiber of their being. And the way that you win isn't through supernatural powers, or even through fighting. The way that you win is by doing the most dangerous thing that any person being lied to by someone in power can do: You think.

Tim Minchin fans may recall him mentioning Scooby Doo in a similar light in his beat poem Storm, and it's been brought up on Less Wrong before.

When viewed in this light, Scooby Doo really is like an elementary version of Methods of Rationality.

[FICTION] Hamlet and the Philosopher's Stone

25 HonoreDB 25 October 2011 10:31PM

Cryonics on Castle [Spoilers]

24 wedrifid 04 October 2011 09:46AM

Check out the latest episode of Castle ("Headcase") to see cryonics covered in mainstream fiction in a not-entirely-terrible manner. The details are not exactly accurate, but probably no more inaccurate than similar fictionalised coverage of most other industries. In fact, there is one obvious implementation difference in how the company on Castle operates, and it is clearly how things ought to be:

Amulets of Immortality

It is not uncommon for cryonics enthusiasts to make 'immortality' jokes about their Alcor necklaces, but the equivalent on the show takes the obvious practical next step: the patients have heart-rate monitors with GPS signalers that alert the cryonics company as soon as the patient flatlines. This is just obviously the way things should be, and it is regrettable that the market is not yet broad enough for 'obvious' to have been translated into common practice.

Other things to watch out for:

  • Predictable attempts by the cops to take the already preserved body so they can collect more evidence.
  • A somewhat insightful question of whether the cryonics company should hand over the corpsicle without taking things to court, because fighting here would risk legal precedent being set by a case with unusual factors that may make them lose. It may be better to lose one patient so that they can force the fight to happen on a stronger case.
  • Acknowledgement that only the head is required, which allows a compromise of handing over the body minus the head.
  • Smug superiority of cops trying to take the cryonics patient against the will of the patient himself, his family and the custodians. This is different from cops just claiming territory, doing their job, and to hell with everyone else; it is cops conveying that it is morally virtuous to take the corpse, and that the wife would understand that autopsying her corpsicle husband's head was in her and his best interest if she weren't so stupid. (Which seems like a realistic attitude.)
  • Costar and lead detective Beckett actually attempts to murder a cryonics patient (to whatever extent that murder applies to corpsicle desiccation). For my part this gave me the chance to explore somewhat more tangibly my ethical intuitions over what types of responses would be appropriate. My conclusion was that if someone had shot Beckett in order to protect the corpsicle I would have been indifferent. Not glad that she was killed but not proud of the person killing her either. I suspect (but cannot test) that most of the pain and frustration of losing a character that I cared about would be averted as well. Curious.
  • Brain destroying disease vs cryonicist standoff!
  • Beckett redeems herself on the 'not being an ass to cryonicists' front by being completely non-judgemental of the woman for committing "involuntary euthanasia" of her tumor-infested husband. (Almost to the point of being inconsistent with her earlier behavior, but I'm not complaining.)
  • A clever "Romeo and Juliet" conclusion to wrap up the case without Beckett being forced to put the wife in jail for an act that has some fairly reasonable consequentialist upsides. Played out to be about as close to a happy ending as you could get.

Overall a positive portrayal of cryonics or at least one I am happy with. It doesn't convey cryonics as normal but even so it is a step less weird than I would usually expect. I'd call it good publicity.

Yehweh and the Methods of Rationality

-20 DanielLC 28 September 2011 03:06AM

[Link] 20 2020 Pennies (a webcomic chapter about many worlds and decision theory... sort of)

6 ArisKatsaris 11 September 2011 03:35PM

The comic in question (Penny & Aggie, by T Campbell) is as a whole a simple teenage comedy/drama. But the particular storyline I'd like to discuss here takes a much more SF turn than usual, and it's (marginally, if we stretch the concepts a bit) related to issues relevant to LessWrong: decision theory, CEV, perhaps even simulations and/or many-worlds.

The needed context is that on the immediately previous page, one of the comic's two protagonists (Penny) is asked by her biker boyfriend Rich to follow him on the road, effectively dropping out of high school.

The chapter itself is about how 20 different future Pennies from the year 2020 (20 that represent trillions) convene to decide which choice she should take.


 


Thoughts and SPOILERS for the story to follow after the space, so you may want to read it before proceeding.

 

 

 

 

 

 

 

 

 

 

  • The first thing one may notice is that this is a *weird* story that doesn't follow any of the most standard tropes -- it's not truly many worlds or parallel-universe because some realities fade when Penny's eventual decision drops them out of probability space; it's not a "What-if-I'd taken-a-different-path-in-life" because it's not meant to discuss a past decision in the life of the character but a current one; it's not time-travel. It's all very conceptual and abstract.

  • The mode of existence for these alternate future Pennies is left vague and mysterious. They don't have a completely imaginary existence in the mind of Penny, because after the decision is taken some of them persist and discuss it with other people who share that mode. They don't have a completely physical existence, because some of them can fade. And most bizarre of all, most of them voted against their very existence: 7 of the 10 Pennies that followed Rich voted not to have followed him, while 4 of the Pennies that didn't follow him voted, or seemed to want to vote, in favor. That makes 11 of 20 Pennies voting against their own existence.

    Perhaps the best way to handle this whole bizarreness is as a visualization of the FAI failure mode in which the AI's models of people are also people: the AI can only anticipate what people would want to do, or would regret doing, by having its simulations of them actively decide to do it and then regret it. But for the purposes of the convention, the AI disabled all self-preservation circuitry, so that these models can vote with full honesty for the decision they believe best.

     

  • The twist (sort of) in the story is that the collective will of all these extrapolated future versions votes against joining Rich. But present Penny vetoes their decision and follows Rich anyway -- because she doesn't really care what the probabilities say, and she doesn't care what she-herself will think in the future. She's not a person-in-the-making, a person to be extrapolated; she's a person now:


    To put it in LessWrong terms: "Up yours, Extrapolated Volition".

    Most intriguingly yet, at least one of those extrapolated versions (Biker Penny who voted against joining Rich and bitterly regretted joining a "clique for losers") actually seems to admire and love how Teenage Penny is telling her to go to hell: What if your extrapolated volition is a volition that doesn't wish you to consider the rulings of your extrapolated volition?

    Also (an even more complicated scenario) what if your current volition wishes you to follow your extrapolated volition, but your extrapolated volition would want you to follow a different decision path (don't consider the future)? What ways are there outside of this paradox? What decision do you take, if you are changed by that decision into a person that will regret it either way for different reasons?

     

  • There is no truly dystopian universe in the subset of the 10 "didn't follow Rich" Pennies -- unlike the future of Holocaust Survivor Penny (and possibly Borg Penny too). It's rather unlikely that going with Rich would cause a global dystopia -- it's much more likely that didn't-follow-Rich Pennies simply didn't survive a dystopia with enough probability density to be represented. Doesn't that make it a *biased* sample, improper to draw conclusions from? On the other hand, there are a number of unexplored death scenarios in the other subset as well, so if anything both sets are biased...

     

  • Last note: Though I consider this chapter the pinnacle of quality in the whole Penny & Aggie comic -- art-wise, writing-wise, idea-wise, and character-wise -- I consider the immediately following storyline (a side-story about Korean students we hadn't seen before and haven't seen since) to be the bottom of quality in all of the above (bad guest art, clumsy writing, unremarkable characters, etc.). So, if you want to keep reading the comic, I suggest skipping that whole following chapter.

    As I said, the rest of the comic is however mostly teenage comedy/drama, though it does include some amusing SF references/tropes from time to time.
Harry Potter and the Methods of Rationality discussion thread, part 9

9 Oscar_Cunningham 09 September 2011 01:29PM

(The HPMOR discussion thread after this one is here.)

The previous thread is over the 500-comment threshold, so let's start a new Harry Potter and the Methods of Rationality discussion thread. This is the place to discuss Eliezer Yudkowsky's Harry Potter fanfic and anything related to it. The latest chapter as of 09/09/2011 is Ch. 77.

The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven, eight. The fanfiction.net author page is the central location for information about updates and links to HPMOR-related goodies, and AdeleneDawner has kept an archive of Author's Notes.

As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.

Are there better ways of identifying the most creative scientists?

9 gwern 31 August 2011 07:42PM

Marginal Revolution today linked an old 1963 essay by Isaac Asimov, "The Sword of Achilles", which argues that a very cheap test for scientific capability in children and adolescents is to see whether they like science fiction, and in particular harder science fiction.

I copied it out and made an HTML version of the essay: http://www.gwern.net/docs/1963-asimov-sword-of-achilles

I'd be interested if anyone knows of better tests for such scientific aptitude.

I think it'd also be interesting to see how well the SF test's predictive power has held up. Asimov's numbers seem reasonable for 1963, but may be very different these days: perhaps SF readers back then were <1% of the population and >50% of scientists, making it a very informative test, but these days? SF seems more popular, even discounting the comic books and Hollywood material as Asimov explicitly does, but the SF magazines are mostly dead, and my understanding is that scientists are a vastly larger group in 2011 than in 1963, both in absolute numbers and per capita.
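The "very informative" claim can be phrased as a likelihood ratio. Here is a toy Bayes calculation using 1963-style numbers in the spirit of the ones above (the base rate of becoming a scientist is my own guess, purely for illustration):

```python
# If SF readers were 50% of scientists but only 1% of the general population,
# observing "reads SF" multiplies the odds of "will become a scientist" by
# the likelihood ratio 0.50 / 0.01 = 50.

def posterior(prior, p_obs_given_yes, p_obs_given_no):
    """P(scientist | reads SF) via Bayes' theorem."""
    numerator = p_obs_given_yes * prior
    return numerator / (numerator + p_obs_given_no * (1 - prior))

prior = 0.001  # assumed base rate of becoming a scientist (my guess)
p = posterior(prior, 0.50, 0.01)
print(round(p, 3))  # roughly 0.048: a ~50x odds update from one cheap observation
```

If SF readership today is, say, 10% of the population while still 50% of scientists, the likelihood ratio drops from 50 to 5, which is the sense in which the test's predictive power may not have held up.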

Harry Potter and the Methods of Rationality discussion thread, part 8

8 Unnamed 25 August 2011 02:17AM

Update: Discussion has moved on to a new thread.

The hiatus is over with today's publication of chapter 73, and the previous thread is approaching the 500-comment threshold, so let's start a new Harry Potter and the Methods of Rationality discussion thread. This is the place to discuss Eliezer Yudkowsky's Harry Potter fanfic and anything related to it.

The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven. The fanfiction.net author page is the central location for information about updates and links to HPMOR-related goodies, and AdeleneDawner has kept an archive of Author's Notes.

As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
