02 March 2015 06:51PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

• Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
• If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
• Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
• Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

## Towards a theory of nerds... who suffer.

02 March 2015 05:11PM

Summary: I will focus here on nerds who suffer from a lack of self-respect and of sexual, romantic, and social success. My thesis is that this stems from self-hatred, that the self-hatred stems from childhood bullying, and that the solution will involve fixing the things that made one a "tempting" bullying target, along with some other ways to improve self-respect.

Motivated reasoning and offense

SSC wrote that we don't yet have a science of nerds: http://slatestarcodex.com/2014/09/25/why-no-science-of-nerds/ My proposal is to use motivated reasoning and focus on the subset of nerds who suffer and need help. I am mostly familiar with the white straight male demographic, within which suffering nerds are often called "neckbeards" or "omega males".

One danger of such motivated reasoning is giving offense, because problems that cause suffering and need fixing overlap heavily with traits that can be used as insults. Many disabilities are good parallels here: it is possible to use disabilities as insults, mainly against people who don't actually have them, especially with emotionally loaded language like "cripple" or "retard". Any helpful doctor needs to be careful when diagnosing a child with low IQ, because parents will often protest, "my kid is not stupid!", and we have a similar issue here.

The solution to the offense issue is this: if you are a nerd and you find that what I write here does not apply to you, good: you are not in the subset of nerds who need help! You are a happy, well-adjusted person with some "nerdy" interests and preferences, which is entirely OK but also relatively uninteresting. I simply don't want to discuss that, because it is mostly like discussing why some people don't like mushrooms on their pizza: maybe borderline curious, but not important. I focus on nerds who suffer. Human suffering is what matters, and if I can help a hundred people who suffer while offending ten who do not understand that I am not talking about them, it is a good trade.

I am largely talking about the guys who are mocked and bullied as "forever a virgin": those whose traits cluster around interest in D&D, Magic: The Gathering, fantasy, and anime; who have poor body hygiene; who dress and groom in ways considered unattractive; who have poor social skills and very low chances of ever finding a girlfriend; and who have no social life besides teaming up with fellow social outcasts.

Self-hatred

I propose that the core issue of suffering nerds ("neckbeards", "omega males") is self-hatred. I see three reasons for this:

A) Engaging in fantasy, D&D, discussing superheroes, Star Wars etc. can be seen as escaping from a self and life one hates.

Against1: every novel and movie is a way to do that, not just fantasy or superhero comics.

Pro1: have you noticed that non-nerdy people like movies and novels that are more or less set in the here and now, with heroes who are believable contemporary characters? While nerds are often bored by "mainstream" crime novels, Ludlum-type spy novels, the stuff "normal people" read?

Against2: this can simply mean disliking the current, real world, but not necessarily one's own self.

Against3: so everybody who enjoys the LOTR movies and the GoT series hates himself? Have you not noticed that fantasy went mainstream in recent years?

Pro3: indeed it did, but in a version that lacks the unreal appeal. Game of Thrones is almost historical: it is just normal medieval people fighting and scheming for power, with very little of the supernatural thrown in. LOTR got Hollywoodized in the movies: much more focus on flashy sword fighting against stupid-looking brutes, less on the supernatural. They are to fantasy what Buck Rogers was to sci-fi. And non-nerds just watch them, maybe read them, but do not obsess about them.

B) Their poor clothing and grooming habits suggest they do not think their own self deserves to be decorated.

Against1: maybe they are just not interested in their looks.

Pro1: life is a trade-off: time you invest in your looks is time you take away from something else. But how could people who spend their time fantasizing about Star Wars think their time is that important? Eliezer Yudkowsky thinks his time is invested in literally saving humankind from extinction, and still takes time away from that to groom and dress acceptably and to find eyeglasses that match his face, because he knows his message would otherwise not be taken seriously enough. It is a worthy investment: people don't want to listen to someone with a "crazy scientist" or similar look. He knows he needs to look like he is selling software, more or less. I don't think anyone could seriously argue that the social gains from a basic okay wardrobe and regular barber visits are not worth taking some time away from D&D. Obesity is often a neckbeard problem too, and it is also unhealthy.

Against2: Okay, but maybe they either do not realize it, due to some kind of social blindness, or lack the ability to figure out how to look in a way society approves of. Chalk it up to poor social skills, not self-hatred?

Pro2: The heroes suffering nerds fantasize about actually look good in their own fantasy worlds, and often even in the real one: Superman was a good-looking journalist when he was not Superman, Peter Parker is borderline okay, and most fantasy heroes look appropriate for their social circumstances (a simplified/heroized/sanitized/mythologized European middle ages). Above all, they are not fat but rather muscular, they are well groomed, and so on. Suffering nerds don't even imitate their own heroes. Although someone trying to look exactly like Aragorn would be weird today, a tall, muscular guy with long hair, a short-cropped, well-groomed beard, and maybe leather clothes would basically look like a biker or rocker, which is leaps and bounds cooler in society's eyes than an obese neckbeard with greasy hair, a Tux t-shirt, dirty baggy jeans, and dirtier sneakers. If nerds really tried to look like fantasy heroes, they would be more popular. Instead, they look as if they feel they don't deserve to improve their looks. But there is also something more:

C) When they do sometimes improve their looks, it does not come across as improving their real selves or finding something that matches who they are, but rather as a symbolic imitation of an entirely different person. A good example is the fedora, which symbolizes an old-fashioned gentleman from 1950 and matches neither the rest of their clothes nor the fact that it is not 1950. This suggests self-hatred.

Against1: Doesn't it contradict the previous point?

Pro1: I think it strengthens it. Any guy with a fedora or something similar cannot be said to be uninterested in looks, and merely misjudging what society considers attractive cannot possibly explain wearing Dick Tracy's hat but not his suit, his muscles, his lack of a paunch, and his lack of a neckbeard. I think it is more a symbol of "I don't want to be me, I want to be someone totally different".

A-C)

Against1: fine, neckbeards hate themselves and dream about being someone else. How do we know this is the source of their problems, and not an effect? What if the lack of socio-sexual success makes them both suffering and self-hating, and this is how they react to it?

Pro1: we don't, and it is a good point; something like autism may play a role. Socio-sexual success, being borderline "cool" or at least accepted, is something even not exactly bright high-school dropouts can figure out, so how come often highly intelligent men cannot? Indeed, autism or Asperger's may play a role. However, there are charming, sexy people on the spectrum, so this cannot possibly be the whole cause. Besides, certain symptoms overlap with self-hatred: if someone avoids eye contact, how do we know whether it comes from their Asperger syndrome or from self-hatred making them afraid to meet a gaze directly, wanting to hide from other people's eyes? Cannot obsessive tendencies be a way to avoid thinking about one's own self? It is entirely possible that many men on the spectrum developed self-hatred due to the bullying they received for being on the spectrum, and much of their problems come from that. One thing is clear: whatever other reasons there are for lacking socio-sexual success, the above characteristics make the situation much worse.

Against2: Satoshi Kanazawa argued high IQ suppresses instincts and makes you basically lack "common sense". Maybe it is just that?

Pro2: Yes. But the instinct in question is not simply basic social skills. I will get back to this.

Against3: Paul Graham wrote that nerds are unpopular because they simply don't want to invest in being popular, having other interests.

Pro3: This seems to be true for non-suffering nerds, primarily the nerds who are into this-worldly, productive, STEM stuff. Why care about fashionable clothes when you are learning fascinating things like physics? Slightly irritated by the superficiality of other people, the non-suffering nerd gets a zero-maintenance buzz cut and seven polo shirts in the same basic color, of a brand some random cute-looking girl recommended, so that he does not have to think about what to put on and has a presentable look with minimal effort. Of course we know "neckbeards" and "omegas" don't look like that; they look much worse. Suffering nerds seem to have deeper problems than not wanting to invest a minimal amount of time in their looks. Besides, look at their interests: STEM nerds are into things that are useful in today's real world, while D&D nerds want to escape it.

Against4: Testosterone?

Pro4: Plays a role both ways, see below.

The cause of self-hatred

Other people despising you. Sooner or later you internalize it. There could be many causes for that... sometimes parents of the kind who always tell their kids they suck. Some people hit walls like racism or homophobia... some people get picked on as kids because they are disabled or disfigured.

Actually this last one is a good clue, and good evidence that we are on the right track here. I have certainly seen an above-average percentage of disabled or disfigured youths playing D&D. It seems that if you are a textbook target for bullying, if other kids tell you in various ways for years that you are a worthless piece of feces, you will want to escape into a fantasy where you are a wizard casting fireballs that burn the meanies to death. So we are getting a clue about what may cause this self-hatred.

However, in my experience simply being a weak or cowardly boy causes the same shitstorm of bullying, humiliation, and beatings. Kids are cruel. It is basically a brutal form of setting up a dominance hierarchy by trying to torture everybody: those who don't even dare to resist get assigned the lowest rank, those who try and fail only slightly higher, and the bravest, boldest, cruelest, most aggressive fighters end up on top. And intelligence may be an obstacle here, by suppressing your fighting instinct.

Being bullied into the lowest level of social rank basically destroys your serum testosterone levels. It also makes you depressed. Both depend on your rank in the pecking order. Low T combined with depression is probably something really close to what I call "self-hatred": high T is often understood as pride and confidence, so its opposite is probably shame and submissiveness, and SSC wrote that depressed people who are suicidal often say "I feel I am a burden", i.e. that they are a liability to others, not an asset. Shame, submissiveness, and feeling worthless are precisely what I called self-hatred.

Thus these two well-documented aspects of getting a low social rank already cause something akin to self-hatred, but I think it also matters how it happens in childhood. If it were simply kids respecting those with higher grades or richer parents more, while still behaving borderline politely with everybody, the way adults do it, I think it would be less of an issue. Kids, boys especially, however, establish social rank with brutal beatings, humiliation, and bullying, making sure the other boy gets the "you suck" message driven in with a sledgehammer. A textbook example is the "wedgie", which Wiki calls a prank: http://en.wikipedia.org/wiki/Wedgie and perhaps it is possible to do it in a harmless, pranky way, too, but when four muscular boys capture a weak, scared, squealing one in the toilet, immobilize him, give him an atomic one, and then force him to walk out like that so that everybody can laugh at his humiliation, it is no prank. This is the message hammered in: you suck, you are worthless, you are helpless, you are no man, you have no balls, we do whatever we want to you and you have no "fighter rank" whatsoever; you did not even try to defend yourself. And I have seen many such events when I was a child.

Against1: Ouch. But is this really about fighting ability? Don't you think the other ways kids rank each other's popularity matter, especially in modern schools where fighting is strictly forbidden and surveillance is strong?

Pro1: not 100% sure. After all, they do it by teaming up. It is perfectly possible that if a brown-skinned boy had a bunch of racist classmates, it would be the same for him even if he were strong and did MMA. Still, in my experience it was usually about that. Not about what karate belt you had; it was more like testing your masculinity: courage, aggression, strength. If you were "man enough" they would respect you and leave you alone, basically assigning you a higher rank. The whole thing felt like a test of what I later learned about testosterone levels, both prenatal and serum. It seems bullies were trying to sniff out weakness, both emotional and physical, and T is the best predictor of a combination of both. For example, the worst thing was to cry: you got called a girly boy, got bullied even more, and got the lowest possible rank. Surely boys being raised in patriarchal and homophobic cultures had something to do with it, but the whole thing still reminded me of something biological, like reindeer "locking horns". If there is ever such a thing as males establishing a dominance hierarchy largely through testing each other's prenatal or serum testosterone, i.e. manly courage, strength, and fierceness, it was that.

But I also find it likely that being "different" in any way (race, sexuality, disability) made you much more of a target.

Obviously this reflects the values of society, too. In Russia even grown-up soldiers and prison inmates do this, which probably reflects the highly toxic masculine values they hold, or the oppression they themselves receive from officers, or even formerly from fathers. Two fascinating links: http://en.wikipedia.org/wiki/Dedovshchina http://en.wikipedia.org/wiki/Thief_in_law#Ponyatiya so you can imagine what goes on in schools. On the other hand, growing up in a textbook NY liberal community must be a lot easier in this regard. Most of Europe will be somewhere in between.

Against1: So, your argument is that bullying destroys your self-respect much more than any other way of achieving a low social rank, and this leads to self-hatred, which leads to fantasy escapism and typical nerd-neckbeard behaviors, which then adds up and results in the lack of socio-sexual success? Isn't it a job for Occam's razor?

Pro1: well, the argument is more like this: whatever happens to you in your childhood is very important, and boys tend to establish rank by bullying and fighting, or at best by testing each other's courage and masculinity by other means, daring each other to climb trees etc. My point is not simply that bullying, or even childhood bullying, matters so much; it is rather that bullying or courage tests in childhood make you realize that you are indeed lacking in important masculine abilities like courage, fierceness, or strength (so probably low prenatal T), and a low social rank established this way cuts much deeper into a man's soul than a low rank assigned because you are poor or get bad grades. It affirms that you are not worth much as a man, and this makes you hate yourself much more than simply internalizing that you are poor or something like that. This alone, together with the depressed T levels and general depression due to low social rank, could explain the suffering and lack of later socio-sexual success of nerds, but fantasy-escapism as a coping method makes it worse. Without that, nerds and neckbeards would not be a noticeable and much-ridiculed type; all you would see is that some guys are kind of sad and timid, but otherwise look and behave like all the other guys!

Against1: do you think anti-bullying policies could solve "neckbeards" for the next generation?

Pro1: Trying to make people behave less cruelly ought to reduce the suffering of the victims, and is a good thing. Having said that, while the demographic I am talking about would suffer less victimization as children, I am not entirely convinced they would end up with much less self-hatred and better socio-sexual success, and thus less adult suffering. Why? Because my thesis is not that victimization hurts (obviously it does); my thesis is that being truly, actually less masculine than the other boys, and having your nose rubbed in it so that you realize you are indeed not much of a man, is what generates self-hatred, perhaps partially due to biology and partially to patriarchy, I don't know which. I mean, the bullies are ethically wrong but factually right: they bully you because you are indeed weak, in emotion or body, and you hate yourself for being indeed, truly weak. So something as light as not daring to climb a rope during gym class and getting a contemptuous look from the other boys could destroy your self-respect here, especially if afterwards you are treated as a low-ranked social pariah. And this is not something the anti-bullying teachers can solve. Perhaps you can try to pressure boys not to judge each other's courage, not to express such judgments, never to treat anyone like a social outcast etc., but that would be a lot like trying to destroy their masculinity too: trying to destroy the competitive, dominant, judgemental spirit that is so strongly linked to testosterone. I don't think it can succeed, and I don't think it would be ethical to try. This is what they are. You can teach them to express their views in less aggressive ways, but human freedom means that if you want to frown because you think another guy sucks, you can.
Nevertheless, it is still good not to tolerate bullies; it is better to force high-T boys to express their contempt in more civilized ways, to reduce the suffering of their victims. Just don't expect this to prevent later "nerd problems".

Against1: I am still not convinced other forms of discrimination or low social rank do not generate more self-hatred.

Pro1: Well, just look at those Americans who are both poor and black, both of which give them a lower social rank at school, and who end up as gangsta rappers or even prison inmates, but still strong, tattooed, masculine as hell: really the opposite of neckbeard-nerds, who typically have characteristics considered unmasculine. It seems you can be bullied for many a thing, but apparently nerdiness and neckbeardery tend to form when it is specifically your lack of a masculine fighter spirit that made you a target.

Against1: Any ways to easily test all this?

Pro1: Yes. Ask your neckbeard friend to consent to a test that will not be physically harmful but may be emotionally triggering. Then pretend to slap or punch him in the face. Do you get a panicky, nervous reaction, like turtling up and blinking, or a "manly" one, like leaning back and catching your hand? This predicts whether he is used to fighting back, or used to getting beaten and not daring to fight.

The cure

How to fix all this? Well, I have found that some neckbeards have managed to fix themselves to a certain extent without really even planning to, via the following means:

- Career success giving you a certain sense of social rank and self-confidence. Being higher on the social ladder increases testosterone, which also gets you feedback from others and from yourself that you are less unmasculine now, which makes you hate yourself less for being unmasculine.

- During their careers, many neckbeards did the same thing as Eliezer and opted for a simple, easy smart-casual wardrobe and low-maintenance grooming. This improved feedback from others and thus their confidence.

- Sports, martial arts, and to some extent even basic bodybuilding seem to have helped many a man.

- All this led to better self-acceptance.

But let's try to go deeper here.

Neckbeards need to find self-respect WHILE accepting they are intellectuals. The goal is neither to accept yourself the way you are - the way you currently are sucks - nor to hate yourself so much that you do not feel you deserve to be improved and thus project a false public image. The goal is to self-improve WHILE accepting you are an intellectual.

Step 1 is to realize that it is not intellectualism that makes people marginalized, ridiculed, and unable to find girlfriends. It is the lack of skills other than intellectual ones: largely, the lack of masculine virtues. Here the idea of the writer is a useful mental crutch. As a neckbeard you are probably a voracious reader, and thinking you are made of the same material writers are made of is not entirely wrong; it is realistic, close enough to your real self or essence. As a voracious reader, you are to writers what power users are to programmers. Close enough. It is not a fake persona if you make some writers your role models: you are both intellectuals in essence. And yes, sexy, masculine, socially and sexually successful male writers exist: Richard Dawkins, Robert Heinlein, Albert Camus. Shaping yourself after them is both true to your real self and a way to improve yourself.

The basics are not hard.

- Sports (more about it later)

- Smart-casual wardrobe, nice low-maintenance haircut; facial hair should probably be avoided entirely until you learn more about style. That is an advanced-level milestone; postpone it.

- Dropping a nuke on your social shyness by joining Toastmasters - shouldn't a writer be able to give a speech from a podium? Toastmasters International ("International" is not just a name; they are in Europe etc. too) says on the tin that they are about public speaking skills, which is true, but public speaking is simply the hardest kind of speaking for introverted, shy, or self-hating people. Go through the Comm manual, giving the ten speeches, participate in Table Topics, and compared to that, one-on-one socializing or chatting will be easy.

- One more thing you need to learn there: develop a genuine interest in other people. Do not just obsessively talk about your interests to them; be interested in their stuff, or even in small talk. This is annoying, but once you get a bit used to it, you realize that you are gaining validation from respectable-looking people choosing to discuss the weather or similarly stupid topics with you. If they "wasted" a minute or two on a worthless topic with you, then perhaps your own person is not worthless to them. This helps with the self-hatred issue. Toastmasters tends to be very good at this: old-time members are happy to chat with newbies about just about anything, because these meetings are marked as communicate, communicate, communicate in their calendars.

- Therapy, focusing on your childhood bullying for being perceived as weak and cowardly, or on general feedback about being less masculine. Well, this is one of those pieces of advice that is almost useless, because if you are the type of guy who goes to shrinks, you did it long ago, and if you are the type who would not go near a shrink unless borderline suicidal, you won't take this advice. But it simply had to be given, for the sake of my conscience more than for your benefit.

Socially speaking, anti-bullying and reducing the worst aspects of toxic masculinity or highly patriarchal values should help, but be careful! Natural-born high-T bullies fly under the radar much more than bullied nerds who are trying to man up and thus doing spectacularly manly things. Do it the wrong way around, and you end up handicapping precisely those you are trying to help! Anyone who obsesses about guns, MMA, or choppers while wearing fatigues and Tapout tees is not one of the masculine bullies: those are the nerds trying to cope with not actually being, or not having been, masculine. While this is a questionable way to cope, they are not the ones you want to handicap, so if you want to fight toxic masculinity or patriarchy, do NOT focus on its lowest-hanging fruit! The true bullies don't do these things; they don't need to.

## Open thread, Mar. 2 - Mar. 8, 2015

02 March 2015 08:19AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

2. Open Threads should be posted in Discussion, and not Main.

3. Open Threads should start on Monday, and end on Sunday.

## Imagining Scarcity

02 March 2015 01:43AM

Thank goodness this wasn't a restaurant where you had to order only one thing and you never found out what all the other things on the menu tasted like. Harry hated that, it was like a torture chamber for anyone with a spark of curiosity: Find out about only one of the mysteries on this list, ha ha ha!

-HPMOR

A simple way to understand scarcity is to imagine you're trying to fit all your sand into a hole, but the hole is too small for all the sand to fit into.

It is, of course, possible to make the hole deeper or wider. It's also possible to compress the sand. However, either task can only be accomplished with the help of a mysterious element called "technology." The thing is, economists don't know what this element looks like or how to find it. Sometimes we look at all the people putting sand into holes and notice that the hole is bigger or the sand is more compressed, and we conclude "technology" must have happened. But it's not something we can predict or count on. So how are you going to get all of your sand into this hole?

You're not. Look, I don't know what's so special about this sand, and I don't know why you have to get it into this hole, but I know not all of it's going in. And that means if you want any of it to go in, you must leave some of it out.

That's scarcity: you must give up something to get anything.

It won't fit. Don't try to force it - it won't fit. And that means you're going to have to make a choice.

"Hold on," you say. "I don't really care which sand goes into the hole and which stays out here."

"Okay, okay, but this is economic sand. It's representative."

"Of what?"

"Take a closer look."

You give the sky (being uncertain of where this voice is coming from) a skeptical look, but you grudgingly crouch and inspect the sand (which stretches for miles around you). To your surprise, each grain is different from the rest. And, when you look really closely, each is a tiny, tiny gem, a reflection of something.

In some you see familiar faces. Others, you know just by looking, taste like chocolate, and smell like flowers, and feel like accomplishment, and smell like chlorine, a memory....

You pick up one. It is a pounding bass that sets your whole body vibrating. You drop it before your heart bursts out your chest.

It was your favorite techno remix of classical music.

You are rubbing between your fingers the feeling of being curled up on the couch on a rainy night with your best friend watching a movie when a voice coughs.

"It's my values," you say, getting quickly to your feet.

"It's representative, like I said."

You look around. The sand seems to stretch on endlessly in all directions.

"There's a lot of it."

"Aren't you a marvelous creature? And to think it all fits between the sides of your skull."

"Some of it's out of reach."

"That's one of the problems, yes. And if the hole were big enough, all the sand, though it stretches on endlessly, would nevertheless fall into the hole."

"Can we abstract away from that, please? This is all a bit much."

"Certainly."

You open your eyes (though they hadn't been closed) and look around. Now you are in an empty room, the walls grey. There is a ball of sand that you know is made of all the sand from before, yet it is small and light enough to hold in your hands. There is no door. There is the hole, same as it ever was, only now you do not, you do not, want to leave even a single grain of sand out.

"Can't I put some of it in, then take it out and put the rest in?"

"This hole, too, is an economic hole. It's representative."

You stare until it clicks. "Choice. There's no going back."

"Yep. The instant you fill the hole, it closes. And now you must make a choice."

Only so much will fit in...which means you have to leave some out. Take your time.

It's tough, but finally you separate the grains of sand you want to keep the most from the less important ones. The remaining sand will fit into the hole.

Notice something - once you've removed enough sand to fit the rest into the hole, there's no reason to remove any more. You only want to remove the minimum necessary to fit the sand into the hole.

So you remove the sand, and you pile the rest into the hole, and the hole closes, and then you suffocate to death in this doorless room....

So what's up with that hole, anyway? Notice how the fact that you couldn't fit all the sand into the hole forced you to make a choice. You could have removed this grain or that grain or made all the grains a little smaller. Or you could have thrown the sand down in despair and wept. But if you did that, you wouldn't have gotten any of the sand into the hole, so you did the smart thing, made a choice, and forwent some sand.

And what happens then? Why, the hole closes, and you can't go back and choose something different.

That's scarcity. You can't get everything, which means you have to give up something, which means you have to make a choice, and you can never go back, not entirely.
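The sand-and-hole allegory is, at bottom, a constrained-choice (knapsack-style) problem. A minimal sketch, with illustrative grain names and made-up value/size numbers (and a simple greedy pick, which is a simplification rather than an optimal knapsack solver):

```python
# Toy model of scarcity: more valued "grains" than the hole can hold,
# so filling the hole forces a choice, and some grains must be forgone.
def fill_hole(grains, capacity):
    """Greedily keep the highest-valued grains that still fit.

    Returns (kept, forgone). Greedy-by-value is not guaranteed optimal
    for the general knapsack problem, but it illustrates the trade-off.
    """
    ranked = sorted(grains, key=lambda g: g["value"], reverse=True)
    kept, used = [], 0
    for g in ranked:
        if used + g["size"] <= capacity:
            kept.append(g)
            used += g["size"]
    forgone = [g for g in ranked if g not in kept]
    return kept, forgone

# Hypothetical values, echoing the grains in the story.
grains = [
    {"name": "techno remix", "value": 9, "size": 3},
    {"name": "movie night", "value": 8, "size": 4},
    {"name": "chocolate", "value": 5, "size": 2},
    {"name": "chlorine memory", "value": 3, "size": 2},
]
kept, forgone = fill_hole(grains, capacity=7)
```

With a capacity of 7, the two most valued grains fill the hole and the rest stay out: getting anything means giving something up. The irreversibility ("the hole closes") corresponds to the function being called exactly once.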

## Followup: Sequences book reading group

01 March 2015 05:37PM

It's been about a week since I posted a request for a reading group once the Sequences book comes out. As of this post, 25 people have indicated that they would like someone to do this, but we still have no volunteers to actually do it. I would volunteer to do it myself, but it's hard for me to commit to it. (For productivity reasons I usually have LessWrong blocked on my computer except in the evenings, and there are many evenings when I don't have time to log on at all.)

I propose that we use essentially the same model used for the Open Threads. If it's time for a new Reading Group post and nobody's posted it yet, post it yourself. If you feel that you can probably commit to help with this on occasion, please mention this in the comments. (I understand that having a few people volunteer while everybody else stays quiet might increase the bystander effect, but I think it's useful to have at least a few people mention that they can help. Everybody else, even if you didn't volunteer in the comments here, please step up to the plate anyway if you see nobody else is posting.)

We had a number of discussions / polls in the previous thread about exactly how the reading group should be conducted: What should the pace be? Should we re-post the entire article or just post a link to the original? Should we post individual articles (at whatever pace we decide) or should we post all the articles of the sequence all together? (This last link is to a new poll I just put up.) Or maybe we should just have a link on the sidebar pointing to wherever the reading group is currently being held?

I propose that we start off the reading group with whatever seems to be the most popular options, and that we re-assess towards the end of each sequence. So for example we might start off at a rate of one individual article every other day[1], which would mean we'd probably finish the first sequence in a little less than a month. Towards the end of that time we'd do the polls again and perhaps switch to a different pace or to posting the whole sequence at once.

Actionable items:

• If you haven't voted on the linked polls and want to, please do so.
• If you know how to set up the LW sidebar so that it shows a link to the current reading group article, please volunteer to do so.
• If you are privy to information about the upcoming book, please let us know about whether or not there will be copyright issues with copy/pasting the articles into LW.
• Please volunteer to help out with posting!

[1] At the time of this posting, 8 people had voted for 1 article per day, 6 for 1 every other day, and 2 for 1 per week. Going with 1 every other day, at least to start off, seems a reasonable compromise.

## Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113

8 28 February 2015 08:23PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 113.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is now not updating. The author's notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

IMPORTANT -- From the end of chapter 113:

You have 60 hours.

Your solution must at least allow Harry to evade immediate death,
despite being naked, holding only his wand, facing 36 Death Eaters
plus the fully resurrected Lord Voldemort.

If a viable solution is posted before
*12:01AM Pacific Time* (8:01AM UTC) on Tuesday, March 3rd, 2015,
the story will continue to Ch. 121.

Otherwise you will get a shorter and sadder ending.

Keep in mind the following:

1. Harry must succeed via his own efforts. The cavalry is not coming.
Everyone who might want to help Harry thinks he is at a Quidditch game.

2. Harry may only use capabilities the story has already shown him to have;
he cannot develop wordless wandless Legilimency in the next 60 seconds.

3. Voldemort is evil and cannot be persuaded to be good;
the Dark Lord's utility function cannot be changed by talking to him.

4. If Harry raises his wand or speaks in anything except Parseltongue,
the Death Eaters will fire on him immediately.

5. If the simplest timeline is otherwise one where Harry dies -
if Harry cannot reach his Time-Turner without Time-Turned help -
then the Time-Turner will not come into play.

6. It is impossible to tell lies in Parseltongue.

Within these constraints,
Harry is allowed to attain his full potential as a rationalist,
now in this moment or never,
regardless of his previous flaws.

Of course 'the rational solution',
if you are using the word 'rational' correctly,
is just a needlessly fancy way of saying 'the best solution'
or 'the solution I like' or 'the solution I think we should use',
and you should usually say one of the latter instead.
(We only need the word 'rational' to talk about ways of thinking,
considered apart from any particular solutions.)

And by Vinge's Principle,
if you know exactly what a smart mind would do,
you must be at least that smart yourself.
Asking someone "What would an optimal player think is the best move?"
should produce answers no better than "What do you think is best?"

So what I mean in practice,
when I say Harry is allowed to attain his full potential as a rationalist,
is that Harry is allowed to solve this problem
the way YOU would solve it.
If you can tell me exactly how to do something,
Harry is allowed to think of it.

But it does not serve as a solution to say, for example,
"Harry should persuade Voldemort to let him out of the box"
if you can't yourself figure out how.

The rules on Fanfiction dot Net allow at most one review per chapter.
Please submit *ONLY ONE* review of Ch. 113,
to submit one suggested solution.

For the best experience, if you have not already been following
Internet conversations about recent chapters, I suggest not doing so,
trying to complete this exam on your own,
not looking at other reviews,
and waiting for Ch. 114 to see how you did.

I wish you all the best of luck, or rather the best of skill.

Ch. 114 will post at 10AM Pacific (6PM UTC) on Tuesday, March 3rd, 2015.

If you have pending exams,
then even though the bystander effect is a thing,
I expect that the collective effect of
'everyone with more urgent life
issues stays out of the effort'
shifts the probabilities very little

(because diminishing marginal returns on more eyes
and an already-huge population that is participating).

So if you can't take the time, then please don't.
Like any author, I enjoy the delicious taste of my readers' suffering,
finer than any chocolate; but I don't want to *hurt* you.

Likewise, if you hate hate hate this sort of thing, then don't participate!
Other people ARE enjoying it. Just come back in a few days.
I shouldn't even need to point this out.

I remind you again that you have hours to think.
Use the Hold Off On Proposing Solutions, Luke.

And really truly, I do mean it,
Harry cannot develop any new magical powers
or transcend previously stated constraints on them
in the next sixty seconds.

## Probability of coming into existence again ?

4 28 February 2015 12:02PM

This question has been bothering me for a while now, but I have the nagging feeling that I'm missing something big and that the reasoning is flawed in a very significant way. I'm not well read in philosophy at all, and I'd be really surprised if this particular problem hasn't been addressed many times by more enlightened minds. Please don't hesitate to give reading suggestions if you know more. I don't even know where to start learning about such questions. I have tried the search bar but have failed to find a discussion around this specific topic.

I'll try and explain my train of thought as best as I can but I am not familiar with formal reasoning, so bear with me! (English is not my first language, either)

Based on the information and sensations currently available, I am stuck in a specific point of view and experience specific qualia. So far, it's the only thing that has been available to me; it is the entirety of my reality. I don't know if the cogito ergo sum is well received on Less Wrong, but it seems on the face of it to be a compelling argument for my own existence at least.

Let's assume that there are other conscious beings who "exist" in a similar way, and thus other possible qualia. If we don't assume this, doesn't it mean that we are in a dead end and no further argument is possible? Similar to what happens if there is no free will and thus nothing matters since no change is possible? Again, I am not certain about this reasoning but I can't see the flaw so far.

There doesn't seem to be any reason why I should be experiencing these specific qualia instead of others, that I "popped into existence" as this specific consciousness instead of another, or that I perceive time subjectively. According to what I know, the qualia will probably stop completely at some subjective point in time and I will cease to exist. The qualia are likely to be tied to a physical state of matter (for example colorblindness due to different cells in the eyes) and once the matter does not "function" or is altered, the qualia are gone. It would seem that there could be a link between the subjective and some sort of objective reality, if there is indeed such a thing.

On a side note, I think it's safe to ignore theism and all mentions of a pleasurable afterlife of some sort. I suppose most people on this site have debated this to death elsewhere and there's no real point in bringing it up again. I personally think it's not an adequate solution to this problem.

Based on what I know, and given that qualia occur, what is the probability (if any) that I will pop into existence again and again, experiencing different qualia each time, with no subjectively perceivable connection to the "previous" consciousness? If it has happened once, if a subjective observer has emerged out of nothing at some point and is currently observing subjectively (as I think is happening to me), does the subjective observing ever end?

I know it sounds an awful lot like mysticism and reincarnation, but since I am currently existing and observing in a subjective way (or at least I think I am), how can I be certain that it will ever stop?

The only reason why this question matters at all is because suffering is not only possible but quite frequent according to my subjective experience and my intuition of what other possible observers might be experiencing if they do exist in the same way I do. If there were no painful qualia, or no qualia at all, nothing would really matter since there would be no change needed and no concept of suffering. I don't know how to define suffering, but I think it is a valid concept and is contained in qualia, based on my limited subjectivity.

This leads to a second, more disturbing question: does suffering have a limit, or is it infinite? Is there a non-zero probability of entering into existence as a being that experiences potentially infinite suffering, similar to the main character in I Have No Mouth, and I Must Scream? Is there no way out of existence? If the answer is no, then how would it be possible to lead a rational life, seeing as it would be a single drop in an infinite ocean?

On a more positive note, this reasoning can serve as a strong deterrent to suicide, since it would be rationally better to prolong your current and familiar existence than to potentially enter a less fortunate one with no way to predict what might happen.

Sadly, these thoughts have proven to be a significant threat to motivation and morale. I feel stuck in this logic and can't see a way out at the moment. If you can identify a flaw here, or know of a solution, I eagerly await your reply.

Kind regards

## Best of Rationality Quotes, 2014 Edition

11 27 February 2015 10:43PM

Here is the way-too-late 2014 edition of the Best of Rationality Quotes collection. (Here is last year's.) Thanks Huluk for nudging me to do it.

Best of Rationality Quotes 2014 (300kB page, 235 quotes)
and Best of Rationality Quotes 2009-2014 (1900kB page, 1770 quotes)

The page was built by a short script (source code here) from all the LW Rationality Quotes threads so far. (We have had such a thread every month since April 2009.) The script collects all comments with a karma score of 10 or more and sorts them by score. Replies are not collected, only top-level comments.
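
The filter-and-sort step described above can be sketched as follows. This is a minimal sketch with hypothetical data shapes (the real script scrapes the monthly quote threads from the site itself; the `parent` field and dict layout here are assumptions for illustration):

```python
def best_quotes(comments, min_score=10):
    """Keep top-level comments with karma >= min_score, sorted by score (descending).

    Each comment is assumed to be a dict like:
      {"author": str, "score": int, "text": str, "parent": None or a comment id}
    Replies (parent is not None) are skipped, matching the behaviour described above.
    """
    top_level = [c for c in comments if c.get("parent") is None]
    kept = [c for c in top_level if c["score"] >= min_score]
    return sorted(kept, key=lambda c: c["score"], reverse=True)
```

For example, given a list containing a top-level comment at score 30, one at 12, one at 9, and a reply at 50, the function would return only the 30 and 12 entries, in that order: the reply is excluded outright, and the score-9 comment falls below the threshold.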

As is now usual, I provide various statistics and top-lists based on the data. (Source code for these is also at the above link, see the README.) I added these as comments to the post:

## In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him?

8 27 February 2015 08:57PM

Hopefully at least one or two would show a virtue of non-straw rationality.

Episode list

## Algorithm aversion

15 27 February 2015 07:26PM

It has long been known that algorithms outperform human experts on a range of topics (here's a LW post on this by lukeprog). Why, then, do people continue to mistrust algorithms, in spite of their superiority, and instead cling to human advice? A recent paper by Dietvorst, Simmons and Massey suggests it is due to a cognitive bias which they call algorithm aversion. We judge less-than-perfect algorithms more harshly than less-than-perfect humans. They argue that since this aversion leads to poorer decisions, it is very costly, and that we must therefore find ways of combating it.

Abstract:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

General discussion:

The results of five studies show that seeing algorithms err makes people less confident in them and less likely to choose them over an inferior human forecaster. This effect was evident in two distinct domains of judgment, including one in which the human forecasters produced nearly twice as much error as the algorithm. It arose regardless of whether the participant was choosing between the algorithm and her own forecasts or between the algorithm and the forecasts of a different participant. And it even arose among the (vast majority of) participants who saw the algorithm outperform the human forecaster.
The aversion to algorithms is costly, not only for the participants in our studies who lost money when they chose not to tie their bonuses to the algorithm, but for society at large. Many decisions require a forecast, and algorithms are almost always better forecasters than humans (Dawes, 1979; Grove et al., 2000; Meehl, 1954). The ubiquity of computers and the growth of the “Big Data” movement (Davenport & Harris, 2007) have encouraged the growth of algorithms but many remain resistant to using them. Our studies show that this resistance at least partially arises from greater intolerance for error from algorithms than from humans. People are more likely to abandon an algorithm than a human judge for making the same mistake. This is enormously problematic, as it is a barrier to adopting superior approaches to a wide range of important tasks. It means, for example, that people will more likely forgive an admissions committee than an admissions algorithm for making an error, even when, on average, the algorithm makes fewer such errors. In short, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms.
More optimistically, our findings do suggest that people will be much more willing to use algorithms when they do not see algorithms err, as will be the case when errors are unseen, the algorithm is unseen (as it often is for patients in doctors’ offices), or when predictions are nearly perfect. The 2012 U.S. presidential election season saw people embracing a perfectly performing algorithm. Nate Silver’s New York Times blog, Five Thirty Eight: Nate Silver’s Political Calculus, presented an algorithm for forecasting that election. Though the site had its critics before the votes were in— one Washington Post writer criticized Silver for “doing little more than weighting and aggregating state polls and combining them with various historical assumptions to project a future outcome with exaggerated, attention-grabbing exactitude” (Gerson, 2012, para. 2)—those critics were soon silenced: Silver’s model correctly predicted the presidential election results in all 50 states. Live on MSNBC, Rachel Maddow proclaimed, “You know who won the election tonight? Nate Silver,” (Noveck, 2012, para. 21), and headlines like “Nate Silver Gets a Big Boost From the Election” (Isidore, 2012) and “How Nate Silver Won the 2012 Presidential Election” (Clark, 2012) followed. Many journalists and popular bloggers declared Silver’s success a great boost for Big Data and statistical prediction (Honan, 2012; McDermott, 2012; Taylor, 2012; Tiku, 2012).
However, we worry that this is not such a generalizable victory. People may rally around an algorithm touted as perfect, but we doubt that this enthusiasm will generalize to algorithms that are shown to be less perfect, as they inevitably will be much of the time.

## Weekly LW Meetups

3 27 February 2015 04:26PM

This summary was posted to LW Main on February 20th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

## What subjects are important to rationality, but not covered in Less Wrong?

18 27 February 2015 11:57AM

As many people have noted, Less Wrong currently isn't receiving as much content as we would like. One way to think about expanding the content is to think about which areas of study deserve more articles written on them.

For example, I expect that sociology has a lot to say about many of our cultural assumptions. It is quite possible that 95% of it is either obvious or junk, but almost all fields have that 5% within them that could be valuable. Another area of study that might be interesting to consider is anthropology. Again this is a field that allows us to step outside of our cultural assumptions.

I don't know anything about media studies, but I imagine they have some worthwhile things to say about how the information that we hear is distorted.

What other fields would you like to see some discussion of on Less Wrong?

## If you can see the box, you can open the box

40 26 February 2015 10:36AM

First post here, and I'm disagreeing with something in the main sequences.  Hubris acknowledged, here's what I've been thinking about.  It comes from the post "Are your enemies innately evil?":

On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America.  Now why do you suppose they might have done that?  Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?

Realistically, most people don't construct their life stories with themselves as the villains.  Everyone is the hero of their own story.  The Enemy's story, as seen by the Enemy, is not going to make the Enemy look bad.  If you try to construe motivations that would make the Enemy look bad, you'll end up flat wrong about what actually goes on in the Enemy's mind.

1) People do not construct their stories so that they are the villains,

therefore

2) the idea that Al Qaeda is motivated by a hatred of American freedom is false.

Reading the Al Qaeda document released after the attacks, called Why We Are Fighting You, you find the following:

What are we calling you to, and what do we want from you?

1.  The first thing that we are calling you to is Islam.

A.  The religion of tawhid; of freedom from associating partners with Allah Most High, and rejection of such blasphemy; of complete love for Him, the Exalted; of complete submission to his sharia; and of the discarding of all the opinions, orders, theories, and religions that contradict with the religion He sent down to His Prophet Muhammad.  Islam is the religion of all the prophets and makes no distinction between them.

It is to this religion that we call you …

2.  The second thing we call you to is to stop your oppression, lies, immorality and debauchery that has spread among you.

A.  We call you to be a people of manners, principles, honor and purity; to reject the immoral acts of fornication, homosexuality, intoxicants, gambling and usury.

We call you to all of this that you may be freed from the deceptive lies that you are a great nation, which your leaders spread among you in order to conceal from you the despicable state that you have obtained.

B.  It is saddening to tell you that you are the worst civilization witnessed in the history of mankind:

i.  You are the nation who, rather than ruling through the sharia of Allah, chooses to invent your own laws as you will and desire.  You separate religion from your policies, contradicting the pure nature that affirms absolute authority to the Lord your Creator….

ii.  You are the nation that permits usury…

iii.   You are a nation that permits the production, spread, and use of intoxicants.  You also permit drugs, and only forbid the trade of them, even though your nation is the largest consumer of them.

iv.  You are a nation that permits acts of immorality, and you consider them to be pillars of personal freedom.

"Freedom" is of course one of those words.  It's easy enough to imagine an SS officer saying indignantly: "Of course we are fighting for freedom!  For our people to be free of Jewish domination, free from the contamination of lesser races, free from the sham of democracy..."

If we substitute the symbol with the substance though, what we mean by freedom - "people to be left more or less alone, to follow whichever religion they want or none, to speak their minds, to try to shape society's laws so they serve the people" - then Al Qaeda is absolutely inspired by a hatred of freedom.  They wouldn't call it "freedom", mind you, they'd call it "decadence" or "blasphemy" or "shirk" - but the substance is what we call "freedom".

Returning to the syllogism at the top, it seems that there is an unstated premise.  The conclusion "Al Qaeda cannot possibly hate America for its freedom because everyone sees himself as the hero of his own story" only follows if you assume that what is heroic, what is good, is substantially the same for all humans, for a liberal Westerner and an Islamic fanatic.

(for Americans, by "liberal" here I mean the classical sense that includes just about everyone you are likely to meet, read or vote for.  US conservatives say they are defending the American revolution, which was broadly in line with liberal principles - slavery excepted, but since US conservatives don't support that, my point stands).

When you state the premise baldly like that, you can see the problem.  There's no contradiction in thinking that Muslim fanatics think of themselves as heroic precisely for being opposed to freedom, because they see their heroism as trying to extend the rule of Allah - Shariah - across the world.

Now to the point - we all know the phrase "thinking outside the box".  I submit that if you can recognize the box, you've already opened it.  Real bias isn't when you have a point of view you're defending, but when you cannot imagine that another point of view seriously exists.

That phrasing has a bit of negative baggage associated with it, as if this were just a matter of pigheaded close-mindedness.  Try thinking about it another way.  Would you say to someone with dyscalculia, "You can't get your head around the basics of calculus?  You are just being so close-minded!"  No, that's obviously nuts.  We know that different people's minds work in different ways, that some people can see things others cannot.

Orwell once wrote about the British intellectuals' inability to "get" fascism, in particular in his essay on H.G. Wells.  He wrote that the only people who really understood the nature and menace of fascism were either those who had felt the lash on their backs, or those who had a touch of the fascist mindset themselves.  I suggest that some people just cannot imagine, cannot really believe, the enormous power of faith, of the idea of serving and fighting and dying for your god and His prophet.  It is a kind of thinking that is just alien to many.

Perhaps this is resisted because people think that "Being able to think like a fascist makes you a bit of a fascist".  That's not really true in any way that matters - Orwell was one of the greatest anti-fascist writers of his time, and fought against it in Spain.

So - if you can see the box you are in, you can open it, and already have half-opened it.  And if you are really in the box, you can't see the box.  So, how can you tell if you are in a box that you can't see versus not being in a box?

The best answer I've been able to come up with is not to think of "box or no box" but rather "open or closed box".  We all work from a worldview, simply because we need some knowledge to get further knowledge.  If you know you come at an issue from a certain angle, you can always check yourself.  You're in a box, but boxes can be useful, and you have the option to go get some stuff from outside the box.

The second is to read people in other boxes.  I like steelmanning, it's an important intellectual exercise, but it shouldn't preclude finding actual Men of Steel - that is, people passionately committed to another point of view, another box, and taking a look at what they have to say.

Now you might say: "But that's steelmanning!"  Not quite.  Steelmanning is "the art of addressing the best form of the other person’s argument, even if it’s not the one they presented."  That may, in some circumstances, lead you to make the mistake of assuming that what you think is the best argument for a position is the same as what the other guy thinks is the best argument for his position.  That's especially important if you are addressing a belief held by a large group of people.

Again, this isn't to run down steelmanning - the practice is sadly limited, and anyone who attempts it has gained a big advantage in figuring out how the world is.  It's just a reminder that the steelman you make may not be quite as strong as the steelman that is out to get you.

[EDIT: Link included to the document that I did not know was available online before now]

## "Human-level control through deep reinforcement learning" - computer learns 49 different games

10 26 February 2015 06:21AM

full text

This seems like an impressive first step towards AGI. The games, like 'Pong' and 'Space Invaders', are perhaps not the most cerebral games, but given that Deep Blue can only play chess, this is far more impressive IMO. They didn't even need to adjust hyperparameters between games.

I'd also like to see whether they can train a network that plays the same game on different maps without re-training, which seems a lot harder.

## Are Cognitive Biases Design Flaws?

1 25 February 2015 09:02PM

I am a newbie, so today I read the article by Eliezer Yudkowsky, "Your Strength As A Rationalist", which helped me understand the focus of LessWrong, but I respectfully disagreed with a line that is written in the last paragraph:

It is a design flaw in human cognition...

So this was my comment in the article's comment section which I bring here for discussion:

Since I think evolution makes us quite fit to our current environment, I don't think cognitive biases are design flaws. In the above example you imply that even though you had the information available to guess the truth, your guess was a different one and it was false, and therefore you experienced a flaw in your cognition.

My hypothesis is that reaching the truth, or communicating it in the IRC, may not have been the end objective of your cognitive process. In this case, dismissing the issue as something that was not important anyway ("so move on and stop wasting resources on this discussion") was perhaps the "biological" objective, and as such it should be considered correct, not a flaw.

If the above is true, then all cognitive biases, simplistic heuristics, fallacies, and dark arts are good, since we have conducted our lives for 200,000 years according to them and we are alive and kicking.

Rationality and our search to be less wrong, which I support, may be tools we are developing to improve our competitive ability within our species, but not a "correction" of something that is wrong in our design.

Edit 1: I realize the environment changes, and that may make some of our cognitive biases, which were useful in the past, obsolete. If the word "flaw" also applies to something that is obsolete, then I was wrong above. If not, I prefer the word "obsolete" to characterize cognitive biases that are no longer functional for our preservation.

## Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112

4 25 February 2015 09:00PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 112.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the authors notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is now not updating. The authors notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

## Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111

3 25 February 2015 06:52PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 111.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (It goes up to the notes for chapter 76 and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

## Journal 'Basic and Applied Psychology' bans p<0.05 and 95% confidence intervals

10 25 February 2015 05:15PM

The editorial text isn't very interesting; the editors call for descriptive statistics and don't recommend any particular alternative analysis.
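To make the editors' request concrete, here is a minimal sketch (my own illustration, not from the editorial) of reporting descriptive statistics and a standardized effect size for two groups, with no significance test. The data and function name are invented for the example:

```python
from statistics import mean, stdev

def describe(group_a, group_b):
    """Report means, standard deviations, and Cohen's d; no p-value, no CI."""
    ma, mb = mean(group_a), mean(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    na, nb = len(group_a), len(group_b)
    # Pooled standard deviation for the standardized mean difference.
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return {"mean_a": ma, "mean_b": mb,
            "sd_a": sa, "sd_b": sb,
            "cohens_d": (ma - mb) / pooled}

result = describe([5.1, 4.8, 5.6, 5.0], [4.2, 4.5, 3.9, 4.4])
```

Whether readers can evaluate evidence well from descriptive statistics alone is, of course, exactly the controversy the ban stirred up.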

## Does hormetism work? Opponent process theory.

7 25 February 2015 02:00PM

Related to: the Fun Theory and hedonic treadmill sequences.

http://gettingstronger.org/hormesis/

TL;DR: Stoicism, with science.

Key idea: OPT, Opponent Process Theory: http://gettingstronger.org/2010/05/opponent-process-theory/

From the article:

"In hedonic reversal, a stimulus that initially causes a pleasant or unpleasant response does not just dissipate or fade away, as Irvine describes, but rather the initial feeling leads to an opposite secondary emotion or sensation. Remarkably, the secondary reaction is often deeper or longer lasting than the initial reaction.  And what is more, when the stimulus is repeated many times, the initial response becomes weaker and the secondary response becomes stronger and lasts longer."

## Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 110

3 24 February 2015 08:01PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 110.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (It goes up to the notes for chapter 76 and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

## Saving for the long term

6 24 February 2015 03:33AM

I'm 22 years old, just got a job, and have the option of putting money in a 401k. More generally, I just started making money and need to think about how I'm going to invest and save it.

As far as long-term/retirement savings goes, the way I see it is that my goal is to ensure that I have a sufficient standard of living when I'm "old" (70-80). I see a few ways that this can happen:

1. There is enough wealth creation and distribution by then such that I pretty much won't have to do anything. One way this could happen is if there was a singularity. I'm no expert on this topic, but the experts seem to be pretty confident that it'll happen by the time I retire.

Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

And even if they're wrong and there's no singularity, it still seems very likely that there will be immense wealth creation in the next 60 or so years, and I'm sure that there'll be a fair amount of distribution as well, such that the poorest people will probably have reasonably comfortable lives. I'm a believer in Kurzweil's Law of Accelerating Returns, but even if you project linear growth, there'd still be immense growth.

Given all of this, I find "wealth creation + distribution over the next 60 years -> sufficient standard of living for everyone" to be a rather likely scenario. But my logic here is very "outside view-y" - I don't "really understand" the component steps and their associated likelihoods, so my confidence is limited.
2. I start a startup, make a lot of money, and it lasts until retirement. I think that starting a startup and using the money to do good is the way for me to maximize the positive impact I have on the world, as well as my own happiness, so I plan on working relentlessly until that happens. I.e., I'm going to keep trying, no matter how many times I fail. I may need to take some time to work a regular job to save up money and/or develop skills, though.

Anyway, I think there is a pretty good chance that I succeed in, say, the next 20 years. I never thought hard enough about it to put a number on it, but I'll try here.

Say that I get 10 tries at starting a startup in the next 20 years (I know some take longer than 2 years to fail, but 2 years is the average, and it often takes less than that). At a 50% chance of success per try, that's a >99.9% chance that at least one of them succeeds (1-.5^10). I know 50% might seem high, but I think my rationality skills, domain knowledge (eventually), and experience (eventually) give me an edge. Even at a 10% chance of success per try, I have about a 65% (1-.9^10) chance of succeeding in one of those 10 tries, and I think that 10% is very conservative.

Things I may be underestimating: the chances that I judge something else (earning to give? AI research? something less altruistic? a girl/family?) to be a better use of my time, or changes in the economy that make success a lot less likely.

Anyway, there seems to be a high likelihood that I continue to start startups until I succeed, and there seems to be a high likelihood that I will succeed by the time I retire, in which case I should have enough money to ensure that I have a sufficient standard of living for the rest of my life.
3. I spend my life trying and failing at startups, not saving any money, but I develop enough marketable skills along the way and I continue to work well past normal retirement age (assuming I keep myself in good physical and mental condition, and assuming that 1. hasn't happened). I'm not one who wants to stop working.
4. I work a normal-ish job, have a normal retirement plan, and save enough to retire at a normal age.

The point I want to make in this article is that scenarios 1, 2, and 3 seem far more likely than 4, which makes me think that long-term saving might not actually be such a good idea.
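For what it's worth, the back-of-envelope numbers in point 2 check out; a quick sketch (the 10 tries and the 50%/10% per-try probabilities are the post's own assumptions, as is the independence of attempts):

```python
def p_at_least_one(p_per_try, tries):
    """Probability of at least one success across independent attempts."""
    return 1 - (1 - p_per_try) ** tries

# Ten independent two-year attempts, at the two assumed per-try success rates.
optimistic = p_at_least_one(0.5, 10)    # ≈ 0.999
conservative = p_at_least_one(0.1, 10)  # ≈ 0.651
```

The independence assumption is what makes repeated attempts look so powerful here; correlated failure causes (the "changes in the economy" the post mentions) would pull both numbers down.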

The real question is: what are my alternatives to retirement saving, and why are they better? The main alternative is to live off my savings while starting startups - essentially to treat my money as runway, and use it to maximize the amount of time I spend working towards my (instrumental) goal of starting a successful startup. I.e., money that I would otherwise put towards retirement could be used to increase the time I spend working on startups.

For the record:
1. I'm frugal and conservative (hard to believe... I know).
2. I know that these are unpopular thoughts. It's what my intuition says (a part of my intuition anyway), but I'm not too confident. I need to achieve a higher level of confidence before doing anything drastic, so I'm working to obtain more information and think it through some more.
3. I don't plan on starting a startup any time too soon. I probably need to spend at least a few years developing my skills first. So right now I'm just learning and saving money.
4. The craziest things I would do are a) putting my money in an index fund instead of some sort of retirement account, forgoing the tax benefits, and b) keeping a rather short runway: I'd probably work towards the goal of starting a startup as long as I have, say, 6 months of living expenses saved up.
5. I know this is a bit of a weird thing to post on LW, but these aren't the kinds of arguments that normal people will take seriously. ("I'm not going to save for retirement because there'll be a singularity. Instead I'm going to work towards reducing existential risk" might be the kind of thing that actually gets you thrown into a mental hospital. I'm only partially joking.) And I really need other people's perspectives. I judge that the benefits other perspectives will bring me outweigh the weirdness of posting this and any costs that come with people tracing this article to me.
Thoughts?

## Superintelligence 24: Morality models and "do what I mean"

7 24 February 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the twenty-fourth section in the reading guide: Morality models and "Do what I mean".

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Morality models” and “Do what I mean” from Chapter 13.

# Summary

1. Moral rightness (MR) AI: AI which seeks to do what is morally right
1. Another form of 'indirect normativity'
2. Requires moral realism to be true to do anything, but we could ask the AI to evaluate that and do something else if moral realism is false
3. Avoids some complications of CEV
4. If moral realism is true, is better than CEV (though may be terrible for us)
2. We often want to say 'do what I mean' with respect to goals we try to specify. This is doing a lot of the work sometimes, so if we could specify that well perhaps it could also just stand alone: do what I want. This is much like CEV again.

# Another view

Olle Häggström again, on Bostrom's 'Milky Way Preserve':

The idea [of a Moral Rightness AI] is that a superintelligence might be successful at the task (where we humans have so far failed) of figuring out what is objectively morally right. It should then take objective morality to heart as its own values.1,2

Bostrom sees a number of pros and cons of this idea. A major concern is that objective morality may not be in humanity's best interest. Suppose for instance (not entirely implausibly) that objective morality is a kind of hedonistic utilitarianism, where "an action is morally right (and morally permissible) if and only if, among all feasible actions, no other action would produce a greater balance of pleasure over suffering" (p 219). Some years ago I offered a thought experiment to demonstrate that such a morality is not necessarily in humanity's best interest. Bostrom reaches the same conclusion via a different thought experiment, which I'll stick with here in order to follow his line of reasoning.3 Here is his scenario:
The AI [...] might maximize the surfeit of pleasure by converting the accessible universe into hedonium, a process that may involve building computronium and using it to perform computations that instantiate pleasurable experiences. Since simulating any existing human brain is not the most efficient way of producing pleasure, a likely consequence is that we all die.
Bostrom is reluctant to accept such a sacrifice for "a greater good", and goes on to suggest a compromise:
The sacrifice looks even less appealing when we reflect that the superintelligence could realize a nearly-as-great good (in fractional terms) while sacrificing much less of our own potential well-being. Suppose that we agreed to allow almost the entire accessible universe to be converted into hedonium - everything except a small preserve, say the Milky Way, which would be set aside to accommodate our own needs. Then there would still be a hundred billion galaxies devoted to the maximization of pleasure. But we would have one galaxy within which to create wonderful civilizations that could last for billions of years and in which humans and nonhuman animals could survive and thrive, and have the opportunity to develop into beatific posthuman spirits.

If one prefers this latter option (as I would be inclined to do) it implies that one does not have an unconditional lexically dominant preference for acting morally permissibly. But it is consistent with placing great weight on morality. (p 219-220)

What? Is it? Is it "consistent with placing great weight on morality"? Imagine Bostrom in a situation where he does the final bit of programming of the coming superintelligence, to decide between these two worlds, i.e., the all-hedonium one versus the all-hedonium-except-in-the-Milky-Way-preserve.4 And imagine that he goes for the latter option. The only difference it makes to the world is to what happens in the Milky Way, so what happens elsewhere is irrelevant to the moral evaluation of his decision.5 This may mean that Bostrom opts for a scenario where, say, 10^24 sentient beings will thrive in the Milky Way in a way that is sustainable for trillions of years, rather than a scenario where, say, 10^45 sentient beings will be even happier for a comparable amount of time. Wouldn't that be an act of immorality that dwarfs all other immoral acts carried out on our planet, by many many orders of magnitude? How could that be "consistent with placing great weight on morality"?6

# Notes

1. Do What I Mean is originally a concept from computer systems, where the (more modest) idea is to have a system correct small input errors.

2. To the extent that people care about objective morality, it seems coherent extrapolated volition (CEV) or Christiano's proposal would lead the AI to care about objective morality, and thus look into what it is. Thus I doubt it is worth considering our commitments to morality first (as Bostrom does in this chapter, and as one might do before choosing whether to use a MR AI), if general methods for implementing our desires are on the table. This is close to what Bostrom is saying when he suggests we outsource the decision about which form of indirect normativity to use, and eventually winds up back at CEV. But it seems good to be explicit.

3. I'm not optimistic that behind every vague and ambiguous command, there is something specific that a person 'really means'. It seems more likely there is something they would in fact try to mean, if they thought about it a bunch more, but this is mostly defined by further facts about their brains, rather than the sentence and what they thought or felt as they said it. It seems at least misleading to call this 'what they meant'. Thus even when '—and do what I mean' is appended to other kinds of goals than generic CEV-style ones, I would expect the execution to look much like a generic investigation of human values, such as that implicit in CEV.

4. Alexander Kruel criticizes the emphasis on 'Do What I Mean': since every part of what an AI does is designed to be what humans really want it to be, it seems unlikely to him that an AI would do exactly what humans want with respect to instrumental behaviors (e.g. be able to understand language, use the internet, and carry out sophisticated plans), yet fail on humans' ultimate goals:

Outsmarting humanity is a very small target to hit, requiring a very small margin of error. In order to succeed at making an AI that can outsmart humans, humans have to succeed at making the AI behave intelligently and rationally. Which in turn requires humans to succeed at making the AI behave as intended along a vast number of dimensions. Thus, failing to predict the AI’s behavior does in almost all cases result in the AI failing to outsmart humans.

As an example, consider an AI that was designed to fly planes. It is exceedingly unlikely for humans to succeed at designing an AI that flies planes, without crashing, but which consistently chooses destinations that it was not meant to choose. Since all of the capabilities that are necessary to fly without crashing fall into the category “Do What Humans Mean”, and choosing the correct destination is just one such capability.

I disagree that it would be surprising for an AI to be very good at flying planes in general, but very bad at going to the right places in them. However it seems instructive to think about why this is.

# In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

1. Are there other general forms of indirect normativity that might outsource the problem of deciding what indirect normativity to use?
2. On common views of moral realism, is morality likely to be amenable to (efficient) algorithmic discovery?
3. If you knew how to build an AI with a good understanding of natural language (e.g. it knows what the word 'good' means as well as your most intelligent friend), how could you use this to make a safe AI?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

# How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about other abstract features of an AI's reasoning that we might want to get right ahead of time, instead of leaving to the AI to fix. We will also discuss how well an AI would need to fulfill these criteria to be 'close enough'. To prepare, read “Component list” and “Getting close enough” from Chapter 13. The discussion will go live at 6pm Pacific time next Monday 2 March. Sign up to be notified here.

## Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 109

5 23 February 2015 08:05PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 109.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (It goes up to the notes for chapter 76 and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

## A quick heuristic for evaluating elites (or anyone else)

4 23 February 2015 04:22PM

Summary: suppose that for some reason you want to figure out how a society works or worked in a given time and place. Largely, you want to see whether it is a meritocracy, where productive people get ahead and conditions are roughly fair and efficient, or whether it is dominated by a parasitical elite sucking the blood out of everybody. I present a heuristic for this, which as a bonus also predicts how intellectuals work there.

I would look at whether the elites are specialists or generalists. We learned from Adam Smith that a division of labor is efficient, and if people get rich without being efficient, that is a red flag. If the rich people you meet are specialists, say in manufacturing food-scented soap or in retina surgery, you can get the impression that when such people get ahead, circumstances are fair and meritocratic. But when the rich people you meet seem vaguer - owning shares in various businesses with no visible connection between them (and they are not Buffett-type investors either; they keep owning the same shares), or when all you can gather is that their skillset is something very general, like generic business sense or people skills - you should suspect that the market may be rigged or corrupted, perhaps through an overbearing and corrupted state, and that overall the system is neither fair nor efficient.

Objection: but generic skills can be valuable!

Counter-objection: yes, but people with generic skills should be outcompeted by people with generic AND specialist skills, so pure generalists should see only a middling level of success and not be on top. Alternatively, people who want to be very successful using only generic skills should, in a fair and efficient market, turn a generic skill into a specialization, usually a service or consulting business. Thus someone with excellent people skills who does not like learning technical details would see only middling success as a used-car (or any technical-product) salesperson, and would do better providing sales training, communication training, courses, consulting, and books.

Counter-counter-objection: we have known since Adam Smith's Scottish Highlands blacksmith example that you can only specialize if there is a lot of competition - not only in the sense that only then are you forced to specialize, but also in the sense that only then is specializing beneficial for you and others. If you are the only doctor within a day's walk in a Borneo rainforest, don't specialize. If you are the only IT guy in a poverty-stricken village, don't specialize.

Answer: this is a very good point. Specialization is generally a comparative thing: if nobody near you is a doctor, then being a doctor is itself a specialization. If there are a lot of doctors, you differentiate yourself by becoming a surgeon; if there are a lot of surgeons, by becoming an eye surgeon. In a village where nobody knows computers, being the generic IT guy is a specialization; in a city with many thousands of IT people, you differentiate yourself by being an expert on SAP FI and CO modules.

So the heuristic only works insofar as you can make a good enough guess at what level of specialization or differentiation would be logical in the circumstances, and then observe that the richest or most successful people are not that specialized. In fact, if they are less specialized than their underlings, that is a clear red flag! When you see that an excellent eye-surgery specialist is not the highest-ranking doctor, and the highest-ranking one is someone said to be a generically good leader without any specialist skills - welcome to Corruption Country! A merely okay leader (a generic skill) should mean a middling level of success, not a stellar one; the top of the top should be someone who has these generic skills and is also a rock star in a specialist field.

Well, maybe this needs to be fleshed out more, but it is a starter of an idea.

BONUS. Suppose you figured out that the elites are too generalist to assume they earned their wealth by providing value to others: they simply do not look that productive, don't seem to have a specialized enough skillset, and look more like parasites. From this you can also figure out what the intellectuals are like. By intellectuals I mean the people who write the books consumed from the middle classes up. If elites are productive, they are not interested in signalling; they have a get-things-done mentality, and thus the intellectuals will often have a very pragmatic attitude. They won't be much into lofty, murky intellectualism, and will often see highbrow ideas as a way to solve practical problems, because that is what their customers want. If elites are unproductive, they will do a lot of signalling to excuse their high status. They cannot tell _exactly_ what they do, so they try to look _generally_ superior to the plebs: more sophisticated, better taste, and all that. All of this means "I don't have a superior specialist skill, because I am an unproductive elite parasite, so I must look generally superior to the plebs." They will use books and intellectual ideas to express this, and that kind of intellectualism will always be murky, lofty, and abstract - not the get-things-done type. One trick to look for: if the intellectuals like to abuse terms like "higher" and "spiritual", this suggests "you who read this are generally superior" and plays into the signalling of unproductive elites.

You can also use the heuristic in reverse. If the most popular bestsellers are like "The Power of Habit" (pragmatic, empirical, focused on reality, like LW), you can assume that the customers of these books, the elites, are largely efficient people working in an honest market (and vice versa). If the most popular bestsellers are like "The Spiritual Universe - The Ultimate Truths Behind The Meaning Of The Cosmos", you can assume not only that the intellectuals who write them are buffoons, but also that the rich folks are unproductive, parasitical aristocrats, because they generally use stuff like this to make themselves look superior in general, without specialist skills. Specialist, productive elites hate this stuff and do not finance it.

Why is this all useful?

You can quickly decide whether you want to work with, or in, that kind of society. Will your efficient work be rewarded, or will the well-born take the credit? You can also use the heuristic to judge whether a society, today or in the historical past, was politically unjust.

(And now I am officially horrible at writing essays; this is to writing what "er, umm, er, like" is to speaking. But I hope you can glean the meaning out of it. I am not a very verbal thinker; I am just trying to translate the shapes in my mind into words.)

## GCRI: Updated Strategy and AMA on EA Forum next Tuesday

7 23 February 2015 12:35PM

Just announcing for those interested that Seth Baum from the Global Catastrophic Risks Institute (GCRI) will be coming to the Effective Altruism Forum to answer a wide range of questions (like a Reddit "Ask Me Anything") next week at 7pm US ET on March 3.

Seth is an interesting case - more of a 'mere mortal' than Bostrom or Yudkowsky. (Clarification: his background is more standard, and he's probably more emulate-able!) He has a PhD in geography and had come to a maximising consequentialist view in which GCR-reduction is overwhelmingly important. So three years ago, with risk analyst Tony Barrett, he cofounded the Global Catastrophic Risks Institute - one of the handful of places working on these particularly important problems. Since then, it has done some academic outreach and has covered issues like double-catastrophe/recovery from catastrophe, bioengineering, food security and AI.

Just last week, they updated their strategy with the following announcement:

Dear friends,

I am delighted to announce important changes in GCRI’s identity and direction. GCRI is now just over three years old. In these years we have learned a lot about how we can best contribute to the issue of global catastrophic risk. Initially, GCRI aimed to lead a large global catastrophic risk community while also performing original research. This aim is captured in GCRI’s original mission statement, to help mobilize the world’s intellectual and professional resources to meet humanity’s gravest threats.

Our community building has been successful, but our research has simply gone farther. Our research has been published in leading academic journals. It has taken us around the world for important talks. And it has helped us publish in the popular media. GCRI will increasingly focus on in-house research.

Our research will also be increasingly focused, as will our other activities. The single most important GCR research question is: What are the best ways to reduce the risk of global catastrophe? To that end, GCRI is launching a GCR Integrated Assessment as our new flagship project. The Integrated Assessment puts all the GCRs into one integrated study in order to assess the best ways of reducing the risk. And we are changing our mission statement accordingly, to develop the best ways to confront humanity’s gravest threats.

So 7pm ET on Tuesday, March 3 is the time to come online and post your questions on any topic you like; Seth will remain online until at least 9pm to answer as many as he can. Questions in the comments here can also be ported across.

On the topic of risk organisations, I'll also mention that i) video is available from CSER's recent seminar, in which Mark Lipsitch and Derek Smith discussed potentially pandemic pathogens, and ii) I'm helping Sean write up an update on CSER's progress for LessWrong and effective altruists, which will go online soon.

## Announcing LessWrong Digest

25 23 February 2015 10:41AM

I've been making rounds on social media with the following message.

Great content on LessWrong isn't as frequent as it used to be, so not as many people read it as frequently. This makes sense. However, I read it at least once every two days for personal interest. So I'm starting a LessWrong/Rationality Digest, which will be a summary of all posts or comments exceeding 20 upvotes within a week. It will be like a newsletter. It's also a good way for those new to LessWrong to learn cool things without having to slog through online cultural baggage. It will never be more than once weekly. If you're curious, here is a sample of what the Digest will be like.

Also, major blog posts or articles from related websites, such as Slate Star Codex and Overcoming Bias, or publications from the MIRI, may be included occasionally. If you want on the list send an email to:

lesswrongdigest *at* gmail *dot* com

Users of LessWrong itself have noticed this 'decline' in the frequency of quality posts. It's not necessarily a bad thing, as much of the community has migrated to other places, such as Slate Star Codex, or even into meatspace with various organizations, meetups, and the like. In a sense, the rationalist community outgrew LessWrong as its ultimate nexus. Anyway, I thought you too might be interested in a LessWrong Digest. If you or your friends:

• find articles in 'Main' too infrequent, and Discussion too filled with announcements, open threads, and housekeeping posts, to bother checking LessWrong regularly, or,
• are busying themselves with other priorities, and are trying to limit how distracted they are by LessWrong and other media

the LessWrong Digest might work for you, and as a suggestion for your friends. I've fielded suggestions I transform this into a blog, Tumblr, or other format suitable for RSS Feed. Almost everyone is happy with email format right now, but if a few people express an interest in a blog or RSS format, I can make that happen too.

## Open thread, Feb. 23 - Mar. 1, 2015

3 23 February 2015 08:01AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

2. Open Threads should be posted in Discussion, and not Main.

3. Open Threads should start on Monday, and end on Sunday.

## Are Cognitive Load and Willpower drawn from the same pool?

5 23 February 2015 02:46AM

I was recently reading a blog here that referenced a 1999 paper by Baba Shiv and Alex Fedorikhin (Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making). In it, volunteers are asked to memorise short or long numbers and are then asked to choose a snack as a reward. The snack is either fruit or cake. The actual paper seems to go into a lot of details that are irrelevant to the blog post, but doesn't actually seem to contradict anything the blog post says. The result seems to be that those under the higher cognitive load were far more likely to choose the cake than those under the lower one.

I was wondering if anyone has read any further on this line of research? The actual experiment seems to imply that the connection between cognitive load and willpower may be an acute effect - possibly not lasting very long. The choice of snack is made seconds after memorising a number and while actively trying to keep the number in memory for short term recall a few minutes later. There doesn't seem to be anything about the effect on willpower minutes or hours later.

Does anyone know if the effect lasts longer than a few seconds? If so, I would be interested in whether this effect has been incorporated into any dieting strategies.

## How to debate when authority is questioned, but really not needed?

3 23 February 2015 01:44AM

Especially in the comments of political articles or articles about economic issues, I find myself arguing with people who question my authority on a topic rather than refuting my arguments.

----

Examples may be:

1:

Me: I think money printing by the Fed will cause inflation if they continue like this.

Random commenter: Are you an economist?

Me: I am not, but it's not relevant.

Random commenter: Ok, so you are clueless.

2:

Me: The current strategy to fight terror is not working because ISIS is growing.

Random commenter: What would you do to stop terrorism?

Me: I have an idea of what I would do, but it's not relevant because I'm not an expert, but do you think the current strategy is working?

Random commenter: So you don't know what you are talking about.

----

It is not about my opinions above, or even if I am right or not, I would gladly change my opinion after a debate, but I think that I am being disqualified unfairly.

If I am right, how should I answer or continue these conversations?

## Request: Sequences book reading group

20 22 February 2015 01:06AM

The book version of the Sequences is supposed to be published in the next month or two, if I understand correctly. I would really enjoy an online reading group to go through the book together.

• It would give some of us the motivation to actually go through the Sequences finally.
• I have frequently had thoughts or questions on some articles in the Sequences, but I refrained from commenting because I assumed it would be covered in a later article or because I was too intimidated to ask a stupid question. A reading group would hopefully assume that many of the readers would be new to the Sequences, so asking a question or making a comment without knowing the later articles would not appear stupid.
• It may even bring back a bit of the blog-style excitement of the "old" LW ("I wonder what exciting new thoughts are going to be posted today?") that many have complained has been missing since the major contributors stopped posting.
I would recommend one new post per day, going in order of the book. I recommend re-posting the entire article to LW, including any edits or additions that are new in the book. Obviously this would require permission from the copyright holder (who is that? is there even going to be a copyright at all?), but I'm hoping that'll be fine.

I'd also recommend trying to make the barriers to entry as low as possible. As noted above, this means allowing people to ask questions / make comments without being required to have already read the later articles. Also, I suggest that people not be required to read all the comments from the original article. If something has already been discussed or if you think a particular comment from the original discussion was very important, then just link to it or quote it.

Finally, I think it would be very useful if some of the more knowledgeable LW members could provide links and references to the corresponding "traditional" academic literature on each article.

Unfortunately, for various reasons I am unwilling to take responsibility for such a reading group. If you are willing to take on this responsibility, please post a comment to that effect below.

Thanks!

## Can we decrease the risk of worse-than-death outcomes following brain preservation?

9 21 February 2015 10:58PM

Content note: discussion of things that are worse than death

Over the past few years, a few people have said they reject cryonics out of concern that they might be revived into a world that they prefer less than being dead or not existing. For example, lukeprog pointed this out in a LW comment here, and Julia Galef expressed similar sentiments in a comment on her blog here.

I use brain preservation rather than cryonics here, because it seems like these concerns are technology-platform agnostic.

One solution, it seems to me, is to have an "out-clause": circumstances under which you'd prefer to have your preservation/suspension terminated.

Here's how it would work: you specify, prior to entering biostasis, circumstances in which you'd prefer to have your brain/body be taken out of stasis. Then, if those circumstances are realized, the organization carries out your request.

This almost certainly wouldn't solve all of the potential bad outcomes, but it ought to help some. Also, it requires that you enumerate some of the circumstances in which you'd prefer to have your suspension terminated.

While obvious, it seems worth pointing out that there's no way to decrease the probability of worse-than-death outcomes to 0%. Although this also is the case for currently-living people (i.e. people whose brains are not necessarily preserved could also experience worse-than-death outcomes and/or have their lifespan extended against their wishes).

1) Do you think that an opt-out clause is a useful-in-principle way to address your concerns?

2) If no to #1, is there some other mechanism that you could imagine which would work?

3) Can you enumerate some specific world-states that you think could lead to revival in a worse-than-death state? (Examples: UFAI is imminent, or a malevolent dictator's army is about to take over the world.)

## Rationality promoted by the American Humanist Association

7 21 February 2015 07:28PM

Happy to share that I got to discuss rationality-informed thinking strategies on the American Humanist Association's well-known and popular podcast, the Humanist Hour (here's the link to the interview). Now, this was aimed at secular audiences, so even before the interview the hosts steered me to orient specifically toward what they thought the audience would find valuable. Thus, the interview focused more on secular issues, such as finding meaning and purpose from a science-based perspective. Still, I got to talk about map and territory and other rationality strategies, as well as cognitive biases such as the planning fallacy and sunk costs. So I'd call that a win. I'd appreciate any feedback from you all on how to optimize the way I present rationality-informed strategies in future media appearances.

## The Role of Physics in UDT: Part I

5 21 February 2015 10:51AM

Outline: In the previous post, I discussed the properties of utility functions in the extremely general setting of the Tegmark level IV multiverse. In the current post, I am going to show how the discovery of a theory of physics allows the agent to perform a certain approximation in its decision theory. I'm doing this with an eye towards analyzing decision theory and utility calculus in universes governed by realistic physical theories (quantum mechanics, general relativity, eternal inflation...)

# A Naive Approach

Previously, we have used the following expression for the expected utility:

[1] $E[U]=\int_X U(x) d\mu(x)$

Since the integral is over the entire "level IV multiverse" (the space of binary sequences), [1] makes no reference to a specific theory of physics. On the other hand, a realistic agent is usually expected to use its observations to form theories about the universe it inhabits, subsequently optimizing its action with respect to the theory.

Since this process crucially depends on observations, we need to make the role of observations explicit. Since we assume the agent uses some version of UDT, we are not supposed to update on observations, instead evaluating the logical conditional expectation values

[2] $v_{A,U}(\pi)=E_{log}[\int_X U(x) d\mu(x) \mid \forall i \in I: A(i)=\pi(i)]$

Here $A$ is the agent, $\pi:I \rightarrow O$ is a potential policy for the agent (mapping from sensory inputs to actions) and $E_{log}$ is expectation value with respect to logical uncertainty.

Now suppose $A$ made observations $\tau$ leading it to postulate physical theory $T$. For the sake of simplicity, we suppose $A$ is only deciding its actions in the universes in which observations $\tau$ were made1. Thus, we assume that the input space factors as $I=I_{past} \times I_{future}$ and we're only interested in inputs in the set $\tau \times I_{future}$. This simplification leads to replacing [2] by

[3] $v^{future}_{A,U}(\pi) = E_{log}[\int_X U(x) d\mu(x) \mid \forall i \in I_{future}: A(\tau \times i)=\pi(i)]$

where $\pi:I_{future} \rightarrow O$ is a "partial" policy referring to the $\tau$-universe only.

The discovery of $T$ allows $A$ to perform a certain approximation of [3]. A naive guess of the form of the approximation is

[4'] $v^{future}_{A,U}(\pi) \approx w^{future}_{A,U} + E_{log}[\int_X U(x) d\nu_T(x) \mid \forall i \in I_{future}: A(\tau \times i)=\pi(i)]$

Here, $w^{future}_{A,U}$ is a constant representing the contributions of the universes in which $T$ is not valid (whose logical-uncertainty correlation with $A$ we neglect) and $\nu_T$ is a measure on $X$ corresponding to $T$. Now, physical theories in the real world often specify time evolution equations without saying anything about the initial conditions. Such theories are "incomplete" from the point of view of the current formalism. To complete such a theory we need a measure on the space of initial conditions: a "cosmology". A simple example of a "complete" theory $T$: a cellular automaton with deterministic (or probabilistic) evolution rules and a measure on the space of initial conditions (e.g. set each cell to an independently random state).
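To make the notion of a "complete" theory concrete, here is a toy sketch (my own illustration, not from the post): a 1D elementary cellular automaton (Rule 110, an arbitrary choice) as the evolution law, an i.i.d. fair-coin measure on initial cells as the "cosmology", and a Monte Carlo estimate of the expected utility under that measure.

```python
import random

def step(cells):
    """One step of elementary cellular automaton Rule 110 on a ring."""
    n = len(cells)
    rule = 110
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def utility(history):
    """Toy utility: number of live cells in the final state."""
    return sum(history[-1])

def expected_utility(width=16, steps=8, samples=2000, seed=0):
    """Monte Carlo estimate of E[U] under the 'cosmology' that assigns
    each initial cell an independent fair coin flip."""
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        cells = [rng.randint(0, 1) for _ in range(width)]
        history = [cells]
        for _ in range(steps):
            cells = step(cells)
            history.append(cells)
        total += utility(history)
    return total / samples

print(expected_utility())
```

Any (rule, initial-condition measure) pair of this shape plays the role of a "complete" $T$; the measure over histories it induces is the toy analogue of $\nu_T$.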

However, [4'] is in fact not a valid approximation of [3]. This is because the use of $\nu_T$ fixes the ontology: $\nu_T$ treats binary sequences as encoding the universe in a way natural for $T$, whereas dominant2 contributions to [3] come from binary sequences which encode the universe in a way natural for $U$.

# Ontology Decoupling

Allow me a small digression to discuss desiderata of logical uncertainty. Consider an expression of the form $E_{log}[2x]$ where $x$ is a mathematical constant with some complex definition e.g. $\pi$ or the Euler-Mascheroni constant $\gamma$. From the point of view of an agent with bounded computing resources, $x$ is a random variable rather than a constant (since its value is not precisely known). Now, in usual probability theory we are allowed to use identities such as $E_{log}[2x]=2E_{log}[x]$. In the case of logical uncertainty, the identity is less obvious since the operation of multiplying by 2 has non-vanishing computing cost. However, since this cost is very small we expect to have the approximate identity $E_{log}[2x] \approx 2E_{log}[x]$.

Consider a set $\Delta$ of programs computing functions $X \rightarrow X$ containing the identity program. Then, the properties of the Solomonoff measure give us the approximation

[5] $\int_X U(x) d\mu(x) \approx \sum_{f \in \Delta} 2^{-|f|} \int_{X} U(f(x)) d\mu_\Delta(x)$

Here $\mu_\Delta$ is the restriction of $\mu$ to hypotheses which don't decompose as applying some program in $\Delta$ to another hypothesis and $|f|$ is the length of the program $f$.

Applying [5] to [3] we get

$v^{future}_{A,U}(\pi) \approx E_{log}[\sum_{f \in \Delta} 2^{-|f|} \int_{X} U(f(x)) d\mu_\Delta(x) \mid \psi^{future}_A(\pi)]$

Here $\psi^{future}_A(\pi)$ is a shorthand notation for $\forall i \in I_{future}: A(\tau \times i)=\pi(i)$. Now, according to the discussion above, if we choose $\Delta$ to be a set of sufficiently cheap programs3 we can make the further approximation

$v^{future}_{A,U}(\pi) \approx \sum_{f \in \Delta} 2^{-|f|} E_{log}[\int_{X} U(f(x)) d\mu_\Delta(x) \mid \psi^{future}_A(\pi)]$

If we also assume $\Delta$ to be sufficiently large, it becomes plausible to use the approximation

[4] $v^{future}_{A,U}(\pi) \approx w^{future}_{A,U} + \sum_{f \in \Delta} 2^{-|f|} E_{log}[\int_X U(f(x)) d\nu_T(x) \mid \psi^{future}_A(\pi)]$

The ontology problem disappears since $f$ bridges between the ontologies of $T$ and $U$. For example, if $T$ describes the Game of Life and $U$ describes glider maximization in the Game of Life, but the two are defined using different encodings of Game of Life histories, the term corresponding to the re-encoding $f$ will be dominant2 in [4].
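The Game of Life example can be miniaturized in code. In this sketch (my own toy construction; the grid size, the two serializations, and the utility function are all illustrative assumptions), $T$ serializes a grid column-major while $U$ expects row-major, and composing with the re-encoding $f$ before applying $U$ recovers the intended value:

```python
def columns_to_rows(bits, n):
    """Re-encoding f: an n-by-n grid serialized column-major -> row-major."""
    grid = [[bits[c * n + r] for c in range(n)] for r in range(n)]
    return [b for row in grid for b in row]

def utility(bits, n):
    """U's ontology: bits are row-major; utility = live cells in row 0."""
    return sum(bits[:n])

n = 3
# Grid with the first ROW all alive, serialized the way T finds natural
# (column-major).
grid = [[1, 1, 1], [0, 0, 0], [0, 0, 0]]
col_major = [grid[r][c] for c in range(n) for r in range(n)]

print(utility(col_major, n))                      # 1: U reads the wrong cells
print(utility(columns_to_rows(col_major, n), n))  # 3: U after re-encoding via f
```

Applying $U$ directly to $T$'s encoding counts the wrong cells; the composition $U \circ f$ is exactly the ontology bridging performed by the $U(f(x))$ term.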

# Stay Tuned

The formalism developed in this post does not yet cover the entire content of a physical theory. Realistic physical theories not only describe the universe in terms of an arbitrary ontology but also explain how this ontology relates to the "classical" world we experience. In other words, a physical theory comes with an explanation of the embedding of the agent in the universe (a phenomenological bridge). This will be addressed in the next post, where I explain the Cartesian approximation: the approximation that decouples the agent from the rest of the universe.

Subsequent posts will apply this formalism to quantum mechanics and eternal inflation to understand utility calculus in Tegmark levels III and II respectively.

1 As opposed to a fully fledged UDT agent which has to simultaneously consider its behavior in all universes.

2 By "dominant" I mean dominant in dependence on the policy $\pi$ rather than absolutely.

3 They have to be cheap enough to take the entire sum out of the expectation value rather than only the $2^{-|f|}$ factor in a single term. This condition depends on the amount of computing resources available to our agent, which is an implicit parameter of the logical-uncertainty expectation values $E_{log}$.

## Weekly LW Meetups

2 21 February 2015 06:32AM

This summary was posted to LW Main on February 13th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

## Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 108

5 20 February 2015 09:53PM

New long chapter! Since I expect its discussion to generate more than 160 comments (which would push the previous thread over the 500 comment limit) before the next chapter is posted, here is a new thread.

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 108 (and chapter 109, once it comes out on Monday).

EDIT: There have now been two separate calls for having one thread per chapter, along with a poll in this thread. If the poll in this thread indicates a majority preference for one thread per chapter by Monday, I will edit this post to make it for chapter 108 only. In that case a new thread for chapter 109 should be posted by whoever gets a chance and wants to after the chapter is released.

EDIT 2: The poll indicates a large majority (currently 78%) in favor of one thread per chapter. This post has been edited accordingly.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is now not updating. The author's notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

## My mind must be too highly trained

5 20 February 2015 09:43PM

I've played various musical instruments for nearly 40 years now, but some simple things remain beyond my grasp. Most frustrating is sight reading while playing piano. Though I've tried for years, I can't read bass and treble clef at the same time. To sight-read piano music, when you see this:

you need your right hand to read it as C D E F, but your left hand to read it as E F G A. To this day, I can't do it, and I can only learn piano music by learning the treble and bass clef parts separately to the point where I don't rely on the score for more than reminders, then playing them together.
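The letter-name relationship between the two clefs is mechanical: the same line or space reads two letter-steps higher in bass clef than in treble (treble's bottom-line E is bass's G; octaves ignored). A minimal sketch (the function names are mine):

```python
LETTERS = "CDEFGAB"

def read_position(treble_name, clef):
    """Given the note name a staff position has in treble clef, return the
    name the same position has in the requested clef. Bass clef reads the
    identical line/space two letter-steps higher (octaves ignored)."""
    shift = {"treble": 0, "bass": 2}[clef]
    return LETTERS[(LETTERS.index(treble_name) + shift) % 7]

passage = "CDEF"
print("".join(read_position(p, "treble") for p in passage))  # CDEF
print("".join(read_position(p, "bass") for p in passage))    # EFGA
```

Which is exactly the pair of readings described above: the same four positions are C D E F for the right hand and E F G A for the left.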

## Money threshold Trigger Action Patterns

16 20 February 2015 04:56AM

In American society, talking about money is taboo. It is ok to talk about how much money someone else made when they sold their company, or how much money you would like to earn yearly if you got a raise, but in many different ways, talking about money is likely to trigger some embarrassment in the brain and generate social discomfort. As one random example: no one dares suggest that bills should be paid according to wealth; instead people quietly assume that the fair split is each person paying ~1/n, which of course fails utilitarian standards.
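To make the utilitarian point concrete, here is a toy calculation under an assumed log-utility-of-wealth model (both the model and the numbers are my illustrative assumptions): splitting a bill in proportion to wealth leaves higher total utility than splitting it ~1/n.

```python
import math

def total_log_utility(wealths, shares):
    """Sum of log-wealth after each person pays their share of the bill."""
    return sum(math.log(w - s) for w, s in zip(wealths, shares))

wealths = [1_000_000, 10_000]
bill = 1_000

equal = [bill / 2] * 2                                   # the ~1/n split
proportional = [bill * w / sum(wealths) for w in wealths]  # pay by wealth

u_equal = total_log_utility(wealths, equal)
u_prop = total_log_utility(wealths, proportional)
print(u_prop > u_equal)  # True: proportional split hurts less in total
```

With concave utility, \$500 matters far more to the person with \$10,000 than \$990 does to the person with \$1,000,000, which is all the example is meant to show.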

One more interesting thing people don't talk about, but would probably be useful to know, are money trigger action patterns. That would be a trigger action pattern that should trigger whenever you have more money than X, for varying Xs.

A trivial example is when should you stop caring about pennies, or quarters? When should you start taking cabs or Ubers everywhere? These are minor examples, but there are more interesting questions that would benefit from a money trigger action pattern.

An argument can be made for instance that one should invest in health insurance prior to cryonics, cryonics prior to painting a house and recommended charities before expensive soundsystems. But people never put numbers on those things.

When should you buy cryonics and life insurance for it? When you own \$1,000? \$10,000? \$1,000,000? Yes of course those vary from person to person, currency to currency, environment, age group and family size. This is no reason to remain silent about them. Money is the unit of caring, but some people can care about many more things than others in virtue of having more money. Some things are worth caring about if and only if you have that many caring units to spare.

I'd like to see people talking about what one should care about after surpassing specific numeric thresholds of money, though that seems to be an extremely taboo topic. It seems it would be particularly revealing when someone who does not have a certain amount suggests a trigger action pattern, and someone who does have that amount realizes that, indeed, they should purchase that thing. Some people would also calibrate better on whether they need more or less money if they had thought about these thresholds beforehand.

Some suggested items for those who want to try numeric triggers: health insurance, cryonics, 10% donation to favorite cause, virtual assistant, personal assistant, car, house cleaner, masseuse, quitting your job, driver, boat, airplane, house, personal clinician, lawyer, body guard,  etc...

...notice also that some of these are satisfiable at some resource level, but some may never be. It may always be more worthwhile to finance your anti-aging helper than your costume designer, so you'd hire the 10 millionth scientist to find out how to keep you young before considering hiring someone to design clothes specifically for you, perhaps because you don't like unique clothes. This is my feeling about boats: it feels like there are always other things that can be done with money that precede having a boat, though the outside view is that a lot of people who own a lot of money buy boats.

## Easy wins aren't news

36 19 February 2015 07:38PM

Recently I talked with a guy from Grant Street Group. They make, among other things, software with which local governments can auction their bonds on the Internet.

By making the auction process more transparent and easier to participate in, they enable local governments which need to sell bonds (to build a high school, for instance), to sell those bonds at, say, 7% interest instead of 8%. (At least, that's what he said.)
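The stakes are easy to ballpark. A sketch with hypothetical numbers (the post gives only the 8% vs. 7% rates; the principal and term are my assumptions):

```python
def coupon_savings(principal, rate_old, rate_new, years):
    """Total interest saved on a simple annual-coupon bond issue when the
    auction shaves the rate from rate_old to rate_new (toy model: flat
    coupons, no discounting)."""
    return principal * (rate_old - rate_new) * years

# Hypothetical $10M high-school bond over 20 years:
print(coupon_savings(10_000_000, 0.08, 0.07, 20))  # ~ 2,000,000
```

One percentage point on a single mid-sized issue is on the order of millions of dollars, which is why better auction mechanics are such an easy win.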

They have similar software for auctioning liens on property taxes, which also helps local governments raise more money by bringing more buyers to each auction, and probably helps the buyers reduce their risks by giving them more information.

This is a big deal. I think it's potentially more important than any budget argument that's been on the front pages since the 1960s. Yet I only heard of it by chance.

People would rather argue about reducing the budget by eliminating waste, or cutting subsidies to people who don't deserve it, or changing our ideological priorities. Nobody wants to talk about auction mechanics. But fixing the auction mechanics is the easy win. It's so easy that nobody's interested in it. It doesn't buy us fuzzies or let us signal our affiliations. To an individual activist, it's hardly worth doing.

## [LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics

16 19 February 2015 06:06PM

Sean Carroll, physicist and proponent of Everettian Quantum Mechanics, has just posted a new article going over some of the common objections to EQM and why they are false. Of particular interest to us as rationalists:

Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:

1. The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
2. The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.

That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.

Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.
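Postulate 2 can be illustrated in miniature (my toy example, not Carroll's): for a two-state system evolving under the Hamiltonian $H = \sigma_x$, the closed form $e^{-iHt} = \cos(t)I - i\sin(t)\sigma_x$ is unitary, so the squared amplitudes of the two "branches" always sum to 1 — nothing about worlds is postulated, yet superposed components persist under the evolution.

```python
import math

def evolve(state, t):
    """Evolve a 2-state system under H = sigma_x via the closed form
    exp(-i*H*t) = cos(t)*I - i*sin(t)*sigma_x (hbar = 1)."""
    a, b = state
    c, s = math.cos(t), math.sin(t)
    return (c * a - 1j * s * b, c * b - 1j * s * a)

def norm_sq(state):
    """Total probability: sum of squared amplitudes."""
    return sum(abs(z) ** 2 for z in state)

psi = (1.0 + 0j, 0.0 + 0j)  # start entirely in component |0>
for t in (0.3, 0.7, 1.5):
    psi_t = evolve(psi, t)
    print(round(norm_sq(psi_t), 12))  # 1.0 at every time: unitarity
```

The state spreads over both components as $t$ grows, but no probability is created or destroyed; that conservation is all the Schrödinger postulate asserts.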

Very reminiscent of the quantum physics sequence here! I find that this distinction between number of entities and number of postulates is something that I need to remind people of all the time.

META: This is my first post; if I have done anything wrong, or could have done something better, please tell me!

## Virtue, group and individual prestige

0 19 February 2015 02:55PM

Let's assume now that people respect other people who have or appear to have high levels of virtue. Let's also say that Alice has Level 10 virtue and for this reason she has Level X prestige in other people's eyes, purely based on her individual merits.

Now let's assume that Alice teams up with a lot of other people who have Level 10 virtue and they form the League of Extraordinarily Virtuous People. How much prestige would membership in the League confer on its members? Higher or lower than X?

I would say higher, for two reasons. First, you give Alice a really close look, and you judge that her virtue must be somewhere around Level 10. However, you don't trust your judgement very much, and for that reason you discount a bit the prestige points you award her. But she was accepted into the League by other people who also appear to be very virtuous. This suggests your estimation was correct, and you can afford to award her more points. Every well-proven virtue a League member has increases the chance that the virtues of the other members are not fake either (or else that member would not accept being in the same League with them), and this increases the amount of prestige points you award to them. Second, few people know Alice up close and personally. The bigger the distance, the less they know about her; her personal fame radiates only so far. But the combined fame of the League radiates much farther, so more people notice the members' virtuousness and award prestige points to them.
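The first reason can be sketched as a toy precision-weighted (Gaussian) estimate; the model and numbers are my illustrative assumptions, nothing rigorous: each independent virtuous endorser acts as another noisy measurement of Alice's virtue, shrinking the uncertainty you were discounting for.

```python
def posterior_sd(prior_sd, measurement_sd, n_endorsers):
    """Standard deviation of a Gaussian posterior after combining a prior
    with n independent, equally noisy 'endorsement' measurements.
    Precisions (inverse variances) simply add."""
    precision = 1 / prior_sd ** 2 + n_endorsers / measurement_sd ** 2
    return precision ** -0.5

alone = posterior_sd(prior_sd=2.0, measurement_sd=3.0, n_endorsers=1)
league = posterior_sd(prior_sd=2.0, measurement_sd=3.0, n_endorsers=9)
print(alone > league)  # True: more independent endorsements, less discounting
```

Less residual uncertainty means a smaller discount on the prestige you award, which is the corroboration effect described above.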

In other words, if virtuous people want to maximize the prestige points they have, it is a good idea for them to form famous groups with strict entry requirements.

And suddenly Yale class rings make sense. Members get more prestige from belonging to a group that is famous for having whatever virtues it takes to graduate from Yale than they could get for simply having those virtues.

The flip side of it, if you want to motivate people to be more virtuous, and if you think prestige assigned to virtue is a good way to do that, encourage them to form famous groups with strict entry requirements.

One funny thing is that the stricter you make the entry requirements (the minimum level of virtue), the more prestige the group will _automatically_ get. You just design the entry test, basically the cost paid; you don't need to design the reward, because it happens automatically! That is just handy.

Well, the whole thing is fairly obvious as long as the virtue in question is "studying your butt off". It is done all the time. This is what the term "graduated from a prestigious university" means.

It is less obvious once the virtue in question is something like "stood up for the victims of injustice, even facing danger for it".

Have you ever wondered why the same logic is not applied there? Find a moral cause. Pick out the people who support it the most virtuously, who took on the most personal danger and the least personal benefit, etc., make them examples, and have them form an elite club. That club will convey a lot of prestige on its members. This suggests other people will take more pains to support that cause in order to get into that club.

Yet, it is not really done. When was the last time you saw strict entry requirements for any group, club, or association related to a social cause? It is usually the opposite: entry is made easy (just sign up for the newsletter here), which means it does not convey much prestige.

If there is anything that matters to you, not even necessarily a moral social cause, but just anything you wish more people did, just stop for a minute and think over whether such high-prestige, famous, elite groups with strict entry requirements should be formed with regard to that.

And now I don't understand why I don't see badges like "Top MIRI donor" beside usernames around here. Has the idea not been thought of before, or am I missing something important here?

It can also be useful to form groups of people who are virtuous at _anything_, putting the black-belt into the same group as the scholar or the activist who stood up against injustice. "Excel at anything and be one of us." This seems to be the most efficient prestige generator, and thus motivator, because different people notice and reward different kinds of virtues with prestige points. If I respect mainly edge.org-level scientists, and they are willing to be in the same club as some political activist who never published science, I will find that activist curious, interesting and respectable. That is partially why I toy with the idea of knightly orders.

View more: Next