
FOOM Articles

fowlertm 05 March 2015 09:32PM

I'd like recommendations for articles dealing with slow and hard takeoff scenarios. I already found Yudkowsky's post 'hard takeoff', I know 'Superintelligence' has a section on it, and I think the Yudkowsky/Hanson debate mostly dealt with it.

Is there anything else?

HPMOR Wrap Parties: Resources, Information and Discussion [link to Main post]

TylerJay 05 March 2015 09:15PM

Posted to Main. X-posting for visibility

I know a lot of people don't check Main very often. Wanted to make sure everyone interested saw this. 

False thermodynamic miracles

3 Stuart_Armstrong 05 March 2015 05:04PM

A putative new idea for AI control; index here.

Ok, here is the problem:

  • You have to create an AI that believes (or acts as if it believed) that event X is almost certain, while you believe that X is almost impossible. Furthermore, you have to be right. To make things more interesting, the AI is much smarter than you, knows everything that you do (and more), and has to react sensibly when event X doesn't happen.

Answers will be graded on mathematics, style, colours of ink, and compatibility with the laws of physics. Also, penmanship. How could you achieve this?

continue reading »

Satisficers' undefined behaviour

0 Stuart_Armstrong 05 March 2015 05:03PM

I previously posted an example of a satisficer (an agent seeking to achieve a certain level of expected utility u) transforming itself into a maximiser (an agent wanting to maximise expected u) to better achieve its satisficing goals.

But the real problem with satisficers isn't that they "want" to become maximisers; the real problem is that their behaviour is undefined. We conceive of them as agents that would do the minimum required to reach a certain goal, but we don't specify "minimum required".

For example, let A be a satisficing agent. It has a utility u that is quadratic in the number of paperclips it builds, except that after building 10^100, it gets a special extra exponential reward, until 10^1000, where the extra reward becomes logarithmic, and after 10^10000, it also gets utility in the number of human frowns divided by 3↑↑↑3 (unless someone gets tortured by dust specks for 50 years).

A's satisficing goal is a minimum expected utility of 0.5, and, in one minute, the agent can press a button to create a single paperclip.

So pressing the button is enough. In the coming minute, A could decide to transform itself into a u-maximiser (as that still ensures the button gets pressed). But it could also do a lot of other things. It could transform itself into a v-maximiser, for many different v's (generally speaking, given any v, either v or -v will result in the button being pressed). It could break out, send a subagent to transform the universe into cream cheese, and then press the button. It could rewrite itself into a dedicated button pressing agent. It could write a giant Harry Potter fanfic, force people on Reddit to come up with creative solutions for pressing the button, and then implement the best.

All these actions are possible for a satisficer, and are completely compatible with its motivations. This is why satisficers are un(der)defined, and why any behaviour we want from them - such as "minimum required" impact - has to be put in deliberately.
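To make the underdefinition concrete, here is a minimal toy sketch in Python (my own illustration; the policies and expected utilities are made up and this is not Armstrong's formalism). Any selection rule that returns some policy clearing the threshold counts as a satisficer, so the choice among the acceptable policies is left entirely open:

    import random

    THRESHOLD = 0.5

    # Hypothetical candidate policies and their expected utility u (numbers made up).
    policies = {
        "press the button once": 1.0,
        "self-modify into a u-maximiser, then press the button": 1e100,
        "turn the universe into cream cheese, then press the button": 1.0,
        "do nothing": 0.0,
    }

    def satisficer(candidates, threshold):
        """Return an arbitrary policy meeting the threshold - the spec demands no more."""
        acceptable = [p for p, eu in candidates.items() if eu >= threshold]
        return random.choice(acceptable)  # any tie-breaking rule is equally "correct"

    def maximiser(candidates):
        """Return the policy with the highest expected utility."""
        return max(candidates, key=candidates.get)

    print("satisficer picks:", satisficer(policies, THRESHOLD))  # any of the first three
    print("maximiser picks: ", maximiser(policies))

Nothing in the satisficing condition privileges "press the button once" over the cream-cheese policy; any preference for low impact has to be added explicitly.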

I've got some ideas for how to achieve this, which I'll be posting here.

Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116

4 Gondolinian 04 March 2015 08:11PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 116.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is now not updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Is status really that simple?

-4 DeVliegendeHollander 04 March 2015 11:15AM

I am reading The Other Kind of Status, and it seems to me that status is treated as a single number, whether objective, in the eyes of other people in the group, or in your own, and whether ordinal or cardinal, but at the end of the day you can say your status is 67 points or 12th in rank. I don't think that is actually the case! A few examples of why it is more complicated:

 

Intimidation, power, authority


People behave in a respectful, deferential, submissive way toward people they are afraid of, whether the fear comes from personal scariness or from power and authority. However, this is not genuine respect. (Well, it is hard to say exactly; I would say for most of you it is not, but on the other hand there are people who admire strength or authority so much that they truly respect those who can intimidate them, because they too would like to be intimidating. Let's say it is not genuine respect in all cases.) If your neighbor is a cop and people treat him with extra tact because he might find an excuse for an arrest if he gets pissed off, is that status? A better example: crime lords, The Godfather (as seen by "normal" people, not by their fellow criminals).

 

The opposite: the purely moral status

People who are very, very good, whose goodness also makes them very meek, who would not hurt anyone even in self-defense, and who obviously show it, get a strange kind of respect. Many people genuinely treat them with respect, but somehow it lacks certain aspects of the respect a high-ranking businessman gets; when people are so obviously harmless, the respect seems to have less depth.

 

Most common status

I think the most common cases of status have elements of both. To be high status you need to have power - not necessarily in the social-political sense, but in the sense of "the ability to affect things". Being very intelligent and learned is a good example: it is a kind of power. And you need to use that power in ways we generally morally approve of, for we don't really respect a criminal mastermind. But you also need a bit of intimidation potential; you should not look too harmless. Of course you don't need to behave in intimidating ways, but if people think "wow, I would not want such a smart person as my enemy, I could get check-mated", that gives more depth to the respect. Perhaps it is better - less disturbing - to call it not intimidation potential but _ally potential_: if someone else wanted to hurt you, does this person have anything to assist you in the conflict? That anything could be intelligence, knowledge, social influence, charisma, political position, physical strength...

I dislike made-up evo-psy as much as everybody else, but this sort of makes sense in an ancestral environment. We respect tribe members who are useful allies, who have power, i.e. abilities or resources usable in affecting the world; but what makes them useful allies also makes them dangerous as potential enemies, so there is a bit of intimidation potential as well; and generally we want them to use these abilities or resources for the tribe, not against it, which is probably where morality comes from.

 

But that is only the beginning

In the example above, status is not one number but two: power status 43, morality status 51. This alone demonstrates the problem with the single-number approach. But there can be many more numbers... I have seen very confusing and ambiguous status setups in my life that probably came from many numbers.

 

- For example, some people assign high status to people who wear business suits (or their female equivalents) because it suggests a powerful social position, but other, younger people were more like "Ah, so you work. How boring. Worky worky working bee tehehee. Why aren't you a rich playboy or gangster who does not need to work?" So I saw something like a wants-to-work vs. must-work split here, or I am not even sure exactly what.

- I saw people who were generally materialistic and yet valued wearing designer clothes more than driving an expensive car in China-Mart clothes, so apparently they assigned one number to style and another to wealth, and the two interacted in non-obvious ways.

- Or simply at school: it was not obvious whether the students with good grades had higher status, or those who considered it a romantic rebellion against authority not to study, not to write tests, and not to answer the teacher's questions. Many kids envied the courage of the second group but were still afraid of punishment and studied conscientiously anyway, and the funniest part was that in trying to satisfy both goals, they studied conscientiously, got good grades, then lied about it and boasted they had not studied at all and got the good grade purely on luck or smarts! Because studying was seen as boot-licking the teacher, almost as bad as snitching... but of course getting an admission letter to a law school ("Wow, Rob is gonna be a rich lawyer!") made him a hero, so both studying and not studying conferred status!

- Still at school, an easier example: during breaks, being funny and entertaining was valued. In phys ed class, skill was, since we played a lot of ball sports (and being good at them was not considered being a teacher's pet), so highly skilled players were respected. The hierarchy visibly changed between the break before or after phys ed class and the class itself. This is a fairly clear example of status consisting of multiple numbers, like Humor 43, Skill 71.

Of course one could say it is just different people valuing different things and that is that, but I think the multiple-number hypothesis is better: the same people valuing other people in different ways, as in the very first example (intimidated respect for the crime boss or policeman, respect without depth for the moral saint), or valuing other people differently in different circumstances...
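As a toy illustration of the multiple-number hypothesis (my own sketch in Python, with invented people and numbers), status can be modelled as a vector of scores plus context-dependent weights, which reproduces the shifting school hierarchy:

    # Each person has a vector of status scores; each context weights the dimensions
    # differently, so the ranking can change from one context to the next.

    status = {
        "Rob":   {"humor": 20, "skill": 80},
        "Alice": {"humor": 70, "skill": 30},
    }

    context_weights = {
        "break":   {"humor": 0.8, "skill": 0.2},
        "phys ed": {"humor": 0.2, "skill": 0.8},
    }

    def ranking(context):
        """Sort people by their context-weighted status score, highest first."""
        weights = context_weights[context]
        score = lambda person: sum(w * status[person][d] for d, w in weights.items())
        return sorted(status, key=score, reverse=True)

    print("break:  ", ranking("break"))    # ['Alice', 'Rob']
    print("phys ed:", ranking("phys ed"))  # ['Rob', 'Alice']

A single-number model cannot produce this reversal without changing the number itself between contexts.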

Stupid Questions March 2015

4 Gondolinian 03 March 2015 11:37PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115

3 Gondolinian 03 March 2015 06:02PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 114, and also, as a special case due to the exceptionally close posting times, chapter 115.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is now not updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Getting better at getting better

2 casebash 03 March 2015 11:12AM

I decided to link to this article because it seems to be exactly what Less Wrong is about: http://www.newyorker.com/magazine/2014/11/10/better-time. Out of interest, does anyone know of a good resource for learning more about the training techniques used in elite athletics?

[POLL] LessWrong group on YourMorals.org (2015)

11 gwern 03 March 2015 03:08AM

In 2011, InquilineKea posted a Discussion topic on YourMorals.org, a psychology research website which provides scores of psychology scales/inventories/surveys/tests to the general public to gather large samples. Niftily, YourMorals lets users sign up for particular groups, and then when you take tests, you can see your own results alongside group averages of liberals/conservatives/libertarians & $GROUP. A lot of time has passed and I think most LWers don't know about it, so I'm reposting so people can use it.

 

The regular research has had interesting results like showing a distinct pattern of cognitive traits and values associated with libertarian politics, but there's no reason one can't use it for investigating LWers in more detail; for example, going through the results, "we can see that many of us consider purity/respect to be far less morally significant than most", and we collectively seem to have Conscientiousness issues. (I also drew on it recently for a gay marriage comment.) If there were more data, it might be interesting to look at the results and see where LWers diverge the most from libertarians (the mainstream group we seem most psychologically similar to), but unfortunately for a lot of the tests, there's too little to bother with (LW n<10). Maybe more people could take it.

 

You can sign up using http://www.yourmorals.org/setgraphgroup.php?grp=623d5410f705f6a1f92c83565a3cfffc

All quizzes: http://www.yourmorals.org/all_morality_values_quizzes.php

Big 5: http://www.yourmorals.org/bigfive_process.php

 

(You can see some of my results at http://www.gwern.net/Links#profile )

Superintelligence 25: Components list for acquiring values

6 KatjaGrace 03 March 2015 02:01AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twenty-fifth section in the reading guide: Components list for acquiring values.

This post summarizes the section, and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Component list” and “Getting close enough” from Chapter 13


Summary

  1. Potentially important choices to make before building an AI (p222)
    • What goals does it have?
    • What decision theory does it use?
    • How do its beliefs evolve? In particular, what priors and anthropic principles does it use? (epistemology)
    • Will its plans be subject to human review? (ratification)
  2. Incentive wrapping: beyond the main pro-social goals given to an AI, add some extra value for those who helped bring about the AI, as an incentive (p222-3)
  3. Perhaps we should indirectly specify decision theory and epistemology, like we have suggested doing with goals, rather than trying to resolve these issues now. (p224-5)
  4. An AI with a poor epistemology may still be very instrumentally smart, but for instance be incapable of believing the universe could be infinite (p225)
  5. We should probably attend to avoiding catastrophe rather than maximizing value (p227) [i.e. this use of our attention is value-maximizing...]
  6. If an AI has roughly the right values, decision theory, and epistemology maybe it will correct itself anyway and do what we want in the long run (p227)

Another view

Paul Christiano argues (today) that decision theory doesn't need to be sorted out before creating human-level AI. Here's a key bit, but you might need to look at the rest of the post to understand his idea well:

Really, I’d like to leave these questions up to an AI. That is, whatever work I would do in order to answer these questions, an AI should be able to do just as well or better. And it should behave sensibly in the interim, just like I would.

To this end, consider the definition of a map U' : [Possible actions] → ℝ:

U'(a) = “How good I would judge the action to be, after an idealized process of reflection.”

Now we’d just like to build an “agent” that takes the action a maximizing 𝔼[U'(a)]. Rather than defining our decision theory or our beliefs, we will have to come up with some answer during the “idealized process of reflection.” And as long as an AI is uncertain about what we’d come up with, it will behave sensibly in light of its uncertainty.

This feels like a cheat. But I think the feeling is an illusion. 

(more)
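As a minimal toy sketch of this idea in Python (my own illustration; the actions, hypotheses and credences are made up, and this is not Christiano's actual construction): the agent does not know what the idealized process of reflection would conclude, so it keeps a distribution over possible "reflected" utility functions and picks the action with the highest expected value under that distribution.

    # The agent is uncertain which utility function an idealized process of reflection
    # would endorse, so it maximises the expectation of U'(a) over that uncertainty.

    actions = ["ask a human", "grab resources", "do nothing"]

    # Hypotheses about what the idealized reflection would say, with credences.
    reflection_hypotheses = [
        (0.6, {"ask a human": 0.9, "grab resources": 0.1, "do nothing": 0.5}),
        (0.4, {"ask a human": 0.7, "grab resources": 0.0, "do nothing": 0.6}),
    ]

    def expected_reflected_utility(a):
        """E[U'(a)]: average the hypothesised reflected utilities, weighted by credence."""
        return sum(p * u[a] for p, u in reflection_hypotheses)

    best = max(actions, key=expected_reflected_utility)
    print(best)  # "ask a human" - sensible behaviour in light of its uncertainty

As long as the agent remains uncertain about the outcome of reflection, actions that look bad under a substantial fraction of the hypotheses (like "grab resources" here) score poorly.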

Notes

1. MIRI's Research, and decision theory

MIRI focuses on technical problems that they believe can't be delegated well to an AI. Thus MIRI's technical research agenda describes many such problems and questions. In it, Nate Soares and Benja Fallenstein also discuss the question of why these can't be delegated:

Why can’t these tasks, too, be delegated? Why not, e.g., design a system that makes “good enough” decisions, constrain it to domains where its decisions are trusted, and then let it develop a better decision theory, perhaps using an indirect normativity approach (chap. 13) to figure out how humans would have wanted it to make decisions?

We cannot delegate these tasks because modern knowledge is not sufficient even for an indirect approach. Even if fully satisfactory theories of logical uncertainty and decision theory cannot be obtained, it is still necessary to have a sufficient theoretical grasp on the obstacles in order to justify high confidence in the system’s ability to correctly perform indirect normativity.

Furthermore, it would be risky to delegate a crucial task before attaining a solid theoretical understanding of exactly what task is being delegated. It is possible to create an intelligent system tasked with developing better and better approximations of Bayesian updating, but it would be difficult to delegate the abstract task of “find good ways to update probabilities” to an intelligent system before gaining an understanding of Bayesian reasoning. The theoretical understanding is necessary to ensure that the right questions are being asked.

If you want to learn more about the subjects of MIRI's research (which overlap substantially with the topics of the 'components list'), Nate Soares recently published a research guide. For instance, here's some of it on the (pertinent this week) topic of decision theory:

Existing methods of counterfactual reasoning turn out to be unsatisfactory both in the short term (in the sense that they systematically achieve poor outcomes on some problems where good outcomes are possible) and in the long term (in the sense that self-modifying agents reasoning using bad counterfactuals would, according to those broken counterfactuals, decide that they should not fix all of their flaws). My talk “Why ain’t you rich?” briefly touches upon both these points. To learn more, I suggest the following resources:

  1. Soares & Fallenstein’s “Toward idealized decision theory” serves as a general overview, and further motivates problems of decision theory as relevant to MIRI’s research program. The paper discusses the shortcomings of two modern decision theories, and discusses a few new insights in decision theory that point toward new methods for performing counterfactual reasoning.

If “Toward idealized decision theory” moves too quickly, this series of blog posts may be a better place to start:

  1. Yudkowsky’s “The true Prisoner’s Dilemma” explains why cooperation isn’t automatically the ‘right’ or ‘good’ option.

  2. Soares’ “Causal decision theory is unsatisfactory” uses the Prisoner’s Dilemma to illustrate the importance of non-causal connections between decision algorithms.

  3. Yudkowsky’s “Newcomb’s problem and regret of rationality” argues for focusing on decision theories that ‘win,’ not just on ones that seem intuitively reasonable. Soares’ “Introduction to Newcomblike problems” covers similar ground.

  4. Soares’ “Newcomblike problems are the norm” notes that human agents probabilistically model one another’s decision criteria on a routine basis.

MIRI’s research has led to the development of “Updateless Decision Theory” (UDT), a new decision theory which addresses many of the shortcomings discussed above.

  1. Hintze’s “Problem class dominance in predictive dilemmas” summarizes UDT’s dominance over other known decision theories, including Timeless Decision Theory (TDT), another theory that dominates CDT and EDT.

  2. Fallenstein’s “A model of UDT with a concrete prior over logical statements” provides a probabilistic formalization.

However, UDT is by no means a solution, and has a number of shortcomings of its own, discussed in the following places:

  1. Slepnev’s “An example of self-fulfilling spurious proofs in UDT” explains how UDT can achieve sub-optimal results due to spurious proofs.

  2. Benson-Tilsen’s “UDT with known search order” is a somewhat unsatisfactory solution. It contains a formalization of UDT with known proof-search order and demonstrates the necessity of using a technique known as “playing chicken with the universe” in order to avoid spurious proofs.

For more on decision theory, here is Luke Muehlhauser and Crazy88's FAQ.

2. Can stable self-improvement be delegated to an AI?

Paul Christiano also argues for 'yes' here:

“Stable self-improvement” seems to be a primary focus of MIRI’s work. As I understand it, the problem is “How do we build an agent which rationally pursues some goal, is willing to modify itself, and with very high probability continues to pursue the same goal after modification?”

The key difficulty is that it is impossible for an agent to formally “trust” its own reasoning, i.e. to believe that “anything that I believe is true.” Indeed, even the natural concept of “truth” is logically problematic. But without such a notion of trust, why should an agent even believe that its own continued existence is valuable?

I agree that there are open philosophical questions concerning reasoning under logical uncertainty, and that reflective reasoning highlights some of the difficulties. But I am not yet convinced that stable self-improvement is an especially important problem for AI safety; I think it would be handled correctly by a human-level reasoner as a special case of decision-making under logical uncertainty. This suggests that (1) it will probably be resolved en route to human-level AI, (2) it can probably be "safely" delegated to a human-level AI. I would prefer to use energy investigating other aspects of the AI safety problem... (more)

 

3. On the virtues of human review

Bostrom mentions the possibility of having an 'oracle' or some such non-interfering AI tell you what your 'sovereign' will do. He suggests some benefits and costs of this—namely, it might prevent existential catastrophe, and it might reveal facts about the intended future that would make sponsors less happy to defer to the AI's mandate (coherent extrapolated volition or some such thing). Four quick thoughts:

1) The costs and benefits here seem wildly out of line with each other. In a situation where you think there's a substantial chance your superintelligent AI will destroy the world, you are not going to set aside what you think is an effective way of checking, because it might cause the people sponsoring the project to realize that it isn't exactly what they want, and demand some more pie for themselves. Deceiving sponsors into doing what you want instead of what they would want if they knew more seems much, much, much less important than avoiding existential catastrophe.

2) If you were concerned about revealing information about the plan because it would lift a veil of ignorance, you might artificially replace some of the veil with intentional randomness.

3) It seems to me that a bigger concern with humans reviewing AI decisions is that it will be infeasible. At least if the risk from an AI is that it doesn't correctly manifest the values we want. Bostrom describes an oracle with many tools for helping to explain, so it seems plausible such an AI could give you a good taste of things to come. However if the problem is that your values are so nuanced that you haven't managed to impart them adequately to an AI, then it seems unlikely that an AI can highlight for you the bits of the future that you are likely to disapprove of. Or at least you have to be in a fairly narrow part of the space of AI capability, where the AI doesn't know some details of your values, but for all the important details it is missing, can point to relevant parts of the world where the mismatch will manifest.

4) Human oversight only seems feasible in a world where there is much human labor available per AI. In a world where a single AI is briefly overseen by a programming team before taking over the world, human oversight might be a reasonable tool for that brief time. Substantial human oversight does not seem helpful in a world where trillions of AI agents are each smarter and faster than a human, and need some kind of ongoing control.

4. Avoiding catastrophe as the top priority

In case you haven't read it, Bostrom's Astronomical Waste is a seminal discussion of the topic.

 

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. See MIRI's research agenda
  2. For any plausible entry on the list of things that can't be well delegated to AI, think more about whether it belongs there, or how to delegate it.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about strategy in directing science and technology. To prepare, read “Science and technology strategy” from Chapter 14. The discussion will go live at 6pm Pacific time next Monday 9 March. Sign up to be notified here.

Announcement: The Sequences eBook will be released in mid-March

42 RobbBB 03 March 2015 01:58AM

The Sequences are being released as an eBook, titled Rationality: From AI to Zombies, in about two weeks. (I'll update this post when we have a more precise release date.)

We went with the name "Rationality: From AI to Zombies" (based on shminux's suggestion) to make it clearer to people — who might otherwise be expecting a self-help book, or an academic text — that the style and contents of the Sequences are rather unusual. We want to filter for readers who have a wide-ranging interest in (/ tolerance for) weird intellectual topics. Alternative options tended to obscure what the book is about, or obscure its breadth / eclecticism.

 

The book's contents

Around 340 of Eliezer's essays from 2009 and earlier will be included, collected into twenty-six sections ("sequences"), compiled into six books:

  1. Map and Territory: sequences on the Bayesian conceptions of rationality, belief, evidence, and explanation.
  2. How to Actually Change Your Mind: sequences on confirmation bias and motivated reasoning.
  3. The Machine in the Ghost: sequences on optimization processes, cognition, and concepts.
  4. Mere Reality: sequences on science and the physical world.
  5. Mere Goodness: sequences on human values.
  6. Becoming Stronger: sequences on self-improvement and group rationality.

The six books will be released as a single sprawling eBook, making it easy to hop back and forth between different parts of the book. The whole book will be about 1,800 pages long. However, we'll also be releasing the same content as a series of six print books (and as six audio books) at a future date.

The Sequences have been tidied up in a number of small ways, but the content is mostly unchanged. The largest change is to how the content is organized. Some important Overcoming Bias and Less Wrong posts that were never officially sorted into sequences have now been added — 58 additions in all, forming four entirely new sequences (and also supplementing some existing sequences). Other posts have been removed — 105 in total. The following old sequences will be the most heavily affected:

  • Map and Territory and Mysterious Answers to Mysterious Questions are being merged, expanded, and reassembled into a new set of introductory sequences, with more focus placed on cognitive biases. The name 'Map and Territory' will be re-applied to this entire collection of sequences, constituting the first book.
  • Quantum Physics and Metaethics are being heavily reordered and heavily shortened.
  • Most of Fun Theory and Ethical Injunctions is being left out. Taking their place will be two new sequences on ethics, plus the modified version of Metaethics.

I'll provide more details on these changes when the eBook is out.

Unlike the print and audio-book versions, the eBook version of Rationality: From AI to Zombies will be entirely free. If you want to purchase it on the Kindle Store and download it directly to your Kindle, it will also be available on Amazon for $4.99.

To make the content more accessible, the eBook will include introductions I've written up for this purpose. It will also include a LessWrongWiki link to a glossary, which I'll be recruiting LessWrongers to help populate with explanations of references and jargon from the Sequences.

I'll post an announcement to Main as soon as the eBook is available. See you then!

March 2015 Media Thread

6 ArisKatsaris 02 March 2015 06:51PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Towards a theory of nerds... who suffer.

-9 DeVliegendeHollander 02 March 2015 05:11PM

Summary: I will here focus on nerds who suffer from a lack of self-respect and of sexual, romantic, and social success. My thesis is that this stems from self-hatred, that the self-hatred stems from childhood bullying, and that the solution involves fixing the things that made one a "tempting" bullying target, along with some other ways to improve self-respect.

Motivated reasoning and offense

SSC wrote that we don't yet have a science of nerds: http://slatestarcodex.com/2014/09/25/why-no-science-of-nerds/ My proposal is to use motivated reasoning and focus on the subset of nerds who suffer and need help. I am mostly familiar with the white straight male demographic, and within it, suffering nerds are often called "neckbeards" or "omega males".

One danger of such motivated reasoning is giving offense, because problems that cause suffering and need help overlap heavily with traits that can be used as insults. Many disabilities are good parallels here: it is possible to use disabilities as insults, mainly against people who don't actually have them, especially with emotionally loaded language like "cripple" or "retard". Any helpful doctor needs to be careful if he wants to diagnose a child with low IQ; parents will often react with "my kid is not stupid!", and we have a similar issue here.

The solution to the offense issue is this: if you are a nerd, and you find what I write here does not apply to you, good: you are not in the subset of nerds who need help! You are a happy, well-adjusted person with some "nerdy" interests and preferences, which is entirely OK but also relatively uninteresting; I simply don't want to discuss that, because it is mostly like discussing why some people don't like mushrooms on their pizza: maybe borderline curious, but not important. I focus on nerds who suffer. Human suffering is what matters, and if I can help a hundred people who suffer while offending ten who do not understand that I am not talking about them, it is a good trade.

I am largely talking about the guys who are mocked and bullied by being called "forever a virgin": those whose traits cluster around interest in D&D, Magic: The Gathering, fantasy, and anime, who have poor body hygiene, dress and groom in ways considered unattractive, have poor social skills and very low chances of ever finding a girlfriend, and have no social life besides teaming up with fellow social outcasts.

Self-hatred


I propose that the core issue of suffering nerds, "neckbeards", "omega males", is self-hatred. I see three reasons for thinking so:

A) Engaging in fantasy, D&D, discussing superheroes, Star Wars etc. can be seen as escaping from a self and life one hates.

Against1: every novel and movie is a way to do that, not just fantasy or superhero comics.

Pro1: have you noticed that non-nerdy people like movies and novels that are more or less set in the here and now, with heroes who are believable contemporary characters? While nerds are often bored by "mainstream" crime novels, Ludlum-type spy novels, the stuff "normal people" read?

Against2: this can simply mean disliking the current, real world, but not necessarily one's own self.

Pro2: admittedly, unreal, magical adventures have an allure for everyone. Our modern world really is disenchanted, as Max Weber put it. Things were more interesting when people believed that stone arrowheads came from elves, not cavemen. Still, people who are happy with their own selves are happy enough seeing an improved version of themselves overcoming realistic obstacles in a "mainstream" crime or war novel or movie. Dreaming about being a fireball-casting wizard or a superhero with superpowers means you do not trust that you could ever be like the guy in a "mainstream" movie, throwing punches, shooting guns and kissing models; it does not inspire you to become like that, it rather frustrates you that you could be something like that and you are not, so you want your heroes and idols to be safely non-imitable. Nobody will give you shit for not being able to cast a lightning bolt spell. It does not remind you of your inadequacies and the shit you were given for them. Instead of a real-world fantasy that gives you a painful reminder of your inadequacy, a magical fantasy allows you to fantasize about a completely different life, about being a completely different person, someone you could never be expected to be. Instead of these dreams painfully reminding you to improve yourself, in the fantasy you basically die as your current self and are reborn as someone entirely different, in an entirely different life with entirely different rules.

Against3: so everybody who enjoys the LOTR movies and the GoT series hates himself? Have you not noticed that fantasy has gone mainstream in recent years?

Pro3: indeed it did, but in a version that lacks the unreal appeal. Game of Thrones is almost historical: just normal medieval people fighting and scheming for power, with very little of the supernatural thrown in. LOTR got Hollywoodized in the movies, with much more focus on flashy sword fighting against stupid-looking brutes and less on the supernatural. They are to fantasy what Buck Rogers was to sci-fi. And non-nerds just watch them, maybe read them, but do not obsess over them.

B) Their poor clothing and grooming habits suggest they do not think their own self deserves to be decorated.

Against1: maybe they are just not interested  in their looks.

Pro1: life is a trade-off. Time you invest in looks is time you take away from something else. How could people who spend their time fantasizing about Star Wars think their time is that important? Eliezer Yudkowsky thinks his time is invested in literally saving humankind from extinction, and he still takes time away from that to invest in grooming, dressing in an okay way, and finding eyeglasses that match his face, because he knows his message will otherwise not be taken seriously enough. It is a worthy investment. People don't want to listen to someone with a "crazy scientist" or similar look. He knows he needs to look like he is, more or less, selling software. I don't think anyone could seriously think the social gains from a basically okay wardrobe and regular barber visits are not worth taking some time away from D&D. Obesity is often a neckbeard problem too, and it is also unhealthy.

Against2: Okay, but maybe they either do not realize it, due to some kind of social blindness, or lack the ability to figure out how to look in a way that society approves of. Chalk it up to poor social skills, not self-hatred?

Pro2: The heroes suffering nerds fantasize about actually look good in their own fantasy worlds, and often even in the real one: Superman was a good-looking journalist when he was not being Superman, even Peter Parker is borderline okay, and most fantasy heroes look appropriate to their social circumstances (a simplified/heroized/sanitized/mythologized European Middle Ages). First of all, they are not fat but rather muscular, they are well groomed, and so on. Suffering nerds don't even imitate their own heroes. Although someone trying to look exactly like Aragorn would be weird today, a tall and muscular guy with long hair, a short-cropped, well-groomed beard and maybe leather clothes would basically look like a biker or a rocker, which is leaps and bounds cooler in society's eyes than an obese neckbeard with greasy hair, a Tux t-shirt, dirty baggy jeans and dirtier sneakers. If nerds really tried to look like their fantasy heroes, they would be more popular. But they look more as if they feel they don't deserve to improve their looks. And there is also something more:

C) When they do sometimes improve their looks, it does not come across as improving their real selves or finding something that matches who they are, but rather as a symbolic imitation of an entirely different person. A good example is the fedora, which symbolizes an old-fashioned gentleman of 1950, and which matches neither the rest of their clothes nor the fact that it is not 1950. This suggests self-hatred.

Against1: Doesn't it contradict the previous point?

Pro1: I think it strengthens it. Any guy with a fedora or something like that cannot be said to be uninterested in looks, and misjudging what society considers attractive cannot possibly explain wearing Dick Tracy's hat but not his suit, muscles, lack of paunch, and lack of neckbeard. I think it is more of a symbol of "I don't want to be me, I want to be someone totally different."

A-C)

Against1: fine, neckbeards hate themselves and dream about being someone else. How do we know this is the source of their problems, and not an effect? What if a lack of socio-sexual success makes them both suffering and self-hating, and this is how they react to it?

Pro1: we don't, and it is a good point; something like autism may play a role. Socio-sexual success, being borderline "cool" or at least accepted, is something that not-exactly-bright high school dropouts can figure out; how come often highly intelligent men cannot? Indeed, autism or Asperger's may play a role. However, there are charming, sexy people on the spectrum, so this cannot by itself be the cause. Besides, certain symptoms overlap with self-hatred: if someone avoids eye contact, how do we know whether it comes from Asperger's or from self-hatred making them afraid to meet a gaze directly and rather wanting to hide from other people's eyes? Can't obsessive tendencies be a way to avoid thinking about one's own self? It is entirely possible that many men on the spectrum developed self-hatred due to the bullying they received for being on the spectrum, and much of their problem comes from that. One thing is clear: whatever other reasons there are for lacking socio-sexual success, the above characteristics make the situation much worse.

Against2: Satoshi Kanazawa argued high IQ suppresses instincts and makes you basically lack "common sense". Maybe it is just that?

Pro2: Yes. But the instinct in question is not simply basic social skills. I will get back to this.

Against3: Paul Graham wrote that nerds are unpopular because they simply don't want to invest in being popular, since they have other interests.

Pro3: This seems to be true for non-suffering nerds, primarily the nerds who are into this-worldly, productive STEM stuff. Why care about fashionable clothes when you are learning fascinating things like physics? Slightly irritated by the superficiality of other people, the non-suffering nerd gets a zero-maintenance buzz cut and seven polo shirts of the same basic color from a brand a random cute-looking girl has recommended, so that he does not have to think about what to put on and has a presentable look with minimal effort. Of course we know "neckbeards", "omegas" don't look like that; they look much worse. Suffering nerds seem to have deeper problems than not wanting to invest a minimal amount of time in their looks. Besides, look at their interests: STEM nerds are into things that are useful in today's real world. D&D nerds want to escape it.

Against4: Testosterone?

Pro4: Plays a role both ways, see below.


The cause of self-hatred

Other people despising you. Sooner or later you internalize it. There could be many causes for that... sometimes parents of the kind who always tell their kids they suck. Some people hit walls like racism or homophobia... some people get picked on as kids because they are disabled or disfigured.

Actually this last one is a good clue and good evidence that we are on the right track here. I certainly have seen an above-average percentage of disabled or disfigured youths playing D&D. It seems that if you are a textbook target for bullying, if other kids tell you in various ways for years that you are a worthless piece of feces, you will want to escape into a fantasy where you are a wizard casting fireballs and burning the meanies to death. So we are getting a clue about what may cause this self-hatred.

However, in my experience simply being a weak or cowardly boy causes the same shitstorm of bullying, humiliation, and beatings. Kids are cruel. It is basically a brutal form of setting up a dominance hierarchy by trying to torture everybody: those who don't even dare to resist get assigned the lowest rank, those who try and fail only slightly higher, and the bravest, boldest, cruelest, most aggressive fighters end up on top. And intelligence may be an obstacle here, by suppressing your fighting instinct.

Being bullied into the lowest level of social rank basically destroys your serum testosterone levels. It also makes you depressed. Both depend on your rank in the pecking order. Low T combined with depression is probably something really close to what I call "self-hatred": high T is often understood as pride and confidence, so its opposite is probably shame and submissiveness, and SSC wrote that depressed people who are suicidal often say "I feel I am a burden", i.e. that they are not worth anything to others, a liability rather than an asset. Shame, submissiveness and feeling worthless are precisely what I called self-hatred.

Thus these two well-documented aspects of getting a low social rank already cause something akin to self-hatred, but I think it also matters how it happens in childhood. If it were simply a matter of kids respecting those with higher grades or richer parents more, while still behaving borderline politely with everybody, the way adults do it, I think it would be less of an issue. Kids, boys, however, establish social rank with brutal beatings, humiliation, and bullying, making sure the other boy gets the "you suck" message driven in with a sledgehammer. A textbook example is the "wedgie", which Wikipedia calls a prank (http://en.wikipedia.org/wiki/Wedgie), and perhaps it is possible to do it in a harmless, pranky way, too, but when four muscular boys capture a weak, scared, squealing one in the toilet, immobilize him, give him an atomic one and then force him to walk out like that so that everybody can laugh at his humiliation, that is no prank. This is the message hammered in: you suck, you are worthless, you are helpless, you are no man, you've got no balls, we do whatever we want to you, and you have no "fighter rank" whatsoever because you did not even try to defend yourself. And I saw many such events when I was a child.

Against1: Ouch. But is this really about fighting ability? Don't you think the other ways kids rank each other and rank popularity matter, especially in modern schools where fighting is strictly forbidden and surveillance is strong?

Pro1: I am not 100% sure. After all, they do it by teaming up. It is perfectly possible that for a brown-skinned boy with a bunch of racist classmates it would be the same even if he were strong and did MMA. Still... in my experience, it was usually about that. I mean, not about what karate belt you have; it was more like testing your masculinity: courage, aggression, strength. If you were "man enough" they would respect you and leave you alone, basically assigning you a higher rank. The whole thing felt like a test of whatever I later learned about testosterone levels, both prenatal and serum. It seems bullies were trying to sniff out weakness, both emotional and physical, and T is the best predictor of a combination of both. For example, the worst thing was to cry: you got called a girly boy, bullied even more, and assigned the lowest possible rank. Surely boys being raised in patriarchal and homophobic cultures had something to do with it, but the whole thing still reminded me of something biological, like reindeer locking horns. If there is ever such a thing as males establishing a dominance hierarchy largely by testing each other's prenatal or serum testosterone, i.e. manly courage, strength and fierceness, it was that.

But I also find it likely that being "different" in any way - race, sexuality, disability - made you much more of a target.

Obviously this reflects the values of society, too. In Russia even grown-up soldiers and prison inmates do this, which probably reflects their highly toxic-masculine values, or the oppression they themselves receive from officers, or earlier from fathers. Two fascinating links: http://en.wikipedia.org/wiki/Dedovshchina and http://en.wikipedia.org/wiki/Thief_in_law#Ponyatiya - so you can imagine what goes on in schools. And yes, on the other hand, growing up in a textbook NY liberal community must be a lot easier in this regard. Most of Europe will be somewhere in between.

Against1: So, your argument is that bullying destroys your self-respect much more than any other way of acquiring a low social rank, and this leads to self-hatred, which leads to fantasy escapism and typical nerd-neckbeard behaviors, which then add up and result in the lack of socio-sexual success? Isn't this a job for Occam's razor?

Pro1: well, the argument is more like this: whatever happens to you in childhood is very important, and boys tend to establish rank by bullying and fighting, or, in the best case, by testing each other's courage and masculinity by other means, daring each other to climb trees, etc. My point is not simply that bullying, or even childhood bullying, matters so much; my point is rather that bullying or courage tests in childhood make you realize that you really are lacking in important masculine abilities like courage, fierceness or strength (so probably low prenatal T), and low social rank established this way cuts much deeper into a man's soul than low social rank because you are poor or get bad grades. It affirms that you are not worth much as a man, and this makes you hate yourself much more than simply internalizing that you are poor or something like that. This alone - the depressed T levels and the general depression due to low social rank - could explain the suffering and lack of later socio-sexual success of nerds, but fantasy escapism as a coping method makes it worse. Without that, nerds and neckbeards would not be a noticeable and much-ridiculed type; without that, all you would see is that some guys are kind of sad and timid but otherwise look and behave like all the other guys!

Against1: do you think anti-bullying policies could solve "neckbeards" for the next generation?

Pro1: Trying to make people behave less cruelly ought to reduce the suffering of the victims, and that is a good thing. Having said that, while the demographic I am talking about would suffer less victimization as children, I am not entirely convinced they would end up with much less self-hatred and better socio-sexual success, and thus less adult suffering. Why? Because my thesis is not that victimization hurts (obviously it does); my thesis is that being truly, actually less masculine than other boys, and having your nose rubbed into it so that you realize you are indeed not much of a man, is what generates self-hatred, perhaps partly due to biology and partly to patriarchy, I don't know. I mean, the bullies are ethically wrong but factually right: they bully you because you are indeed weak, in emotion or body, and you hate yourself for being truly weak. So something as light as not daring to climb a rope during gym class and the other boys giving you a contemptuous look could destroy your self-respect, especially if afterwards you are treated as a low-rank social pariah. And this is not something anti-bullying teachers can solve. Perhaps you can try to pressure boys not to judge each other for courage, not to express such judgments, never to treat anyone like a social outcast, etc., but that would be a lot like trying to destroy their masculinity too, trying to destroy the competitive, dominant, judgemental spirit that is so strongly linked to testosterone. I don't think it can succeed, and I don't think it would be ethical to try. This is what they are. You can teach them to express their views in less aggressive ways, but human freedom means that if you want to frown because you think another guy sucks, you can. Nevertheless, it is still good not to tolerate bullies; it is better to force high-T boys to express their contempt in more civilized ways, to reduce the suffering of their victims. Just don't expect it to prevent later "nerd problems".

Against1: I am still not convinced other forms of discrimination or low social rank do not generate more self-hatred.

Pro1: Well, just look at those American blacks who are both poor and black, both of which give them a lower social rank at school, and who end up as gangsta rappers or even prison inmates, but are still strong, tattooed, masculine as hell: really the opposite of neckbeard-nerds, who typically have characteristics considered unmasculine. It seems you can be bullied for many things, but apparently nerdiness or neckbeardery tends to form when it is specifically your lack of a masculine fighter spirit that made you a target.

Against1: Any ways to easily test all this?

Pro1: Yes. Ask your neckbeard friend to consent to a test that will not be physically harmful but may cause emotional triggering. Then pretend to slap or punch him in the face. Do you get a panicky, nervous reaction, like turtling up and blinking, or do you get a "manly" one, like leaning back and catching your hand? This predicts whether he is used to fighting back, or used to getting beaten and not daring to fight.


The cure

How to fix all this? Well, I have found that some neckbeards have managed to fix themselves to a certain extent without really even planning to, via the following means:

- Career success giving you a certain sense of social rank and self-confidence. Being higher on the social ladder increases testosterone, which also gets you feedback from others and from yourself that you are less unmasculine now, which makes you hate yourself less for being unmasculine.

- During their careers, many neckbeards did the same thing as Eliezer and opted for a simple, easy smart-casual wardrobe and better grooming in a low-maintenance way. This improved feedback from others and thus their confidence.

- It seems sports, martial arts, and to some extent even basic bodybuilding have helped many a man.

- All this led to better self-acceptance.

But let's try to go deeper here.

Neckbeards need to find self-respect WHILE accepting they are intellectuals. The goal is neither to accept yourself the way you are - the way you currently are sucks - nor to hate yourself so much that you do not feel you deserve to be improved and thus project a false public image. The goal is to self-improve WHILE accepting you are an intellectual.

Step 1 is to realize that it is not intellectualism that makes people marginalized, ridiculed, and unable to find girlfriends. It is the lack of skills other than intellectual ones, largely the lack of masculine virtues. Here the idea of the writer is a useful mental crutch: as a neckbeard you are probably a voracious reader, and thinking you are made from the same material writers are made from is not entirely wrong; it is realistic, close enough to your real self or essence. As a voracious reader, you are to writers what power users are to programmers. Close enough. It is not a fake persona if you make some writers your role models: you are both intellectuals in essence. And yes, sexy, masculine, socially and sexually successful male writers exist: Richard Dawkins, Robert Heinlein, Albert Camus. Shaping yourself after them is both true to your real self and a way to improve yourself.

The basics are not hard.

- Sports (more about it later)

- A smart casual wardrobe and a nice low-maintenance haircut; facial hair is probably to be avoided completely until you learn more about style. That is an advanced-level milestone, postpone it.

- Dropping a nuke on your social shyness by joining Toastmasters - a writer should be able to give a speech from a podium, right? Toastmasters International ("International" is not just a name; they are in Europe etc. too) says on the tin that they are about public speaking skills, which is true, but public speaking is simply the hardest kind of speaking for introverted, shy, or self-hating people. Go through the Comm manual giving the 10 speeches, participate in Table Topics, and compared to that, 1:1 socializing or chatting will be easy.


- One more thing you need to learn there, namely to develop a genuine interest in other people: not just obsessively talking about your interests to them, but also being interested in their stuff, or even in small talk. This is annoying, but once you get a bit used to it, you realize that you are gaining validation from respectable-looking people choosing to discuss the weather or similarly stupid topics with you. If they "wasted" a minute or two on a worthless topic with you, then perhaps your own person is not worthless to them. This helps with the self-hatred issue. Toastmasters tends to be very good at this. Long-time members are happy to chat with newbies about just about anything, because these meetings are marked as communicate, communicate, communicate in their calendars.

- Therapy, focusing on your childhood bullying for being perceived as weak and cowardly, or on general feedback about being less masculine. Well, this is one of those pieces of advice that is almost useless, because if you are the type of guy who goes to shrinks, you did it long ago, and if you are the type who would not go near a shrink unless borderline suicidal, you won't take this advice; but it simply had to be given, for the sake of my conscience more than for your benefit.

- So, back to sports. Yes, you need to get in shape. But you also need to convince your inner boy that you could not be bullied, beaten, your masculinity brutally challenged and your self humiliated and oppressed anymore. You need to compensate, and do it hard. There are three schools of thought here. Many people recommend gym-type bodybuilding and weightlifting. On one hand it is good; on the other hand it can make you feel fake: you look like a fighter, but you feel you are still a timid, cowardly boy inside, and that makes you feel you are faking it. It works better at 17, when you are more superficial; it does not work at 40. A second school says martial arts, and indeed there is many a neckbeard in the local karate dojo. The issue is that doing katas, and kumite of the kind that stops at the first successful hit, is still not fighting. It is not going through fighter moves that you need. It is to awaken a raw sense of masculinity in you, to face your fears and overcome them, and to feel courage and fierceness. You need to get in touch with your inner animal a bit, and that is not karate. I recommend boxing. A light boxing sparring match - done after about 6 months - is the closest thing to simulating someone really trying to beat you. Not at full force, but your opponent is really launching a hundred punches right at your face. This is why boxing has these rules. This is why it was a primary way to teach British intellectual boys to man up. It is not supposed to teach you street fighting techniques. It is supposed to help you conquer your fears and find your courage, your inner fierce animal with bared fangs, by focusing on the kinds of attacks that are most fearsome: punches right into your face. A grappling lock or an MMA thigh kick may immobilize or hurt you, and they are effective at fighting, but they are not as effective at scaring people. This is the whole point. You need to get scared many times, until you learn courage. Boxing is courage training. And courage, not strength or skill, is what makes a man - and what makes an ex-unmanly boy not hate himself.

 

Socially speaking, anti-bullying efforts and reducing the worst aspects of toxic masculinity or highly patriarchal values should help, but be careful! Natural-born high-T bullies fly under the radar much more than bullied nerds who are trying to man up and are thus doing spectacularly manly things. Do it the wrong way around, and you end up handicapping precisely those you are trying to help! People who obsess about guns, MMA or choppers while wearing fatigues and Tapout tees are not the masculine bullies: they are the nerds trying to cope with not actually being, or not having been, masculine. While this is a questionable way to cope, it is not them you want to handicap, so if you want to fight toxic masculinity or patriarchy, do NOT focus on its lowest-hanging fruit! The true bullies don't do these things; they don't need to.

Open thread, Mar. 2 - Mar. 8, 2015

3 MrMind 02 March 2015 08:19AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Followup: Sequences book reading group

6 iarwain1 01 March 2015 05:37PM

It's been about a week since I posted a request for a reading group once the Sequences book comes out. As of this post, 25 people have indicated that they would like someone to do this, but we still have no volunteers to actually do it. I would volunteer to do it myself, but it's hard for me to commit to it. (For productivity reasons I usually have LessWrong blocked on my computer except in the evenings, and there are many evenings when I don't have time to log on at all.)

I propose that we use essentially the same model used for the Open Threads. If it's time for a new Reading Group post and nobody's posted it yet, post it yourself. If you feel that you can probably commit to help with this on occasion, please mention this in the comments. (I understand that having a few people volunteer while everybody else stays quiet might increase the bystander effect, but I think it's useful to have at least a few people mention that they can help. Everybody else, even if you didn't volunteer in the comments here, please step up to the plate anyway if you see nobody else is posting.)

We had a number of discussions / polls in the previous thread about exactly how the reading group should be conducted: What should the pace be? Should we re-post the entire article or just post a link to the original? Should we post individual articles (at whatever pace we decide) or should we post all the articles of the sequence together? (This last link is to a new poll I just put up.) Or maybe we should just have a link on the sidebar to wherever the reading group currently is?

I propose that we start off the reading group with whatever seems to be the most popular options, and that we re-assess towards the end of each sequence. So for example we might start off at a rate of one individual article every other day1, which would mean we'd probably finish the first sequence in a little less than a month. Towards the end of that time we'd do the polls again and perhaps switch to a different pace or to posting the whole sequence at once.

Actionable items:

 

  • If you haven't voted on the linked polls and want to, please do so.
  • If you know how to set up the LW sidebar so that it shows a link to the current reading group article, please volunteer to do so.
  • If you are privy to information about the upcoming book, please let us know about whether or not there will be copyright issues with copy/pasting the articles into LW.
  • Please volunteer to help out with posting!

1 At the time of this posting there are 8 people who voted for 1 article per day, 6 said 1 every other day, and 2 said 1 per week. Going with 1 every other day, at least to start off, seems a reasonable compromise.

 

Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113

8 Gondolinian 28 February 2015 08:23PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 113.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.


IMPORTANT -- From the end of chapter 113:

This is your final exam.

You have 60 hours.

Your solution must at least allow Harry to evade immediate death,
despite being naked, holding only his wand, facing 36 Death Eaters
plus the fully resurrected Lord Voldemort.

If a viable solution is posted before
*12:01AM Pacific Time* (8:01AM UTC) on Tuesday, March 3rd, 2015,
the story will continue to Ch. 121.

Otherwise you will get a shorter and sadder ending.

Keep in mind the following:

1. Harry must succeed via his own efforts. The cavalry is not coming.
Everyone who might want to help Harry thinks he is at a Quidditch game.

2. Harry may only use capabilities the story has already shown him to have;
he cannot develop wordless wandless Legilimency in the next 60 seconds.

3. Voldemort is evil and cannot be persuaded to be good;
the Dark Lord's utility function cannot be changed by talking to him.

4. If Harry raises his wand or speaks in anything except Parseltongue,
the Death Eaters will fire on him immediately.

5. If the simplest timeline is otherwise one where Harry dies -
if Harry cannot reach his Time-Turner without Time-Turned help -
then the Time-Turner will not come into play.

6. It is impossible to tell lies in Parseltongue.

Within these constraints,
Harry is allowed to attain his full potential as a rationalist,
now in this moment or never,
regardless of his previous flaws.

Of course 'the rational solution',
if you are using the word 'rational' correctly,
is just a needlessly fancy way of saying 'the best solution'
or 'the solution I like' or 'the solution I think we should use',
and you should usually say one of the latter instead.
(We only need the word 'rational' to talk about ways of thinking,
considered apart from any particular solutions.)

And by Vinge's Principle,
if you know exactly what a smart mind would do,
you must be at least that smart yourself.
Asking someone "What would an optimal player think is the best move?"
should produce answers no better than "What do you think is best?"

So what I mean in practice,
when I say Harry is allowed to attain his full potential as a rationalist,
is that Harry is allowed to solve this problem
the way YOU would solve it.
If you can tell me exactly how to do something,
Harry is allowed to think of it.

But it does not serve as a solution to say, for example,
"Harry should persuade Voldemort to let him out of the box"
if you can't yourself figure out how.

The rules on Fanfiction dot Net allow at most one review per chapter.
Please submit *ONLY ONE* review of Ch. 113,
to submit one suggested solution.

For the best experience, if you have not already been following
Internet conversations about recent chapters, I suggest not doing so,
trying to complete this exam on your own,
not looking at other reviews,
and waiting for Ch. 114 to see how you did.

I wish you all the best of luck, or rather the best of skill.

Ch. 114 will post at 10AM Pacific (6PM UTC) on Tuesday, March 3rd, 2015.


ADDED:

If you have pending exams,
then even though the bystander effect is a thing,
I expect that the collective effect of
'everyone with more urgent life
issues stays out of the effort'
shifts the probabilities very little

(because diminishing marginal returns on more eyes
and an already-huge population that is participating).

So if you can't take the time, then please don't.
Like any author, I enjoy the delicious taste of my readers' suffering,
finer than any chocolate; but I don't want to *hurt* you.

Likewise, if you hate hate hate this sort of thing, then don't participate!
Other people ARE enjoying it. Just come back in a few days.
I shouldn't even need to point this out.

I remind you again that you have hours to think.
Use the Hold Off On Proposing Solutions, Luke.

And really truly, I do mean it,
Harry cannot develop any new magical powers
or transcend previously stated constraints on them
in the next sixty seconds.

Probability of coming into existence again?

5 pzwczzx 28 February 2015 12:02PM

This question has been bothering me for a while now, but I have the nagging feeling that I'm missing something big and that the reasoning is flawed in a very significant way. I'm not well read in philosophy at all, and I'd be really surprised if this particular problem hasn't been addressed many times by more enlightened minds. Please don't hesitate to give reading suggestions if you know more. I don't even know where to start learning about such questions. I have tried the search bar but have failed to find a discussion around this specific topic.

I'll try and explain my train of thought as best as I can but I am not familiar with formal reasoning, so bear with me! (English is not my first language, either)

Based on the information and sensations currently available, I am stuck in a specific point of view and experience specific qualia. So far, it's the only thing that has been available to me; it is the entirety of my reality. I don't know if the cogito ergo sum is well received on Less Wrong, but it seems on the face of it to be a compelling argument for my own existence at least.

Let's assume that there are other conscious beings who "exist" in a similar way, and thus other possible qualia. If we don't assume this, doesn't it mean that we are in a dead end and no further argument is possible? Similar to what happens if there is no free will and thus nothing matters since no change is possible? Again, I am not certain about this reasoning but I can't see the flaw so far.

There doesn't seem to be any reason why I should be experiencing these specific qualia instead of others, that I "popped into existence" as this specific consciousness instead of another, or that I perceive time subjectively. According to what I know, the qualia will probably stop completely at some subjective point in time and I will cease to exist. The qualia are likely to be tied to a physical state of matter (for example colorblindness due to different cells in the eyes) and once the matter does not "function" or is altered, the qualia are gone. It would seem that there could be a link between the subjective and some sort of objective reality, if there is indeed such a thing.

On a side note, I think it's safe to ignore theism and all mentions of a pleasurable afterlife of some sort. I suppose most people on this site have debated this to death elsewhere and there's no real point in bringing it up again. I personally think it's not an adequate solution to this problem.

Based on what I know, and that qualia occur, what is the probability (if any) that I will pop into existence again and again, and experience different qualia each time, with no subjectively perceivable connection with the "previous" consciousness? If it has happened once, if a subjective observer has emerged out of nothing at some point, and is currently observing subjectively (as I think is happening to me), does the subjective observing ever end?

I know it sounds an awful lot like mysticism and reincarnation, but since I am currently existing and observing in a subjective way (or at least I think I am), how can I be certain that it will ever stop?

The only reason why this question matters at all is because suffering is not only possible but quite frequent according to my subjective experience and my intuition of what other possible observers might be experiencing if they do exist in the same way I do. If there were no painful qualia, or no qualia at all, nothing would really matter since there would be no change needed and no concept of suffering. I don't know how to define suffering, but I think it is a valid concept and is contained in qualia, based on my limited subjectivity.

This leads to a second, more disturbing question: does suffering have a limit, or is it infinite? Is there a non-zero probability of entering into existence as a being that experiences potentially infinite suffering, similar to the main character in I Have No Mouth, and I Must Scream? Is there no way out of existence? If the answer is no, then how would it be possible to lead a rational life, seeing as it would be a single drop in an infinite ocean?

On a more positive note, this reasoning can serve as a strong deterrent to suicide, since it would be rationally better to prolong your current and familiar existence than to potentially enter a less fortunate one with no way to predict what might happen.

Sadly, these thoughts have shown to be a significant threat to motivation and morale. I feel stuck in this logic and can't see a way out at the moment. If you can identify a flaw here, or know of a solution, then I eagerly await your reply.

kind regards

 

 

 

Best of Rationality Quotes, 2014 Edition

12 DanielVarga 27 February 2015 10:43PM

Here is the way-too-late 2014 edition of the Best of Rationality Quotes collection. (Here is last year's.) Thanks Huluk for nudging me to do it.

Best of Rationality Quotes 2014 (300kB page, 235 quotes)
and Best of Rationality Quotes 2009-2014 (1900kB page, 1770 quotes)

The page was built by a short script (source code here) from all the LW Rationality Quotes threads so far. (We had such a thread each month since April 2009.) The script collects all comments with karma score 10 or more, and sorts them by score. Replies are not collected, only top-level comments.
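For readers who want to see the mechanics of that selection step, here is a minimal sketch (this is not the linked script; the comment data structure and the field names are assumptions for illustration):

```python
# Minimal sketch of the selection step described above. Not the linked script;
# the input format and the field names ("parent_id", "karma") are assumptions.

def best_quotes(comments, min_karma=10):
    """Keep top-level comments with karma >= min_karma, sorted by karma, highest first."""
    top_level = [c for c in comments if c.get("parent_id") is None]  # replies are dropped
    selected = [c for c in top_level if c.get("karma", 0) >= min_karma]
    return sorted(selected, key=lambda c: c["karma"], reverse=True)

# Example with made-up data:
comments = [
    {"id": 1, "parent_id": None, "karma": 42, "text": "A quote"},
    {"id": 2, "parent_id": 1, "karma": 99, "text": "A reply (ignored)"},
    {"id": 3, "parent_id": None, "karma": 7, "text": "Below threshold (dropped)"},
]
for c in best_quotes(comments):
    print(c["karma"], c["text"])
```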

As is now usual, I provide various statistics and top-lists based on the data. (Source code for these is also at the above link, see the README.) I added these as comments to the post:

In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him?

9 shminux 27 February 2015 08:57PM

Hopefully at least one or two would show a virtue of non-straw rationality.

Episode list

 

 

[Link] Algorithm aversion

15 Stefan_Schubert 27 February 2015 07:26PM

It has long been known that algorithms out-perform human experts on a range of topics (here's a LW post on this by lukeprog). Why, then, is it that people continue to mistrust algorithms, in spite of their superiority, and instead cling to human advice? A recent paper by Dietvorst, Simmons and Massey suggests it is due to a cognitive bias which they call algorithm aversion. We judge less-than-perfect algorithms more harshly than less-than-perfect humans. They argue that since this aversion leads to poorer decisions, it is very costly, and that we therefore must find ways of combating it.

Abstract: 

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

General discussion: 

The results of five studies show that seeing algorithms err makes people less confident in them and less likely to choose them over an inferior human forecaster. This effect was evident in two distinct domains of judgment, including one in which the human forecasters produced nearly twice as much error as the algorithm. It arose regardless of whether the participant was choosing between the algorithm and her own forecasts or between the algorithm and the forecasts of a different participant. And it even arose among the (vast majority of) participants who saw the algorithm outperform the human forecaster.
The aversion to algorithms is costly, not only for the participants in our studies who lost money when they chose not to tie their bonuses to the algorithm, but for society at large. Many decisions require a forecast, and algorithms are almost always better forecasters than humans (Dawes, 1979; Grove et al., 2000; Meehl, 1954). The ubiquity of computers and the growth of the “Big Data” movement (Davenport & Harris, 2007) have encouraged the growth of algorithms but many remain resistant to using them. Our studies show that this resistance at least partially arises from greater intolerance for error from algorithms than from humans. People are more likely to abandon an algorithm than a human judge for making the same mistake. This is enormously problematic, as it is a barrier to adopting superior approaches to a wide range of important tasks. It means, for example, that people will more likely forgive an admissions committee than an admissions algorithm for making an error, even when, on average, the algorithm makes fewer such errors. In short, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms.
More optimistically, our findings do suggest that people will be much more willing to use algorithms when they do not see algorithms err, as will be the case when errors are unseen, the algorithm is unseen (as it often is for patients in doctors’ offices), or when predictions are nearly perfect. The 2012 U.S. presidential election season saw people embracing a perfectly performing algorithm. Nate Silver’s New York Times blog, Five Thirty Eight: Nate Silver’s Political Calculus, presented an algorithm for forecasting that election. Though the site had its critics before the votes were in— one Washington Post writer criticized Silver for “doing little more than weighting and aggregating state polls and combining them with various historical assumptions to project a future outcome with exaggerated, attention-grabbing exactitude” (Gerson, 2012, para. 2)—those critics were soon silenced: Silver’s model correctly predicted the presidential election results in all 50 states. Live on MSNBC, Rachel Maddow proclaimed, “You know who won the election tonight? Nate Silver,” (Noveck, 2012, para. 21), and headlines like “Nate Silver Gets a Big Boost From the Election” (Isidore, 2012) and “How Nate Silver Won the 2012 Presidential Election” (Clark, 2012) followed. Many journalists and popular bloggers declared Silver’s success a great boost for Big Data and statistical prediction (Honan, 2012; McDermott, 2012; Taylor, 2012; Tiku, 2012).
However, we worry that this is not such a generalizable victory. People may rally around an algorithm touted as perfect, but we doubt that this enthusiasm will generalize to algorithms that are shown to be less perfect, as they inevitably will be much of the time.

Weekly LW Meetups

3 FrankAdamek 27 February 2015 04:26PM

This summary was posted to LW Main on February 20th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

What subjects are important to rationality, but not covered in Less Wrong?

18 casebash 27 February 2015 11:57AM

As many people have noted, Less Wrong currently isn't receiving as much content as we would like. One way to think about expanding the content is to think about which areas of study deserve more articles written on them.

For example, I expect that sociology has a lot to say about many of our cultural assumptions. It is quite possible that 95% of it is either obvious or junk, but almost all fields have that 5% within them that could be valuable. Another area of study that might be interesting to consider is anthropology. Again this is a field that allows us to step outside of our cultural assumptions.

I don't know anything about media studies, but I imagine that they have some worthwhile things to say about how the information that we hear is distorted.

What other fields would you like to see some discussion of on Less Wrong?

If you can see the box, you can open the box

44 ThePrussian 26 February 2015 10:36AM

First post here, and I'm disagreeing with something in the main sequences.  Hubris acknowledged, here's what I've been thinking about.  It comes from the post "Are your enemies innately evil?":

On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America.  Now why do you suppose they might have done that?  Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?

Realistically, most people don't construct their life stories with themselves as the villains.  Everyone is the hero of their own story.  The Enemy's story, as seen by the Enemy, is not going to make the Enemy look bad.  If you try to construe motivations that would make the Enemy look bad, you'll end up flat wrong about what actually goes on in the Enemy's mind.

If I'm misreading this, please correct me, but the way I am reading this is:

1) People do not construct their stories so that they are the villains,

therefore

2) the idea that Al Qaeda is motivated by a hatred of American freedom is false.

Reading the Al Qaeda document released after the attacks, called Why We Are Fighting You, you find the following:

 

What are we calling you to, and what do we want from you?

1.  The first thing that we are calling you to is Islam.

A.  The religion of tahwid; of freedom from associating partners with Allah Most High , and rejection of such blasphemy; of complete love for Him, the Exalted; of complete submission to his sharia; and of the discarding of all the opinions, orders, theories, and religions that contradict with the religion He sent down to His Prophet Muhammad.  Islam is the religion of all the prophets and makes no distinction between them. 

It is to this religion that we call you …

2.  The second thing we call you to is to stop your oppression, lies, immorality and debauchery that has spread among you.

A.  We call you to be a people of manners, principles, honor and purity; to reject the immoral acts of fornication, homosexuality, intoxicants, gambling and usury.

We call you to all of this that you may be freed from the deceptive lies that you are a great nation, which your leaders spread among you in order to conceal from you the despicable state that you have obtained.

B.  It is saddening to tell you that you are the worst civilization witnessed in the history of mankind:

i.  You are the nation who, rather than ruling through the sharia of Allah, chooses to invent your own laws as you will and desire.  You separate religion from you policies, contradicting the pure nature that affirms absolute authority to the Lord your Creator….

ii.  You are the nation that permits usury…

iii.   You are a nation that permits the production, spread, and use of intoxicants.  You also permit drugs, and only forbid the trade of them, even though your nation is the largest consumer of them.

iv.  You are a nation that permits acts of immorality, and you consider them to be pillars of personal freedom.  

"Freedom" is of course one of those words.  It's easy enough to imagine an SS officer saying indignantly: "Of course we are fighting for freedom!  For our people to be free of Jewish domination, free from the contamination of lesser races, free from the sham of democracy..."

If we substitute the symbol with the substance though, what we mean by freedom - "people to be left more or less alone, to follow whichever religion they want or none, to speak their minds, to try to shape society's laws so they serve the people" - then Al Qaeda is absolutely inspired by a hatred of freedom.  They wouldn't call it "freedom", mind you, they'd call it "decadence" or "blasphemy" or "shirk" - but the substance is what we call "freedom".

Returning to the syllogism at the top, it seems to be that there is an unstated premise.  The conclusion "Al Qaeda cannot possibly hate America for its freedom because everyone sees himself as the hero of his own story" only follows if you assume that What is heroic, what is good, is substantially the same for all humans, for a liberal Westerner and an Islamic fanatic.

(for Americans, by "liberal" here I mean the classical sense that includes just about everyone you are likely to meet, read or vote for.  US conservatives say they are defending the American revolution, which was broadly in line with liberal principles - slavery excepted, but since US conservatives don't support that, my point stands).

When you state the premise baldly like that, you can see the problem.  There's no contradiction in thinking that Muslim fanatics think of themselves as heroic precisely for being opposed to freedom, because they see their heroism as trying to extend the rule of Allah - Shariah - across the world.

Now to the point - we all know the phrase "thinking outside the box".  I submit that if you can recognize the box, you've already opened it.  Real bias isn't when you have a point of view you're defending, but when you cannot imagine that another point of view seriously exists.

That phrasing has a bit of negative baggage associated with it, as if this were just a matter of pigheaded close-mindedness.  Try thinking about it another way.  Would you say to someone with dyscalculia "You can't get your head around the basics of calculus?  You are just being so close-minded!"  No, that's obviously nuts.  We know that different people's minds work in different ways, that some people can see things others cannot. 

Orwell once wrote about the British intellectuals' inability to "get" fascism, in particular in his essay on H.G. Wells.  He wrote that the only people who really understood the nature and menace of fascism were either those who had felt the lash on their backs, or those who had a touch of the fascist mindset themselves.  I suggest that some people just cannot imagine, cannot really believe, the enormous power of faith, of the idea of serving and fighting and dying for your god and His prophet.  It is a kind of thinking that is just alien to many.

Perhaps this is resisted because people think that "Being able to think like a fascist makes you a bit of a fascist".  That's not really true in any way that matters - Orwell was one of the greatest anti-fascist writers of his time, and fought against it in Spain. 

So - if you can see the box you are in, you can open it, and already have half-opened it.  And if you are really in the box, you can't see the box.  So, how can you tell if you are in a box that you can't see versus not being in a box?  

The best answer I've been able to come up with is not to think of "box or no box" but rather "open or closed box".  We all work from a worldview, simply because we need some knowledge to get further knowledge.  If you know you come at an issue from a certain angle, you can always check yourself.  You're in a box, but boxes can be useful, and you have the option to go get some stuff from outside the box.

The second is to read people in other boxes.  I like steelmanning, it's an important intellectual exercise, but it shouldn't preclude finding actual Men of Steel - that is, people passionately committed to another point of view, another box, and taking a look at what they have to say.  

Now you might say: "But that's steelmanning!"  Not quite.  Steelmanning is "the art of addressing the best form of the other person’s argument, even if it’s not the one they presented."  That may, in some circumstances, lead you to make the mistake of assuming that what you think is the best argument for a position is the same as what the other guy thinks is the best argument for his position.  That's especially important if you are addressing a belief held by a large group of people.

Again, this isn't to run down steelmanning - the practice is sadly limited, and anyone who attempts it has gained a big advantage in figuring out how the world is.  It's just a reminder that the steelman you make may not be quite as strong as the steelman that is out to get you.  

[EDIT: Link included to the document that I did not know was available online before now]

"Human-level control through deep reinforcement learning" - computer learns 49 different games

11 skeptical_lurker 26 February 2015 06:21AM

full text

 

This seems like an impressive first step towards AGI. The games, like Pong and Space Invaders, are perhaps not the most cerebral games, but given that Deep Blue can only play chess, this is far more impressive IMO. They didn't even need to adjust hyperparameters between games.

 

I'd also like to see whether they can train a network that plays the same game on different maps without re-training, which seems a lot harder.
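For readers who want a concrete feel for the learning rule the Nature paper builds on: DQN combines Q-learning with a deep convolutional network, experience replay, and a target network. The sketch below is only a tabular stand-in for the core Q-learning update, run on a tiny made-up chain environment; the environment, its size, and all parameter values here are assumptions for illustration, and the real system learns from raw pixels with a neural network instead of a table.

```python
import random

# Tabular Q-learning on a tiny made-up chain: states 0..4, actions 0 (left) / 1 (right),
# reward 1 for reaching state 4. A toy stand-in for the deep version discussed above.
N_STATES, N_ACTIONS = 5, 2
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

print("Greedy policy (0=left, 1=right):",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```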

 

Are Cognitive Biases Design Flaws?

1 DonaldMcIntyre 25 February 2015 09:02PM

I am a newbie, so today I read the article by Eliezer Yudkowsky, "Your Strength As A Rationalist", which helped me understand the focus of LessWrong, but I respectfully disagreed with a line in the last paragraph:

It is a design flaw in human cognition...

So this was my comment in the article's comment section which I bring here for discussion:

Since I think evolution makes us quite fit for our current environment, I don't think cognitive biases are design flaws. In the above example you imply that even though you had the information available to guess the truth, your guess was a different one and it was false, and that you therefore experienced a flaw in your cognition.

My hypothesis is that reaching the truth, or communicating it in the IRC channel, may not have been the end objective of your cognitive process. In this case, dismissing the issue as something that was not important anyway - "so move on and stop wasting resources on this discussion" - was maybe the "biological" objective, and as such it would be correct, not a flaw.

If the above is true, then all cognitive biases, simplistic heuristics, fallacies, and dark arts are good, since we have conducted our lives for 200,000 years according to these and we are alive and kicking.

Rationality and our search to be LessWrong, which I support, may be tools we are developing to evolve in our competitive ability within our species, but not a "correction" of something that is wrong in our design.

Edit 1: I realize there is change in the environment, and that may make some of our cognitive biases, which were useful in the past, obsolete. If the word "flaw" also applies to something that is obsolete, then I was wrong above. If not, I prefer the word obsolete to characterize cognitive biases that are no longer functional for our preservation.

Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112

4 Gondolinian 25 February 2015 09:00PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 112.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111

3 b_sen 25 February 2015 06:52PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 111.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Journal 'Basic and Applied Psychology' bans p<0.05 and 95% confidence intervals

11 Jonathan_Graehl 25 February 2015 05:15PM

Editorial text isn't very interesting; they call for descriptive statistics and don't recommend any particular analysis.

Does hormetism work? Opponent process theory.

7 DeVliegendeHollander 25 February 2015 02:00PM

Related to the Fun Theory and hedonic treadmill sequences.

http://gettingstronger.org/hormesis/

TL;DR: Stoicism with science.

Key idea: OPT, Opponent Process Theory: http://gettingstronger.org/2010/05/opponent-process-theory/

Research, PDF: http://gettingstronger.org/wp-content/uploads/2010/04/Solomon-Opponent-Process-1980.pdf

From the article:

"In hedonic reversal, a stimulus that initially causes a pleasant or unpleasant response does not just dissipate or fade away, as Irvine describes, but rather the initial feeling leads to an opposite secondary emotion or sensation. Remarkably, the secondary reaction is often deeper or longer lasting than the initial reaction.  And what is more, when the stimulus is repeated many times, the initial response becomes weaker and the secondary response becomes stronger and lasts longer."

Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 110

3 Gondolinian 24 February 2015 08:01PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 110.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

Saving for the long term

6 adamzerner 24 February 2015 03:33AM

I'm 22 years old, just got a job, and have the option of putting money in a 401k. More generally, I just started making money and need to think about how I'm going to invest and save it.

As far as long-term/retirement savings goes, the way I see it is that my goal is to ensure that I have a sufficient standard of living when I'm "old" (70-80). I see a few ways that this can happen:

  1. There is enough wealth creation and distribution by then such that I pretty much won't have to do anything. One way this could happen is if there was a singularity. I'm no expert on this topic, but the experts seem to be pretty confident that it'll happen by the time I retire.

    Median optimistic year (10% likelihood): 2022
    Median realistic year (50% likelihood): 2040
    Median pessimistic year (90% likelihood): 2075
    http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

    And even if they're wrong and there's no singularity, it still seems very likely that there will be immense wealth creation in the next 60 or so years, and I'm sure that there'll be a fair amount of distribution as well, such that the poorest people will probably have reasonably comfortable lives. I'm a believer in Kurzweil's Law of Accelerating Returns, but even if you project linear growth, there'd still be immense growth.

    Given all of this, I find thinking that "wealth creation + distribution over the next 60 years -> sufficient standard of living for everyone" is a rather likely scenario. But my logic here is very "outside view-y" - I don't "really understand" the component steps and their associated likelihoods, so my confidence is limited.
  2. I start a startup, make a lot of money, and it lasts until retirement. I think that starting a startup and using the money to do good is the way for me to maximize the positive impact I have on the world, as well as my own happiness, and so I plan on working relentlessly until that happens. Ie. I'm going to continue to try, no matter how many times I fail. I may need to take some time to work in order to save up money and/or develop skills though.

    Anyway, I think that there is a pretty good chance that I succeed, in, say the next 20 years. I never thought hard enough about it to put a number on it, but I'll try it here.

    Say that I get 10 tries to start a startup in the next 20 years (I know that some take longer than 2 years to fail, but 2 years is the average, and failures often take less than 2 years). At a 50% chance of success per try, that's a >99.9% chance that at least one of them succeeds (1 - 0.5^10). I know 50% might seem high, but I think that my rationality skills, domain knowledge (eventually) and experience (eventually) give me an edge. Even at a 10% chance of success, I have about a 65% (1 - 0.9^10) chance of succeeding in one of those 10 tries, and I think that 10% chance of success is very conservative. (See the short check of this arithmetic right after this list.)

    Things I may be underestimating: the chances that I judge something else (earning to give? AI research? less altruistic? a girl/family?) to be a better use of my time. Changes in the economy that make success a lot less likely. 

    Anyway, there seems to be a high likelihood that I continue to start startups until I succeed, and there seems to be a high likelihood that I will succeed by the time I retire, in which case I should have enough money to ensure that I have a sufficient standard of living for the rest of my life.
  3. I spend my life trying and failing at startups, not saving any money, but I develop enough marketable skills along the way and I continue to work well past normal retirement age (assuming I keep myself in good physical and mental condition, and assuming that 1. hasn't happened). I'm not one who wants to stop working.
  4. I work a normal-ish job, have a normal retirement plan, and save enough to retire at a normal age.
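A quick check of the arithmetic in point 2 above (a minimal sketch; the 50% and 10% per-try figures are the ones stated there, and treating the 10 tries as independent is an assumption):

```python
# Probability of at least one success in n independent tries, each with success probability p.
def p_at_least_one(p, n=10):
    return 1 - (1 - p) ** n

print(p_at_least_one(0.5))  # ~0.999 -> ">99.9%"
print(p_at_least_one(0.1))  # ~0.651 -> "about 65%"
```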

The point I want to make in this article is that 1, 2, and 3 seem way more likely than 4, which makes me think that long-term saving might not actually be such a good idea. 

The real question is "what are my alternatives to retirement saving and why are they better than retirement saving?". The main alternative is to live off of my savings while starting startups. Essentially to treat my money as runway, and use it to maximize the amount of time I spend working towards my (instrumental) goal of starting a successful startup. Ie. money that I would otherwise put towards retirement could be used to increase the amount of time I spend working on startups.

For the record:
  1. I'm frugal and conservative (hard to believe... I know).
  2. I know that these are unpopular thoughts. It's what my intuition says (a part of my intuition anyway), but I'm not too confident. I need to achieve a higher level of confidence before doing anything drastic, so I'm working to obtain more information and think it through some more.
  3. I don't plan on starting a startup any time too soon. I probably need to spend at least a few years developing my skills first. So right now I'm just learning and saving money.
  4. The craziest thing I would do is a) put my money in an index fund instead of some sort of retirement account, forgoing the tax benefits, and b) keep a rather short runway. I'd probably work towards the goal of starting a startup as long as I have, say, 6 months of living expenses saved up.
  5. I know this is a bit of a weird thing to post on LW, but these aren't the kinds of arguments that normal people will take seriously ("I'm not going to save for retirement because there'll be a singularity. Instead I'm going to work towards reducing existential risk." That might be the kind of thing that actually gets you thrown into a mental hospital. I'm only partially joking). And I really need other people's perspectives. I judge that the benefits that other perspectives will bring me will outweigh the weirdness of posting this and any costs that come with people tracing this article to me.
Thoughts?

Superintelligence 24: Morality models and "do what I mean"

7 KatjaGrace 24 February 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the twenty-fourth section in the reading guide: Morality models and "Do what I mean".

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Morality models” and “Do what I mean” from Chapter 13.


Summary

  1. Moral rightness (MR) AI: AI which seeks to do what is morally right
    1. Another form of 'indirect normativity'
    2. Requires moral realism to be true to do anything, but we could ask the AI to evaluate that and do something else if moral realism is false
    3. Avoids some complications of CEV
    4. If moral realism is true, is better than CEV (though may be terrible for us)
  2. We often want to say 'do what I mean' with respect to goals we try to specify. This is doing a lot of the work sometimes, so if we could specify that well perhaps it could also just stand alone: do what I want. This is much like CEV again.

Another view

Olle Häggström again, on Bostrom's 'Milky Way Preserve':

The idea [of a Moral Rightness AI] is that a superintelligence might be successful at the task (where we humans have so far failed) of figuring out what is objectively morally right. It should then take objective morality to heart as its own values.1,2

Bostrom sees a number of pros and cons of this idea. A major concern is that objective morality may not be in humanity's best interest. Suppose for instance (not entirely implausibly) that objective morality is a kind of hedonistic utilitarianism, where "an action is morally right (and morally permissible) if and only if, among all feasible actions, no other action would produce a greater balance of pleasure over suffering" (p 219). Some years ago I offered a thought experiment to demonstrate that such a morality is not necessarily in humanity's best interest. Bostrom reaches the same conclusion via a different thought experiment, which I'll stick with here in order to follow his line of reasoning.3 Here is his scenario:
    The AI [...] might maximize the surfeit of pleasure by converting the accessible universe into hedonium, a process that may involve building computronium and using it to perform computations that instantiate pleasurable experiences. Since simulating any existing human brain is not the most efficient way of producing pleasure, a likely consequence is that we all die.
Bostrom is reluctant to accept such a sacrifice for "a greater good", and goes on to suggest a compromise:
    The sacrifice looks even less appealing when we reflect that the superintelligence could realize a nearly-as-great good (in fractional terms) while sacrificing much less of our own potential well-being. Suppose that we agreed to allow almost the entire accessible universe to be converted into hedonium - everything except a small preserve, say the Milky Way, which would be set aside to accommodate our own needs. Then there would still be a hundred billion galaxies devoted to the maximization of pleasure. But we would have one galaxy within which to create wonderful civilizations that could last for billions of years and in which humans and nonhuman animals could survive and thrive, and have the opportunity to develop into beatific posthuman spirits.

    If one prefers this latter option (as I would be inclined to do) it implies that one does not have an unconditional lexically dominant preference for acting morally permissibly. But it is consistent with placing great weight on morality. (p 219-220)

What? Is it? Is it "consistent with placing great weight on morality"? Imagine Bostrom in a situation where he does the final bit of programming of the coming superintelligence, to decide between these two worlds, i.e., the all-hedonium one versus the all-hedonium-except-in-the-Milky-Way-preserve.4 And imagine that he goes for the latter option. The only difference it makes to the world is to what happens in the Milky Way, so what happens elsewhere is irrelevant to the moral evaluation of his decision.5 This may mean that Bostrom opts for a scenario where, say, 10^24 sentient beings will thrive in the Milky Way in a way that is sustainable for trillions of years, rather than a scenario where, say, 10^45 sentient beings will be even happier for a comparable amount of time. Wouldn't that be an act of immorality that dwarfs all other immoral acts carried out on our planet, by many many orders of magnitude? How could that be "consistent with placing great weight on morality"?6

 

Notes

1. Do What I Mean is originally a concept from computer systems, where the (more modest) idea is to have a system correct small input errors.

2. To the extent that people care about objective morality, it seems coherent extrapolated volition (CEV) or Christiano's proposal would lead the AI to care about objective morality, and thus look into what it is. Thus I doubt it is worth considering our commitments to morality first (as Bostrom does in this chapter, and as one might do before choosing whether to use a MR AI), if general methods for implementing our desires are on the table. This is close to what Bostrom is saying when he suggests we outsource the decision about which form of indirect normativity to use, and eventually winds up back at CEV. But it seems good to be explicit.

3. I'm not optimistic that behind every vague and ambiguous command, there is something specific that a person 'really means'. It seems more likely there is something they would in fact try to mean, if they thought about it a bunch more, but this is mostly defined by further facts about their brains, rather than the sentence and what they thought or felt as they said it. It seems at least misleading to call this 'what they meant'. Thus even when '—and do what I mean' is appended to other kinds of goals than generic CEV-style ones, I would expect the execution to look much like a generic investigation of human values, such as that implicit in CEV.

4. Alexander Kruel criticizes 'Do What I Mean' being important, because every part of what an AI does is designed to be what humans really want it to be, so it seems unlikely to him that AI would do exactly what humans want with respect to instrumental behaviors (e.g. be able to understand language, and use the internet and carry out sophisticated plans), but fail on humans' ultimate goals:

Outsmarting humanity is a very small target to hit, requiring a very small margin of error. In order to succeed at making an AI that can outsmart humans, humans have to succeed at making the AI behave intelligently and rationally. Which in turn requires humans to succeed at making the AI behave as intended along a vast number of dimensions. Thus, failing to predict the AI’s behavior does in almost all cases result in the AI failing to outsmart humans.

As an example, consider an AI that was designed to fly planes. It is exceedingly unlikely for humans to succeed at designing an AI that flies planes, without crashing, but which consistently chooses destinations that it was not meant to choose. Since all of the capabilities that are necessary to fly without crashing fall into the category “Do What Humans Mean”, and choosing the correct destination is just one such capability.

I disagree that it would be surprising for an AI to be very good at flying planes in general, but very bad at going to the right places in them. However it seems instructive to think about why this is.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Are there other general forms of indirect normativity that might outsource the problem of deciding what indirect normativity to use?
  2. On common views of moral realism, is morality likely to be amenable to (efficient) algorithmic discovery?
  3. If you knew how to build an AI with a good understanding of natural language (e.g. it knows what the word 'good' means as well as your most intelligent friend), how could you use this to make a safe AI?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about other abstract features of an AI's reasoning that we might want to get right ahead of time, instead of leaving to the AI to fix. We will also discuss how well an AI would need to fulfill these criteria to be 'close enough'. To prepare, read “Component list” and “Getting close enough” from Chapter 13. The discussion will go live at 6pm Pacific time next Monday 2 March. Sign up to be notified here.

Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 109

5 Gondolinian 23 February 2015 08:05PM

This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 109.

There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)

Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:

You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).

If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.

A quick heuristic for evaluating elites (or anyone else)

4 DeVliegendeHollander 23 February 2015 04:22PM

Summary: suppose that for some reason you want to figure out how a society works or worked in a given time and place. Largely, you want to see whether it is a meritocracy in which productive people get ahead, so that conditions are roughly fair and efficient, or whether it is more like a parasitical elite sucking the blood out of everybody. I present a heuristic for this, and as a bonus it also predicts how the intellectuals there work.

I would look at whether the elites are specialists or generalists. We learned from Adam Smith that a division of labor is efficient, and if people get rich without being efficient, that is a red flag. If someone is rich and tells you they specialize in something like manufacturing food-scented soap or retina surgery, you could have the impression that when such people get ahead, circumstances are fair and meritocratic. But when the rich people you meet seem more vague (say, they own shares in various businesses with no apparent connection between them, and they are not Buffett-type investors either, since they keep owning the same shares), or when all you can gather is that their skillset is something very general, like generic business sense or people skills, you should suspect that the market may be rigged or corrupted, perhaps through an overbearing and corrupted state, and that overall the system is neither fair nor efficient.

Objection: but generic skills can be valuable!

Counter-objection: yes, but people with generic skills should be outcompeted by people with generic AND specialist skills, so the generalists should see only a middling level of success and not be on top. Alternatively, people who want to become very successful using only generic skills would probably, in a fair and efficient market, find the most success by turning that generic skill into a specialization, usually a service or consulting business. Thus, someone who has excellent people skills but does not like learning technical details would not see the most success (only a middling level) as a used-car salesperson, or as a salesperson for any technical product, and would do better providing sales training, communication training services, courses, consulting, or books.

Counter-counter-objection: we have known since Adam Smith's Scottish Highlands blacksmith example that you can only specialize if there is a lot of competition - not only in the sense that only then are you forced to specialize, but also in the sense that only then is specializing good and beneficial, for you and for others. If you are the only doctor within a day's walk in a Borneo rainforest, don't specialize. If you are the only IT guy in a poverty-stricken village, don't specialize.

Answer: this is a very good point. In general, specialization is a comparative thing: if nobody near you is a doctor, then being a doctor is a specialization in itself. If there are a lot of doctors, you differentiate yourself by becoming a surgeon; if there are a lot of surgeons, you differentiate yourself by becoming an eye surgeon. In a village where nobody knows computers, being the generic IT guy is a specialization; in a city with many thousands of IT people, you differentiate yourself by being an expert on the SAP FI and CO modules.

So the heuristic only works insofar as you can make a good enough guess about what level of specialization or differentiation would be logical in the circumstances, and then check whether the richest or most successful people are less specialized than that. In fact, if they are less specialized than their underlings, that is a clear red flag! When the excellent specialist eye surgeon is not the highest-ranking doctor, and the highest-ranking one is someone people describe as a generically good leader with no specialist skills - welcome to Corruption Country! A merely okay leader (a generic skill) should mean a middling level of success, not a stellar one; the top of the top should be someone who has these generic skills and is also a rock star in a specialist field.

Well, maybe this needs to be fleshed out more, but it is the start of an idea. A rough sketch of the rule is given below.
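
To make the rule of thumb a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the field sizes, the 1-to-3 'specialization scores', and the thresholds are invented placeholders, not anything measured. The point is only to show the shape of the comparison described above: the specialization you would expect given the amount of competition, versus the specialization of the people at the top and of their underlings.

# Toy sketch of the heuristic above. All numbers are made up for illustration.

def expected_specialization(field_size):
    """Guess how specialized a top performer 'should' be, given how many
    people compete in the field: more competition should force narrower niches."""
    if field_size < 10:
        return 1   # lone doctor in the rainforest: being a generalist is fine
    elif field_size < 1000:
        return 2   # some competition: expect a recognizable niche
    else:
        return 3   # crowded field: expect a narrow, named specialty

def corruption_red_flag(field_size, top_specialization, underling_specialization):
    """Flag a possibly rigged market: the most successful people are less
    specialized than the circumstances warrant, or less specialized than
    the people working under them."""
    return (top_specialization < expected_specialization(field_size)
            or top_specialization < underling_specialization)

# Example: a hospital system with thousands of doctors, whose highest-ranking
# doctor is a 'generically good leader' (score 1) while the best eye surgeon
# (score 3) works under him.
print(corruption_red_flag(5000, top_specialization=1, underling_specialization=3))  # True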

BONUS. Suppose you have figured out that the elites are too generalist to assume they earned their wealth by providing value to others: they simply do not look that productive, they do not seem to have a specialized enough skillset, and they look more like parasites. From this you can also figure out what the intellectuals are like. By intellectuals I mean the people who write the books that everyone from the middle classes up consumes. If the elites are productive, they are not interested in signalling; they have a get-things-done mentality, and thus the intellectuals will often have a very pragmatic attitude. They won't be much into lofty, murky intellectualism; they will often see highbrow thinking as a way to solve practical problems, because that is what their customers want.

If the elites are unproductive, on the other hand, they will do a lot of signalling to try to excuse their high status. They cannot say _exactly_ what they do, so they try to look _generally_ superior to the plebs. They often signal being more sophisticated, having better taste, and all that - all of which means "I don't have a superior specialist skill, because I am an unproductive elite parasite, so I must look generally superior to the plebs". They will also use books and intellectual ideas to express this, and that kind of intellectualism will always be very murky, lofty, and abstract - not the get-things-done type. One trick is to look for intellectuals who like to abuse terms like "higher" and "spiritual"; this suggests "you guys who read this are generally superior" and thus plays into the signalling of unproductive elites.

You can also use the heuristic in reverse. If the most popular bestselling books are like "The Power of Habit" (pragmatic, empirical, focused on reality, like LW), you can assume that the customers of these books, the elites, are largely efficient people working in an honest market (or you can run the inference the other way around). If the most popular bestsellers are like "The Spiritual Universe - The Ultimate Truths Behind The Meaning Of The Cosmos", you can assume not only that the intellectuals who write them are buffoons, but also that the rich folks are unproductive, parasitical aristocrats, because they generally use stuff like this to make themselves look superior in general, without specialist skills - and because specialist, productive elites hate this stuff and do not finance it.

Why is this all useful?

You can quickly decide whether you want to work with, or in, that kind of society. Will your efficient work be rewarded, or will the well-born take the credit? You can also figure out whether a society, today or in the historical past, is or was politically unjust.

(And now I am officially horrible at writing essays; this is to writing what "er, umm, er, like" is to speaking. But I hope you can glean the meaning from it. I am not a very verbal thinker; I am just trying to translate the shapes in my mind into words.)

GCRI: Updated Strategy and AMA on EA Forum next Tuesday

7 RyanCarey 23 February 2015 12:35PM

Just announcing for those interested that Seth Baum from the Global Catastrophic Risks Institute (GCRI) will be coming to the Effective Altruism Forum to answer a wide range of questions (like a Reddit "Ask Me Anything") next week at 7pm US ET on March 3.

Seth is an interesting case - more of a 'mere mortal' than Bostrom and Yudkowsky. (Clarification: his background is more standard, and he's probably more emulate-able!) He has a PhD in geography, and had come to a maximising consequentialist view in which GCR-reduction is overwhelmingly important. So three years ago, with risk analyst Tony Barrett, he cofounded the Global Catastrophic Risks Institute - one of the handful of places working on these particularly important problems. Since then, it has done some academic outreach and has covered issues like double catastrophe and recovery from catastrophe, bioengineering, food security and AI.

Just last week, they updated their strategy with the following announcement:

Dear friends,

I am delighted to announce important changes in GCRI’s identity and direction. GCRI is now just over three years old. In these years we have learned a lot about how we can best contribute to the issue of global catastrophic risk. Initially, GCRI aimed to lead a large global catastrophic risk community while also performing original research. This aim is captured in GCRI’s original mission statement, to help mobilize the world’s intellectual and professional resources to meet humanity’s gravest threats.

Our community building has been successful, but our research has simply gone farther. Our research has been published in leading academic journals. It has taken us around the world for important talks. And it has helped us publish in the popular media. GCRI will increasingly focus on in-house research.

Our research will also be increasingly focused, as will our other activities. The single most important GCR research question is: What are the best ways to reduce the risk of global catastrophe? To that end, GCRI is launching a GCR Integrated Assessment as our new flagship project. The Integrated Assessment puts all the GCRs into one integrated study in order to assess the best ways of reducing the risk. And we are changing our mission statement accordingly, to develop the best ways to confront humanity’s gravest threats.

So 7pm ET on Tuesday, March 3 is the time to come online and post your questions about any topic you like; Seth will remain online until at least 9pm to answer as many questions as he can. Questions in the comments here can also be ported across.

On the topic of risk organisations, I'll also mention that i) video is available from CSER's recent seminar, in which Mark Lipsitch and Derek Smith discussed potentially pandemic pathogens, and ii) I'm helping Sean write up an update on CSER's progress for LessWrong and effective altruists, which will go online soon.

Announcing LessWrong Digest

25 Evan_Gaensbauer 23 February 2015 10:41AM

I've been making rounds on social media with the following message.

Great content on LessWrong isn't as frequent as it used to be, so not as many people read it as frequently. This makes sense. However, I read it at least once every two days out of personal interest. So I'm starting a LessWrong/Rationality Digest, which will be a summary of all posts or comments exceeding 20 upvotes within a week. It will be like a newsletter. It's also a good way for those new to LessWrong to learn cool things without having to slog through online cultural baggage. It will never be more than once weekly. If you're curious, here is a sample of what the Digest will be like.

https://docs.google.com/document/d/1e2mHi7W0H2toWPNooSq7QNjEhx_xa0LcLw_NZRfkPPk/edit

Also, major blog posts or articles from related websites, such as Slate Star Codex and Overcoming Bias, or publications from MIRI, may be included occasionally. If you want to be on the list, send an email to:

lesswrongdigest *at* gmail *dot* com

 

Users of LessWrong itself have noticed this 'decline' in the frequency of quality posts on LessWrong. It's not necessarily a bad thing, as much of the community has migrated to other places, such as Slate Star Codex, or even into meatspace with various organizations, meetups, and the like. In a sense, the rationalist community outgrew LessWrong as its one suitable and ultimate nexus. Anyway, I thought you might be interested in a LessWrong Digest as well. If you or your friends:

  • find that articles in 'Main' are too infrequent, and Discussion too full of announcements, open threads, and housekeeping posts, to bother checking LessWrong regularly, or,
  • are busying themselves with other priorities, and are trying to limit how distracted they are by LessWrong and other media

then the LessWrong Digest might work for you, and might be worth suggesting to your friends. I've fielded suggestions that I transform this into a blog, Tumblr, or another format suitable for an RSS feed. Almost everyone is happy with the email format right now, but if a few people express an interest in a blog or RSS format, I can make that happen too.

 

Open thread, Feb. 23 - Mar. 1, 2015

3 MrMind 23 February 2015 08:01AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Are Cognitive Load and Willpower drawn from the same pool?

5 avichapman 23 February 2015 02:46AM

I was recently reading a blog post here that referenced a 1999 paper by Baba Shiv and Alex Fedorikhin (Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making). In it, volunteers are asked to memorise short or long numbers and are then asked to choose a snack as a reward. The snack is either fruit or cake. The actual paper seems to go into a lot of details that are irrelevant to the blog post, but doesn't actually seem to contradict anything the blog post says. The result seems to be that those with a higher cognitive load were far more likely to choose the cake than those with a lower load.

I was wondering whether anyone has read further on this line of research. The actual experiment seems to imply that the connection between cognitive load and willpower may be an acute effect, possibly not lasting very long. The choice of snack is made seconds after memorising a number, while actively trying to keep the number in memory for short-term recall a few minutes later. There doesn't seem to be anything about the effect on willpower minutes or hours later.

Does anyone know if the effect lasts longer than a few seconds? If so, I would be interested in whether this effect has been incorporated into any dieting strategies.

How to debate when authority is questioned, but really not needed?

3 DonaldMcIntyre 23 February 2015 01:44AM

Especially in the comments on political articles or on articles about economic issues, I find myself arguing with people who question my authority on a topic rather than refuting my arguments.

----

Examples may be:

1:

Me: I think money printing by the Fed will cause inflation if they continue like this.

Random commenter: Are you an economist?

Me: I am not, but it's not relevant.

Random commenter: Ok, so you are clueless.

2: 

Me: The current strategy to fight terror is not working because ISIS is growing.

Random commenter: What would you do to stop terrorism?

Me: I have an idea of what I would do, but it's not relevant because I'm not an expert, but do you think the current strategy is working?

Random commenter: So you don't know what you are talking about.

----

This is not about my opinions above, or even about whether I am right or not (I would gladly change my opinion after a debate), but I think I am being disqualified unfairly.

If I am right, how should I answer or continue these conversations?
