
Overly convenient clusters, or: Beware sour grapes

2 KnaveOfAllTrades 02 September 2014 04:04AM

Related to: Policy Debates Should Not Appear One-Sided

There is a well-known fable which runs thus:

“Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.”

This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims not to care about something, in order to save face or feel better, after failing to attain it.

This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one.

In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in the face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that entertaining this line of reasoning makes it more tempting to conclude that the option really was dominating, i.e. that he really couldn’t have gotten the grapes. But maybe he could have gotten the grapes with a bit more work: by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert.

The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions:

The Seating Fallacy:

“Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind.”

This advice is neither good nor bad in full generality. Clearly there are situations in which a person worries too much about being judged, or is so anxious about inconveniencing others that they fail to take their own preferences into account. But there are just as clearly situations (like dealing with an unpleasant, incompetent boss) where fully exposing oneself or saying whatever comes into one’s head is unstrategic, even outright disastrous. Without taking into account the specifics of the recipient’s situation, the advice is of limited use.

It is convenient to absolve oneself of blame by writing off anybody who challenges one’s first impulse as someone who ‘doesn’t matter’; it means that if something goes wrong, one can avoid the painful task of analysing and modifying one’s behaviour.

In particular, we have the following corollary:

The Fundamental Fallacy of Dating:

“Be yourself and don’t hide who you are. Be up-front about what you want. If it puts your date off, then they wouldn’t have been good for you anyway, and you’ve dodged a bullet!”

In the short term it is convenient not to have to filter or reflect on what one says (face-to-face) or writes (online dating). In the longer term, though, having no filter is not a smart way to approach dating. As the biases and heuristics program has shown, people are often mistaken about what they would prefer on reflection, and are often inefficient and irrational in pursuing what they want. Complicated courtship conventions governing when to reveal information about oneself and how to negotiate preferences have evolved to work around these irrationalities, to the benefit of both parties. In particular, people are dynamically inconsistent, and willing to compromise much more later in a courtship than they thought they would earlier on; it is often a favour to both of you to respect established boundaries about what to reveal and when, rather than getting ahead of the current stage of the relationship.

For those who have not much practised the skill of avoiding Too Much Information reactions, even trying to change their behaviour can feel painful and disingenuous, and they rationalise this via the Fundamental Fallacy. At any given moment, changing this behaviour is painful and triggers a flinch reaction, even though the value of information from trying a different approach might be very high, and the change might cause less pain (e.g. through reduced loneliness) in the long term.

We also have:

PR rationalization and incrimination:

“There’s already enough ammunition out there if anybody wants to assassinate my character, launch a smear campaign, or perform a hatchet job. Nothing I say at this point could make it worse, so there’s no reason to censor myself.”

This is an overly convenient excuse. It does not take into account, for example, that new statements provide a fresh opportunity to come to the attention of quote miners in the first place, or that different statements are more or less easy to seed a smear campaign with; ammunition varies in type and accessibility, so adding more can make a hatchet job more convenient. It might turn out, after weighing the costs and benefits, that speaking honestly is the right decision. But one can’t know that on the strength of a convenient deontological argument that doesn’t consider those costs. Similarly:

“I’ve already pirated so much stuff I’d be screwed if I got caught. Maybe it was unwise and impulsive at first, but by now I’m past the point of no return.”

This again fails to take into account the increased risk of one’s deeds coming to attention: if most prosecutions are triggered by (even if not purely about) offences committed shortly before the prosecution, and you expect to pirate long into the future, then your position now is the same as when you first pirated; if it was unwise then, it’s unwise now.
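
To make the marginal-risk point concrete, here is a toy sketch in Python (the per-month detection probability is a made-up illustrative number, not an estimate of anything):

```python
# Toy model: each month of continued piracy independently carries the same
# small chance p of triggering a prosecution. Past offences are sunk; they
# neither raise nor lower the risk of the next one.
def risk_over(months, p=0.001):
    """Probability of at least one prosecution over the given horizon."""
    return 1 - (1 - p) ** months

# The marginal risk of one more month is p, whether it is your first month
# or your hundredth:
print(risk_over(1))    # ~0.001
# And the forward-looking risk of ten more years is the same now as it was
# when you first started; "past the point of no return" is an illusion:
print(risk_over(120))  # ~0.113
```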

~~~~

The common fallacy in all these cases is that one looks at only the extreme possibilities and throws out the inconvenient, ambiguous cases. This results in a disconnected space of possibilities that is engineered to allow one to prove a convenient conclusion. For example, the Seating Fallacy throws out the possibility of people who mind but also matter; the Fundamental Fallacy of Dating prematurely rules out people who are dynamically inconsistent, who are imperfect introspectors, or who are uncertain about their preferences; PR rationalization fails to consider marginal effects or to quantify risks, favouring a lossy binary approach instead.

What are other examples of situations where people (or Less Wrongers specifically) might fall prey to this failure mode?

September 2014 Media Thread

1 ArisKatsaris 01 September 2014 05:05PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread), or for any other question or issue you may have about the thread or the rules.

Rationality Quotes September 2014

2 jaime2000 01 September 2014 12:30PM
  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

Open thread, Sept. 1-7, 2014

2 polymathwannabe 01 September 2014 12:18PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Robin Hanson's "Overcoming Bias" posts as an e-book.

17 ciphergoth 31 August 2014 01:26PM

At Luke Muehlhauser's request, I wrote a script to scrape all of Robin Hanson's posts to Overcoming Bias into an e-book; here's a first beta release. Please comment here with any problems—posts in the wrong order, broken links, bad formatting, missing posts. Thanks!
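
For the curious, the core of such a scraper can be quite small. The sketch below is not the actual script; it assumes WordPress-style markup (the CSS class name is a guess) and emits a single HTML file that a tool such as Calibre can convert to EPUB or MOBI:

```python
# Minimal sketch of a blog-to-ebook scraper (not the actual script used).
import requests
from bs4 import BeautifulSoup

def scrape_post(url):
    """Fetch one post and return (title, body_html)."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    title = soup.find("h1").get_text(strip=True)
    body = soup.find("div", class_="entry-content")  # assumed class name
    return title, str(body)

def build_ebook(post_urls, out_path="overcoming_bias.html"):
    """Concatenate posts, oldest first, into one HTML file that an
    e-book tool (e.g. Calibre) can convert to EPUB/MOBI."""
    chapters = [
        "<h1>{}</h1>\n{}".format(*scrape_post(url)) for url in post_urls
    ]
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("<html><body>\n" + "\n".join(chapters) + "\n</body></html>")
```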

Superintelligence reading group

11 KatjaGrace 31 August 2014 02:59PM

In just over two weeks I will be running an online reading group on Nick Bostrom's Superintelligence, on behalf of MIRI. It will be here on LessWrong. This is an advance warning, so you can get a copy and get ready for some stimulating discussion. MIRI's post, appended below, gives the details.


Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.

Tips for writing philosophical texts

2 Jan_Rzymkowski 31 August 2014 10:38PM

For about four years I have been struggling to write a series of articles presenting a few of my ideas. While this "philosophy" (I'd rather avoid being too pompous about it) is still developing, there is a bunch of stuff of which I have a clear image in my mind. It is a framework for model building, with possible applications to AI development, paradox resolution, and semantics. Nothing of serious impact, but I do believe it would prove useful.

I tried making notes or plans for articles several times, but every time I was discouraged by those problems:

  • the presented concept is too obvious
  • the presented concept is superfluous
  • the presented concept needs more basic ideas to be introduced beforehand

So the core problem is that to show the applications of the theory (or, more generally, its more interesting results), the more basic concepts must be introduced first. Yet presenting the basics seems boring and uninsightful without the applications. This seems to characterise many complex ideas.

Can you provide me with any practical tips on how to tackle this problem?

Solstice 2014 / Rational Ritual Retreat - A Call to Arms

10 Raemon 30 August 2014 05:51PM


Summary:

 •  I'm beginning work on the 2014 Winter Solstice. There are a lot of jobs to be done, and the more people who can dedicate serious time to it, the better the end result will be and the more locations it can take place in. A few people have volunteered serious time, and I wanted to issue a general call to anyone who's wanted to be part of this but wasn't sure how. Send me an e-mail at raemon777@gmail.com if you'd like to help with any of the tasks listed below (or others I haven't thought of).

 •  More generally, I think people working on rational ritual, in any form, should be sharing notes and collaborating more. There's a fair number of us, but we're scattered across the country and haven't really felt like part of the same team. It seems a bit silly for people working on ritual to be so scattered and disunited. So I am hosting the first Rational Ritual Retreat at the end of September. The exact date and location have yet to be determined. You can apply at humanistculture.com, noting your availability, and I will determine the date and location based on the responses.



The Rational Ritual Retreat

For the past three years, I've been running a winter solstice holiday, celebrating science and human achievement. Several people have come up to me and told me it was one of the most unique, profound experiences they've participated in, inspiring them to work harder to make sure humanity has a bright future. 

I've also had a number of people concerned that I'm messing with dangerous aspects of human psychology, fearing what will happen to a rationality community that gets involved with ritual.

Both of these thoughts are incredibly important. I've written a lot on the value and danger of ritual. [1]

Ritual is central to the human experience. We've used it for thousands of years to bind groups together. It helps us internalize complex ideas. A winning version of rationality needs *some* way of taking complex ideas and getting System 1 to care about them, and I think ritual is at least one tool we should consider.

In the past couple weeks, a few thoughts occurred to me at once:

1) Figuring out a rational approach to ritual that has a meaningful, useful effect on the world will require a lot of coordination among many skilled people.

2) If this project *were* to go badly somehow, I think the most likely reason would be someone copying parts of what I'm working on without understanding all the considerations that went into it, and creating a toxic (or hollow) variant that spirals out of control.

3) Many other people have approached the concept of rational ritual. But we've generally done so independently, often duplicating a lot of the same work and rarely moving on to more interesting and valuable experimentation. When we do experiment, we rarely share notes.

This all prompted a fourth realization:

4) If ritual designers are isolated and poorly coordinated... if we're duplicating a lot of the same early work and not sharing concerns about potential dangers, then one obvious (in retrospect) solution is to have a ritual about ritual creation.

So, the Rational Ritual Retreat. We'll hike out into a dark-sky reserve, where there's no light pollution and the Milky Way looms large and beautiful above us. We'll share our stories, our ideas for a culture grounded in rationality yet tapped into our primal human desires. Over the course of an evening we'll create a ceremony or two together, through group consensus and collaboration. We'll experiment with new ideas, aware that some may work well and some may not - that's how progress is made.

This is my experiment, attempting to answer the question Eliezer raised in "Bayesians vs Barbarians." It just seems really exceptionally silly to me that people motivated by rationality AND ritual should be so uncoordinated. 

Whether you're interested in directly creating ritual, or in helping to facilitate its creation in one way or another (art, marketing, logistics, or funding of future projects), you are invited to attend. The location is currently undecided; there are reasons to consider the West Coast, the East Coast, or (if there's enough interest in both locations) both.

Send in a brief application so I can make decisions about where and when to host it. I'll make the final decisions this upcoming Friday.

The Winter Solstice

The Retreat is part of a long-term vision of many people coming together to produce a culture (undoubtedly with numerous subcultures focusing on different aesthetics). Tentatively, I'd expect a successful rational-ritual culture to look sort of Open-Source-ish. (Or, more appropriately, I'd expect it to look like Burning Man. To be clear, Burning Man and variations already exist; my goal is not to duplicate that effort, but to create something that is (a) easier to integrate into people's lives, and (b) specifically focused on rationality and human progress.)

The Winter Solstice project is (at least for now) an important piece of that, partly because of the particular ideas it celebrates, but also because it demonstrates how to create *any* cultural holiday from scratch that celebrates serious ideas in a non-ironic fashion.

My minimum goal this year is to finish the Hymnal, put more material online to help people create their own private events, and run another largish event in NYC. My stretch goals are to have a high quality public event in Boston and San Francisco. (Potentially other places if a lot of local people are interested and are willing to do the legwork). 

My hope, to make those stretch goals possible, is to find collaborators willing to put in a fair amount of work. I'm specifically looking for people who can:

  • Collaborate creatively: perform, create music or visual art, or host an event in your city.
  • Help with logistics, especially in different cities (finding venues, arranging catering, etc.).
  • Help with marketing: reaching out to bloggers, or creating images and videos for the social media campaign.
  • Help with technical aspects of production for the Hymnal (editing, figuring out best places …).

Each of these is something I'm able to do myself, but I have limited time, and the more of these tasks others take on, the more time I can focus on creating the content itself.

If you're interested in collaborating, volunteering, or running a local event, either reply here or send me an e-mail at raemon777@gmail.com 


[LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI

17 Sarokrae 30 August 2014 02:04PM

http://www.theguardian.com/technology/2014/aug/30/saviours-universe-four-unlikely-men-save-world

The article is titled "The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse", and features interviews with Martin Rees, Huw Price, Jaan Tallinn and Partha Dasgupta. The author takes a rather positive tone about CSER and MIRI's endeavours, and mentions x-risks other than AI (bioengineered pandemic, global warming with human interference, distributed manufacturing).

I find it interesting that the inferential distance from the layman to the concept of paperclipping AI is much reduced by talking about paperclipping America rather than the entire universe, though the author admits he still struggles with the concept. Unusually for a journalist who starts off unfamiliar with these concepts, he writes in a tone that suggests he takes the ideas seriously, without the sort of "this is very far-fetched and thus I will not lower myself to seriously considering it" countersignalling usually seen in x-risk coverage. There is, however, the usual degree of incredulity in the comments section.

For those unfamiliar with The Guardian, it is a British left-leaning newspaper with a heavy focus on social justice and left-wing political issues. 

Meetup Report Thread: September 2014

7 Viliam_Bur 30 August 2014 12:32PM

If you had an interesting Less Wrong meetup recently, but don't have the time to write up a big report to post to Discussion, feel free to write a comment here.  Even if it's just a couple lines about what you did and how people felt about it, it might encourage some people to attend meetups or start meetups in their area.

If you have the time, you can also describe what types of exercises you did, what worked and what didn't.  This could help inspire meetups to try new things and improve themselves in various ways.

If you're inspired by what's posted below and want to organize a meetup, check out this page for some resources to get started!  You can also check FrankAdamek's weekly post on meetups for the week.

Previous Meetup Report Thread: February 2014


Guidelines:  Please post the meetup reports as top-level comments, and debate the specific meetup below its comment.  Anything else goes under the "Meta" top-level comment.  The title of this thread should be interpreted as "up to and including September 2014", which means feel free to post reports of meetups that happened in August, July, June, etc.
