
[Link] pplapi is a virtual database of the entire human population.

1 morganism 12 January 2017 02:33AM

[Link] Dares are social signaling at its purest

2 morganism 03 January 2017 10:48PM

Empirically assess your time use

4 Elo 27 December 2016 12:01AM

Original post on my blog: http://bearlamp.com.au/empirically-assess-your-time-use/

168 hours

You and me both, buddy. We both have equal access to our 168 hours. We face different demands on those hours, like the need for sleep or how demanding our jobs are, but fundamentally we all start with the same 168 hours (24*7=168). That means we all have an opportunity to look at these hours and figure out how to get the most out of them by doing the things we want to do in the limited hours we have.

This process is about looking at your hours and discovering where they go.


Make a list of all the things you have done over the last 7 days. As a time-management technique, this is a procedure you can follow again whenever it is useful.

This might take anywhere from 10 to 30 minutes.

  1. Acquire writing implements
  2. Write down the day and time.
  3. Subtract 7 days from now, and figure out where you were at the beginning of that day. 
  4. Go through your diary, your phone call log, your SMS history, and your messenger histories, and work out what you did, in rough chunks, from when you woke up on that day until now.
  5. Start with sleep - you probably have a regular enough wake-up time, and a regular enough time that you go to sleep.  This puts an upper bound on the number of waking hours you have to account for.
  6. Identify routines and how long they take - regular meals, lunch breaks, shower processes.
  7. Identify regular commitments - meetings, clubs, events.
  8. Identify any social time.
  9. Identify time spent deeply working, studying, reading, planning.
  10. Identify exercise.
  11. Identify dual-purpose time, e.g. I play on Facebook on the bus.

Here is an example

Monday 
7 wake up
7-7:30 shower and get ready (30mins)
7:30-8 drive to work (30mins)
8-10 check emails and respond to people (2hrs)
10-10:20 coffee break (20mins)
10:20-1 work meeting (2hrs40mins)
1-1:30 lunch (30mins) 
1:30-3:30 computer work (2hrs)
3:30-3:40 distraction food break (10mins)
3:40-4:30 work (50mins)
4:30-5 drive home (30mins)
5-6 make and eat dinner (1hr)
6-7:30 Netflix (1.5hrs)
7:30-11:30 play on computer (4hrs)
11:30-11:45 get ready and head to bed. (15mins)

Another option might be to simplify this to:

Monday
7am wake up
7-7:30 shower and breakfast (30mins)
7:30-8 drive to work (30mins)
8-1 work (5hrs)
1-1:30 lunch (30mins)
1:30-4:30 work (3hrs)
4:30-5 drive home (30mins)
5-6 dinner (1hr)
6-11:30 relax and entertain myself (5.5hrs)
11:30-11:45 get ready and go to bed (15mins)

Now repeat for the rest of the days.
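If you keep the log as plain text in the format above, tallying it is easy to automate. Here is a minimal sketch in Python; the log format, the file name timelog.txt, and the parsing rules are assumptions for illustration, not part of the original exercise.

```python
import re
from collections import Counter

# Minimal sketch: total the minutes per activity from log lines like
# "7:30-8 drive to work (30mins)" or "10:20-1 work meeting (2hrs40mins)".
# Day headers ("Monday") and lines without a duration are skipped.
DURATION = re.compile(r"\((?:([\d.]+)\s*(?:hours?|hrs?))?\s*(?:(\d+)\s*mins?)?\)", re.I)
ACTIVITY = re.compile(r"^[\dapm:.]+-[\dapm:.]+\s+(.*?)\s*\(")

def tally(lines):
    totals = Counter()
    for line in lines:
        duration = DURATION.search(line)
        activity = ACTIVITY.match(line.strip())
        if not duration or not activity:
            continue
        hours, minutes = duration.groups()
        totals[activity.group(1).lower()] += float(hours or 0) * 60 + int(minutes or 0)
    return totals

# "timelog.txt" is an assumed file name holding a week of entries.
with open("timelog.txt") as f:
    for activity, minutes in tally(f).most_common():
        print(f"{minutes / 60:5.1f} hrs  {activity}")
```

Run over a full week, the output is a rough answer to "where do my 168 hours go?", bucketed by however you worded each activity (so keep your wording consistent).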


Why?

Time management is about knowing where your time is going. It might be interesting to know that you spend an hour commuting, or 8 hours working, or two hours dealing with food each day. Or maybe that's just Mondays, and when you go out for dinner you spend more time on food.

If you are trying to work out the value of your time, it helps to know what you do with your time. Doing this exercise for 2-4 weeks in a row can help you establish a baseline of your 168 hours, and show where efficiencies or inefficiencies lie. If you want to grow, it starts with evidence and observation.

Yeah, but why did I do that? And what comes next?

Nothing, just think about what you uncovered. Do as I do and revel in the joy of the merely real, knowing what you actually do with your time. This is your life. Do with it what you will. Know that how you spend your time constitutes your revealed preferences (post coming soon).


Meta: This took 45mins to write. This is an exercise that I invented for myself years ago; this is the first time I have written it up. You can't get the most out of the exercise without trying it to see what it's like. Good luck!

The 12 Second Rule (i.e. think before answering) and other Epistemic Norms

17 Raemon 05 September 2016 11:08PM

Epistemic Status/Effort: I'm 85% confident this is a good idea, and that the broader idea is at least a good direction. I have gotten feedback from a few people and spent some time actively thinking through the ramifications. Interested in more feedback.

TLDR:

1) When asking a group a question, e.g. "what do you think about X?", ask people to wait 12 seconds, to give each other time to think. If you notice someone else ask a question and people immediately answering, suggest people pause the conversation until everyone has had some time to think. (Probably mention the "12 second rule" specifically, to give people a handy tag to remember.)

2) In general, look for opportunities to improve or share social norms that'll help your community think more clearly, and show appreciation when others do so (i.e. "Epistemic Norms")

(This was originally conceived for the self-described "rationality" community, but I think it's a good idea for any group that'd like to improve its critical thinking as well as its creativity.)

There are three reasons the 12-second rule seems important to me:

  • On an individual level, it makes it easier to think of the best answer, rather than going with your cached thought.
  • On the group level, it makes it easier to prevent anchoring/conformity/priming effects.
  • Also on the group level, it means that people who take longer to think of answers get to practice actually thinking for themselves.
If you're using it with people who aren't familiar with it, make sure to briefly summarize what you're doing and why.

Elaboration:

While visiting rationalist friends in SF, I was participating in a small conversation (about six participants) in which someone asked a question. Immediately, one person said "I think Y. Or maybe Z." A couple other people said "Yeah. Y or Z, or... maybe W or V?" But the conversation was already anchored around the initial answers.

I said "hey, shouldn't we stop to each think first?" (this happens to be a thing my friends in NYC do). And I was somewhat surprised that the response was more like "oh, I guess that's a good idea" than "oh yeah whoops I forgot."

It seemed like a fairly obvious social norm for a community that prides itself on rationality, and while the question wasn't *super* important, I think it's helpful to practice this sort of social norm on a day-to-day basis.

This prompted some broader questions - it occurred to me there were likely norms and ideas other people had developed in their local networks that I probably wasn't aware of. Given that there's no central authority on "good epistemic norms", how do we develop them and get them to spread? There are a couple of people with popular blogs who sometimes propose new norms which maybe catch on, and some people still sharing good ideas on Less Wrong, effective-altruism.com, or facebook. But it doesn't seem like those ideas necessarily reach saturation.

Atrophied Skills

The first three years I spent in the rationality community, my perception is that my strategic thinking and ability to think through complex problems actually *deteriorated*. It's possible that I was just surrounded by smarter people than me for the first time, but I'm fairly confident that I specifically acquired the habit of "when I need help thinking through a problem, the first step is not to think about it myself, but to ask smart people around me for help."

Eventually I was hired by a startup, and I found myself in a position where the default course for the company was to leave some important value on the table. (I was working in an EA-adjacent company, and wanted to push it in a more Effective Altruism-y direction with higher rigor.) There was nobody else I could turn to for help. I had to think through what "better epistemic rigor" actually meant and how to apply it in this situation.

Whether or not my rationality had atrophied in the past 3 years, I'm certain that for the first time in a long while, I *flexed* certain mental muscles that I hadn't been using. Ultimately I don't know whether my ideas had a noteworthy effect on the company, but I do know that I felt more empowered and excited to improve my own rationality.

I realized that, in the NYC meetups, quicker-thinking people tended to say what they thought immediately when a question was asked, and this meant that most of the people in the meetup didn't get to practice thinking through complex questions. So I started asking people to wait for a while before answering - sometimes 5 minutes, sometimes just a few seconds.

"12 seconds" seems like a nice rule-of-thumb to avoid completely interrupting the flow of conversation, while still having some time to reflect, and make sure you're not just shouting out a cached thought. It's a non-standard number which is hopefully easier to remember.

(That said, a more nuanced alternative is "everyone takes a moment to think until they feel like they're hitting diminishing returns on thinking or it's not worth further halting the conversation, and then raising a finger to indicate that they're done")

Meta Point: Observation, Improvement and Sharing

The 12-second rule isn't the main point though - just one of many ways this community could do a better job of helping both newcomers and old-timers hone their thinking skills. "Rationality" is supposed to be our thing. I think we should all be on the lookout for opportunities to improve our collective ability to think clearly. 

I think specific conversational habits are helpful both for their concrete, immediate benefits, as well as an opportunity to remind everyone (newcomers and old-timers alike) that we're trying to actively improve in this area.

I have more thoughts on how to go about improving the meta-issues here, which I'm less confident about and will flesh out in future posts.

Call for information, examples, case studies and analysis: votes and shareholder resolutions vs. divestment for social and environmental outcomes

-1 [deleted] 05 May 2016 12:08AM

Typology: since it is not disambiguated elsewhere, divestment will be considered a form of shareholder activism in this article.


The aim of this call for information is to identify under what conditions shareholder activism or divestment is more appropriate. Shareholder activism refers to the actions and activities around proposing and rallying support for a resolution at a company AGM, such as the reinstatement or impeachment of a director, or a specific action like renouncing a strategic direction (like investment in coal). In contrast, divestment refers to the withdrawal of an investment in a company by shareholders, such as a tobacco or fossil fuel company. By identifying the important variables that determine which strategy is most appropriate, activists and shareholders will be able to choose strategies that maximise social and environmental outcomes, while companies will be able to maximise shareholder value.


Very little published academic literature exists on the consequences of divestment, and very little exists on the social and environmental consequences of shareholder activism beyond its impact on the financial performance of the firm and conventional metrics of shareholder value.


Controversy (1)


One item of non-academic literature, a manifesto on a socially responsible investing blog (http://www.socialfunds.com/media/index.cgi/activism.htm), weighs the option of divestment against shareholder activism by suggesting that divestment is appropriate as a last resort, if considerable support is rallied, the firm is interested in its long-term financial sustainability, and it responds, whereas voting on shareholder resolutions is appropriate when groups of investors are interested in having an impact. It's unclear how these contexts are distinguished. DCDivest, a divestment activist group (dcdivest.org/faq/#Wouldn't shareholder activism have more impact than divestment?), contends in its manifesto that shareholder activism is better suited to changing one aspect of a company's operation, whereas divestment is appropriate when rejecting a basic business model. This answer too is inadequate as a decision model, since one company can operate multiple simultaneous business models, own several businesses, and one element of its operation may not be easily distinguished from the whole system - the business. They also identify non-responsiveness of companies to shareholder action as a plausible reason to side with divestment.


Controversy (2)


Some have claimed that resolutions that are turned down still have an impact. It's unclear how to enumerate that impact and others. The enumeration of impacts is itself controversial and, of course, methodologically challenging.


Research Question(s)


Population: In publicly listed companies

Exposure: is shareholder activism in the form of proxy voting, submitting shareholder resolutions and rallying support for shareholder resolutions

Comparator: compared to shareholder activism in the form of divestment

Outcome: associated with outcomes - shareholder resolution results (votes and resolutions) and/or indicators or the eventuation of financial (non)sustainability (divestment) and/or media attention (both)



Potential EA application:

Activists could nudge corporations to do the rest of their activism for them. To illustrate: Telstra, PayPal, UPS, Disney, Coca-Cola, Apple and plenty of other corporations have objected to specific pieces of legislation and commanded political change in different instances, independently and in unison, in different places, as described [here](http://www.onlineopinion.com.au/view.asp?article=18183). This could be a way to leverage just a controlling share of influence in an organisation to harness the whole organisation's lobbying power and magnify impact.

Lesswrong Potential Changes

17 Elo 19 March 2016 12:24PM

I have compiled many suggestions about the future of lesswrong into a document here:

https://docs.google.com/document/d/1hH9mBkpg2g1rJc3E3YV5Qk-b-QeT2hHZSzgbH9dvQNE/edit?usp=sharing

It's long and best formatted there.

In case you hate leaving this website here's the summary:

Summary

There are 3 main areas that are going to change.

  1. Technical/Direct Site Changes

    1. New home page

    2. New forum style with subdivisions

      1. New sub for “friends of lesswrong” (rationality in the diaspora)

    3. New tagging system

    4. New karma system

    5. Better RSS

  2. Social and cultural changes

    1. Positive culture; a good place to be.

    2. Welcoming process

    3. Pillars of good behaviours (the ones we want to encourage)

    4. Demonstrate by example

    5. 3 levels of social strategies (new, advanced and longtimers)

  3. Content (emphasis on producing more rationality material)

    1. For up-and-coming people to write more

      1. For the community to improve their contributions to create a stronger collection of rationality.

    2. For known existing writers

      1. To encourage them to keep contributing

      2. To encourage them to work together with each other to contribute

Less Wrong Potential Changes

Summary

Why change LW?

How will we know we have done well (the feel of things)

How will we know we have done well (KPI - technical)

Technical/Direct Site Changes

Homepage

Subs

Tagging

Karma system

Moderation

Users

RSS magic

Not breaking things

Funding support

Logistical changes

Other

Done (or Don’t do it):

Social/cultural

General initiatives

Welcoming initiatives

Initiatives for moderates

Initiatives for long-time users

Rationality Content

Target: a good 3 times a week for a year.

Approach formerly prominent writers

Explicitly invite

Place to talk with other rationalists

Pillars of purpose
(with certain sub-reddits for different ideas)

Encourage a declaration of intent to post

Specific posts

Other notes


Why change LW?

 

Lesswrong has gone through great times of growth and seen a lot of people share a lot of positive and brilliant ideas. It was hailed as a launchpad for MIRI, and in that purpose it was a success. At this point it's not needed as a launchpad any longer. While in the process of becoming a launchpad it became a nice garden to hang out in on the internet; a place for reasonably intelligent people to discuss reasonable ideas and challenge each other to update their beliefs in light of new evidence. In retiring from its “launchpad” purpose, various people have felt the garden has wilted and decayed and weeds have grown over it. In light of this, and having enough personal motivation, I have decided that I really like the garden, and I can bring it back! I just need a little help, a little magic, and some little changes. If possible I hope to make the garden what we all want it to be: a great place for amazing ideas and life-changing discussions to happen.


How will we know we have done well (the feel of things)

 

Success is going to have to be estimated by changes to the feel of the site. Unfortunately that is hard to do. As we know, outrage generates more volume than positive growth, which is going to work against us when we try to quantify progress with measurable metrics. Assuming the technical changes are made, there is still going to be progress needed on the task of socially improving things. There are many “seasoned active users” - as well as “seasoned lurkers” - who have strong opinions on the state of lesswrong and the discussion. Some would say that we risk dying of niceness; others would say that the weeds that need pulling are the rudeness.


Honestly, we risk over-policing and under-policing at the same time. There will be some not-niceness that goes unchecked and discourages the growth of future posters (potentially our future bloggers), and at the same time some niceness that motivates trolling behaviour or fails to weed out bad content, which would leave us as fluffy as the next forum. There is no easy solution to tempering both sides of this challenge. I welcome all suggestions (it looks like a karma system is our best bet).


In the meantime, I believe we should err on the side of general niceness and steelmanning. I hope to enlist some members as coaches in healthy forum-growth behaviour: good steelmanning, positive encouragement, critical feedback as well as encouragement, a welcoming committee, and an environment of content improvement and growth.


At the same time, I want everyone to keep up the heavy debate; I also want to see the best versions of ourselves coming out onto the publishing pages (and sometimes that can be the second-draft versions).


So how will we know? By trying to reduce the ugh fields around participating in LW, by seeing more content that enough people care about, by making lesswrong awesome.


The full document is just over 11 pages long.  Please go read it, this is a chance to comment on potential changes before they happen.


Meta: This post took a very long time to pull together. I read over 1000 comments and considered the ideas contained there. I don't have an accurate account of how long this took to write, but I would estimate over 65 hours of work has gone into putting it together. It's been literally weeks in the making; I really can't stress enough how long I have been trying to put this together.

If you want to help, please speak up so we can help you help us.  If you want to complain; keep it to yourself.

Thanks to the slack for keeping up with my progress, and to Vaniver, Mack, Leif, matt and others for reviewing this document.

As usual - My table of contents

Study partner matching thread

5 AspiringRationalist 25 January 2016 04:25AM

Nate Soares recommends pairing up when studying, so I figured it would be useful to facilitate that.

If you are looking for a study partner, please post a top-level comment saying:

 

  • What you want to study
  • Your level of relevant background knowledge
  • If you have sources in mind (MOOCs, textbooks, etc), what those are
  • Your time zone

 

List of techniques to help you remember names

8 Elo 11 December 2015 12:41AM

Names are very important. Everyone has one; everyone likes to know when you know their name. Everyone knows names are a part of social interaction. You can't avoid names (well, you can, but it gets tricky). To help you become more awesome at names, here is a bunch of suggestions.

 

The following is an incomplete list of some reasonably good techniques to help you remember names.  Good luck and put them to good use.


0. Everyone can learn to remember names

In a growth-mindset sense: stop thinking you can't. Stop saying it, too; everyone claims to be bad at names. Your 0th task is to actually try harder than that; if you can't do that, stop reading. Face blindness does exist, but most of these techniques will help with that as well.

 

1. Decide that names are important. 

If you don't think they are important, then change your mind. They are. Everyone says they are; everyone responds to their name. It's a fact of life that being able to address someone directly by name is useful.

 

2. Make sure you hear the name clearly the first time, and repeat it till you have it. 

I tend to shake people's hands, then not let go until they tell me their name, and I share mine clearly (sometimes twice).

 

3. Repeat their name*

Part of 2, but also – if you repeat it (at least once) you have a higher chance of remembering it. Look them in the eye and say their name. "Nice to meet you, Bob." Suddenly your brain has a good picture of their face as well as a good cue for their name. If you want to supercharge this particular part, "Nice to meet you, Bob with the hat", "Susan with the glasses", or "John in the dress" works great!

*Repeating a name also has the effect of someone correcting you if you have it wrong.  And if you are in a group - allowing other people to learn or remember a name more easily.

 

4. Associating that name.

Does that name have a meaning as another thing? Mark, Ivy, Jack.

Does that name rhyme with something? Or sound like something? Victoria, IsaBelle, Dusty, Bill, Norris, Jarrod (Jar + Rod), Leopold.

Does someone you already know have that name? Can you make a mental link between this person and the person whose name you already remember? Worst case, "oh, I have a cousin also called Alexa"-type statements are harmless.

Is the name famous? Luke, Albert, Jesus, Bill, Simba, Bruce, Clark, Edward, Victoria. Anything that you can connect to this person will help hold their name.

 

5. Write it down

Do you have a spare piece of paper? Can you write it down?  I literally carry a notebook and write names down as I hear them.  Usually people compliment me on it if they ever find out.

 

6. Running a script about it

There are natural lulls in your conversation. You don't speak like a wall of text (or if you do, you could probably learn to do this over the top). If you take a moment during one of those lulls, while someone else is talking, look around and take note of whether you have forgotten anyone's name; do so at 1 minute, 5 minutes, and 10 minutes in (or wherever necessary). Just recite each person's name in your head.

 

7. The first letter.

There are 26 English letters. If you can't remember the name, try to remember the first letter. If you get it and it doesn't jog your memory, try using the statement, "your name started with J, right?"

 

8. Facebook, LinkedIn, Anki

Use the resources available to you. Check Facebook if you forget! Similarly, if people are wearing nametags, test yourself (think – her name is Mary – then check); if you don't remember at all then certainly check. Build an Anki deck - I have yet to see a script that makes an Anki deck from a Facebook friends list, but this would be an excellent feature.
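On the Anki idea: there is no ready-made Facebook export for this, but if you can assemble a CSV of names and photo file paths yourself, the third-party genanki Python library can turn it into a face-to-name deck. A minimal sketch; the file names and numeric IDs below are arbitrary placeholders.

```python
import csv

import genanki  # third-party: pip install genanki

# Sketch only: build a face-to-name Anki deck from a hand-assembled CSV
# of "name,photo_filename" rows. The numeric model/deck IDs are arbitrary
# fixed values (genanki requires stable IDs); file names are illustrative.
model = genanki.Model(
    1607392319,
    "Name Recall",
    fields=[{"name": "Photo"}, {"name": "Name"}],
    templates=[{
        "name": "Face to name",
        "qfmt": '<img src="{{Photo}}">',
        "afmt": '{{FrontSide}}<hr id="answer">{{Name}}',
    }],
)
deck = genanki.Deck(2059400110, "People I Know")
media = []
with open("friends.csv", newline="") as f:
    for name, photo in csv.reader(f):
        deck.add_note(genanki.Note(model=model, fields=[photo, name]))
        media.append(photo)

package = genanki.Package(deck)
package.media_files = media  # bundle the photos into the .apkg file
package.write_to_file("names.apkg")
```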

 

9. Put that name somewhere.

It seems to help some people to give the name a box to go in: “this name goes with the rest of the names of people I am related to”, “this name goes with the box of the rest of my tennis club”. By allocating boxes you can bring back names via the box of names. (This works for some people.)

 

10. Mnemonics

I never bothered because, with the above list, I don't need this yet. Apparently they work excellently. It's about creating a sensory object in your head that reminds you of the thing you are after, e.g. for a person named Rose – imagine a rose on top of her head, bright red and smelling like a rose. Use all your senses and make something vivid. You want to remember? Make it vivid and ridiculous. Yes, this works, and yes, it's more effort. Names are really valuable and worth remembering.


Disclaimer: All of these things work for some of the people some of the time. You should try the ones you think will work; if they do - excellent; if they don't - oh well, keep trying.

Also see: http://lesswrong.com/lw/gx5/boring_advice_repository/8ywe

and this video on name skill: https://www.youtube.com/watch?v=G1_o4oZCEmM

Note: This is also recommended in the book "How to Win Friends and Influence People".


Meta: I wrote this post for a dojo in the Sydney Lesswrong group on the name remembering skills following a lightning talk that I gave in the Melbourne Lesswrong group on the same ideas.

time: 3hrs to write.

To see my other posts - check out my Table of contents

Any suggestions, recommendations or updates please advise below.

Online Social Groups?

6 Regex 01 May 2015 04:20AM

Are there any LessWrong Skype groups, or active live chatrooms? I've been looking around and found nothing. Well, with the exception of the LW Study Hall, but it doesn't quite fit since it is primarily for social work/study facilitation purposes with only minor breaks. This would fulfill a primarily social function.

But you ask, wouldn't a regular Skype chat reduce effectiveness by distracting people from their work? A little bit, but I'd rather the distracting thing be increasing my rationality by engaging with the ideas alongside other people who are actively trying to do the same. I expect it to have an overall positive effect on productivity, since I am bound to encounter one or two ideas that help with exactly that.

Thus, the value of such a group for me would be to discuss topics pertinent to rationality, and to increase the shininess and entertainment value of LessWrong's ideas - it is already pretty interesting, and I've had fun thinking while sitting around reading the Sequences (finished How To Actually Change Your Mind not too long ago). There are no meetups near me, and I'd rather engage via online interactions anyway.

If there is no such group already, I'd be happy to start one. Feel free to either leave your Skype name in the comments or send me a PM if you're interested.

 

edit: My Skype id is bluevertro

Australia wide - LessWrong meetup camp weekend of awesome!

3 Elo 31 March 2015 01:54AM

Posting here as a boost; I'm not sure how "nearest meetup" listings work, and I want to pass this on to everyone in Australia/New Zealand.

http://lesswrong.com/meetups/1bt

 

Camp is super interesting; it very much achieves the goal of meeting and hanging out with other brilliant lesswrong people. Last year I made such good friends that I still consider them some of the closest I have ever made, even when not talking to them for weeks at a time. Although usually I do tend to talk to them every other day.

There is also an opportunity to learn skills, this year's camp is themed around topics such as:

  • productivity
  • effectiveness
  • functioning successfully at life
  • Food, exercise, health, technology skills
  • How to win at life
  • Turbocharging training
  • High impact culture
  • Effective communication
  • CoZE

 

It's gonna be great. Please come along if you are in Australia! Part of what makes camp so great is that so many LessWrong people come along and enjoy each other's company.

 

If you know someone who is not regularly on www.lesswrong.com and likely to miss this post - please make sure to direct them to here.

 

Any questions - send me a message. :)

Where can I go to exploit social influence to fight akrasia?

9 Snorri 26 March 2015 03:39PM

Briefly: I'm looking for a person (or group) with whom I can mutually discuss self improvement and personal goals (and nothing else) on a regular basis.

Also, note, this post is an example of asking a personally important question on LW. The following idea is not meant as a general mindhack, but just as something I want to try out myself.

We are unconsciously motivated by those around us. The Milgram experiment and the Asch conformity experiment are the two best examples of social influence that come to my mind, though I'm sure there are plenty more (if you haven't heard of them, I really suggest spending a minute).

I've tended to see this drive to conform to the expectations of others as a weakness of the human mind, and yes, it can be destructive. However, as long as it's there, I should exploit it. Specifically, I want to exploit it to fight akrasia.

Utilizing positive social influence is a pretty common tactic for fighting drug addictions (like in AA), but I haven't really heard of it being used to fight unproductivity. Sharing your personal work/improvement goals with someone in the same position as yourself, along with reflecting on previous attempts, could potentially be powerful. Humans simply feel more responsible for the things they tell other people about, and less responsible for the things they bottle up and don't tell anyone (like all of my productivity strategies).

The setup that I envision would be something like this:

  • On a chat room, or some system like Skype.¹
  • Meet weekly at a very specific time for a set amount of time.
  • Your partner has a list of the productivity goals you set during the previous session. They ask you about your performance, forcing you to explain either your success or your failure.
  • Your partner tries to articulate what went wrong or what went right from your explanation (giving you a second perspective).
  • Once both parties have shared and evaluated, you set your new goals in light of your new experience (and with your partner's input, hopefully being more effective).
  • The partnership continues as long as it is useful for all parties.

I've tried doing something similar to this with my friends, but it just didn't work. We already knew each other too well, and there wasn't that air of dispassionate professionalism. We were friends, but not partners (in this sense of the word).

If something close to what I describe already exists, or at least serves the same purpose, I would love to hear about it (I already tried the LW study hall, but it wasn't really the structure or atmosphere I was going for). Otherwise, I'd be thrilled to find someone here to try doing this with. You can PM me if you don't want to post here.

 


 

1. After explaining this whole idea to someone IRL, they remarked that there would be little social influence because we would only be meeting online in a pseudo-anonymous way. However, I don't find this to be the case personally when I talk with people online, so a virtual environment would be no detriment (hopefully this isn't just unique to me).

Edit (29/3/2015): Just for the record, I wanted to say that I was able to make the connection I wanted, via a PM. Thanks LW!

[link] Large Social Networks can be Targeted for Viral Marketing with Small Seed Sets

3 Gunnar_Zarncke 01 September 2014 10:03AM

Large Social Networks can be Targeted for Viral Marketing with Small Seed Sets

It shows how easily a population can be influenced if control over a small subset exists.

A key problem for viral marketers is to determine an initial "seed" set [<1% of total size] in a network such that if given a property then the entire network adopts the behavior. Here we introduce a method for quickly finding seed sets that scales to very large networks. Our approach finds a set of nodes that guarantees spreading to the entire network under the tipping model. After experimentally evaluating 31 real-world networks, we found that our approach often finds such sets that are several orders of magnitude smaller than the population size. Our approach also scales well - on a Friendster social network consisting of 5.6 million nodes and 28 million edges we found a seed set in under 3.6 hours. We also find that highly clustered local neighborhoods and dense network-wide community structure together suppress the ability of a trend to spread under the tipping model.
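To unpack "the tipping model" for readers who haven't met it: every node has a threshold, and an inactive node activates once enough of its neighbours are active; a seed set is good if activating it alone eventually activates everyone. The sketch below simulates that spread (it is not the paper's seed-finding algorithm, which is far more scalable); the example graph and the uniform 50% threshold are assumptions for illustration.

```python
import networkx as nx  # third-party: pip install networkx

def tipping_spread(graph, seeds, fraction=0.5):
    """Simulate the tipping model: an inactive node activates once at
    least `fraction` of its neighbours are active. Returns the final
    active set. Illustrates the model only, not the paper's method."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in graph.nodes:
            if node in active:
                continue
            neighbours = list(graph.neighbors(node))
            if neighbours and sum(n in active for n in neighbours) >= fraction * len(neighbours):
                active.add(node)
                changed = True
    return active

# Toy run: a random graph and an assumed 3-node seed set.
g = nx.erdos_renyi_graph(100, 0.08, seed=1)
print(len(tipping_spread(g, seeds={0, 1, 2})))
```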

This is relevant for LW because

a) Rational agents should hedge against this.

b) A UFAI could exploit this.

c) It gives hints on how to proof systems against this 'exploit'.

Meta: social influence bias and the karma system

16 Snorri 17 February 2014 01:07AM

Given LW’s keen interest in bias, it would seem pertinent to be aware of the biases engendered by the karma system. Note: I used to be strictly opposed to comment scoring mechanisms, but witnessing the general effectiveness in which LWers use karma has largely redeemed the system for me.

In “Social Influence Bias: A Randomized Experiment” by Muchnik et al., random comments on a “social news aggregation Web site” were up-voted after being posted. The likelihood of such rigged comments receiving additional up-votes was quantified in comparison to a control group. The results show that users were significantly biased towards the randomly up-voted posts:

The up-vote treatment significantly increased the probability of up-voting by the first viewer by 32% over the control group ... Up-treated comments were not down-voted significantly more or less frequently than the control group, so users did not tend to correct the upward manipulation. In the absence of a correction, positive herding accumulated over time.

At the end of their five month testing period, the comments that had artificially received an up-vote had an average rating 25% higher than the control group. Interestingly, the severity of the bias was largely dependent on the topic of discussion:

We found significant positive herding effects for comment ratings in “politics,” “culture and society,” and “business,” but no detectable herding behavior for comments in “economics,” “IT,” “fun,” and “general news”.

The herding behavior outlined in the paper seems rather intuitive to me. If before I read a post, I see a little green ‘1’ next to it, I’m probably going to read the post in a better light than if I hadn't seen that little green ‘1’ next to it. Similarly, if I see a post that has a negative score, I’ll probably see flaws in it much more readily. One might say that this is the point of the rating system, as it allows the group as a whole to evaluate the content. However, I’m still unsettled by just how easily popular opinion was swayed in the experiment.

This certainly doesn't necessitate that we reprogram the site and eschew the karma system. Moreover, understanding the biases inherent in such a system will allow us to use it much more effectively. Discussion on how this bias affects LW in particular would be welcomed. Here are some questions to begin with:

  • Should we worry about this bias at all? Are its effects negligible in the scheme of things?
  • How does the culture of LW contribute to this herding behavior? Is it positive or negative?
  • If there are damages, how can we mitigate them?

Notes:

In the paper, they mentioned that comments were not sorted by popularity, therefore “mitigating the selection bias.” This of course implies that the bias would be more severe on forums where comments are sorted by popularity, such as this one.

For those interested, another enlightening paper is “Overcoming the J-shaped distribution of product reviews” by Nan Hu et al, which discusses rating biases on websites such as amazon. User gwern has also recommended a longer 2007 paper by the same authors which the one above is based upon: "Why do Online Product Reviews have a J-shaped Distribution? Overcoming Biases in Online Word-of-Mouth Communication"

Democracy and rationality

8 homunq 30 October 2013 12:07PM

Note: This is a draft; so far, about the first half is complete. I'm posting it to Discussion for now; when it's finished, I'll move it to Main. In the mean time, I'd appreciate comments, including suggestions on style and/or format. In particular, if you think I should(n't) try to post this as a sequence of separate sections, let me know.

Summary: You want to find the truth? You want to win? You're gonna have to learn the right way to vote. Plurality voting sucks; better voting systems are built from the blocks of approval, medians (Bucklin cutoffs), delegation, and pairwise opposition. I'm working to promote these systems and I want your help.

Contents: 1. Overblown¹ rhetorical setup ... 2. Condorcet's ideals and Arrow's problem ... 3. Further issues for politics ... 4. Rating versus ranking; a solution? ... 5. Delegation and SODA ... 6. Criteria and pathologies ... 7. Representation, Proportional representation, and Sortition ... 8. What I'm doing about it and what you can ... 9. Conclusions and future directions ... 10. Appendix: voting systems table ... 11. Footnotes

1.

This is a website focused on becoming more rational. But that can't just mean getting a black belt in individual epistemic rationality. In a situation where you're not the one making the decision, that black belt is just a recipe for frustration.

Of course, there's also plenty of content here about how to interact rationally; how to argue for truth, including both hacking yourself to give in when you're wrong and hacking others to give in when they are. You can learn plenty here about Aumann's Agreement Theorem on how two rational Bayesians should never knowingly disagree.

But "two rational Bayesians" isn't a whole lot better as a model for society than "one rational Bayesian". Aspiring to be rational is well and good, but the Socratic ideal of a world tied together by two-person dialogue alone is as unrealistic as the sociopath's ideal of a world where their own voice rules alone. Society needs structures for more than two people to interact. And just as we need techniques for checking irrationality in one- and two-person contexts, we need them, perhaps all the more, in multi-person contexts.

Most of the basic individual and dialogical rationality techniques carry over. Things like noticing when you are confused, or making your opponent's arguments into a steel man, are still perfectly applicable. But there's also a new set of issues when n>2: the issues of democracy and voting. For a group of aspiring rationalists to come to a working consensus, of course they need to begin by evaluating and discussing the evidence, but eventually it will be time to cut off the discussion and just vote. When they do so, they should understand the strengths and pitfalls of voting in general and of their chosen voting method in particular.

And voting's not just useful for an aspiring rationalist community. As it happens, it's an important part of how governments are run. Discussing politics may be a mind-killer in many contexts, but there are an awful lot of domains where politics is a part of the road to winning.² Understanding voting processes a little bit can help you navigate that road; understanding them deeply opens the possibility of improving that road and thus winning more often.

2. Collective rationality: Condorcet's ideals and Arrow's problem

Imagine it's 1785, and you're a member of the French Academy of Sciences. You're rubbing elbows with most of the giants of science and mathematics of your day: Coulomb, Fourier, Lalande, Lagrange, Laplace, Lavoisier, Monge; even the odd foreign notable like Franklin with his ideas to unify electrostatics and electric flow.

[Image captions: "They'll remember your names" and "One day, they'll put your names in front of lots of cameras (even though that foreign yokel Franklin will be in more pictures)"]

And this academy, with many of the smartest people in the world, has votes on stuff. Who will be our next president; who should edit and schedule our publications; etc. You're sure that if you all could just find the right way to do the voting, you'd get the right answer. In fact, you can easily prove that, or something like it: if a group is deciding between one right and one wrong option, and each member is independently more than 50% likely to get it right, then as the group size grows the chance of a majority vote choosing the right option goes to 1.
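In symbols (a standard statement of the jury theorem, added here for concreteness rather than taken from the original post): if each of $n$ voters is independently correct with probability $p$, the probability that a majority is correct is

```latex
P_n \;=\; \sum_{k > n/2} \binom{n}{k} \, p^{k} (1-p)^{\,n-k},
\qquad
\lim_{n \to \infty} P_n = 1 \quad \text{whenever } p > \tfrac{1}{2},
```

since by the law of large numbers the fraction of correct votes concentrates around $p$, which sits above the 50% line.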

But somehow, there's still annoying politics getting in the way. Some people seem to win the elections simply because everyone expects them to win. So last year, the academy decided on a new election system to use, proposed by your rival, Charles de Borda, in which candidates get different points for being a voter's first, second, or third choice, and the one with the most points wins. But you're convinced that this new system will lead to the opposite problem: people who win the election precisely because nobody expected them to win, by getting the points that voters strategically don't want to give to a strong rival. But when people point that possibility out to Borda, he only huffs that "my system is meant for honest men!"

So with your proof of the above intuitive, useful result about two-way elections, you try to figure out how to reduce an n-way election to the two-candidate case. Clearly, you can show that Borda's system will frequently give the wrong results from that perspective. But frustratingly, you find that there could sometimes be no right answer; that there will be no candidate who would beat all the others in one-on-one races. A crack has opened up; could it be that the collective decisions of intelligent individual rational agents could be irrational?

Of course, the "you" in this story is the Marquis de Condorcet, and the year 1785 is when he published his Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix, a work devoted to the question of how to acheive collective rationality. The theorem referenced above is Condorcet's Jury Theorem, which seems to offer hope that democracy can point the way from individually-imperfect rationality towards an ever-more-perfect collective rationality. Just as Aumann's Agreement Theorem shows that two rational agents should always move towards consensus, the Condorcet Jury Theorem apparently shows that if you have enough rational agents, the resulting consensus will be correct.

But as I said, Condorcet also opened a crack in that hope: the possibility that collective preferences will be cyclical. If the assumptions of the jury theorem don't hold — if each voter doesn't have a >50% chance of being right on a randomly-selected question, OR if the correctness of two randomly-selected voters is not independent and uncorrelated — then individually-sensible choices can lead to collectively-ridiculous ones. 

What do I mean by "collectively-ridiculous"? Let's imagine that the Rationalist Marching Band is choosing the colors for their summer, winter, and spring uniforms, and that they all agree that the only goal is to have as much as possible of the best possible colors. The summer-style uniforms come in red or blue, and they vote and pick blue; the winter-style ones come in blue or green, and they pick green; and the spring ones come in green or red, and they pick red.

Obviously, this makes us doubt their collective rationality. If, as they all agree they should, they had a consistent favorite color, they should have chosen that color both times that it was available, rather than choosing three different colors in the three cases. Theoretically, the salesperson could use such a fact to pump money out of them; for instance, offering to let them "trade up" their spring uniform from red to blue, then to green, then back to red, charging them a small fee each time; if they voted consistently as above, they would agree to each trade (though of course in reality human voters would probably catch on to the trick pretty soon, so the abstract ideal of an unending circular money pump wouldn't work).
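To make the cycle concrete: the post doesn't give the band's actual ballots, but the classic three-faction profile reproduces exactly these three votes, and a few lines of Python confirm it (the ballots are hypothetical):

```python
from itertools import combinations

# Hypothetical ballots: three equal factions of band members, each listing
# colors from most to least preferred.
ballots = [
    ("red", "green", "blue"),
    ("green", "blue", "red"),
    ("blue", "red", "green"),
]

def pairwise_winner(a, b):
    a_wins = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
    return a if 2 * a_wins > len(ballots) else b

for a, b in combinations(["red", "green", "blue"], 2):
    print(f"{a} vs {b}: majority prefers {pairwise_winner(a, b)}")
# red beats green, green beats blue, blue beats red: every pairwise
# majority is 2-1, yet together they form a cycle with no stable winner,
# which is exactly the money-pump structure described above.
```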

This is the kind of irrationality that Condorcet showed was possible in collective decisionmaking. He also realized that there was a related issue with logical inconsistencies. If you were to take a vote on 3 logically related propositions — say, "Should we have a Minister of Silly Walks, to be appointed by the Chancellor of the Excalibur", "Should we have a Minister of Silly Walks, but not appointed by the Chancellor of the Excalibur", and "Should we in fact have a Minister of Silly Walks at all", where the third cannot be true unless one of the first two is — then you could easily get majority votes for inconsistent results — in this case, no, no, and yes, respectively. Obviously, there are many ways to fix the problem in this simple case — probably many less-wrong'ers would suggest some Bayesian tricks related to logical networks and treating votes as evidence⁸ — but it's a tough problem in general even today, especially when the logical relationships can be complex, and Condorcet was quite right to be worried about its implications for collective rationality.³

And that's not the only tough problem he correctly foresaw. Over a century and a half later and an ocean away, in 1951, Kenneth Arrow showed that it was impossible for a preferential voting system to avoid the problem of a "Condorcet cycle" of preferences. Arrow's theorem shows that any voting system which can consistently give the same winner (or, in ties, winners) for the same voter preferences; which does not make one voter the effective dictator; which is sure to elect a candidate if all voters prefer them; and which will switch the results for two candidates if you switch their names on all the votes... must exhibit, in at least some situation, the pathology that befell the Rationalist Marching Band above, or in other words, must fail "independence of irrelevant alternatives".

Arrow's theorem is far from obvious a priori, but its proof is not hard to understand intuitively using Condorcet's insight. Say that there are three candidates, X, Y, and Z, with roughly equal bases of support; and that they form a Condorcet cycle, because in two-way races, X would beat Y with help from Z supporters, Y would beat Z with help from X supporters, and Z would beat X with help from Y supporters. So whoever wins in the three-way race — say, X — just remove the one who would have lost to them — Y in this case — and that "irrelevant" change will change the winner to be the third — Z in this case.

Summary of above: Collective rationality is harder than individual or two-way rationality. Condorcet saw the problem and tried to solve it, but Arrow saw that Condorcet had been doomed to fail.

3. Further issues for politics

So Condorcet's ideals of better rationality through voting appear to be in ruins. But at least we can hope that voting is a good way to do politics, right?

Not so fast. Arrow's theorem quickly led to further disturbing results. Alan Gibbard (and also Mark Satterthwaite) extended it to show that there is no voting system which doesn't encourage voting strategy. That is, if you view a voting system as a class of games where the finite players and finite available strategies are fixed, no player is effectively a dictator, and the only thing that varies is the payoffs for each player from each outcome, there is no voting system where you can derive your best strategic vote purely by looking "honestly" at your own preferences; there is always the possibility of situations where you have to second-guess what others will do.

Amartya Sen piled on with another depressing extension of Arrow's logic. He showed that there is no possible way of aggregating individual choices into collective choice that satisfies two simple criteria. First, it shouldn't choose Pareto-dominated outcomes; if everyone prefers situation XYZ to ABC, it doesn't choose ABC. Second, it is "minimally liberal"; that is, there are at least two people who each get to freely make their own decision on at least one specific issue each, no matter what, so for instance I always get to decide between X and A (in Gibbard's⁴ example, colors for my house), and you always get to decide between Y and B (colors for your own house). The problem is that if you nosily care more about my house's color, the decision that should have been mine, and I nosily care about yours, more than we each care about our own, then the Pareto-dominant situation is the one where we don't decide our own houses; and that nosiness could, in theory, be the case for any specific choice that, a priori, someone might have labelled as our Inalienable Right. It's not such a surprising result when you think about it that way, but it does clearly show that unswerving ideals of Democracy and Liberty will never truly be compatible.

Meanwhile, "public choice" theorists⁵ like Duncan Black, James Buchanan, etc. were busy undermining the idea of democratic government from another direction: the motivations of the politicians and bureaucrats who are supposed to keep it running. They showed that various incentives, including the strange voting scenarios explored by Condorcet and Arrow, would tend open a gap between the motives of the people and those of the government, and that strategic voting and agenda-setting within a legislature would tend to extend the impact of that gap. Where Gibbard and Sen had proved general results, these theorists worked from specific examples. And in one aspect, at least, their analysis is devastatingly unanswerable: the near-ubiquitous "democratic" system of plurality voting, also known as first-past-the-post or vote-for-one or biggest-minority-wins, is terrible in both theory and practice.

So, by the 1980s, things looked pretty depressing for the theory of democracy. Politics, the theory went, was doomed forever to be worse than a sausage factory: disgusting on the inside and distasteful even from outside.

Should an ethical rationalist just give up on politics, then? Of course not. As long as the results it produces are important, it's worth trying to optimize. And as soon as you take the engineer's attitude of optimizing, instead of dogmatically searching for perfection or uselessly whining about the problems, the results above don't seem nearly as bad.

From this engineer's perspective, public choice theory serves as an unsurprising warning that tradeoffs are necessary, but more usefully, as a map of where those tradeoffs can go particularly wrong. In particular, its clearest lesson, in all-caps bold with a blink tag, that PLURALITY IS BAD, can be seen as a hopeful suggestion that other voting systems may be better. Meanwhile, the logic of both Sen's and Gibbard's theorems are built on Arrow's earlier result. So if we could find a way around Arrow, it might help resolve the whole issue.

Summary of above: Democracy is the worst political system... (...except for all the others?) But perhaps it doesn't have to be quite so bad as it is today.

4. Rating versus ranking

So finding a way around Arrow's theorem could be key to this whole matter. As a mathematical theorem, of course, the logic is bulletproof. But it does make one crucial assumption: that the only inputs to a voting system are rankings, that is, voters' ordinal preference orders for the candidates. No distinctions can be made using ratings or grades; that is, as long as you prefer X to Y to Z, the strength of those preferences can't matter. Whether you put Y almost up near X or way down next to Z, the result must be the same.

Relax that assumption, and it's easy to create a voting system which meets Arrow's criteria. It's called Score voting⁶, and it just means rating each candidate with a number from some fixed interval (abstractly speaking, a real number; but in practice, usually an integer); the scores are added up and the highest total or average wins. (Unless there are missing values, of course, total or average amount to the same thing.) You've probably used it yourself on Yelp, IMDB, or similar sites. And it clearly passes all of Arrow's criteria. Non-dictatorship? Check. Unanimity? Check. Symmetry over switching candidate names? Check. Independence of irrelevant alternatives? In the mathematical sense — that is, as long as the scores for other candidates are unchanged — check.
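Mechanically, score voting is nothing more than addition. A toy tally in Python (ballots invented for illustration):

```python
# Score voting tally: each ballot rates every candidate on a fixed 0-5
# scale; the highest total (equivalently, average) wins. Ballots invented.
ballots = [
    {"X": 5, "Y": 4, "Z": 0},
    {"X": 0, "Y": 3, "Z": 5},
    {"X": 2, "Y": 5, "Z": 1},
]

totals = {}
for ballot in ballots:
    for candidate, score in ballot.items():
        totals[candidate] = totals.get(candidate, 0) + score

winner = max(totals, key=totals.get)
print(totals, "->", winner)  # {'X': 7, 'Y': 12, 'Z': 6} -> Y
```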

So score voting is an ideal system? Well, it's certainly a far sight better than plurality. But let's check it against Sen and against Gibbard.

Sen's theorem was based on a logic similar to Arrow's. However, while Arrow's theorem deals with broad outcomes like which candidate wins, Sen's deals with finely-grained outcomes like (in the example we discussed) how each separate house should be painted. Extending the cardinal numerical logic of score voting to such finely-grained outcomes, we find we've simply reinvented markets. While markets can be great things and often work well in practice, Sen's result still holds in this case; if everything is on the market, then there is no decision which is always yours to make. But since, in practice, as long as you aren't destitute, you tend to be able to make the decisions you care the most about, Sen's theorem seems to have lost its bite in this context.

What about Gibbard's theorem on strategy? Here, things are not so easy. Yes, Gibbard, like Sen, parallels Arrow. But while Arrow deals with what's written on the ballot, Gibbard deals with what's in the voter's head. In particular, if a voter prefers X to Y by even the tiniest margin, Gibbard assumes (not unreasonably) that they may be willing to vote however they need to, if by doing so they can ensure X wins instead of Y. Thus, the internal preferences Gibbard treats are, effectively, just ordinal rankings; and the cardinal trick by which score voting avoided Arrovian problems no longer works.

How does score voting deal with strategic issues in practice? The answer to that has two sides. On the one hand, score never requires voters to be actually dishonest. Unlike the situation in a ranked system such as plurality, where we all know that the strategic vote may be to dishonestly ignore your true favorite and vote for a "lesser evil" among the two frontrunners, in score voting you never need to vote a less-preferred option above a more-preferred option. At worst, all you have to do is exaggerate some distinctions and minimize others, so that you might end up giving equal votes to less- and more-preferred options.

Did I say "at worst"? I meant, "almost always". Voting strategy only matters to the result when, aside from your vote, two or more candidates are within one vote of being tied for first. Except in unrealistic, perfectly-balanced conditions, as the number of voters rises, the probability that anyone but the two a priori frontrunner candidates is in on this tie falls to zero.⁷ Thus, in score voting, the optimal strategy is nearly always to vote your preferred frontrunner and all candidate above at the maximum, and your less-preferred frontrunner and all candidates below at the minimum. In other words, strategic score voting is basically equivalent to approval voting, where you give each candidate a 1 or 0 and the highest total wins.

In one sense, score voting reducing to approval is OK. Approval voting is not a bad system at all. For instance, if there's a known majority Condorcet winner — a candidate who could beat any other by a majority in a one-on-one race — and voters are strategic — they anticipate the unique strong Nash equilibrium, the situation where no group of voters could improve the outcome for all its members by changing their votes, whenever such a unique equilibrium exists — then the Condorcet winner will win under approval. That's a lot of words to say that approval will get the "democratic" results you'd expect in most cases.

But in another sense, it's a problem. If one side of an issue is more inclined to be strategic than the other side, the more-strategic faction could win even if it's a minority. That clashes with many people's ideals of democracy; and worse, it encourages mind-killing political attitudes, where arguments are used as soldiers rather than as ways to seek the truth.

But score and approval voting are not the only systems which escape Arrow's theorem through the trapdoor of ratings. If score voting, using the average of voter ratings, too strongly encourages voters to strategically seek extreme ratings, then why not use the median rating instead? We know that medians are less sensitive to outliers than averages. And indeed, median-based systems are more resistant to one-sided strategy than average-based ones, giving better hope for reasonable discussion to prosper. That is to say, in a simple model, a minority would need twice as much strategic coordination under median as under average in order to overcome a majority; and there's good reason to believe that, because of natural factional separation, reality is even more favorable to median systems than that model suggests.

There are several different median systems available. Early versions, collectively called "Bucklin voting", were briefly used in over a dozen US cities during the Progressive Era (roughly 1910-1925). These systems count all top preferences first, then add lower preferences one level at a time until some candidate (or candidates) reaches a majority. The reforms were all rolled back soon after, principally by party machines upset at upstart challenges or victories. The possibility of multiple, simultaneous majorities is a principal reason for the variety of Bucklin/median systems. Modern proposals include Majority Approval Voting, Majority Judgment, and Graduated Majority Judgment, which would probably give the same winner almost all of the time. An important detail is that most median-system ballots use verbal or letter grades rather than numeric scores. This is justifiable because the median is preserved under any monotonic transformation, and studies suggest that it would help discourage strategic voting.
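
Here is a minimal sketch of the Bucklin-style count just described. The grade names and the rule for breaking simultaneous majorities (larger total wins) are illustrative assumptions; as noted, real proposals differ precisely in how they handle that case.

```python
def bucklin_winner(ballots, grades):
    """ballots: list of dicts mapping every candidate to a grade.
    grades: list of grade names, best first."""
    majority = len(ballots) // 2 + 1
    totals = {c: 0 for c in ballots[0]}
    for level in grades:                       # add one grade level at a time
        for ballot in ballots:
            for cand, grade in ballot.items():
                if grade == level:
                    totals[cand] += 1
        leaders = [c for c, n in totals.items() if n >= majority]
        if leaders:                            # someone reached a majority
            return max(leaders, key=lambda c: totals[c])
    return max(totals, key=totals.get)         # no majority at any level

ballots = [{"A": "good", "B": "fair"},
           {"A": "fair", "B": "good"},
           {"A": "good", "B": "poor"}]
print(bucklin_winner(ballots, ["good", "fair", "poor"]))  # -> 'A'
```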

Serious attention to rated systems like approval, score, and median systems barely began in the 1980s, and didn't really pick up until 2000. Meanwhile, the increased amateur interest in voting systems in this period — perhaps partially attributable to the anomalous 2000 US presidential election, or to more recent anomalies in the UK, Canada, and Australia — has led to new discoveries in ranked systems as well. Though such systems are still clearly subject to Arrow's theorem, new "improved Condorcet" methods, which use certain tricks to count a voter's equal preferences between two candidates on either side of the ledger depending on the strategic needs, seem to offer promise that Arrovian pathologies can be kept to a minimum.

With this embarrassment of riches of systems to choose from, how should we evaluate which is best? Well, at least one thing is a clear consensus: plurality is a horrible system. Beyond that, things are more controversial; there are dozens of possible objective criteria one could formulate, and any system's inventor and/or supporters can usually formulate some criterion by which it shines.

Ideally, we'd like to measure the utility of each voting system in the real world. Since that's impossible — it would take not just a statistically-significant sample of large-scale real-world elections for each system, but also some way to measure the true internal utility of a result in situations where voters are inevitably strategically motivated to lie about that utility — we must do the next best thing, and measure it in a computer, with simulated voters whose utilities are assigned measurable values. Unfortunately, that requires assumptions about how those utilities are distributed, how voter turnout is decided, and how and whether voters strategize. At best, those assumptions can be varied, to see if findings are robust.
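
A bare-bones sketch of such a simulation, under deliberately crude assumptions (i.i.d. uniform utilities, fully honest voters, no turnout model); serious studies, like the one described next, vary all of these:

```python
import random

def mean_regret(n_voters=500, n_cands=5, trials=500):
    """Average shortfall between the best achievable social utility and the
    utility of each system's winner (a crude 'Bayesian regret')."""
    regret = {"plurality": 0.0, "score": 0.0}
    for _ in range(trials):
        utils = [[random.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        social = [sum(v[c] for v in utils) for c in range(n_cands)]
        best = max(social)
        # Plurality: each voter names only their single favorite.
        plur = max(range(n_cands),
                   key=lambda c: sum(v.index(max(v)) == c for v in utils))
        # Honest, un-normalized score voting just sums utilities here, so its
        # regret is zero by construction; strategy and normalization change that.
        score = max(range(n_cands), key=lambda c: social[c])
        regret["plurality"] += best - social[plur]
        regret["score"] += best - social[score]
    return {k: v / trials for k, v in regret.items()}

print(mean_regret())  # plurality shows visibly higher regret than score
```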

In 2000, Warren Smith performed such simulations for a number of voting systems. He found that score voting had, very robustly, one of the top expected social utilities (or, as he termed it, lowest Bayesian regret). Close on its heels were a median system and approval voting. Unfortunately, though he explored a wide parameter space in terms of voter utility models and inherent strategic inclination of the voters, his simulations did not include voters who were more inclined to be strategic when strategy was more effective. His strategic assumptions were also unfavorable to ranked systems, and slightly unrealistic in other ways. Still, though certain of his numbers must be taken with a grain of salt, some of his results were large and robust enough to be trusted. For instance, he found that plurality voting and instant runoff voting were clearly inferior to rated systems; and that approval voting, even at its worst, captured over half of the benefit (relative to plurality) that any other system offered.

Summary of above: Rated systems, such as approval voting, score voting, and Majority Approval Voting, can avoid the problems of Arrow's theorem. Though they are certainly not immune to issues of strategic voting, they are a clear step up from plurality. Starting with this section, the opinions are my own; the two prior sections were based on general expert views on the topic.

5. Delegation and SODA

Rated systems are not the only way to try to beat the problems of Arrow and Gibbard (/Satterthwaite).

Summary of above:

6. Criteria and pathologies

do.

Summary of above:

7. Representation, proportionality, and sortition

do.

Summary of above:

8. What I'm doing about it and what you can

do.

Summary of above:

9. Conclusions and future directions

do.

Summary of above:

10. Appendix: voting systems table

Compliance of selected systems (table)

The following table shows which of the above criteria are met by several single-winner systems. Note: contains some errors; I'll carefully vet this when I'm finished with the writing. Still generally reliable though.

| System | Majority / MMC | Condorcet / Majority Condorcet | Condorcet loser | Monotone | Consistency / Participation | Reversal symmetry | IIA | Cloneproof | Polytime / Resolvable | Summable | Equal rankings allowed | Later prefs allowed | Later-no-harm / Later-no-help | FBC: no favorite betrayal |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Approval [nb 1] | Ambiguous | No / strategic yes [nb 2] | No | Yes | Yes [nb 2] | Yes | Ambiguous | Ambiguous [nb 3] | Yes | O(N) | Yes | No | [nb 4] | Yes |
| Borda count | No | No | Yes | Yes | Yes | Yes | No | No (teaming) | Yes | O(N) | No | Yes | No | No |
| Copeland | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (crowding) | Yes/No | O(N²) | Yes | Yes | No | No |
| IRV (AV) | Yes | No | Yes | No | No | No | No | Yes | Yes | O(N!) [nb 5] | No | Yes | Yes | No |
| Kemeny-Young | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (teaming) | No/Yes | O(N²) [nb 6] | Yes | Yes | No | No |
| Majority Judgment [nb 7] | Yes [nb 8] | No / strategic yes [nb 2] | No [nb 9] | Yes | No [nb 10] | No [nb 11] | Yes | Yes | Yes | O(N) [nb 12] | Yes | Yes | No [nb 13] / Yes | Yes |
| Minimax | Yes/No | Yes [nb 14] | No | Yes | No | No | No | No (spoilers) | Yes | O(N²) | Some variants | Yes | No [nb 14] | No |
| Plurality | Yes/No | No | No | Yes | Yes | No | No | No (spoilers) | Yes | O(N) | No | No | [nb 4] | No |
| Range voting [nb 1] | No | No / strategic yes [nb 2] | No | Yes | Yes [nb 2] | Yes | Yes [nb 15] | Ambiguous [nb 3] | Yes | O(N) | Yes | Yes | No | Yes |
| Ranked pairs | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No |
| Runoff voting | Yes/No | No | Yes | No | No | No | No | No (spoilers) | Yes | O(N) [nb 16] | No | No [nb 17] | Yes [nb 18] | No |
| Schulze | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No |
| SODA voting [nb 19] | Yes | Strategic yes / yes | Yes | Ambiguous [nb 20] | Yes / up to 4 cand. [nb 21] | Yes [nb 22] | Up to 4 cand. [nb 21] | Up to 4 cand. (then crowds) [nb 21] | Yes [nb 23] | O(N) | Yes | Limited [nb 24] | Yes | Yes |
| Random winner / arbitrary winner [nb 25] | No | No | No | NA | No | Yes | Yes | NA | Yes/No | O(1) | No | No |  | Yes |
| Random ballot [nb 26] | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes/No | O(N) | No | No |  | Yes |

"Yes/No", in a column which covers two related criteria, signifies that the given system passes the first criterion and not the second one.

  1. These criteria assume that all voters vote their true preference order. This is problematic for Approval and Range, where various votes are consistent with the same order. See approval voting for compliance under various voter models.
  2. In Approval, Range, and Majority Judgment, if all voters have perfect information about each other's true preferences and use rational strategy, any Majority Condorcet or Majority winner will be strategically forced – that is, win in the unique strong Nash equilibrium. In particular, if every voter knows that "A or B are the two most likely to win" and places their "approval threshold" between the two, then the Condorcet winner, if one exists and is in the set {A,B}, will always win. These systems also satisfy the majority criterion in the weaker sense that any majority can force their candidate to win, if it so desires. (However, as the Condorcet criterion is incompatible with the participation criterion and the consistency criterion, these systems cannot satisfy those criteria in this Nash-equilibrium sense. Laslier, J.-F. (2006) "Strategic approval voting in a large electorate," IDEP Working Papers No. 405 (Marseille, France: Institut D'Economie Publique).)
  3. The original independence of clones criterion applied only to ranked voting methods (T. Nicolaus Tideman, "Independence of clones as a criterion for voting rules", Social Choice and Welfare Vol. 4, No. 3 (1987), pp. 185–206). There is some disagreement about how to extend it to unranked methods, and this disagreement affects whether approval and range voting are considered independent of clones. If the definition of "clones" is that "every voter scores them within ±ε in the limit ε→0+", then range voting is immune to clones.
  4. Approval and Plurality do not allow later preferences. Technically speaking, this means that they pass the technical definition of the LNH criteria: if later preferences or ratings are impossible, then such preferences cannot help or harm. However, from the perspective of a voter, these systems do not pass these criteria. Approval, in particular, encourages the voter to give the same ballot rating to a candidate who, in another voting system, would get a later rating or ranking. Thus, for approval, the practically meaningful criterion would be not "later-no-harm" but "same-no-harm" - something neither approval nor any other system satisfies.
  5. The number of piles that can be summed from various precincts is floor((e-1) N!) - 1.
  6. Each prospective Kemeny-Young ordering has a score equal to the sum of the pairwise entries that agree with it, and so the best ordering can be found using the pairwise matrix.
  7. Bucklin voting, with skipped and equal rankings allowed, meets the same criteria as Majority Judgment; in fact, Majority Judgment may be considered a form of Bucklin voting. Without allowing equal rankings, Bucklin's criteria compliance is worse; in particular, it fails Independence of Irrelevant Alternatives, which for a ranked method like this variant is incompatible with the Majority Criterion.
  8. Majority Judgment passes the rated majority criterion (a candidate rated solo-top by a majority must win). It does not pass the ranked majority criterion, which is incompatible with Independence of Irrelevant Alternatives.
  9. Majority Judgment passes the "majority Condorcet loser" criterion; that is, a candidate who loses to all others by a majority cannot win. However, if some of the losses are not by a majority (including equal rankings), the Condorcet loser can, theoretically, win in MJ, although such scenarios are rare.
  10. Balinski and Laraki, Majority Judgment's inventors, point out that it meets a weaker criterion they call "grade consistency": if two electorates give the same rating for a candidate, then so will the combined electorate. Majority Judgment explicitly requires that ratings be expressed in a "common language", that is, that each rating have an absolute meaning. They claim that this is what makes "grade consistency" significant. Balinski M. and R. Laraki (2007) "A theory of measuring, electing and ranking". Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720-8725.
  11. Majority Judgment can actually pass or fail reversal symmetry depending on the rounding method used to find the median when there are even numbers of voters. For instance, in a two-candidate, two-voter race, if the ratings are converted to numbers and the two central ratings are averaged, then MJ meets reversal symmetry; but if the lower one is taken, it does not, because a candidate with ["fair","fair"] would beat a candidate with ["good","poor"] with or without reversal. However, for rounding methods which do not meet reversal symmetry, the chances of breaking it are on the order of the inverse of the number of voters; this is comparable with the probability of an exact tie in a two-candidate race, and when there's a tie, any method can break reversal symmetry.
  12. Majority Judgment is summable at order KN, where K, the number of ranking categories, is set beforehand.
  13. Majority Judgment meets a related, weaker criterion: ranking an additional candidate below the median grade (rather than your own grade) of your favorite candidate cannot harm your favorite.
  14. A variant of Minimax that counts only pairwise opposition, not opposition minus support, fails the Condorcet criterion and meets later-no-harm.
  15. Range satisfies the mathematical definition of IIA, that is, if each voter scores each candidate independently of which other candidates are in the race. However, since a given range score has no agreed-upon meaning, it is thought that most voters would either "normalize" or exaggerate their vote so as to rate at least one candidate at the top and one at the bottom of the possible range. In this case, Range would not be independent of irrelevant alternatives. Balinski M. and R. Laraki (2007), op. cit.
  16. Once for each round.
  17. Later preferences are only possible between the two candidates who make it to the second round.
  18. That is, second-round votes cannot harm candidates already eliminated.
  19. Unless otherwise noted, for SODA's compliances:
    • Delegated votes are considered to be equivalent to voting the candidate's predeclared preferences.
    • Ballots only are considered (in other words, voters are assumed not to have preferences that cannot be expressed by a delegated or approval vote).
    • Since at the time of assigning approvals on delegated votes there is always enough information to find an optimum strategy, candidates are assumed to use such a strategy.
  20. For up to 4 candidates, SODA is monotonic. For more than 4 candidates, it is monotonic for adding an approval, for changing from an approval to a delegation ballot, and for changes in a candidate's preferences. However, if changes in a voter's preferences are executed as changes from a delegation to an approval ballot, such changes are not necessarily monotonic with more than 4 candidates.
  21. For up to 4 candidates, SODA meets the Participation, IIA, and Cloneproof criteria. It can fail these criteria in certain rare cases with more than 4 candidates. This is considered here as a qualified success for the Consistency and Participation criteria, which do not intrinsically have to do with numerous candidates, and as a qualified failure for the IIA and Cloneproof criteria, which do.
  22. SODA voting passes reversal symmetry for all scenarios that are reversible under SODA; that is, if each delegated ballot has a unique last choice. In other situations, it is not clear what it would mean to reverse the ballots, but there is always some possible interpretation under which SODA would pass the criterion.
  23. SODA voting is always polytime computable. There are some cases where the optimal strategy for a candidate assigning delegated votes may not be polytime computable; however, such cases are entirely implausible for a real-world election.
  24. Later preferences are only possible through delegation, that is, if they agree with the predeclared preferences of the favorite.
  25. Random winner: a uniformly randomly chosen candidate wins. Arbitrary winner: some external entity, not a voter, chooses the winner. These systems are not, properly speaking, voting systems at all, but are included to show that even a horrible system can still pass some of the criteria.
  26. Random ballot: a uniformly randomly chosen ballot determines the winner. This and closely related systems are of mathematical interest because they are the only possible systems which are truly strategy-free; that is, your best vote will never depend on anything about the other voters. They also satisfy both consistency and IIA, which is impossible for a deterministic ranked system. However, this system is not generally considered a serious proposal for a practical method.

11. Footnotes

¹ When I call my introduction "overblown", I mean that I reserve the right to make broad generalizations there, without getting distracted by caveats. If you don't like this style, feel free to skip to section 2.

² Of course, the original "politics is a mind killer" sequence was perfectly clear about this: "Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational." The focus here is on the first part of that quote, because I think Less Wrong as a whole has moved too far in the direction of avoiding politics as not a domain for rationalists.

³ Bayes developed his theorem decades before Condorcet's Essai, but Condorcet probably didn't know of it, as it wasn't popularized by Laplace until about 30 years later, after Condorcet was dead.

⁴ Yes, this happens to be the same Alan Gibbard from the previous paragraph.

⁵ Confusingly, "public choice" refers to a school of thought, while "social choice" is the name for the broader domain of study. Stop reading this footnote now if you don't want to hear mind-killing partisan identification. "Public choice" theorists are generally seen as politically conservative in the solutions they suggest. It seems to me that the broader "social choice" has avoided taking on a partisan connotation in this sense.

⁶ Score voting is also called "range voting" by some. It is not a particularly new idea — for instance, the "loudest cheer wins" rule of ancient Sparta, and even aspects of honeybees' process for choosing new hives, can be seen as score voting — but it was first analyzed theoretically around 2000. Approval voting, which can be seen as a form of score voting where the scores are restricted to 0 and 1, had entered theory only about two decades earlier, though it too has a history of practical use back to antiquity.

⁷ OK, fine, this is a simplification. As a voter, you have imperfect information about the true level of support and propensity to vote in the superpopulation of eligible voters, so in reality the chances of a decisive tie between candidates other than your two expected frontrunners are non-zero. Still, in most cases, they're utterly negligible.

⁸ This article will focus more on the literature on multi-player strategic voting (competing boundedly-instrumentally-rational agents) than on multi-player Aumann (cooperating boundedly-epistemically-rational agents). If you're interested in the latter, here are some starting points: Scott Aaronson's work is, as far as I know, the state of the art on 2-player Aumann, but its framework assumes that the players have a sophisticated ability to empathize and reason about each other's internal knowledge, and the problems with this that Aaronson plausibly handwaves away in the 2-player case are probably less tractable in the multi-player one. Dalkiran et al. deal with an Aumann-like problem over a social network; they find that attempts to "jump ahead" to a final consensus value instead of simply dumbly approaching it asymptotically can lead to failure to converge. And Kanoria et al. have perhaps the most interesting result from the perspective of this article; they use the convergence of agents using a naive voting-based algorithm to give a nice upper bound on the difficulty of full Bayesian reasoning itself. None of these papers explicitly considers the problem of coming to consensus on more than one logically-related question at once, though Aaronson's work at least would clearly be easy to extend in that direction, and I think such extensions would be unsurprisingly Bayesian.

HIKE: A Group Dynamics Case Study

10 iconreforged 16 July 2013 07:14PM

I belong to a group at my university that organizes a backpacking trip for incoming freshmen in the two weeks before orientation week. This organization, which I will refer to as HIKE (not the real name), is particularly interesting in terms of group design. Why? It is approximately 30 years old, is run entirely by current students, and brings together a very large group of people, knitting them into a largish community. Pretty much everyone involved agrees that HIKE works very well. During my involvement (I was a participating freshman, and I have since become staff) I have continually wondered: why is this group so much more fun than any other group I've been a part of?

It's also particularly effective. Leading ~80 incoming freshmen (who have no friends there yet, know no one, and generally have no backpacking experience) into the woods for two weeks is no easy task. HIKE manages its own logistics, staff training, and organization entirely with the student volunteers who staff the trip, with little to no university involvement. (We get the university to advertise our trip, and they generally permit us to continue to exist.) It takes some dedication to keep this rolling, and I have seen other campus groups completely fail to find that kind of dedication from their membership.

While it's not a rationalist group, it seems to have stumbled upon a cocktail of instrumentally rational practices. 

HIKE uses an interesting process of network homogenization. When staff members (who have generally been on several trips before) are assigned crews, they fill out "Who Do You Know?" forms, on which they rank how well they know each other staffer on a scale from 1 to 5. The people in charge of making groups, usually Project Directors, then group staffers with the people they know least. You usually staff a trip with people you haven't gotten to know very well, and then get to know them. Because of this process of strengthening the weakest bonds, HIKE is able to function as a relatively large social group, even across graduation classes and around existing cliques.
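
The grouping step is simple enough to sketch in code. This is my own reconstruction of the idea, not HIKE's actual procedure: greedily place each staffer on the crew whose current members they know least, using the summed 1-to-5 scores.

```python
def homogenize(staff, familiarity, n_crews):
    """familiarity: dict mapping frozenset({a, b}) -> score from
    1 (strangers) to 5 (close); unknown pairs default to 1."""
    crews = [[] for _ in range(n_crews)]
    for person in staff:
        def total_know(crew):
            return sum(familiarity.get(frozenset({person, m}), 1) for m in crew)
        # Prefer the crew this person knows least; break ties toward smaller crews.
        target = min(crews, key=lambda c: (total_know(c), len(c)))
        target.append(person)
    return crews

staff = ["Ann", "Ben", "Cal", "Dee"]
fam = {frozenset({"Ann", "Ben"}): 5, frozenset({"Cal", "Dee"}): 4}
print(homogenize(staff, fam, n_crews=2))  # pairs strangers: [['Ann', 'Cal'], ['Ben', 'Dee']]
```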

As far as actual interaction, HIKE involves a lot of face time with your crew of 10 freshmen and your co-staffers. There aren't really any breaks (with the exception of solos, see below) and you are hiking, eating, and chatting together for approximately 225 hours (15 waking hours a day × 15 days). I had 13 hours and 40 minutes of class a week in the Spring 2013 semester; at that rate, HIKE is approximately 16 weeks of class (225 / 13.7 ≈ 16.5), more than a full semester.

One of the more beloved HIKE traditions is the solo, where the hiking leaders pick a spot with plenty of isolated spaces, and the participants can choose to spend ~24 hours alone and, optionally, fasting. It's a novel experience, and people like the time to rest and reflect in the middle of a very social, very intensive hiking trip.

My suspicion for why this all works is that HIKE very closely simulates a hunter-gatherer lifestyle. You travel in ~10 member groups, on foot, carrying your food, on mountain trails. You spend your every waking hour with the crew. The 2-3 hiking leaders are there to facilitate only (read: perform first aid if necessary, guide conversation, teach outdoor skills if necessary, and nudge the group if they get off track), and all decisions are made by consensus (which isn't an all-purpose decision making process, but is very egalitarian, and helps the group gel).

Maybe I'm just praising my friend-group, but I feel like I stumbled into a particularly strong group of people. We all feel very well-connected and we feel a lot of commitment to the program. My experience with other college groups has been that members are pulled apart by other commitments and a lack of familiarity with other members, and HIKE seems to avoid that with a critical mass of consecutive face time. We manage to have continuity of social norms across the years, but a great deal of flexibility (no one remembers what happened 4 years ago, and some traditions disappear and others cement themselves as ancient and hallowed despite being only two years old).

I'm interested in hearing any thoughts on this, and any relevant experience with other groups, ideas for testing cross-application, requests for further elaboration, etc.

Three more ways identity can be a curse

40 gothgirl420666 28 April 2013 02:53AM

The Buddhists believe that one of the three keys to attaining true happiness is dissolving the illusion of the self. (The other two are dissolving the illusion of permanence, and ceasing the desire that leads to suffering.) I'm not really sure exactly what it means to say "the self is an illusion", and I'm not exactly sure how that will lead to enlightenment, but I do think one can easily take the first step on this long journey to happiness by beginning to dissolve the sense of one's identity. 

Previously, in "Keep Your Identity Small", Paul Graham showed how a strong sense of identity can lead to epistemic irrationality, when someone refuses to accept evidence against x because "someone who believes x" is part of his or her identity. And in Kaj Sotala's "The Curse of Identity", he illustrated a human tendency to reinterpret a goal of "do x" as "give the impression of being someone who does x". These are both fantastic posts, and you should read them if you haven't already. 

Here are three more ways in which identity can be a curse.

1. Don't be afraid to change

James March, professor of political science at Stanford University, says that when people make choices, they tend to use one of two basic models of decision making: the consequences model, or the identity model. In the consequences model, we weigh the costs and benefits of our options and make the choice that maximizes our satisfaction. In the identity model, we ask ourselves "What would a person like me do in this situation?"1

The author of the book I read this in didn't seem to take the obvious next step and acknowledge that the consequences model is clearly The Correct Way to Make Decisions: basically by definition, if you're using the identity model and it's giving you a different result than the consequences model would, you're being led astray. A heuristic I like to use is to limit my identity to the "observer" part of my brain, and make my only goal maximizing the amount of happiness and pleasure the observer experiences, and minimizing the amount of misfortune and pain. It sounds obvious when you lay it out in these terms, but let me give an example. 

Alice is an incoming freshman in college trying to choose her major. In Hypothetical University, there are only two majors: English, and business. Alice absolutely adores literature, and thinks business is dreadfully boring. Becoming an English major would allow her to have a career working with something she's passionate about, which is worth 2 megautilons to her, but it would also make her poor (0 mu). Becoming a business major would mean working in a field she is not passionate about (0 mu), but it would also make her rich, which is worth 1 megautilon. So English, with 2 mu, wins out over business, with 1 mu.

However, Alice is very bright, and is the type of person who can adapt herself to many situations and learn skills quickly. If Alice were to spend the first six months of college deeply immersing herself in studying business, she would probably start developing a passion for business. If she purposefully exposed herself to certain pro-business memeplexes (e.g. watched a movie glamorizing the life of Wall Street bankers), then she could speed up this process even further. After a few years of taking business classes, she would probably begin to forget what about English literature was so appealing to her, and be extremely grateful that she made the decision she did. Therefore she would gain the same 2 mu from having a job she is passionate about, along with an additional 1 mu from being rich, meaning that the 3 mu choice of business wins out over the 2 mu choice of English.

However, the possibility of self-modifying to becoming someone who finds English literature boring and business interesting is very disturbing to Alice. She sees it as a betrayal of everything that she is, even though she's actually only been interested in English literature for a few years. Perhaps she thinks of choosing business as "selling out" or "giving in". Therefore she decides to major in English, and takes the 2 mu choice instead of the superior 3 mu.

(Obviously this is a hypothetical example/oversimplification and there are a lot of reasons why it might be rational to pursue a career path that doesn't make very much money.)
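
The arithmetic, spelled out with the invented megautilon numbers above:

```python
# Alice's options, scored in megautilons (mu) from the example.
options = {
    "English":                  2 + 0,  # passionate career, not rich
    "business, no self-mod":    0 + 1,  # rich, no passion
    "business, after self-mod": 2 + 1,  # cultivated passion, plus rich
}
print(max(options, key=options.get))  # -> 'business, after self-mod'
```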

It seems to me like human beings have a bizarre tendency to want to keep certain attributes and character traits stagnant, even when doing so provides no advantage, or is actively harmful. In a world where business-passionate people systematically do better than English-passionate people, it makes sense to self-modify to become business-passionate. Yet this is often distasteful.

For example, until a few weeks ago when I started solidifying this thinking pattern, I had an extremely adverse reaction to the idea of ceasing to be a hip-hop fan and becoming a fan of more "sophisticated" musical genres like jazz and classical, eventually coming to look down on the music I currently listen to as primitive or silly. This doesn't really make sense - I'm sure if I were to become a jazz and classical fan I would enjoy those genres at least as much as I currently enjoy hip hop. And yet I had a very strong preference to remain the same, even in the trivial realm of music taste. 

Probably the most extreme example is the common tendency for depressed people to not actually want to get better, because depression has become such a core part of their identity that the idea of becoming a healthy, happy person is disturbing to them. (I used to struggle with this myself, in fact.) Being depressed is probably the most obviously harmful characteristic that someone can have, and yet many people resist self-modification.

Of course, the obvious objection is there's no way to rationally object to people's preferences - if someone truly prioritizes keeping their identity stagnant over not being depressed then there's no way to tell them they're wrong, just like if someone prioritizes paperclips over happiness there's no way to tell them they're wrong. But if you're like me, and you are interested in being happy, then I recommend looking out for this cognitive bias. 

The other objection is that this philosophy leads to extremely unsavory wireheading-esque scenarios if you take it to its logical conclusion. But holding the opposite belief - that it's always more important to keep your characteristics stagnant than to be happy - clearly leads to even more absurd conclusions. So there is probably some point on the spectrum where change is so distasteful that it's not worth a boost in happiness (e.g. a lobotomy or something similar). However, I think that in actual practical pre-Singularity life, most people set this point far, far too low. 

2. The hidden meaning of "be yourself"

(This section is entirely my own speculation, so take it as you will.)

"Be yourself" is probably the most widely-repeated piece of social skills advice despite being pretty clearly useless - if it worked then no one would be socially awkward, because everyone has heard this advice. 

However, there must be some sort of core grain of truth in this statement, or else it wouldn't be so widely repeated. I think that core grain is basically the point I just made, applied to social interaction. I.e., always optimize for social success and positive relationships (particularly in the moment), and not for signalling a certain identity. 

The ostensible purpose of identity/signalling is to appear to be a certain type of person, so that people will like and respect you, which is in turn so that people will want to be around you and be more likely to do stuff for you. However, oftentimes this goes horribly wrong, and people become very devoted to cultivating certain identities that are actively harmful for this purpose, e.g. goth, juggalo, "cool reserved aloof loner", guy that won't shut up about politics, etc. A more subtle example is Fred, who holds the wall and refuses to dance at a nightclub because he is a serious, dignified sort of guy, and doesn't want to look silly. However, the reason why "looking silly" is generally a bad thing is that it makes people lose respect for you, and therefore makes them less likely to associate with you. In the situation Fred is in, holding the wall and looking serious will cause no one to associate with him, but if he dances and mingles with strangers and looks silly, people will be likely to associate with him. So unless he's afraid of looking silly in the eyes of God, this seems to be irrational.

Probably more common is the tendency to take great care to cultivate identities that are neither harmful nor beneficial. E.g. "deep philosophical thinker", "Grateful Dead fan", "tough guy", "nature lover", "rationalist", etc. Boring Bob is a guy who wears a blue polo shirt and khakis every day, works as hard as expected but no harder in his job as an accountant, holds no political views, and when he goes home he relaxes by watching whatever's on TV and reading the paper. Boring Bob would probably improve his chances of social success by cultivating a more interesting identity, perhaps by changing his wardrobe, hobbies, and viewpoints, and then liberally signalling this new identity. However, most of us are not Boring Bob, and a much better social success strategy for most of us is probably to smile more, improve our posture and body language, be more open and accepting of other people, learn how to make better small talk, etc. But most people fail to realize this and instead play elaborate signalling games in order to improve their status, sometimes even at the expense of lots of time and money.

Some ways by which people can fail to "be themselves" in individual social interactions: liberally sprinkle references to certain attributes that they want to emphasize, say nonsensical and surreal things in order to seem quirky, be afraid to give obvious responses to questions in order to seem more interesting, insert forced "cool" actions into their mannerisms, act underwhelmed by what the other person is saying in order to seem jaded and superior, etc. Whereas someone who is "being herself" is more interested in creating rapport with the other person than giving off a certain impression of herself.  

Additionally, optimizing for a particular identity might not only be counterproductive - it might actually be a quick way to get people to despise you. 

I used to not understand why certain "types" of people, such as "hipsters"2 or Ed Hardy and Affliction-wearing "douchebags" are so universally loathed (especially on the internet). Yes, these people are adopting certain styles in order to be cool and interesting, but isn't everyone doing the same? No one looks through their wardrobe and says "hmm, I'll wear this sweater because it makes me uncool, and it'll make people not like me". Perhaps hipsters and Ed Hardy Guys fail in their mission to be cool, but should we really hate them for this? If being a hipster was cool two years ago, and being someone who wears normal clothes, acts normal, and doesn't do anything "ironically" is cool today, then we're really just hating people for failing to keep up with the trends. And if being a hipster actually is cool, then, well, who can fault them for choosing to be one?

That was my old thought process. Now it is clear to me that what makes hipsters and Ed Hardy Guys hated is that they aren't "being themselves" - they are much more interested in cultivating an identity of interestingness and masculinity, respectively, than connecting with other people. The same thing goes for pretty much every other collectively hated stereotype I can think of3 - people who loudly express political opinions, stoners who won't stop talking about smoking weed, attention seeking teenage girls on facebook, extremely flamboyantly gay guys, "weeaboos", hippies and new age types, 2005 "emo kids", overly politically correct people, tumblr SJA weirdos who identify as otherkin and whatnot, overly patriotic "rednecks", the list goes on and on. 

This also clears up a confusion that occurred to me when reading How to Win Friends and Influence People. I know people who have a Dale Carnegie mindset of being optimistic and nice to everyone they meet and are adored for it, but I also know people who have the same attitude and yet are considered irritatingly saccharine and would probably do better to "keep it real" a little. So what's the difference? I think the difference is that the former group are genuinely interested in being nice to people and building rapport, while members of the second group have made an error like the one described in Kaj Sotala's post and are merely trying to give off the impression of being a nice and friendly person. The distinction is obviously very subtle, but it's one that humans are apparently very good at perceiving. 

I'm not exactly sure what it is that causes humans to have this tendency of hating people who are clearly optimizing for identity - it's not as if they harm anyone. It probably has to do with tribal status. But what is clear is that you should definitely not be one of them. 

3. The worst mistake you can possibly make in combating akrasia

The main thesis of PJ Eby's Thinking Things Done is that the primary reason why people are incapable of being productive is that they use negative motivation ("if I don't do x, some negative y will happen") as opposed to positive motivation ("if I do x, some positive y will happen"). He has the following evo-psych explanation for this: in the ancestral environment, personal failure meant that you could possibly be kicked out of your tribe, which would be fatal. A lot of depressed people make statements like "I'm worthless", or "I'm scum" or "No one could ever love me", which are illogically dramatic and overly black and white, until you realize that these statements are merely interpretations of a feeling of "I'm about to get kicked out of the tribe, and therefore die." Animals have a freezing response to imminent death, so if you are fearing failure you will go into do-nothing mode and not be able to work at all.4

In Succeed: How We Can Reach Our Goals, PhD psychologist Heidi Halvorson takes a different view and describes positive motivation and negative motivation as having pros and cons. However, she has her own dichotomy of Good Motivation and Bad Motivation: "Be good" goals are performance goals, and are directed at achieving a particular outcome, like getting an A on a test, reaching a sales target, getting your attractive neighbor to go out with you, or getting into law school. They are very often tied closely to a sense of self-worth. "Get better" goals are mastery goals, and people who pick these goals judge themselves instead in terms of the progress they are making, asking questions like "Am I improving? Am I learning? Am I moving forward at a good pace?" Halvorson argues that "get better" goals are almost always drastically better than "be good" goals5. An example quote (from page 60) is:

When my goal is to get an A in a class and prove that I'm smart, and I take the first exam and I don't get an A... well, then I really can't help but think that maybe I'm not so smart, right? Concluding "maybe I'm not smart" has several consequences and none of them are good. First, I'm going to feel terrible - probably anxious and depressed, possibly embarrassed or ashamed. My sense of self-worth and self-esteem are going to suffer. My confidence will be shaken, if not completely shattered. And if I'm not smart enough, there's really no point in continuing to try to do well, so I'll probably just give up and not bother working so hard on the remaining exams. 

And finally, in Feeling Good: The New Mood Therapy, David Burns describes a destructive side effect of depression he calls "do-nothingism":

One of the most destructive aspects of depression is the way it paralyzes your willpower. In its mildest form you may simply procrastinate about doing a few odious chores. As your lack of motivation increases, virtually any activity appears so difficult that you become overwhelmed by the urge to do nothing. Because you accomplish very little, you feel worse and worse. Not only do you cut yourself off from your normal sources of stimulation and pleasure, but your lack of productivity aggravates your self-hatred, resulting in further isolation and incapacitation.

Synthesizing these three pieces of information leads me to believe that the worst thing you can possibly do for your akrasia is to tie your success and productivity to your sense of identity/self-worth, especially if you're using negative motivation to do so, and especially if you suffer or have recently suffered from depression or low-self esteem. The thought of having a negative self-image is scary and unpleasant, perhaps for the evo-psych reasons PJ Eby outlines. If you tie your productivity to your fear of a negative self-image, working will become scary and unpleasant as well, and you won't want to do it.

I feel like this might be the single number one reason why people are akratic. It might be a little premature to say that, and I might be biased by how large of a factor this mistake was in my own akrasia. But unfortunately, this trap seems like a very easy one to fall into. If you're someone who is lazy and isn't accomplishing much in life, perhaps depressed, then it makes intuitive sense to motivate yourself by saying "Come on, self! Do you want to be a useless failure in life? No? Well get going then!" But doing so will accomplish the exact opposite and make you feel miserable. 

So there you have it. In addition to making you a bad rationalist and causing you to lose sight of your goals, a strong sense of identity will cause you to make poor decisions that lead to unhappiness, be unpopular, and be unsuccessful. I think the Buddhists were onto something with this one, personally, and I try to limit my sense of identity as much as possible. A trick you can use in addition to the "be the observer" trick I mentioned, is to whenever you find yourself thinking in identity terms, swap out that identity for the identity of "person who takes over the world by transcending the need for a sense of identity". 


This is my first LessWrong discussion post, so constructive criticism is greatly appreciated. Was this informative? Or was what I said obvious, and I'm retreading old ground? Was this well written? Should this have been posted to Main? Should this not have been posted at all? Thank you. 


1. Paraphrased from page 153 of Switch: How to Change When Change is Hard

2. Actually, while it works for this example, I think the stereotypical "hipster" is a bizarre caricature that doesn't match anyone who actually exists in real life, and the degree to which people will rabidly espouse hatred for this stereotypical figure (or used to two or three years ago) is one of the most bizarre tendencies people have. 

3. Other than groups that arguably hurt people (religious fundamentalists, PUAs), the only exception I can think of is frat boy/jock types. They talk about drinking and partying a lot, sure, but not really any more than people who drink and party a lot would be expected to. Possibilities for their hated status include that they do in fact engage in obnoxious signalling and I'm not aware of it, jealousy, or stigmatization as hazers and date rapists. Also, a lot of people hate stereotypical "ghetto" black people who sag their jeans and notoriously type in a broken, difficult-to-read form of English. This could either be a weak example of the trend (I'm not really sure what it is they would be signalling, maybe dangerous-ness?), or just a manifestation of racism.

4. I'm not sure if this is valid science that he pulled from some other source, or if he just made this up.

5. The exception is that "be good" goals can lead to a very high level of performance when the task is easy. 

Post-college: changing nature of friend interactions

19 calcsam 09 January 2013 10:26PM

As a working professional a couple of years out of college, I've been noticing how interactions with my friends have changed since the beginning of college – and especially since graduation.

In college, my social groups typically formed around common meeting places -- freshman dorm, newspaper, church, "draw group" (essentially a group of friends that 'draw' into the same dorm).

Because there was a common space where everyone could hang out, everyone else felt comfortable just showing up (at least at designated times), and so there were always people to talk to. No-permission-required-meeting was a self-sustaining norm.

With jobs and schedules, we shift to a permission-required-meeting situation – you don't just show up at your friend's house; you say "Hey, what's a good time to meet up?"

This adds an additional barrier to meeting, and so meetings happen less often.

People usually realize this at some level, and employ a variety of ad-hoc strategies to counteract this. These are usually well-deployed in our professional lives, but in our personal lives, there are some complications, and usually room for improvement.

  • Group meetings. There are 10 connections between five people, as opposed to one connection between two people. But generally – assuming people share fairly common schedules – it will take less than 10x the initiative to get five people together as it does to get two.

Disadvantage: Often most of our close friends don’t form groups. Only a small subset of mine does.  

  • Non-face-to-face communication. Christmas cards are a time-honored way of doing this. E-mail, like mail, is a no-permission-required system. Every year, I send out a general Life Update email to my old and current friends and family. My friends and I more frequently email each other interesting links. When I read something cool online, I often think “who could I send this to?”

  Disadvantage: for most people, compared to face-to-face interaction, it’s not the same.

  • Scheduling regular meetings: I live in CA and my girlfriend lives in NY, so for the last five months we have set aside 10am PST / 1pm EST to talk every weekday. For the last 8 months, my friend Caleb* and I have had weekly 1-to-2-hour meetings on Sunday mornings where we discuss how the last week went and make goals for the next week. We plan for every week, or day, and it happens 60-80% of the time.

 Disadvantage: The well-known "my schedule is too full to see you" problem is illuminated by analogy. In The Road to Serfdom (1944), economist FA Hayek discussed the politics of price and wage controls. These policies would shelter one particular group, he wrote, but at the risk of leaving everyone else out in the cold, and now slightly colder.[1]

Something similar happens with planning one's schedule. Perhaps because I'm busy with the above and additional planned activities with my other friends, I don't see my friend Christine* enough, and I rarely talk to my college friends Lina* and Maya* anymore.

So Christine and I have decided to go running every Tuesday evening after work. Sure, I’ll be even more scheduled, and less likely to meet new, interesting people outside of my designated “meet new people” events.

But at least I’ll get some exercise.  

Commenters: really curious to hear additional tactics, improvements, or experiences!

*Names changed.



[1] Hayek warned that in this situation, each group would increase its clamor to be “let in,” but granting each seemingly-reasonable demand would lead one step closer to a planned economy. Meanwhile, the most vulnerable but ill-connected or ill-organized groups, such as immigrants or the non-unionized-working class, would be left largely out in the cold. 

Statistical checks on some social science

17 NancyLebovitz 17 December 2012 05:23PM

Simonsohn, a social scientist, investigates bad use of statistics in his field.

A few good quotes:

The three social psychologists set up a test experiment, then played by current academic methodologies and widely permissible statistical rules. By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result—a statistical fluke, basically—to nearly two-thirds.

Laugh or cry? "He prefers psychology’s close-up focus on the quirks of actual human minds to the sweeping theory and deduction involved in economics."

Last summer, not long after Sanna and Smeesters left their respective universities, Simonsohn laid out his approach to fraud-busting in an online article called “Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone”. Afterward, his inbox was flooded with tips from strangers. People wanted him to investigate election results, drug trials, the work of colleagues they’d long doubted. He has not replied to these messages. Making a couple of busts is one thing. Assuming the mantle of the social sciences’ full-time Grand Inquisitor would be quite another.

This looks like a clue that there's work available for anyone who knows statistics. Eventually, there will be an additional line of work for how to tell whether a forensic statistician is competent.
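
To see how large that inflation can get, here is a minimal simulation (my own sketch, not Simonsohn's code) of just one of the tricks mentioned above: optional stopping, i.e. peeking at the p-value as data comes in and stopping as soon as it dips below .05.

```python
import random, statistics
from math import sqrt

def peeking_false_positive_rate(n_max=100, trials=2000):
    """Fraction of null experiments declared 'significant' when the
    experimenter runs a two-sided z-test after every 10 subjects and
    stops at p < .05. Data is N(0, 1), so the null is true by design."""
    hits = 0
    for _ in range(trials):
        data = []
        for _ in range(n_max):
            data.append(random.gauss(0, 1))
            if len(data) % 10 == 0:
                z = statistics.mean(data) * sqrt(len(data))  # known sd = 1
                if abs(z) > 1.96:
                    hits += 1
                    break
    return hits / trials

print(peeking_false_positive_rate())  # well above the nominal .05
```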

 

Empirical claims, preference claims, and attitude claims

5 John_Maxwell_IV 15 November 2012 07:41PM

What do the following statements have in common?

  • "Atlas Shrugged is the best book ever written."
  • "You break it, you buy it."
  • "Earth is the most interesting planet in the solar system."

My answer: None of them are falsifiable claims about the nature of reality.  They're all closer to what one might call "opinions".  But what is an "opinion", exactly?

There's already been some discussion on Less Wrong about what exactly it means for a claim to be meaningful.  This post focuses on the negative definition of meaning: what sort of statements do people make where the primary content of the statement is non-empirical?  The idea here is similar to the idea behind anti-virus software: Even if you can't rigorously describe what programs are safe to run on your computer, there still may be utility in keeping a database of programs that are known to be unsafe.

Why is it useful to be able to flag non-empirical claims?  Well, for one thing, you can believe whatever you want about them!  And it seems likely that this pattern-matching approach works better for flagging them than a more constructive definition.


Using existing Strong AIs as case studies

-6 ialdabaoth 16 October 2012 10:59PM

I would like to put forth the argument that we already have multiple human-programmed "Strong AIs" operating among us, that they already exhibit clearly "intelligent", rational, self-modifying goal-seeking behavior, and that we should systematically study these entities before engaging in any particularly detailed debates about "designing" AI with particular goals.

They're called "Bureaucracies".

Essentially, a modern bureaucracy - whether it is operating as the decision-making system for a capitalist corporation, a government, a non-profit charity, or a political party, is an artificial intelligence that uses human brains as its basic hardware and firmware, allowing it to "borrow" a lot of human computational algorithms to do its own processing.

The fact that bureaucratic decisions can be traced back to individual human decisions is irrelevant - even within a human or computer AI, a decision can theoretically be traced back to single neurons or subroutines - the fact is that bureaucracies have evolved to guide and exploit human decision-making towards their own ends, often to the detriment of the individual humans that comprise said bureaucracy.

Note that when I say "I would like to put forth the argument", I am at least partially admitting that I'm speaking from hunch, rather than already having a huge collection of empirical data to work from - part of the point of putting this forward is to acknowledge that I'm not yet very good at "avalanche of empirical evidence"-style argument. But I would *greatly* appreciate anyone who suspects that they might be able to demonstrate evidence for or against this idea, presenting said evidence so I can solidify my reasoning. 

As a "step 2": assuming the evidence weighs in towards my notion, what would it take to develop a systematic approach to studying bureaucracy from the perspective of AI or even xenosapience, such that bureaucracies could be either "programmed" or communicated with directly by the human agents that comprise them (and ideally by the larger pool of human stakeholders that are forced to interact with them?)

Call for Anonymous Narratives by LW Women and Question Proposals (AMA)

20 [deleted] 09 September 2012 08:39AM

In another discussion going on right now, I posted this proposal, asking for feedback on this experiment. The feedback was positive, so here goes...

Original Post:

When these gender discussions come up, I am often tempted to write in with my own experiences and desires. But I generally don't because I don't want to generalize from one example, or claim to be the Voice of Women, etc. However, according to the last survey, I actually AM over 1% of the females on here, and so is every other woman (i.e. there are fewer than 100 of us).

My idea is to put out a call for women on LessWrong to write openly about their experiences and desires in this community, and send them to me. I will anonymize them all, and put them all up under one post.

This would have a couple of benefits, including:

  • Anonymity allows for open expression- When you are in the vast minority, speaking out can feel like "swimming upstream," and so may not happen very much.

  • Putting all the women's responses in one post helps figure out what is/is not a problem- Because of the gender ratio, most discussions on the topic are Men Talking About What Women Want, so it can be hard to figure out what women are saying on the issues, versus what men are saying women say.

  • The plural of anecdote is data- If one woman says X, it is an anecdote, and very weak evidence. If 10% of women say X, it is much stronger evidence.

Note that with a lot of the above issues, one of the biggest problems in figuring out what is going on isn't purposeful misogyny or anything. Just the fact that the gender ratio is so skewed can make it difficult to hear women (think picking out one voice amongst ten). The idea I'm proposing is an attempt to work around this, not an attempt to marginalize men, who may also have important things to say, but would not be the focus of this investigation.

Even with a sample size of 10 responses (approximately the amount I would say is needed for this to be useful), according to the last survey, that is 10% of the women on this site. A sizable proportion, indeed.


In the following discussion, the idea was added that fellow LWers could submit questions to the Women of LW. The women can then use these as prompts in their narratives, if they like. If you are interested in submitting questions, please read the guidelines below in "Call for Questions" before posting.

If you are interested in submitting a narrative, please read the Call for Narrative section below.


Call for Narratives

RSVP -(ETA- We have reached the needed number of pre-commitments! You do not need to fill out the form, although you are welcome to, if you like) I think we need to have at least 6 people submitting narratives to provide both the scope and the anonymity to work. So before I ask women to spend their time writing these, I would like to make sure we will get enough submissions to publish. If you are going to write a narrative, fill out this (one-minute) form in the next couple days. If we get at least 6 women pre-committed to writing a narrative, we will move forward. I will PM or email you and let you know. If, in a week, we have not had at least 6 commitments, I will close the form.

Submissions- Feel free to submit, even if you did not RSVP. (that part is just to make sure we have minimum amount of people). Just send me a pm, dropbox link, or ask for my email. I'll add more information to this, as it gets worked out. 

Although the discussion that spurred this idea was about "creep" behaviors, please don't limit your responses to that subject only. Feel free to discuss any gender-related issues that you find relevant, especially responses to the questions that are posted in the thread below by your fellow LWers.

The anonymity is to provide you with the opportunity to express non-self-censored thoughts. It is ok if they are half-formed, stream-of-consciousness writings. My goal is to find out what the women on this site think, not nit-pick at the writing style. I don't want to limit submissions by saying that they have to have hours spent on formulating, organizing, and clarifying them. Write as much as you like. Don't worry about length. I will write tl;dr's if needed.

How I organize the submissions in the final post depends strongly on what is submitted to me. Separate out things that you think are identifiable to you, and I will put them in a section that is not affiliated with the rest of your submission.

Submissions are due Sept 25th!

Security- I am willing to work with people individually to make sure that their narratives aren't identifiable via writing style or little clues. Discussions that are obviously written by you (for example, talking about an incident many LWers know about) can be pulled out of your main narrative, and placed in a separate section. (reading the original exchange on the topic will clarify what I am trying to explain)

Verification- Submissions must be linked to active LW accounts (i.e. older than a week, more than 50 karma). This info will only be known to me. When possible, I would like to have validation (such as a link to a relevant post) that the account is of a female or transgendered user.  

 

 

Call for Questions

Feel free to ask questions you would like answered by the women of LW. To make everything easier for us, remember the following:

1) Put questions in response to the comment entitled "Question submissions"

2) Due to the nature of this experiment, all questions will automatically be assumed to be operating under Crocker's Rules.

3) Please only post one question per comment!

Upvote questions you would like to see answered. The questions with the highest number of upvotes are probably the most likely to be answered (based on my model of fellow LW Women).

How to deal with someone in a LessWrong meeting being creepy

16 Douglas_Reay 09 September 2012 04:41AM

One of the lessons highlighted in the thread "Less Wrong NYC: Case Study of a Successful Rationalist Chapter" is "Gender ratio matters."

There have recently been a number of articles addressing one social skills issue that might be affecting this, from the perspective of a geeky/science-fiction community with similar attributes to LessWrong. I want to link to these not just so the people potentially causing problems get to read them, but also so everyone else knows the resource is there and has a name for the problem, which may facilitate wider discussion and make it easier for others to know when to point those who would benefit towards the resources.

However, before I do, in the light of RedRobot's comment in the "Of Gender and Rationality" thread, I'd like to echo a sentiment from one of the articles: people exhibiting this behaviour may be of any gender and may victimise any gender. And so, while it may be correlated with a particular gender, it is the behaviour that should be focused upon, and turning this thread into bashing of one gender (or defensiveness against perceived bashing) would be unhelpful.

Ok, disclaimers out of the way, here are the links:

Some of those raise deeper issues about rape culture and audience as enabler, but the TLDR summary is:

  1. Creepy behaviour is behaviour that tends to make others feel unsafe or uncomfortable.
  2. If a significant fraction of a group find your behaviour creepy, the responsibility to change the behaviour is yours.
  3. There are specific objective behaviours listed in the articles (for example, to do with touching, sexual jokes and following people) that even someone 'bad' at social skills can learn to avoid doing.
  4. If someone is informed that their behaviour is creeping people out, and yet they don't take steps to avoid doing these behaviours, that is a serious problem for the group as a whole, and it needs to be treated seriously and be seen to be treated seriously, especially by the 'audience' who are not being victimised directly.

EDITED TO ADD:

Despite the way some of the links are framed as being addressed to creepers, this post is aimed at least as much at the community as a whole, intended to trigger a discussion on how the community should best go about handling such a problem once identified, with the TLDR being "a set of restraints to place on someone who is burning the commons", rather than a complete description that guarantees that anyone who doesn't meet it isn't creepy.  (Thank you to jsteinhardt for clearly verbalising the misinterpretation - for discussion see his reply to this post)

Summary of "How to Win Friends and Influence People"

18 Cosmos 30 June 2012 08:49PM

In the very back of Kaj's excellent How to Run a Successful Less Wrong Meetup Group booklet, he has a recommended reading section, including the classic book How to Win Friends and Influence People.

It just so happens that not only have I read the book myself, but I have written up a concise summary of the core advice here. Kaj suggested that I post this on the discussion section because others might find it useful, so here you go!

I suspect that more people are willing to read a summary of a book from the 1930s than an actual book from the 1930s. What I will say about reading the long-form text is that it can be more useful for internalizing these concepts and giving examples of them. It is far too easy to abstractly know what you need to do, much harder to actually take action on those beliefs...

[Link] Can We Reverse The Stanford Prison Experiment?

43 [deleted] 14 June 2012 03:41AM

From the Harvard Business Review, an article entitled: "Can We Reverse The Stanford Prison Experiment?"

By: Greg McKeown
Posted: June 12, 2012

Clicky Link of Awesome! Wheee! Push me!

Summary:

Royal Canadian Mounted Police attempt a program where they hand out "Positive Tickets".

Their approach was to try to catch youth doing the right things and give them a Positive Ticket. The ticket granted the recipient free entry to the movies or to a local youth center. They gave out an average of 40,000 tickets per year. That is three times the number of negative tickets over the same period. As it turns out, and unbeknownst to Clapham, that ratio (2.9 positive affects to 1 negative affect, to be precise) is called the Losada Line. It is the minimum ratio of positive to negatives that has to exist for a team to flourish. On higher-performing teams (and marriages for that matter) the ratio jumps to 5:1. But does it hold true in policing?

According to Clapham, youth recidivism was reduced from 60% to 8%. Overall crime was reduced by 40%. Youth crime was cut in half. And it cost one-tenth of the traditional judicial system.


This idea can be applied to Real Life

The lesson here is to create a culture that immediately and sincerely celebrates victories. Here are three simple ways to begin:

1. Start your next staff meeting with five minutes on the question: "What has gone right since our last meeting?" Have each person acknowledge someone else's achievement in a concrete, sincere way. Done right, this very small question can begin to shift the conversation.

2. Take two minutes every day to try to catch someone doing the right thing. It is the fastest and most positive way for the people around you to learn when they are getting it right.

3. Create a virtual community board where employees, partners and even customers can share what they are grateful for daily. Sounds idealistic? Vishen Lakhiani, CEO of Mind Valley, a new generation media and publishing company, has done just that at Gratitude Log. (Watch him explain how it works here).

Lesswrong Community's How-Tos and Recommendations

25 EE43026F 07 May 2012 01:41PM

The Lesswrong community is often a dependable source of recommendations, network help, and advice. When I'm looking for a book or learning material on a topic I'll often try and search here to see what residents have found useful. Similarly, social advice, anecdotes and explanations as seen from the point of view of the community have regularly been insightful or eye-opening. The prototypical examples of such articles are, off the top of my head:


http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/

http://lesswrong.com/lw/453/procedural_knowledge_gaps/

the topics of which are neatly listed on

http://lesswrong.com/lw/a08/topics_from_procedural_knowledge_gaps/

 

And lately

http://lesswrong.com/r/discussion/lw/c6y/why_do_people/

 

The latter prompted me to write this article. We don't keep track of such resources, as far as I know. This probably belongs in the wiki as well.

 

Other potentially useful resources were:

 

http://lesswrong.com/lw/12d/recommended_reading_for_new_rationalists/

http://lesswrong.com/lw/2kk/book_recommendations/

http://lesswrong.com/lw/2ua/recommended_reading_for_friendly_ai_research/



math learning

http://lesswrong.com/lw/9qq/what_math_should_i_learn/


http://lesswrong.com/lw/8js/what_mathematics_to_learn/

http://lesswrong.com/lw/a54/seeking_education/


misc learning

http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

http://lesswrong.com/lw/4yv/i_want_to_learn_programming/

http://lesswrong.com/lw/3qr/i_want_to_learn_economics/

http://lesswrong.com/lw/3us/i_want_to_learn_about_education/

http://lesswrong.com/lw/8e3/which_fields_of_learning_have_clarified_your/


social

http://lesswrong.com/lw/6ey/learning_how_to_explain_things/

http://lesswrong.com/lw/818/how_to_understand_people_better/

http://lesswrong.com/lw/6tb/developing_empathy/


community

http://lesswrong.com/lw/929/less_wrong_mentoring_network/

http://lesswrong.com/lw/7hi/free_research_help_editing_and_article_downloads/


Employment

http://lesswrong.com/lw/43m/optimal_employment/

http://lesswrong.com/lw/2qp/virtual_employment_open_thread/


http://lesswrong.com/lw/38u/best_career_models_for_doing_research/

http://lesswrong.com/lw/4ad/optimal_employment_open_thread/

http://lesswrong.com/lw/626/job_search_advice/

http://lesswrong.com/lw/8cp/any_thoughts_on_how_to_locate_job_opportunities/

http://lesswrong.com/lw/7yl/more_shameless_ploys_for_job_advice/

http://lesswrong.com/lw/a93/existential_risk_reduction_career_network/

 

Entertainment

http://lesswrong.com/r/discussion/tag/recommendations/?sort=new

Social status hacks from The Improv Wiki

41 lsparrish 21 March 2012 02:56AM

I can't remember how I found this, just that I was amazed at how rational and near-mode it is on a topic where most of the information one usually encounters is hopelessly far-mode.

LessWrong wiki link on the same topic: http://wiki.lesswrong.com/wiki/Status

The Improv Wiki

Status

Status is pecking order. The person who is lower in status defers to the person who is higher in status.

Status is partly established by social position--e.g. boss and employee--but mainly by the way you interact. If you interact in a way that says you are not to be trifled with, so that the other person must adjust to you, then you are establishing high status. If you interact in a way that says you are willing to go along, you don't want responsibility, that's low status. A boss can play low status or high status. An employee can play low status or high status.

Status is established in every line and gesture, and changes continuously. Status is something that one character plays to another at a particular moment. If you convey that the other person must not cross you on what you're saying now, then you are playing high status to that person in that line. Your very next line might come out low status, as you suggest willingness to defer about something else.

If you analyze your most successful scenes, it's likely they involved several status changes between the players. Therefore, one path to great scenes is to intentionally change status. You can raise or lower your own status, or the status of the other player. The more subtly you can do this, the better the scene.

High-status behaviors

When walking, assuming that other people will get out of your path.

Making eye contact while speaking.

Not checking the other person's eyes for a reaction to what you said.

Having no visible reaction to what the other person said. (Imagine saying something to a typical Clint Eastwood character. You say something expecting a reaction, and you get--nothing.)

Speaking in complete sentences.

Interrupting before you know what you are going to say.

Spreading out your body to full comfort. Taking up a lot of space with your body.

Looking at the other person with your eyes somewhat down (head tilted back a bit to make this work), creating the feeling that you are a parent talking to a child.

Talking matter-of-factly about things that the other person finds displeasing or offensive.

Letting your body be vulnerable, exposing your neck and torso to the other person.

Moving comfortably and gracefully.

Keeping your hands away from your face.

Speaking authoritatively, with certainty.

Making decisions for a group; taking responsibility.

Giving or withholding permission.

Evaluating other people's work.

Speaking cryptically, not adjusting your speech to be easily understood by the other person (except that mumbling does not count). E.g. saying, "Chomper not right" with no explanation of what you mean or what you want the other person to do.

Being surrounded by an entourage, especially of people who are physically smaller than you.

A "high-status specialist" conveys in every word and gesture, "Don't come near me, I bite."

Low-status behaviors

When walking, moving out of other people's path.

Looking away from the other person's eyes.

Briefly checking the other person's eyes to see if they reacted positively to what you said.

Speaking in halting, incomplete sentences. Trailing off, editing your sentences as you go.

Sitting or standing uncomfortably in order to adjust to the other person and give them space. Pulling inward to give the other person more room. If you're tall, you might need to scrunch down a bit to indicate that you're not going to use your height against the other person.

Looking up toward the other person (head tilted forward a bit to make this work), creating the feeling that you are a child talking to a parent.

Dancing around your words (beating around the bush) when talking about something that will displease the other person.

Shouting as an attempt to intimidate the other person. This is low status because it suggests that you expect resistance.

Crouching your body as if to ward off a blow; protecting your face, neck, and torso.

Moving awkwardly or jerkily, with unnecessary movements.

Touching your face or head.

Avoiding making decisions for the group; avoiding responsibility.

Needing permission before you can act.

Adjusting the way you say something to help the other person understand; meeting the other person on their (cognitive) ground; explaining yourself. E.g. "Could you please adjust the chomper? That's the gadget on the kitchen counter immediately to the left of the toaster. If you just give it a slight rap on the top, that should adjust it."

A "low-status specialist" conveys in every word and gesture, "Please don't bite me, I'm not worth the trouble."

Raising another person's status

To raise another person's status is to establish them as high in the pecking order in your group (possibly just the two of you).

Ask their permission to do something.
Ask their opinion about something.
Ask them for advice or help.
Express gratitude for something they did.
Apologize to them for something you did.
Agree that they are right and you were wrong.
Defer to their judgement without requiring proof.
Address them with a fancy title or honorific (even "Mr." or "Sir" works very well).
Downplay your own achievement or attribute in comparison to theirs. "Your wedding cake is so much whiter than mine."
Do something incompetent in front of them and then apologize for it or act sheepish about it.
Mention a failure or shortcoming of your own. "I was supposed to go to an audition today, but I was late. They said I was wrong for the part anyway."
Compliment them in a way that suggests appreciation, not judgement. "Wow, what a beautiful cat you have!"
Obey them unquestioningly.
Back down in a conflict.
Move out of their way, bow to them, lower yourself before them.
Tip your hat to them.
Lose to them at something competitive, like a game (or any comparison).
Wait for them.
Serve them; do manual labor for them.

Tip: Whenever you bring an audience member on stage, always raise their status, never lower it.

Lowering another person's status

To lower another person's status is to attack or discredit their right to be high in the pecking order. Another word for "lowering someone's status" is "humiliating them."

Criticize something they did.
Contradict them. Tell them they are wrong. Prove it with facts and logic.
Correct them.
Insult them.
Give them unsolicited advice.
Approve or disapprove of something they did or some attribute of theirs. "Your cat has both nose and ear points. That is acceptable." Anything that sets you up as the judge lowers their status, even "Nice work on the Milligan account, Joe."
Shout at them.
Tell them what to do.
Ignore what they said and talk about something else, especially when they've said something that requires an answer. E.g. "Have you seen my socks?" "The train leaves in five minutes."
One-up them. E.g. have a worse problem than the one they described, have a greater past achievement than theirs, have met a more famous celebrity, earn more money, do better than them at something they're good at, etc.
Win: beat them at something competitive, like a game (or any comparison).
Announce something good about yourself or something you did. "I went to an audition today, and I got the part!"
Disregard their opinion. E.g. "You'd better not smoke while pumping gas, it's a fire hazard." Flick, light, puff, puff, pump, pump.
Talk sarcastically to them.
Make them wait for you.
When they've fallen behind you, don't wait for them to catch up, just push on and get further out of sync.
Disobey them.
Violate their space.
Beat them up. Beating them up verbally (not physically) in front of other people, especially their wife, girlfriend, and/or children, is particularly status-lowering.
In a conflict, make them back down.
Taunt them. Tease them.

The basic status-lowering act

Laugh at them. (Not with them.)

The basic status-raising act

Be laughed at by them.

Second to that is laughing with them at someone else.

(Notice that those are primarily what comedians do.)


Note that behaviors that raise another person's status are not necessarily low-status behaviors, and behaviors that lower another person's status are not necessarily high-status behaviors. People at any status level raise and lower each other all the time. They can do so in ways that convey high or low status.

For example, shouting at someone lowers their status but is itself a low-status behavior.


Objects and environments also have high or low status, although this is seldom explored. So explore it. Make something cheap and inconsequential high status. (This fingernail clipping came from Graceland!) Or bring down the status of a high status item. (Casually toss a 2 carat diamond ring on your jewelry pile.)

Source: http://greenlightwiki.com/improv/Status
Retrieved 20 March 2012

TEDxLive Opportunities

7 [deleted] 20 February 2012 04:31AM

TED is live broadcasting one day of the TED conference to sites around the world. Entrance to the viewings should be free. I thought many LWers would be interested in this opportunity. It would also make a great meetup activity.

Here is the website with the info: http://www.ted.com/pages/tedxlive

> The idea for TEDxLive grew out of a question we at TED were asking ourselves: What would happen if we made the TED Conference more open, its impact more immediate and tangibly global? What would happen if communities around the world gathered to engage with -- and build on -- the TED Conference experience while it happened live?

You can find a local viewing by clicking on the "Find an Event" tab at the top.

If you think your city is too small to be hosting this, check anyway! Here in Columbus we have two viewings, one hosted by TEDxColumbus, and another hosted by TEDxOhioStateUniversity. They are at slightly different times: due to the time difference, they both decided to show the broadcast the next day, rather than exactly live.

The Downside- The broadcasts are all either on Wednesday or Thursday, and in about a week.

Awesome Idea- A chat room or Skype conversation of LWers from around the world watching the broadcast simultaneously (depending on whether other places are going with the "not quite live" idea or not).

I won't be able to go to most of this, due to work, but am interested in knowing if other people are planning to participate. (And if so, how was the experience?)

Utopian hope versus reality

23 Mitchell_Porter 11 January 2012 12:55PM

I've seen an interesting variety of utopian hopes expressed recently. Raemon's "Ritual" sequence of posts is working to affirm the viability of LW's rationalist-immortalist utopianism, not just in the midst of an indifferent universe, but in the midst of an indifferent society. Leverage Research turn out to be social-psychology utopians, who plan to achieve their world of optimality by unleashing the best in human nature. And Russian life-extension activist Maria Konovalenko just blogged about the difficulty of getting people to adopt anti-aging research as the top priority in life, even though it's so obvious to her that it should be.

This phenomenon of utopian hope - its nature, its causes, its consequences, whether it's ever realistic, whether it ever does any good - certainly deserves attention and analysis, because it affects, and even afflicts, a lot of people, on this site and far beyond. It's a vast topic, with many dimensions. All my examples above have a futurist tinge to them - an AI singularity, and a biotech society where rejuvenation is possible, are clearly futurist concepts; and even the idea of human culture being transformed for the better by new ideas about the mind, belongs within the same broad scientific-technological current of Utopia Achieved Through Progress. But if we look at all the manifestations of utopian hope in history, and not just at those which resemble our favorites, other major categories of utopia can be observed - utopia achieved by reaching back to the conditions of a Golden Age; utopia achieved in some other reality, like an afterlife.

The most familiar form of utopia these days is the ideological social utopia, to be achieved once the world is run properly, according to the principles of some political "-ism". This type of utopia can cut across the categories I have mentioned so far; utopian communism, for example, has both futurist and golden-age elements to its thinking. The new society is to be created via new political forms and new philosophies, but the result is a restoration of human solidarity and community that existed before hierarchy and property... The student of utopian thought must also take note of religion, which until technology has been the main avenue through which humans have pursued their most transcendental hopes, like not having to die.

But I'm not setting out to study utopian thought and utopian psychology out of a neutral scholarly interest. I have been a utopian myself and I still am, if utopianism includes belief in the possibility (though not the inevitability) of something much better. And of course, the utopias that I have taken seriously are futurist utopias, like the utopia where we do away with death, and thereby also do away with a lot of other social and psychological pathologies, which are presumed to arise from the crippling futility of the universal death sentence.

However, by now, I have also lived long enough to know that my own hopes were mistaken many times over; long enough to know that sometimes the mistake was in the ideas themselves, and not just the expectation that everyone else would adopt them; and long enough to understand something of the ordinary non-utopian psychology, whose main features I would nominate as reconciliation with work and with death. Everyone experiences the frustration of having to work for a living and the quiet horror of physiological decline, but hardly anyone imagines that there might be an alternative, or rejects such a lifecycle as overall more bad than it is good.

What is the relationship between ordinary psychology and utopian psychology? First, the serious utopians should recognize that they are an extreme minority. Not only has the whole of human history gone by without utopia ever managing to happen, but the majority of people who ever lived were not utopians in the existentially revolutionary sense of thinking that the intolerable yet perennial features of the human condition might be overthrown. The confrontation with the evil aspects of life must usually have proceeded more at an emotional level - for example, terror that something might be true, and horror at the realization that it is true; a growing sense that it is impossible to escape; resignation and defeat; and thereafter a permanently diminished vitality, often compensated by achievement in the spheres of work and family.

The utopian response is typically made possible only because one imagines that there is a specific alternative to this process; and so, as ideas about alternatives are invented and circulated, it becomes easier for people to end up on the track of utopian struggle with life, rather than the track of resignation, which is why we can have enough people to form social movements and fundamentalist religions, and not just isolated weirdos. There is a continuum between full radical utopianism and very watered-down psychological phenomena which hardly deserve that name, but still have something in common - for example, a person who lives an ordinary life but draws some sustenance from the possibility of an afterlife of unspecified nature, where things might be different, and where old wrongs might be righted - but nonetheless, I would claim that the historically dominant temperament in adult human experience has been resignation to hopelessness and helplessness in ultimate matters, and an absorption in affairs where some limited achievement is possible, but which in themselves can never satisfy the utopian impulse.

The new factor in our current situation is science and technology. Our modern history offers evidence that the world really can change fundamentally, and such further explosive possibilities as artificial intelligence and rejuvenation biotechnology are considered possible for good, tough-minded, empirical reasons, not just because they offer a convenient vehicle for our hopes.

Technological utopians often exhibit frustration that their pet technologies and their favorite dreams of existential emancipation aren't being massively prioritized by society, and they don't understand why other people don't just immediately embrace the dream when they first hear about it. (Or they develop painful psychological theories of why the human race is ignoring the great hope.) So let's ask, what are the attitudes towards alleged technological emancipation that a person might adopt?

One is the utopian attitude: the belief that here, finally, one of the perennial dreams of the human race can come true. Another is denial, which is sometimes founded on bitter experience of disappointment, which teaches that the wise thing to do is not to fool yourself when another new hope comes up to you and cheerfully asserts that this time really is different. Another is to accept the possibility but deny the utopian hope. I think this is the most important interpretation to understand.

It is the one that precedent supports. History is full of new things coming to pass, but they have never yet led to utopia. So we might want to scrutinize our technological projections more closely, and see whether the utopian expectation is based on overlooking the downside. For example, let us contrast the idea of rejuvenation and the idea of immortality - not dying, ever. Taking someone who is 80 and making them biologically 20 is not the same thing as making them immortal. It just means that they won't die of aging, and that when they do die, it will be in a way befitting someone 20 years old. They'll die in an accident, or a suicide, or a crime. Incidentally, we should also note an element of psychological unrealism in the idea of never wanting to die. Forever is a long time; the whole history of the human race is about 10,000 years long. Just 10,000 years is enough to encompass all the difficulties and disappointments and permutations of outlook that have ever happened. Imagine taking the whole history of the human race into yourself; living through it personally. It's a lot to have endured.

It would be unfair to say that transhumanists as a rule are dominated by utopian thinking. Perhaps just as common is a sort of futurological bipolar disorder, in which the future looks like it will bring "utopia or oblivion", something really good or something really bad. The conservative wisdom of historical experience says that both these expectations are wrong; bad things can happen, even catastrophes, but life keeps going for someone - that is the precedent - and the expectation of total devastating extinction is just a plunge into depression as unrealistic as the utopian hope for a personal eternity; both extremes exhibiting an inflated sense of historical or cosmic self-importance. The end of you is not the end of the world, says this historical wisdom; imagining the end of the whole world is your overdramatic response to imagining the end of you - or the end of your particular civilization.

However, I think we do have some reason to suppose that this time around, the extremes are really possible. I won't go so far as to endorse the idea that (for example) intelligent life in the universe typically turns its home galaxy into one giant mass of computers; that really does look like a case of taking the concept and technology with which our current society is obsessed, and projecting it onto the cosmic unknown. But consider just the humbler ideas of transhumanity, posthumanity, and a genuine end to the human-dominated era on Earth, whether in extinction or in transformation. The real and verifiable developments of science and technology, and the further scientific and technological developments which they portend, are enough to justify such a radical, if somewhat nebulous, concept of the possible future. And again, while I won't simply endorse the view that of course we shall get to be as gods, and shall get to feel as good as gods might feel, it seems reasonable to suppose that there are possible futures which are genuinely and comprehensively better than anything that history has to offer - as well as futures that are just bizarrely altered, and futures which are empty and dead.

So that is my limited endorsement of utopianism: In principle, there might be a utopianism which is justified. But in practice, what we have are people getting high on hope, emerging fanaticisms, personal dysfunctionality in the present, all the things that come as no surprise to a cynical student of history. The one outcome that would be most surprising to a cynic is for a genuine utopia to arrive. I'm willing to say that this is possible, but I'll also say that almost any existing reference to a better world to come, and any psychological state or social movement which draws sublime happiness from the contemplation of an expected future, has something unrealistic about it.

In this regard, utopian hope is almost always an indicator of something wrong. It can just be naivete, especially in a young person. As I have mentioned, even non-utopian psychology inevitably has those terrible moments when it learns for the first time about the limits of life as we know it. If in your own life you start to enter that territory for the first time, without having been told from an early age that real life is fundamentally limited and frustrating, and perhaps with a few vague promises of hope, absorbed from diverse sources, to sustain you, then it's easy to see your hopes as, not utopian hopes, but simply a hope that life can be worth living. I think this is the experience of many young idealists in "environmental" and "social justice" movements; their culture has always implied to them that life should be a certain way, without also conveying to them that it has never once been that way in reality. The suffering of transhumanist idealists and other radical-futurist idealists, when they begin to run aground on the disjunction between their private subcultural expectations and those of the culture at large, has a lot in common with the suffering of young people whose ideals are more conventionally recognizable; and it is entirely conceivable that for some generation now coming up, rebellion against biological human limitations will be what rebellion against social limitations has been for preceding generations.

I should also mention, in passing, the option of a non-utopian transhumanism, something that is far more common than my discussion so far would mention. This is the choice of people who expect, not utopia, but simply an open future. Many cryonicists would be like this. Sure, they expect the world of tomorrow to be a great place, good enough that they want to get there; but they don't think of it as an eternal paradise of wish-fulfilment that may or may not be achieved, depending on heroic actions in the present. This is simply the familiar non-utopian view that life is overall worth living, combined with the belief that life can now be lived for much longer periods; the future not as utopia, but as more history, history that hasn't happened yet, and which one might get to personally experience. If I was wanting to start a movement in favor of rejuvenation and longevity, this is the outlook I would be promoting, not the idea that abolishing death will cure all evils (and not even the idea that death as such can be abolished; rejuvenation is not immortality, it's just more good life). In the spectrum of future possibilities, it's only the issue of artificial intelligence which lends some plausibility to extreme bipolar futurism, the idea that the future can be very good (by human standards) or very bad (by human standards), depending on what sort of utility functions govern the decision-making of transhuman intelligence.

That's all I have to say for now. It would be unrealistic to think we can completely avoid the pathologies associated with utopian hope, but perhaps we can moderate them, if we pay attention to the psychology involved.

Talking to Children: A Pre-Holiday Guide

32 [deleted] 20 December 2011 09:54PM

Note: This is based on anecdotal evidence, personal experience (I have worked with children for many years. It is my full-time job.) and "general knowledge" rather than scientific studies, though I welcome any relevant links on either side of the issue.

 


 

The holidays are upon us, and I would guess that even though most of us are atheists, we will still be spending time with our extended families sometime in the next week. These extended families are likely to include nieces and nephews, or other children, that you will have to interact with (probably whether you like it or not...)

Many LW-ers might not spend a lot of time with children in their day-to-day lives, and therefore I would like to make a quick comment on how to interact with them in a way that is conducive to their development. After all, if we want to live in a rationalist world tomorrow, one of the best ways to get there is by raising children who can become rationalist adults. 

PLEASE READ THIS LINK if there are any little girls you will be seeing this holiday season:

How To Talk to Little Girls: http://www.huffingtonpost.com/lisa-bloom/how-to-talk-to-little-gir_b_882510.html?ref=fb&src=sp&comm_ref=false


I know it's hard, but DON'T tell little girls that they look cute, and DON'T comment on their adorable little outfits, or their pony-tailed hair. The world is already screaming at them that the primary thing other people notice and care about for them is their looks. Ask them about their opinions, or their hobbies. Point them toward growing into a well-rounded adult with a mind of her own.

This does not just apply to little girls and their looks, but can be extrapolated to SO many other circumstances. For example, when children (of either gender) are succeeding in something, whether it is school-work or a drawing, DON'T comment on how smart or skilled they are. Instead, say something like: "Wow, that was a really difficult math problem you just solved. You must have studied really hard to understand it!" Have your comments focus on complimenting their hard work and their determination.

By commenting on children's innate abilities, you are setting them up to believe that if they are good at something, it is solely based on talent. Conversely, by commenting on the amount of work or effort that went into their progress, you are setting them up to believe that they need to put effort into things, in order to succeed at them.


This may not seem like a big deal, but I have worked in childcare for many years, and have learned how elastic children's brains are. You can get them to believe almost anything, or have any opinion, JUST by telling them they have that opinion. Tell a kid they like helping you cook often enough, and they will quickly think that they like helping you cook.

For a specific example, I made my first charge like my favorite of the little-kid shows by saying: "Ooo! Kim Possible is on! You love this show!" She soon internalized it, and it became one of her favorites. There is of course a limit to this. No amount of saying "That show is boring", and "You don't like that show" could convince her that Wonderpets was NOT super-awesome.

Visual Map of US LW-ers

14 [deleted] 20 December 2011 07:40AM

Earlier this month, Metus did a post asking for LW-ers' locations. I thought it would be even more useful to have this information in visual format, so I created a Google map. You can access it on the website below. Unfortunately, I am terrible at this post-writing interface, so I can't get the image of the map to load onto here. You'll have to click the link to view it.

Original Post: http://lesswrong.com/r/discussion/lw/8sm/where_do_you_live_meetup_planners_want_to_know/

(If you haven't filled out the poll yet, please do it! If more people submit their location info, I'll add them to the map.)

These are only for US locations that had a 5-digit zip-code, or a city name. I may or may not map the other countries. If someone else wants to volunteer, send me a message, and I'll add you as a collaborator to the map, so you can edit.
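For anyone volunteering to map the remaining entries, the fiddly step is turning a 5-digit zip or city name into coordinates. Here is a minimal sketch of that step in Python using the pgeocode library; the library choice and the example zip codes are my own assumptions, not the tool the original map was built with:

```python
# Hypothetical geocoding step: zip code -> latitude/longitude for plotting.
import pgeocode

geo = pgeocode.Nominatim("us")                 # US postal-code database
for zip_code in ["43210", "10001", "94110"]:   # illustrative 5-digit zips
    rec = geo.query_postal_code(zip_code)
    print(zip_code, rec.place_name, rec.latitude, rec.longitude)
```

The resulting coordinates can then be imported into the map tool of your choice.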

More "Personal" Introductions

9 [deleted] 01 December 2011 06:22AM

One of the things I loved about studying liberal arts is that you actually got to know your professors. They would discuss their personal experiences in a topic ("Here's what I did during the feminist movement.."), you might get slide shows from their vacation in the country of study, or even invited to their house for a group dinner. 

Going into engineering was rather jarring for me in that regard. The vast majority of professors would come to class, lecture on the topic, and that would be it. They might share what their specific field of study was, but they rarely shared any personal details. It actually made it harder for me to learn, because it was like "Who is this person who is talking to me?"

(I think a large part of this for me personally was because I am motivated by a desire to please, and so if I liked my professors, then I wouldn't want to inconvenience them by handing things in late, or bore them by giving them another sub-par paper to read. But that's another discussion...)

I've noticed that Less Wrong is similar in some ways. We may know about each other's views on particular topics, and general fields of study, but we know very little about each other as people, unless a personal topic happens to be related to a particular rationalist study. Even the intro thread set up here focuses mainly on non-personal information.

For example, a Generic Intro post right now would be something like: "I'm X years old. From place Y. The fields I study/want to study are Z. Here's what college/HS was/is like for me. I have akrasia." Pretty boring, right? INSTEAD, the things I would be interested in knowing about my fellow LWers include: "On my time off I enjoy underwater basketweaving and climbing Mt Kilimanjaro. I have 6 young daughters and a dog named Grrr. I love pesto. etc"

From a rational perspective, an argument could be made that it's easier to have constructive arguments that remain civil when you humanize the people you are speaking with. 

 


 

I was wondering how other LWers feel on the subject. Do you like that our discussions are un-hampered by personal data? Do you like the idea of providing personal intros? Do you not want to provide personalish information for safety reasons, or because you don't think it's anyone business?

If you think you might need help writing a personal intro, I wrote a general guide on the topic in the comments below: http://lesswrong.com/lw/8nq/more_personal_introductions/5d4e

Note: I predict there will be two types of response to this post. People discussing how they feel about this (Meta-Comments), and people giving personal introductions (Intros). To make navigating the responses easier, I am trying an experiment where I set up a meta-comment thread and a personal introduction thread. 

PLEASE PLACE COMMENTS ABOUT THIS IDEA IN META-COMMENT THREAD, AND COMMENTS INTRODUCING YOURSELF IN INTRO THREAD.

 

Edited to make it more clear to focus on personality, hobbies, likes/dislikes, and NOT on what you study, or school.
ETA- Added link to "How to Write Personal Intro" comment

Optimal User-End Internet Security (Or, Rational Internet Browsing)

1 [deleted] 09 September 2011 06:23PM

Hacking and Cracking, Internet security, Cypherpunk. I find these topics fascinating as well as completely over my head.

Yet, there are still some things that can be said to a layman, especially by the ever-poignant Randall Munroe:

Password Strength

Password Reuse

I'm guilty on both charges (reusing poorly formulated passwords, not stealing them).
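For a rough sense of the arithmetic behind the password-strength comic, here is a minimal sketch; the wordlist size (2048 words) and guess rate (1000 guesses per second, a plausible weak remote attack) follow the comic's stated assumptions, and the function names are my own:

```python
import math

def entropy_bits(choices_per_symbol: int, num_symbols: int) -> float:
    """Entropy of a secret built from uniformly random, independent picks."""
    return num_symbols * math.log2(choices_per_symbol)

def crack_years(bits: float, guesses_per_second: float = 1000) -> float:
    """Worst-case brute-force time at a given guess rate, in years."""
    return 2 ** bits / guesses_per_second / (3600 * 24 * 365)

# Four random common words from a 2048-word list:
print(entropy_bits(2048, 4))   # 44.0 bits
print(crack_years(44))         # roughly 550 years at 1000 guesses/sec

# A single "complex" password pattern (~28 bits in the comic):
print(crack_years(28) * 365)   # roughly 3 days
```

The point survives the toy model: length and genuine randomness dominate, while character substitutions add far fewer bits than they appear to.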

These arguments may just be the tip of the iceberg of a much larger problem that needs optimizing: social engineering, or mainly how it can be used against our interests (to quote Person 2, "It doesn't matter how much security you put on the box.  Humans are not secure."). I get the feeling that I'm not managing my risks on the Internet as well as I should.

So the questions I ask are: In what ways do our cognitive biases come into play when we surf the Internet and interact with others? Which of these biases can we actively protect against, and how? I've enforced HTTPS when available, as well as kept my Internet use iconoclastic rather than typical, but I doubt that's a comprehensive list.

I don't know how usefully I can contribute, but I hope that many on Less Wrong can.

[Link] Study on Group Intelligence

9 atucker 15 August 2011 08:56AM

Full disclosure: This has already been discussed here, but I see utility in bringing it up again, mostly because I only heard about it offline.

The Paper:

Some researchers were interested in whether, in the same way that there's a general intelligence g that seems to predict competence in a wide variety of tasks, there is a group intelligence c that could do the same. You can read their paper here.

Their abstract:

Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.

Basically, groups with higher social sensitivity, more equal conversational turn-taking, and a higher proportion of females are collectively more intelligent. On top of that, those effects trump things like average IQ or even max IQ.

I theorize that proportion of females mostly works as a proxy for social sensitivity and turn-taking, and the authors speculate the same.
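For intuition about what "a single statistical factor emerges from the correlations" means, here is a minimal sketch with toy data (my own simulation, not the study's); it fabricates group scores driven partly by one latent factor, then checks how much variance the top component of the correlation matrix explains:

```python
import numpy as np

# Toy data: 50 groups scored on 6 tasks, with one latent "c" driving part
# of every score plus independent noise.
rng = np.random.default_rng(0)
latent_c = rng.normal(size=(50, 1))
loadings = rng.uniform(0.5, 1.0, size=(1, 6))
scores = latent_c @ loadings + rng.normal(scale=0.5, size=(50, 6))

# Factor extraction in miniature: eigendecompose the correlation matrix of
# task scores and see how dominant the top component is.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)        # eigenvalues in ascending order
share = eigvals[-1] / eigvals.sum()
print(f"Top factor explains {share:.0%} of the variance across tasks")
```

If the top component explains much more variance than the rest, a single general factor is a reasonable summary of the data; the study's contribution is showing this holds for groups, then asking what predicts the factor.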

Some thoughts:

What does this mean for Less Wrong?

The most important part of the study, IMO, is that "social sensitivity" (measured by a test where you try to discern emotional states from someone's eyes) is such a strong predictor of group intelligence. It probably helps people to gauge other people's comprehension, but based on the fact that people sharing talking time more equally also helps, I would speculate that another chunk of its usefulness comes from being able to tell if other people want to talk, or think that there's something relevant to be said.

One thing that I find interesting in the meatspace meetups is how, in new groups, conversation tends to be dominated by the people who talk the loudest and most insistently. Often, those people are also fairly interesting. However, I prefer the current, older DC group to the newer one, where speaking time is shared much more equally, even though this means that I don't talk as much. Most other people seem to share similar sentiments, to the point that at one early meetup it was explicitly voted to be true that most people would rather talk more.

Solutions/Proposals:

Anything we should try doing about this? I will hold off on proposing solutions for now, but this section will get filled in sometime.

Are committed truthseekers lonelier?

8 spencerth 11 January 2011 09:42AM

People of a truthseeking bent - rationalists, unbiased scientists, inquisitive non-ideologues - are these types of people likely to be lonelier on average? Those who adopt a particular set of positions, tastes, perspectives, worldviews, or preferences in order to be part of a group, rather than the other way around (being considered part of some group because they hold a particular set of positions), seem to be at a significant advantage when it comes to the ability to make and keep friends, or at least find tolerant acquaintances, compared to the typical truthseeker.

The truthseeker, by virtue of their ability to find uncomfortable truths about whatever group they are currently part of or interacting with, is put in the unenviable position of having to either keep quiet and have less-than-completely honest or more limited interactions, or speak their mind and get ostracized. Along with this, they're far less likely to engage in "false flattery", are more likely to focus on details and nuance (and hence be perceived negatively, due to an aversion to pedantry on certain subjects by some), far more likely to voice disagreement, and far more likely to wind up being the person who defends something considered objectionable by the group (they'd defend the proverbial idiot who says the sun will rise tomorrow - since it will, regardless of the fact that an idiot says it.)

The truthseeker may also confuse their interlocutors, due to what may be perceived as "holding contradictory views" ("how can you think THAT if you also think THIS? You don't know what you're talking about"); they may be accused of being a "plant" from the "other side" ("if you think that particular thing, you must secretly be an X, so all that other stuff you said that I agree with must be a lie"); they may be thought of as a troll or prankster ("you're just saying that thing I consider objectionable to get a negative reaction out of me, but I know you really agree with me on that the way you [honestly] agree with me on all that other stuff"), or as playing devil's advocate for its own sake. These things all happen, but due to the (current) inability to know for sure another's motives, it may be easy to confuse the truthseeker with the idiot, the confused/self-contradictory, the plant, the troll, or the advocate, even though the truthseeker's ideas and motives have nothing to do with any of those.

Based on limited observations coupled with a little speculation, I'd say that yes, truthseekers are likely to be lonelier on average. They're likely much rarer, so finding other committed truthseekers would be tough, and there's no guarantee they'd even like each other (for non-truthseeking-related reasons - like not liking the same subjective things (music, fashion, food, etc.)) My personal experience says that one can be professionally and personally well respected, considered extremely friendly, and still have no "real" friends; truthseekers are easy to love, but considered difficult to like.

Perhaps a simpler reason (in the typical case) is that the truthseeker is simply perceived by your typical person as a whole lot less fun.

 

Link: What does it feel like to be stupid?

7 Vladimir_Golovin 10 December 2010 07:43AM

What does it feel like to be stupid?

I had an arterial problem for a couple of years, which reduced blood supply to my heart and brain and depleted B vitamins from my nerves (to keep the heart in good repair). Although there is some vagueness as to the mechanisms, this made me forgetful, slow, and easily overwhelmed. In short I felt like I was stupid compared to what I was used to, and I was.

It was frightening at first because I knew something wasn't right but didn't know what, and very worrying for my career because I was simply not very good any more.

However, once I got used to it and resigned myself, it was great.

Full article:
http://www.quora.com/What-does-it-feel-like-to-be-stupid

Proposal for a structured agreement tool

6 DilGreen 30 September 2010 11:31PM

I hope this is a good place for this - comments/suggestions welcome - offers of collaboration more than welcome!

I envisage a kind of structured wiki, centred around the creation of propositions, which can be linked to allow communities of interest to rapidly come to fairly sophisticated levels of mutual understanding; the aim being to foster the development of strong groups with confidence in shared, conscious positions. This should allow significant confidence in collaboration.

Some aspects, in no particular order;

  • Propositions are made by users, and are editable by users - as in a wiki
  • Each proposition could be templated - the inspiration for the template being the form adopted by Christopher Alexander et al in 'A Pattern Language', namely;
    1. TITLE (referenced)(confidence level)
    2. picture
    3. context - including links to other propositions within whose sphere this one might operate
    4. STATEMENT OF PROBLEM/PURPOSE OF PROPOSITION
    5. Discussion
    6. CONCLUSION - couched in parametric/generic/process based terms
    7. links to other propositions for which this proposition is the context
  • Some mechanism for users to make public their degree of acceptance of each proposition
  • Some mechanism for construction by individuals/groups of networks of propositions specific to particular users/groups  (in other words, the links referred to in 3. and 7. above might be different for different users/groups) These networks can work like Pattern Languages that address particular fields / ethical approaches / political or philosophical positions / projects
  • Some mechanism for assignment by users/groups of tiered structure to proposition networks (to allow for distinctions to be made between fundamental, large scale propositions and more detailed, peripheral ones)
  • Some mechanism for individual users to form associations with other users/established groups who are subscribing to the same propositions
  • Some mechanism for community voting/karma to promote individuals to assume stewardship of groups

Enough of these for now. Some imagined interactions might be more helpful;

  1. I stumble across the site (as I stumbled across LessWrong), and browse proposition titles. I come across one called 'Other people are real, just like me'. It contains some version of the argument for accepting that other humans are to be assumed to have roughly the same motivations, needs et al, as me, and the suggestion that this is a useful founding block for a rational morality. I decide to subscribe, fairly strongly. I am offered a tailored selection of related propositions, as identified by the groups that have included this proposition in their networks (without identification of said groups, I rather think) - I investigate these, and at some point, the system feels that my developing profile is beginning to match that of some group or groups - and offers me the chance to look at their 'mission statement' pages. I decide to come back another day and look at other propositions included in these groups' networks, before going any further. I decline to have my profile made public, so that the groups don't contact me.
  2. I come across some half-baked, but interesting proposition. As a registered user, but not the originator of the proposition, I have some choices;  I can comment on the proposition, hoping to engage in dialogue with the proposer that could be fruitful, or I can 'clone' (or 'fork') the proposition, and seek to improve it myself. Ultimately, the interest of other users will determine the influence and relevance of the proposition.
  3. I am a fundamentalist christian (!). I come across the site, and am appalled at its secular, materialist tone. I make a new proposition; 'The Bible is revealed truth, in all its glory' (or some such twaddle. Of course, I omit to specify which edition, and don't even consider the option of a language other than english - but hey, what do you expect?). Within days, I have assembled a wonderful active group of woolly minded people happily discussing the capacity of Noah's Ark, or whatever. The point here is that the platform is just that - a platform. Human community is a Good Thing.

  4. I am pushed upward by the group I am part of to some sort of moderator role. The system shows various other groups who agree more or less strongly with most of the propositions our group deems fundamental. I contact my opposite number in one of those, and we together make a new proposition which we believe could be a vehicle for discussions that could lead to a merger.
  5. I wish to write a business plan that is not a pile of dead tree gathering dust 6 weeks after it was presented to the board. I attempt to set out the aims of the business as fundamental propositions, and advertise this network to my colleagues, who suggest refinements. On this basis, we work up a description of the important policies and 'business rules' which define the enterprise. These remain accessible and editable, so that they can evolve along with the business.
  6. I am considering an open-source project. I set out the fundamental aims and characteristics of the tool I am proposing, and link them together. The system allows me to set myself up as a group. I sit back and wait for others to comment. Based on these comments, the propositions are refined, others added, relationships built with potential collaborators. At some point, we form a group, and the project gets under way. Throughout its life, the propositions are continually refined and added to. The propositions are a useful form of marketing, and save us a great deal of bother talking to people who want to know what/why/how.

Enough... Point 6 is almost recursive.......
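To make the template and mechanisms above concrete, here is a minimal sketch of how the core records might be modelled; every field and class name here is my own illustrative assumption, not part of the proposal:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Proposition:
    """One editable proposition, following the Pattern Language template."""
    title: str
    confidence_level: float            # community confidence, e.g. 0.0 to 1.0
    picture_url: str
    context_links: List[str]           # titles of propositions this one operates within
    problem_statement: str
    discussion: str
    conclusion: str                    # couched in parametric/process terms
    child_links: List[str] = field(default_factory=list)  # propositions for which this is context

@dataclass
class Subscription:
    """A user's public degree of acceptance of a proposition."""
    user: str
    proposition_title: str
    acceptance: float                  # e.g. -1.0 (reject) to 1.0 (strongly subscribe)

# A group's "network" is then just a tiered selection of linked propositions,
# and the matching in interaction 1 reduces to comparing Subscription lists.
```

The two link fields give the bidirectional context structure of points 3 and 7 in the template, and the Subscription record is one way to realise the "public degree of acceptance" mechanism.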

 

There is more discursive (and older) material, here.

Thanks for reading, and please do comment.