Less Wrong is a community blog devoted to refining the art of human rationality.

[Link] Video using humor to spread rationality

-8 Gleb_Tsipursky 23 November 2016 02:18AM

[Link] Irrationality is the worst problem in politics

-14 Gleb_Tsipursky 21 November 2016 04:53PM

Communication is violent by nature

-22 Djini_Hendrix 14 November 2016 06:36AM

First of all, I got your attention: there's an established agreement that I am going to deliver and you will receive. You are currently playing with me. You and I are communicating through a language; I am going to impose some views, and you are going to keep listening to me. If you stop listening to me, I will lose control. If you don't listen to me, I can't touch all the points I want to transmit to you. You can't just live in your own head; you can't be an island. You have to be in tune with these things I am saying; you have to play the game. Because if you don't, I can't use a wide range of methods to make you do what I want. Because if you don't, I will not be able to trust that you will not do something that can hurt me. Because if you don't, I won't have enough authority to punish you for disrespecting our agreements.

Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas

19 John_Maxwell_IV 12 June 2016 07:38AM

This is a response to ingres' recent post sharing Less Wrong survey results. If you haven't read & upvoted it, I strongly encourage you to--they've done a fabulous job of collecting and presenting data about the state of the community.

So, there's a bit of a contradiction in the survey results.  On the one hand, people say the community needs to do more scholarship, be more rigorous, be more practical, be more humble.  On the other hand, not much is getting posted, and it seems like raising the bar will only exacerbate that problem.

I did a query against the survey database to find the complaints of top Less Wrong contributors and figure out how best to serve their needs.  (Note: it's a bit hard to read the comments because some of them should start with "the community needs more" or "the community needs less", but adding that info would have meant constructing a much more complicated query.)  One user wrote:

[it's not so much that there are] overly high standards,  just not a very civil or welcoming climate . why write content for free and get trashed when I can go write a grant application or a manuscript instead?

ingres emphasizes that in order to revitalize the community, we would need more content.  Content is important, but incentives for producing content might be even more important.  Social status may be the incentive humans respond most strongly to.  Right now, from a social status perspective, the expected value of creating a new Less Wrong post doesn't feel very high.  Partially because many LW posts are getting downvotes and critical comments, so my System 1 says my posts might as well.  And partially because the Less Wrong brand is weak enough that I don't expect associating myself with it will boost my social status.

When Less Wrong was founded, the primary failure mode guarded against was Eternal September.  If Eternal September represents a sort of digital populism, Less Wrong was attempting a sort of digital elitism.  My perception is that elitism isn't working because the benefits of joining the elite are too small and the costs are too large.  Teddy Roosevelt talked about the man in the arena--I think Less Wrong experienced the reverse of the evaporative cooling EY feared, where people gradually left the arena as the proportional number of critics in the stands grew ever larger.

Given where Less Wrong is at, however, I suspect the goal of revitalizing Less Wrong represents a lost purpose.

ingres' survey received a total of 3083 responses.  Not only is that about twice the number we got in the last survey in 2014, it's about twice the number we got in 2013, 2012, and 2011 (and much bigger than the first survey in 2009).  It's hard to know for sure, since previous surveys were only advertised on the LessWrong.com domain, but it doesn't seem like the diaspora thing has slowed the growth of the community a ton, and it may have dramatically accelerated it.

Why has the community continued growing?  Here's one possibility.  Maybe Less Wrong has been replaced by superior alternatives.

  • CFAR - ingres writes: "If LessWrong is serious about it's goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject."  That's exactly what CFAR does.  CFAR is a superior alternative for people who want something like Less Wrong, but more practical.  (They have an alumni mailing list that's higher quality and more active than Less Wrong.)  Yes, CFAR costs money, because doing research costs money!
  • Effective Altruism - A superior alternative for people who want something that's more focused on results.
  • Facebook, Tumblr, Twitter - People are going to be wasting time on these sites anyway.  They might as well talk about rationality while they do it.  Like all those phpBB boards in the 00s, Less Wrong has been outcompeted by the hot new thing, and I think it's probably better to roll with it than fight it.  I also wouldn't be surprised if interacting with others through social media has been a cause of community growth.
  • SlateStarCodex - SSC already checks most of the boxes under ingres' "Future Improvement Wishlist Based On Survey Results".  In my opinion, the average SSC post has better scholarship, rigor, and humility than the average LW post, and the community seems less intimidating, less argumentative, more accessible, and more accepting of outside viewpoints.
  • The meatspace community - Meeting in person has lots of advantages.  Real-time discussion using Slack/IRC also has advantages.

Less Wrong had a great run, and the superior alternatives wouldn't exist in their current form without it.  (LW was easily the most common way people heard about EA in 2014, for instance, although sampling effects may have distorted that estimate.)  But that doesn't mean it's the best option going forward.

Therefore, here are some things I don't think we should do:

  • Try to be a second-rate version of any of the superior alternatives I mentioned above.  If someone's going to put something together, it should fulfill a real community need or be the best alternative available for whatever purpose it serves.
  • Try to get old contributors to return to Less Wrong for the sake of getting them to return.  If they've judged that other activities are a better use of time, we should probably trust their judgement.  It might be sensible to make an exception for old posters that never transferred to the in-person community, but they'd be harder to track down.
  • Try to solve the same sort of problems Arbital or Metaculus is optimizing for.  No reason to step on the toes of other projects in the community.

But that doesn't mean there's nothing to be done.  Here are some possible weaknesses I see with our current setup:

  • If you've got a great idea for a blog post, and you don't already have an online presence, it's a bit hard to reach lots of people, if that's what you want to do.
  • If we had a good system for incentivizing people to write great stuff (as opposed to merely tolerating great stuff the way LW culture historically has), we'd get more great stuff written.
  • It can be hard to find good content in the diaspora.  Possible solution: Weekly "diaspora roundup" posts to Less Wrong.  I'm too busy to do this, but anyone else is more than welcome to (assuming both people reading LW and people in the diaspora want it).
  • EDIT 11/27/16 - Recently people have been arguing that social media generates relatively superficial discussions.  This plausibly undermines my "lost purpose" thesis.

ingres mentions the possibility of Scott Alexander somehow opening up SlateStarCodex to other contributors.  This seems like a clearly superior alternative to revitalizing Less Wrong, if Scott is down for it:

  • As I mentioned, SSC already seems to have solved most of the culture & philosophy problems that people complained about with Less Wrong.
  • SSC has no shortage of content--Scott has increased the rate at which he creates open threads to deal with an excess of comments.
  • SSC has a stronger brand than Less Wrong.  It's been linked to by Ezra Klein, Ross Douthat, Bryan Caplan, etc.

But the most important reasons may be behavioral reasons.  SSC has more traffic--people are in the habit of visiting there, not here.  And the posting habits people have acquired there seem more conducive to community.  Changing habits is hard.

As ingres writes, revitalizing Less Wrong is probably about as difficult as creating a new site from scratch, and I think creating a new site from scratch for Scott is a superior alternative for the reasons I gave.

So if there's anyone who's interested in improving Less Wrong, here's my humble recommendation: Go tell Scott Alexander you'll build an online forum to his specification, with SSC community feedback, to provide a better solution for his overflowing open threads.  Once you've solved that problem, keep making improvements and subfora so your forum becomes the best available alternative for more and more use cases.

And here's my humble suggestion for what an SSC forum could look like:

As I mentioned above, Eternal September is analogous to a sort of digital populism.  The major social media sites often have a "mob rule" culture to them, and people are increasingly seeing the disadvantages of this model.  Less Wrong tried to achieve digital elitism and it didn't work well in the long run, but that doesn't mean it's impossible.  Edge.org has found a model for digital elitism that works.  There may be other workable models out there.  A workable model could even turn into a successful company.  Fight the hot new thing by becoming the hot new thing.

My proposal is based on the idea of eigendemocracy.  (Recommended that you read the link before continuing--eigendemocracy is cool.)  In eigendemocracy, your trust score is a composite rating of what trusted people think of you.  (It sounds like infinite recursion, but it can be resolved using linear algebra.)
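
To make the idea concrete, here is a minimal sketch of how such trust scores could be computed, assuming a simple model in which each person assigns numeric trust ratings to others.  The function name, the example rating matrix, and the normalization scheme are all my own illustrative choices, not part of the eigendemocracy proposal itself:

```python
# Illustrative sketch of eigendemocracy-style trust scores (my assumption
# of the mechanics, not a spec from the post): each person rates others,
# and your trust score is the trust-weighted sum of the ratings you
# receive -- the principal left eigenvector of the rating matrix, which
# power iteration finds without any explicit linear-algebra library.

def trust_scores(ratings, iterations=100):
    """ratings[i][j] = how much person i trusts person j (non-negative)."""
    n = len(ratings)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        # each person's new score is the trust-weighted vote of everyone else
        new = [sum(scores[i] * ratings[i][j] for i in range(n))
               for j in range(n)]
        total = sum(new) or 1.0
        scores = [s / total for s in new]  # normalize so scores sum to 1
    return scores

# Three hypothetical users: 0 and 1 trust each other fully, both trust
# 2 a little, and 2 trusts 0.
ratings = [
    [0.0, 1.0, 0.2],
    [1.0, 0.0, 0.2],
    [1.0, 0.0, 0.0],
]
scores = trust_scores(ratings)
print(scores)  # user 0, trusted by everyone, ends up with the top score
```

The "infinite recursion" the post mentions resolves because repeated iteration converges on a fixed point, the same way PageRank does.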

Eigendemocracy is a complicated idea, but a simple way to get most of the way there would be to have a forum where having lots of karma gives you the ability to upvote multiple times.  How would this work?  Let's say Scott starts with 5 karma and everyone else starts with 0 karma.  Each point of karma gives you the ability to upvote once a day.  Let's say it takes 5 upvotes for a post to get featured on the sidebar of Scott's blog.  If Scott wants to feature a post on the sidebar of his blog, he upvotes it 5 times, netting the person who wrote it 1 karma.  As Scott features more and more posts, he gains a moderation team full of people who wrote posts that were good enough to feature.  As they feature posts in turn, they generate more co-moderators.
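
A toy simulation of this mechanic, using the example numbers above (5 starting karma for Scott, 5 upvotes to feature a post); the class and method names are invented for illustration, not a real forum API:

```python
# Toy model of the proposed karma mechanic: each point of karma grants one
# daily upvote, and a post featured at 5 upvotes earns its author 1 karma,
# growing the pool of trusted curators over time.

FEATURE_THRESHOLD = 5

class Forum:
    def __init__(self):
        self.karma = {"scott": 5}   # Scott seeds the system with 5 karma
        self.votes = {}             # post id -> upvotes received
        self.featured = set()

    def upvote(self, voter, post_id, author, times=1):
        # you can't spend more upvotes than you have karma
        times = min(times, self.karma.get(voter, 0))
        self.votes[post_id] = self.votes.get(post_id, 0) + times
        if self.votes[post_id] >= FEATURE_THRESHOLD and post_id not in self.featured:
            self.featured.add(post_id)
            # featuring a post nets its author 1 karma (one daily upvote)
            self.karma[author] = self.karma.get(author, 0) + 1

forum = Forum()
forum.upvote("scott", "post-1", "alice", times=5)  # Scott features Alice's post
forum.upvote("alice", "post-2", "bob", times=5)    # Alice only has 1 karma,
                                                   # so only 1 upvote lands
print(forum.featured)            # post-1 is featured; post-2 is not yet
print(forum.karma.get("alice"))  # Alice earned 1 karma from being featured
```

Note how curation power compounds: Alice can't single-handedly feature Bob yet, but each post she gets featured adds one more daily vote she can cast.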

Why do I like this solution?

  • It acts as a cultural preservation mechanism.  On reddit and Twitter, sheer numbers rule when determining what gets visibility.  The reddit-like voting mechanisms of Less Wrong meant that the site deliberately kept a somewhat low profile in order to avoid getting overrun.  Even if SSC experienced a large influx of new users, those users would only gain power to affect the visibility of content if they proved themselves by making quality contributions first.
  • It takes the moderation burden off of Scott and distributes it across trusted community members.  As the community grows, the mod team grows with it.
  • The incentives seem well-aligned.  Writing stuff Scott likes or meta-likes gets you recognition, mod powers, and the ability to control the discussion--forms of social status.  Contrast with social media sites where hyperbole is a shortcut to attention, followers, upvotes.  Also, unlike Less Wrong, there'd be no punishment for writing a low quality post--it simply doesn't get featured and is one more click away from the SSC homepage.

TL;DR - Despite appearances, the Less Wrong community is actually doing great.  Any successor to Less Wrong should try to offer compelling advantages over options that are already available.

Cognitive Biases Affecting Self-Perception of Beauty

0 Bound_up 29 May 2016 06:32PM

I wrote an article for mass consumption on the biases which are at play in a hot-button social issue, namely, how people feel about their beauty.

It's supposed to be interesting to people who wouldn't normally care a whit for correcting their biases for the sake of epistemology.


EDIT: Text included below

Long-time friends Amy, Bailey, and Casey are having their weekly lunch together when Amy says “I don’t think I’m very beautiful.”

Have you ever seen something like this? Regardless, before moving on, try to guess what will happen next. What kind of future would you predict?

I’ve often seen such a scene. My experience would lead me to predict... 

“Of course you’re beautiful!” they reassure her. Granted, people sometimes say that just to be nice, but I’ll be talking about those times when they are sincere.

How can Bailey and Casey see Amy as beautiful when Amy doesn’t? Some great insight into beauty, perhaps?

Not at all! Consider what typically happens next.

“I only wish I was as beautiful as you, Amy,” Bailey reassures her.

The usual continuation of the scene reveals that Bailey is just as self-conscious as Amy is, and Casey’s probably the same. All people have this natural tendency, to judge their own appearance more harshly than they do others’.

So what’s going on?

If you were present, I’d ask you to guess what causes us to judge ourselves this way. Indeed, I have so asked from time to time, and found most people blame the same thing.

Think about it; what does everybody blame when people are self-conscious about their beauty?

We blame…

The media! The blasted media and the narrow standard of beauty it imposes.

There are two effects; the media is responsible for only one, and not the one we’re talking about.

Research suggests that the media negatively affects how we judge both ourselves and others. We tend to focus on how it affects our perception of ourselves, but the media affects how we judge others, too. More to the point, that’s not the effect we were talking about!

We were talking about a separate effect, where people tend to judge themselves one way and everyone else another. Is it proper to blame the media for this also? 

Picture what would happen if the media were to blame.

First, everyone assimilates the media’s standard of beauty. They judge beauty by that standard. That’s the theory. So far so good.

What does this cause? They look themselves over in the mirror. They see that they don’t fit the standard. Eventually they sigh, and give up. “I’m not beautiful,” they think.

Check. The theory fits.

But what happens when they look at other people?

Bailey looks at Amy. Amy doesn’t (as hardly anybody does) fit the standard of beauty. So…Bailey concludes that Amy isn’t beautiful?

That’s not what happens! Amy looks fine to Bailey, and vice versa! The media effect doesn’t look like this one. We might get our standard of beauty from the media, but the question remains: why do we hold ourselves to it more than we do everyone else?

We need something that more fully explains why Amy judges herself one way and everyone else another, something mapping the territory of reality.

The Explanation

A combination of two things.

1. Amy’s beauty is very important to her.
2. She knows her looks better than others do.

Amy’s beauty affects her own life. Other people’s beauty doesn’t affect her life nearly as much.

Consider how Amy looks at other people. She sees their features and figure, whatever good and bad parts stand out, a balanced assessment of their beauty. She has no special reason to pay extra attention to their good or bad parts, no special reason to judge them any particular way at all. At the end of the day, it just doesn’t much matter to her how other people look.

Contrast that to how much her appearance matters to her. How we look affects how people perceive us, how we perceive ourselves, how we feel walking down the street. Indeed, researchers have found that the more beautiful we are, the more we get paid, and the more we are perceived as honest and intelligent.

Like for most people, Amy’s beauty is a big deal to her. So which does she pay attention to, the potential gains of highlighting her good points, or the potential losses of highlighting her bad points? Research suggests that she will focus on losses. It’s called loss aversion.

Reason 1: Loss Aversion

We hate losing even more than we love winning. Loss aversion is when we value the same thing differently depending on whether we stand to gain it or risk losing it.

Say someone gives you $1000. They say you can either lose $400 of it now, or try to hold on to it all, 50-50 odds to keep it all or lose it all. What would you do?

Well, studies show about 61% of people in this situation choose to gamble on keeping everything over a sure loss.

Then suppose you get a second deal. You can either keep $600 of your $1000 now, or you can risk losing it all, 50-50 odds again. What would you do?

People tend to like keeping the $600 more in this deal; only 43% choose to gamble.

Do you see the trick?

Losing $400 out of $1000 is the same thing as keeping $600 out of $1000! So why do people like the “keeping” option over the “losing” option? We just tend to focus on avoiding losses, even if it doesn’t make sense.
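
The two offers are arithmetically identical, which a quick check makes explicit (pure arithmetic on the numbers from the text, no assumptions added):

```python
# Both framings of the experiment offer the same two options:
# a sure $600, or a 50-50 gamble between $1000 and $0.

start = 1000

# Frame 1: "lose $400 of it now" vs. the 50-50 gamble.
sure_after_loss = start - 400            # the sure option leaves $600
# Frame 2: "keep $600 of it now" vs. the same 50-50 gamble.
sure_keep = 600                          # the sure option is also $600

gamble_ev = 0.5 * start + 0.5 * 0        # expected value of the gamble: $500

print(sure_after_loss == sure_keep)  # True: the sure options are identical
print(gamble_ev)                     # 500.0 in both frames
```

Yet 61% gamble when the sure option is framed as a loss, versus 43% when it is framed as keeping: the wording alone moves the choice.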

Result for Amy? Given the choice to pay attention to what could make her look good, or to what could make her look bad…

Amy carefully checks on all her flaws each time she looks in the mirror. The balanced beauty assessment that Amy graciously grants others is lost when she views herself. She sees herself as less beautiful than everyone else sees her. 

Plus, whatever has your attention seems more important than what you’re not paying attention to. It’s called attentional bias. It’s a natural fact that if you spend most of the time carefully examining your flaws, and only very little time appreciating your good points, the flaws will tend to weigh heaviest in your mind.

Now, the second reason Amy judges her own beauty under a harsher gaze.

Reason 2: Familiarity

Amy doesn’t just have more cause to look at her flaws, she has more ability to do so.

Who knows you like you? If you paid someone to examine flaw after flaw in you, they wouldn’t know where to look! They’d find one, and then hunt for the next one while all the beautiful parts of you kept getting in the way. There’s that balanced assessment we have when we judge each other’s beauty; there’s a limit to how judgmental we can be even if we’re trying!

Indeed, it takes years, a lifetime, even, to build up the blind spots to beauty, and the checklist of flaws Amy knows by heart. She can jump from one flaw to the next and to the next with an impressive speed and efficiency that would be fantastic if it wasn’t all aimed at tearing down the beauty before her.

Your intimate knowledge of your beauty could just as easily let you appreciate your subtle beauties as your subtle flaws, but thanks to loss aversion, your attention is dialed up to ten and stuck on ruthless judgment.


And so it is. Amy’s loss aversion focuses her attention on flaws. This attentional bias makes her misjudge her beauty for the worse, the handiwork of her emotional self. Then her unique intimacy with her appearance lets her unforgiving judgments strike more overwhelmingly and more piercingly than could her worst enemy. Indeed, in this, she is her own worst enemy.

Since others don’t have the ability to criticize us like we can, and they don’t have any reason to pay special attention to our faults, their attention towards us is more balanced. They see the clearest good and bad things.

The Fix

How can Amy achieve a more natural, balanced view of her beauty? It’s a question which has troubled me at times, as even the most beautiful people I know are so often so down about their looks. How can it be? I’ve often been in that scene offering my assurances, and know well the feeling when my assurances are rejected, and my view of another’s beauty is knocked away and replaced with a gloomier picture. A sense of listless hopelessness advances as I search for a way to show them what I see. How can I say it any better than I already have? How can I make them see...?

If we can avoid the attentional bias on flaws, then we can make up for our loss aversion. We’ll always see ourselves more deeply than most, but we can focus on the good and the bad alike. For every subtle flaw we endure, there is a subtle loveliness we can turn to.

The next time she examines her form and features in the mirror, Amy intentionally switches her attention to appreciating what she likes about herself. She spends as much time on her good points as her bad. She is beginning to see herself with the balance others naturally see her with.

All people can do the same. A balanced attention will counter our natural loss aversion, and let us see ourselves as others already do.

As you practice seeing with new eyes, let the perspective of others remind you what you’re looking for. Allow yourself to accept their perspective of you as valid, and probably more balanced than your own. Your goal to have a balanced perspective may take time, but take comfort in each of the little improvements along the way.

Questions to consider
• What would happen if only the effects of the media were in play without the effects of loss aversion? Or vice versa?
• How can you remember to balance your attention when you look in the mirror?
• What other mistakes might our loss aversion lead us to?
• How else might you achieve a more balanced perspective of yourself?
• Whom do you know that might benefit from understanding these ideas?

Living Metaphorically

24 lukeprog 28 November 2011 03:01PM

Part of the sequence: Rationality and Philosophy

In my last post, I showed that the brain does not encode concepts in terms of necessary and sufficient conditions. So, any philosophical practice which assumes this — as much of 20th century conceptual analysis seems to do — is misguided.

Next, I want to show that human abstract thought is pervaded by metaphor, and that this has implications for how we think about the nature of philosophical questions and philosophical answers. As Lakoff & Johnson (1999) write:

If we are going to ask philosophical questions, we have to remember that we are human... The fact that abstract thought is mostly metaphorical means that answers to philosophical questions have always been, and always will be, mostly metaphorical. In itself, that is neither good nor bad. It is simply a fact about the capacities of the human mind. But it has major consequences for every aspect of philosophy. Metaphorical thought is the principal tool that makes philosophical insight possible, and that constrains the forms that philosophy can take.

To understand how fundamental metaphor is to our thinking, we must remember that human cognition is embodied:

We have inherited from the Western philosophical tradition a theory of faculty psychology, in which we have a "faculty" of reason that is separate from and independent of what we do with our bodies. In particular, reason is seen as independent of perception and bodily movement...

The evidence from cognitive science shows that classical faculty psychology is wrong. There is no such fully autonomous faculty of reason separate from and independent of bodily capacities such as perception and movement. The evidence supports, instead, an evolutionary view, in which reason uses and grows out of such bodily capacities.

Consider, for example, the fact that as neural beings we must categorize things:

We are neural beings. Our brains each have 100 billion neurons and 100 trillion synaptic connections. It is common in the brain for information to be passed from one dense ensemble of neurons to another via a relatively sparse set of connections. Whenever this happens, the pattern of activation distributed over the first set of neurons is too great to be represented in a one-to-one manner in the sparse set of connections. Therefore, the sparse set of connections necessarily groups together certain input patterns in mapping them across to the output ensemble. Whenever a neural ensemble provides the same output with different inputs, there is neural categorization.

To take a concrete example, each human eye has 100 million light-sensing cells, but only about 1 million fibers leading to the brain. Each incoming image must therefore be reduced in complexity by a factor of 100. That is, information in each fiber constitutes a "categorization" of the information from about 100 cells.
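
The many-to-one mapping the quoted passage describes can be illustrated with a toy pooling function (my own sketch, not a model of actual neural wiring): distinct input patterns that pool to the same value become indistinguishable downstream, which is exactly the sense in which the sparse connections "categorize."

```python
# 100 input cells feed each output fiber, so the fiber carries a single
# summary value -- a categorization of its 100 inputs.

def pool(cells, group_size=100):
    """Average each block of `group_size` cells into one fiber signal."""
    return [sum(cells[i:i + group_size]) / group_size
            for i in range(0, len(cells), group_size)]

# Two different 100-cell activation patterns...
a = [1.0] * 50 + [0.0] * 50
b = [0.0] * 50 + [1.0] * 50

print(pool(a) == pool(b))  # True: both collapse to the same fiber value
```

Information that distinguishes the two patterns is lost in the compression; downstream, they belong to the same "category."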

Moreover, almost all our categorizations are determined by the unconscious associative mind — outside our control and even our awareness — as we interact with the world. As Lakoff & Johnson note, "Even when we think we are deliberately forming new categories, our unconscious categories enter into our choice of possible conscious categories."


Yudkowsky's brain is the pinnacle of evolution

-27 Yudkowsky_is_awesome 24 August 2015 08:56PM

Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?

The answer:

Imagine two ant philosophers talking to each other. “Imagine,” they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”

Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we all would agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.

The world was on its way to doom until the day of September 11, 1979, which will later be made a national holiday and will replace Christmas as the biggest holiday. This was of course the day when the most important being that has ever existed or ever will exist was born.

Yudkowsky did the same to the field of AI risk as Newton did to the field of physics. There was literally no research done on AI risk on the scale of what Yudkowsky has done in the 2000s. The same can be said about the field of ethics: ethics was an open problem in philosophy for thousands of years. However, Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone before him. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky gets the Nobel Prize in Literature due to getting recognition from the Hugo Award, a special council will be organized to study the intellect of Yudkowsky, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.

Unless Yudkowsky's brain FOOMs before then, MIRI will eventually build a FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. Actually, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry to start the tiling as soon as possible, and there will be mass suicides because people will want to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he will understand with his vast intellect and accept that it's truly the best thing to do.

Leaving LessWrong for a more rational life

33 [deleted] 21 May 2015 07:24PM

You are unlikely to see me posting here again, after today. There is a saying here that politics is the mind-killer. My heretical realization lately is that philosophy, as generally practiced, can also be mind-killing.

As many of you know I am, or was, running a twice-monthly Rationality: AI to Zombies reading group. One of the bits I desired to include in each reading group post was a collection of contrasting views. To research such views I've found myself listening during my commute to talks given by other thinkers in the field, e.g. Nick Bostrom, Anders Sandberg, and Ray Kurzweil, and people I feel are doing “ideologically aligned” work, like Aubrey de Grey, Christine Peterson, and Robert Freitas. Some of these were talks I had seen before, or generally views I had been exposed to in the past. But looking through the lens of learning and applying rationality, I came to a surprising (to me) conclusion: it was philosophical thinkers that demonstrated the largest and most costly mistakes. On the other hand, de Grey and others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to subject themselves to epistemic mistakes of significant consequences.

Philosophy as the anti-science...

What sort of mistakes? Most often reasoning by analogy. To cite a specific example, one of the core underlying assumptions of the singularity interpretation of super-intelligence is that just as a chimpanzee would be unable to predict what a human intelligence would do or how we would make decisions (aside: how would we know? Were any chimps consulted?), we would be equally inept in the face of a super-intelligence. This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just like string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.

This post is not about the singularity nature of super-intelligence—that was merely my choice of an illustrative example of a category of mistakes that are too often made by those with a philosophical background rather than the empirical sciences: the reasoning by analogy instead of the building and analyzing of predictive models. The fundamental mistake here is that reasoning by analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example and under what conditions it may or may not hold true in a different situation.

A successful physicist or biologist or computer engineer would have approached the problem differently. A core part of being successful in these areas is knowing when it is that you have insufficient information to draw conclusions. If you don't know what you don't know, then you can't know when you might be wrong. To be an effective rationalist, it is often not important to answer “what is the calculated probability of that outcome?” The better first question is “what is the uncertainty in my calculated probability of that outcome?” If the uncertainty is too high, then the data supports no conclusions. And the way you reduce uncertainty is that you build models for the domain in question and empirically test them.
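The author's prescription can be made concrete with a small Monte Carlo sketch (entirely illustrative: the model, parameter ranges, and numbers below are invented for this example, not drawn from the post). Rather than reporting one calculated probability, we propagate the uncertainty in the inputs and report the spread of the output:

```python
import random

random.seed(0)  # deterministic for reproducibility

def simulate_outcome_probability(n_draws=100_000):
    """Propagate input uncertainty into the output probability.

    Instead of plugging in single point estimates, draw each input
    from a range expressing what we don't know, and look at the
    resulting distribution of calculated probabilities.
    """
    samples = []
    for _ in range(n_draws):
        # Hypothetical uncertain inputs, each known only to within
        # roughly an order of magnitude:
        base_rate = random.uniform(0.001, 0.1)   # how often the event starts
        escalation = random.uniform(0.01, 0.9)   # chance it escalates to the outcome
        samples.append(base_rate * escalation)
    samples.sort()
    n = len(samples)
    # Report a median and a 90% interval rather than a single number.
    return samples[n // 2], samples[int(0.05 * n)], samples[int(0.95 * n)]

median, lo, hi = simulate_outcome_probability()
print(f"median p = {median:.4f}, 90% interval = [{lo:.4f}, {hi:.4f}]")
```

When the 90% interval spans an order of magnitude or more, the honest summary is "the data supports no conclusion yet", which is exactly the point being made.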

The lens that sees its own flaws...

Coming back to LessWrong and the sequences. In the preface to Rationality, Eliezer Yudkowsky says his biggest regret is that he did not make the material in the sequences more practical. The problem is in fact deeper than that. The art of rationality is the art of truth seeking, and empiricism is part and parcel of truth seeking. There's lip service done to empiricism throughout, but in all the “applied” sequences relating to quantum physics and artificial intelligence it appears to be forgotten. We get instead definitive conclusions drawn from thought experiments only. It is perhaps not surprising that these sequences seem the most controversial.

I have for a long time been concerned that those sequences in particular promote some ungrounded conclusions. I had thought that while annoying this was perhaps a one-off mistake that was fixable. Recently I have realized that the underlying cause runs much deeper: what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real world experiments) which inevitably results in errors, and the errors I take issue with in the sequences are merely examples of this phenomenon.

And these errors have consequences. Every single day, 100,000 people die of preventable causes, and every day we continue to risk extinction of the human race at unacceptably high odds. There is work that could be done now to alleviate both of these issues. But within the LessWrong community there is actually outright hostility to work that has a reasonable chance of alleviating suffering (e.g. artificial general intelligence applied to molecular manufacturing and life-science research) due to concerns arrived at by flawed reasoning.

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good. One should work to develop one's own rationality, but I now fear that the approach taken by the LessWrong community as a continuation of the sequences may result in more harm than good. The anti-humanitarian behaviors I observe in this community are not the result of initial conditions but the process itself.

What next?

How do we fix this? I don't know. On a personal level, I am no longer sure engagement with such a community is a net benefit. I expect this to be my last post to LessWrong. It may happen that I check back in from time to time, but for the most part I intend to try not to. I wish you all the best.

A note about effective altruism…

One shining light of goodness in this community is the focus on effective altruism—doing the most good to the most people as measured by some objective means. This is a noble goal, and the correct goal for a rationalist who wants to contribute to charity. Unfortunately it too has been poisoned by incorrect modes of thought.

Existential risk reduction, the argument goes, trumps all forms of charitable work because reducing the chance of extinction by even a small amount has far more expected utility than would accomplishing all other charitable works combined. The problem lies in the likelihood of extinction, and the actions selected in reducing existential risk. There is so much uncertainty regarding what we know, and so much uncertainty regarding what we don't know, that it is impossible to determine with any accuracy the expected risk of, say, unfriendly artificial intelligence creating perpetual suboptimal outcomes, or what effect charitable work in the area (e.g. MIRI) is having to reduce that risk, if any.
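To see how input uncertainty can swamp such comparisons, consider a toy expected-value calculation (all figures below are invented for illustration; nothing here comes from MIRI or the post). Equally defensible-looking guesses for the unknown inputs move the answer by four orders of magnitude:

```python
LIVES_AT_STAKE = 7e9  # everyone alive today (toy figure)

def expected_lives_saved(p_extinction, p_intervention_works):
    """Expected lives saved = lives * P(extinction) * P(intervention averts it)."""
    return LIVES_AT_STAKE * p_extinction * p_intervention_works

# Three guesses for the two unknown probabilities, none obviously
# more defensible than the others:
scenarios = {
    "optimistic":  (0.10, 0.10),
    "middling":    (0.01, 0.01),
    "pessimistic": (0.001, 0.001),
}

for name, (p_ext, p_works) in scenarios.items():
    print(f"{name:>11}: {expected_lives_saved(p_ext, p_works):,.0f} expected lives saved")
```

An estimate that swings from thousands to tens of millions of expected lives depending on which guess you start from cannot, by itself, rank one charity above another.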

This is best explored by an example of existential risk done right. Asteroid and cometary impacts are perhaps the category of external (not-human-caused) existential risk which we know the most about, and have done the most to mitigate. When it was recognized that impactors were a risk to be taken seriously, we recognized what we did not know about the phenomenon: what were the orbits and masses of Earth-crossing asteroids? We built telescopes to find out. What is the material composition of these objects? We built space probes and collected meteorite samples to find out. How damaging would an impact be for various material properties, speeds, and incidence angles? We built high-speed projectile test ranges to find out. What could be done to change the course of an asteroid found to be on a collision course? We have executed at least one impact probe and will monitor the effect that had on the comet's orbit, and have on the drawing board probes that will use gravitational mechanisms to move their target. In short, we identified what it is that we don't know and sought to resolve those uncertainties.

How then might one approach an existential risk like unfriendly artificial intelligence? By identifying what it is we don't know about the phenomenon, and seeking to experimentally resolve that uncertainty. What relevant facts do we not know about (unfriendly) artificial intelligence? Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we were to know more about how such agents construct their thought models, and relatedly what language is used to construct their goal systems. We could also stand to benefit from knowing more practical information (experimental data) about in what ways AI boxing works and in what ways it does not, and how much that is dependent on the structure of the AI itself. Thankfully there is an institution that is doing that kind of work: the Future of Life Institute (not MIRI).

Where should I send my charitable donations?

Aubrey de Grey's SENS Research Foundation.

100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.

If you feel you want to spread your money around, here are some non-profits which I have vetted for doing reliable, evidence-based work on singularity technologies and existential risk:

  • Robert Freitas and Ralph Merkle's Institute for Molecular Manufacturing does research on molecular nanotechnology. They are the only group that works on the long-term Drexlerian vision of molecular machines, and they publish their research online.
  • Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.
  • B612 Foundation is a non-profit seeking to launch a spacecraft with the capability to detect, to the extent possible, ALL Earth-crossing asteroids.

I wish I could recommend a skepticism, empiricism, and rationality promoting institute. Unfortunately I am not aware of an organization which does not suffer from the flaws I identified above.

Addendum regarding unfinished business

I will no longer be running the Rationality: From AI to Zombies reading group as I am no longer in good conscience able or willing to host it, or participate in this site, even from my typically contrarian point of view. Nevertheless, I am enough of a libertarian that I feel it is not my role to put up roadblocks to others who wish to delve into the material as it is presented. So if someone wants to take over the role of organizing these reading groups, I would be happy to hand over the reins to that person. If you think that person should be you, please leave a reply in another thread, not here.

EDIT: Obviously I'll stick around long enough to answer questions below :)

[Link] Social Psychology & Priming: Art Wears Off

1 GLaDOS 06 February 2013 10:08AM

Related to: Power of Suggestion

Social Psychology & Priming: Art Wears Off

by Steve Sailer

One of the most popular social psychology studies of the Malcolm Gladwell Era has been Yale professor John Bargh's paper on how you can "prime" students to walk more slowly by first having them do word puzzles that contain a hidden theme of old age through the inclusion of words like "wrinkle" and "bingo." The primed subjects then took one second longer on average to walk down the hall than the unprimed control group. Isn't that amazing! (Here's Gladwell's description of Bargh's famous experiment in his 2005 bestseller Blink.)

This finding has electrified the Airport Book industry for years: Science proves you can manipulate people into doing what you want them to! Why you'd want college students to walk slower is unexplained, but that's not the point. The point is that Science proves that people are manipulable.

Now, a large fraction of the buyers of Airport Books like Blink are marketing and advertising professionals, who are paid handsomely to manipulate people, and to manipulate them into not just walking slower, but into shelling out real money to buy the clients' products.

Moreover, everybody notices that entertainment can prime you in various ways. For instance, well-made movies prime how I walk down the street afterwards. For two nights after seeing the Coen Brothers' No Country for Old Men, I walked the quiet streets swiveling my head, half-certain that an unstoppable killing machine was tailing me. When I came out of Christopher Nolan's amnesia thriller Memento, I was convinced I'd never remember where I parked my car. (As it turned out, I quickly found my car. Why? Because I needed to. But it was fun for thirty seconds to act like, and maybe even believe, that the movie had primed me into amnesia.)

Now, you could say, "That's art, not marketing," but the distinction isn't that obvious to talented directors. Not surprisingly, directors between feature projects often tide themselves over directing commercials. For example, Ridley Scott made Blade Runner in 1982 and then the landmark 1984 ad introducing the Apple Mac at the 1984 Super Bowl.

So, in an industry in which it's possible, if you have a big enough budget, to hire Sir Ridley to direct your next TV commercial, why the fascination with Bargh's dopey little experiment?

One reason is that there's a lot of uncertainty in the marketing and advertising game. Nineteenth Century department store mogul John Wanamaker famously said that half his advertising budget was wasted, he just didn't know which half.

Worse, things change. A TV commercial that excited viewers a few years ago often strikes them as dull and unfashionable today. Today, Scott's 1984 ad might remind people subliminally, from picking up on certain stylistic commonalities, of how dopey Scott's Prometheus was last summer, or how lame the Wachowski siblings' 1984-imitation V for Vendetta was, and Apple doesn't need their computers associated with that stuff.

Naturally, social psychologists want to get in on a little of the big money action of marketing. Gladwell makes a bundle speaking to sales conventions, and maybe they can get some gigs themselves. And even if their motivations are wholly academic, it's nice to have your brother-in-law, the one who makes so much more money than you do doing something boring in the corporate world, excitedly forward you an article he read that mentions your work.

("Priming" theory is also the basis for the beloved concept of "stereotype threat," which seems to offer a simple way to close those pesky Gaps that beset society: just get everybody to stop noticing stereotypes, and the Gaps will go away!)

But why do the marketers love hearing about these weak tea little academic experiments, even though they do much more powerful priming on the job? I suspect one reason is because these studies are classified as Science, and Science is permanent. As some egghead in Europe pointed out, Science is Replicable. Once the principles of Scientific Manipulation are uncovered, then they can just do their marketing jobs on autopilot. No more need to worry about trends and fads.

But, how replicable are these priming experiments?

He then comments on and extensively quotes the Chronicle of Higher Education piece Power of Suggestion by Tom Bartlett, which I linked to at the start of my post. I'm skipping that to jump to the novel part of Steve's post.

Okay, but I've never seen this explanation offered: successful priming studies stop replicating after a while because they basically aren't science. At least not in the sense of having discovered something that will work forever.

Instead, to the extent that they ever did really work, they are exercises in marketing. Or, to be generous, art.

And, art wears off.

The power of a work of art to prime emotions and actions changes over time. Perhaps, initially, the audience isn't ready for it; then it begins to impact a few sensitive fellow artists, and they begin to create other works in its manner and talk it up, and then it becomes widely popular. Over time, though, boredom sets in and people look for new priming stimuli.

For a lucky few old art works (e.g., the great Impressionist paintings), vast networks exist to market them by helping audiences get back into the proper mindset to appreciate the old art (E.g., "Monet was a rebel, up against The Establishment! So, putting this pretty picture of flowers up on your wall shows everybody that you are an edgy outsider, too!").

So, let's assume for a moment that Bargh's success in the early 1990s at getting college students to walk slow wasn't just fraud or data mining for a random effect among many effects. He really was priming early 1990s college students into walking slow for a few seconds.

Is that so amazing?

Other artists and marketers in the early 1990s were priming sizable numbers of college students into wearing flannel lumberjack shirts or dancing the Macarena or voting for Ross Perot, all of which seem, from the perspective of 2013, a lot more amazing.

Overall, it's really not that hard to prime young people to do things. They are always looking around for clues about what's cool to do.

But it's hard to keep them doing the same thing over and over. The Macarena isn't cool anymore, so it would be harder to replicate today an event in which young people are successfully primed to do the Macarena.

So, in the best case scenario, priming isn't science, it's art or marketing.

Interesting hypothesis.

Simple friendliness: Plan B for AI

-16 turchin 09 November 2010 09:28PM

Friendly AI, as Hanson believes, is doomed to failure, since if the friendliness system is too complicated, other AI projects generally will not adopt it. In addition, any system of friendliness may still be doomed to failure, and the more unclear it is, the more likely it is to fail. By fail I mean that it will not be accepted by the most successful AI projects. Thus, the friendliness system should be simple and clear, so it can be spread as widely as possible. I have roughly sketched the principles that could form the basis of a simple friendliness:

1) Everyone should understand that AI can pose a global risk and that a friendliness system is needed. This basic understanding should be shared by the maximum number of AI groups (I think this is already done).

2) The architecture of the AI should be such that it uses rules explicitly (i.e., no genetic algorithms or neural networks).

3) The AI should obey the commands of its creator, and clearly understand who the creator is and what the format of commands is.

4) The AI must comply with all existing criminal and civil laws. These laws are the first attempt to create a friendly AI – in the form of the state. That is, an attempt to describe a good, safe human life using a system of rules (or a system of precedents). The number of volumes of laws and their interpretations speaks to the complexity of this problem – but it has already been solved, and it is not a sin to use the solution.

5) The AI should have no secrets from its creator. Moreover, it is obliged to report all of its thoughts to the creator. This prevents an AI rebellion.

6) Each self-optimization of the AI should be dosed out in portions, under the control of the creator, and after each step a full scan of the system's goals and effectiveness must be run.

7) The AI should be tested in a virtual environment (such as Second Life) for safety and adequacy.

8) AI projects should be registered with centralized oversight bodies and receive safety certification from them.

Such obvious steps do not create an absolutely safe AI (one can figure out how to bypass them), but they make it much safer. In addition, they look quite natural and reasonable, so they could be used, with variations, by any AI project. Most of these steps are fallible, but without them the situation would be even worse. If each step increases safety two times, 8 steps will increase it 256 times, which is good. Simple friendliness is plan B in case mathematical FAI fails.
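The 256× figure checks out arithmetically, but only under an assumption worth stating: the eight measures must fail independently. A minimal sketch (toy numbers):

```python
def combined_failure_probability(p_fail, n_measures):
    """Combined failure probability when each of n independent
    measures halves the chance of failure."""
    return p_fail * (0.5 ** n_measures)

# Worst-case baseline: failure is certain with no safeguards.
p0 = 1.0
p8 = combined_failure_probability(p0, 8)
print(f"residual failure probability: {p8}")  # 1/256 = 0.00390625
print(f"safety factor: {p0 / p8:.0f}x")       # 256x
```

If the measures share a common failure mode (e.g. a creator who disregards all eight rules at once), the factors do not multiply and the combined benefit is much smaller.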
