
[Link] Why won't some people listen to reason?

2 Bound_up 02 February 2017 02:50AM

[Link] The humility argument for honesty

4 Benquo 05 February 2017 05:26PM

Civil resistance and the 3.5% rule

8 morganism 02 February 2017 06:53PM

Interesting, haven't seen anything data-driven like this before...

 

Civil resistance and the 3.5% rule.

https://rationalinsurgent.com/2013/11/04/my-talk-at-tedxboulder-civil-resistance-and-the-3-5-rule/

"no campaigns failed once they’d achieved the active and sustained participation of just 3.5% of the population—and lots of them succeeded with far less than that."

"Then I analyzed the data, and the results blew me away. From 1900 to 2006, nonviolent campaigns worldwide were twice as likely to succeed outright as violent insurgencies. And there’s more. This trend has been increasing over time—in the last fifty years civil resistance has become increasingly frequent and effective, whereas violent insurgencies have become increasingly rare and unsuccessful."

 

Data viz:

http://www.navcodata.org/

 

 

Interesting strategic viewpoint

http://politicalviolenceataglance.org/2016/11/15/how-can-we-know-when-popular-movements-are-winning-look-to-these-four-trends/

1. Size and diversity of participation.

2. Nonviolent discipline.

3. Flexible & innovative techniques: switching between concentrated methods like demonstrations and dispersed methods like strikes and stay-aways.

4. Loyalty shifts: if erstwhile elite supporters begin to abandon the opponent, remain silent when they would typically defend him, refuse to follow orders to repress dissidents, or drag their feet in carrying out day-to-day orders, the incumbent is losing his grip.

 

(observations from article above)

"The average nonviolent campaign takes about 3 years to run its course (that’s more than three times shorter than the average violent campaign, by the way)."

"The average nonviolent campaign is about eleven times larger as a proportion of the overall population as the average violent campaign.

"Nonviolent resistance campaigns are ten times more likely to usher in democratic institutions than violent ones."

 

 

 

original overview and links article:

https://www.theguardian.com/commentisfree/2017/feb/01/worried-american-democracy-study-activist-techniques

 

And a training site, via the Guardian, that has some exercises in group cohesion and communication techniques:

https://www.trainingforchange.org/tools

 

Edit: the article that got me looking into this, on how to strike in a gig economy, and its international reach:

 

http://www.transnational-strike.info/2017/02/01/how-do-we-strike-when-our-boss-is-a-machine-a-software-or-a-chain-struggles-in-the-gig-economy/

[Link] The Mind of an Octopus, Adapted from Other Minds: The Octopus, the Sea and the Deep Origins of Consciousness

3 morganism 16 January 2017 09:07PM

[Link] I'm Not An Effective Altruist Because I Prefer...

5 ozymandias 28 December 2016 10:39PM

Be secretly wrong

28 Benquo 10 December 2016 07:06AM

"I feel like I'm not the sort of person who's allowed to have opinions about the important issues like AI risk."

"What's the bad thing that might happen if you expressed your opinion?"

"It would be wrong in some way I hadn't foreseen, and people would think less of me."

"Do you think less of other people who have wrong opinions?"

"Not if they change their minds when confronted with the evidence."

"Would you do that?"

"Yeah."

"Do you think other people think less of those who do that?"

"No."

"Well, if it's alright for other people to make mistakes, what makes YOU so special?"

A lot of my otherwise very smart and thoughtful friends seem to have a mental block around thinking on certain topics, because they're the sort of topics Important People have Important Opinions around. There seem to be two very different reasons for this sort of block:

  1. Being wrong feels bad.
  2. They might lose the respect of others.

Be wrong

If you don't have an opinion, you can hold onto the fantasy that someday, once you figure the thing out, you'll end up having a right opinion. But if you put yourself out there with an opinion that's unmistakably your own, you don't have that excuse anymore.

This is related to the desire to pass tests. The smart kids go through school and are taught - explicitly or tacitly - that as long as they get good grades they're doing OK, and if they try at all they can get good grades. So when they bump up against a problem that might actually be hard, there's a strong impulse to look away, to redirect to something else. So they do.

You have to understand that this system is not real, it's just a game. In real life you have to be straight-up wrong sometimes. So you may as well get it over with.

If you expect to be wrong when you guess, then you're already wrong, and paying the price for it. As Eugene Gendlin said:

What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it.

What you would be mistaken about, you're already mistaken about. Owning up to it doesn't make you any more mistaken. Not being open about it doesn't make it go away.

"You're already "wrong" in the sense that your anticipations aren't perfectly aligned with reality. You just haven't put yourself in a situation where you've openly tried to guess the teacher's password. But if you want more power over the world, you need to focus your uncertainty - and this only reliably makes you righter if you repeatedly test your beliefs. Which means sometimes being wrong, and noticing. (And then, of course, changing your mind.)

Being wrong is how you learn - by testing hypotheses.

In secret

Getting used to being wrong - forming the boldest hypotheses your current beliefs can truly justify so that you can correct your model based on the data - is painful and I don't have a good solution to getting over it except to tough it out. But there's a part of the problem we can separate out, which is - the pain of being wrong publicly.

When I attended a Toastmasters club, one of the things I liked a lot about giving speeches there was that the stakes were low in terms of the content. If I gave a presentation at work, I had to worry not only about my generic presentation skills, but also whether the way I was presenting it was a good match for my audience, and also whether the idea I was pitching was a good strategic move for the company or my career, and also whether the information I was presenting was accurate. At Toastmasters, all the content-related stakes were gone. No one with the power to promote or fire me was present. Everyone was on my side, and the group was all about helping each other get better. So all I had to think about was the form of my speech.

Once I'd learned some general presentation skills at Toastmasters, it became easier to give talks where I did care about the content and there were real-world consequences to the quality of the talk. I'd gotten practice on the form of public speaking separately - so now I could relax about that, and just focus on getting the content right.

Similarly, expressing opinions publicly can be stressful because of the work of generating likely hypotheses, and revealing to yourself that you are farther behind in understanding things than you thought - but also because of the perceived social consequences of sounding stupid. You can at least isolate the last factor, by starting out thinking things through in secret. This works by separating epistemic uncertainty from social confidence. (This is closely related to the dichotomy between social and objective respect.)

Of course, as soon as you can stand to do this in public, that's better - you'll learn faster, you'll get help. But if you're not there yet, this is a step along the way. If the choice is between having private opinions and having none, have private opinions. (Also related: If we can't lie to others, we will lie to ourselves.)

Read and discuss a book on a topic you want to have opinions about, with one trusted friend. Start a secret blog - or just take notes. Practice having opinions at all - opinions you can be wrong about - before you worry about being accountable for them. One step at a time.

Before you're publicly right, consider being secretly wrong. Better to be secretly wrong, than secretly not even wrong.

(Cross-posted at my personal blog.)

Canons (What are they good for?)

9 Benquo 13 December 2016 09:34PM

People in the Effective Altruist and Rationalist intellectual communities have been discussing moving discourse back into the public sphere lately. I agree with this goal and want to help. There are reasons to think that we need not only public discourse, but public fora. One reason is that there's value specifically in having a public set of canonical writing that members of an intellectual community are expected to have read. Another is that writers want to be heard, and on fora where people can easily comment, it's easier to tell whether people are listening and benefiting from your writing.

This post begins with a brief review of the case for public discourse. For reasons I hope to make clear in an upcoming post, I encourage people who want to comment on that to click through to the posts I linked to by Sarah Constantin and Anna Salamon. For another perspective you can read my prior post on this topic, Be secretly wrong. The second section explores the case for a community canon, suggesting that there are three distinct desiderata that can be optimized for separately.

This is an essay exploring and introducing a few ideas, not advancing an argument.

Why public discourse?

People have been discussing moving discourse back into the public sphere lately. Sarah Constantin has argued that public criticism-friendly discussion is important for truth-seeking and creating knowledge capital:

There seems to have been an overall drift towards social networks as opposed to blogs and forums, and in particular things like:

  • the drift of political commentary from personal blogs to “media” aggregators like The Atlantic, Vox, and Breitbart
  • the migration of fandom from LiveJournal to Tumblr
  • The movement of links and discussions to Facebook and Twitter as opposed to link-blogs and comment sections

[...]

But one thing I have noticed personally is that people have gotten intimidated by more formal and public kinds of online conversation.  I know quite a few people who used to keep a “real blog” and have become afraid to touch it, preferring instead to chat on social media.  It’s a weird kind of locus for perfectionism — nobody ever imagined that blogs were meant to be masterpieces.  But I do see people fleeing towards more ephemeral, more stream-of-consciousness types of communication, or communication that involves no words at all (reblogging, image-sharing, etc.)  There seems to be a fear of becoming too visible as a distinctive writing voice.

For one rather public and hilarious example, witness Scott Alexander’s  flight from LessWrong to LiveJournal to a personal blog to Twitter and Tumblr, in hopes that somewhere he can find a place isolated enough that nobody will notice his insight and humor. (It hasn’t been working.)

[...]

A blog is almost a perfect medium for personal accountability. It belongs to you, not your employer, and not the hivemind.  The archives are easily searchable. The posts are permanently viewable. Everything embarrassing you’ve ever written is there.  If there’s a comment section, people are free to come along and poke holes in your posts. This leaves people vulnerable in a certain way. Not just to trolls, but to critics.

[...]

We talk a lot about social media killing privacy, but there’s also a way in which it kills publicness, by allowing people to curate their spaces by personal friend groups, and retreat from open discussions.   In a public square, any rando can ask an aristocrat to explain himself.  If people hide from public squares, they can’t be exposed to Socrates’ questions.

I suspect that, especially for people who are even minor VIPs (my level of online fame, while modest, is enough to create some of this effect), it’s tempting to become less available to the “public”, less willing to engage with strangers, even those who seem friendly and interesting.  I think it’s worth fighting this temptation.  You don’t get the gains of open discussion if you close yourself off.  You may not capture all the gains yourself, but that’s how the tragedy of the commons works; a bunch of people have to cooperate and trust if they’re going to build good stuff together.  And what that means, concretely, on the margin, is taking more time to explain yourself and engage intellectually with people who, from your perspective, look dumb, clueless, crankish, or uncool.

Some of the people I admire most, including theoretical computer scientist Scott Aaronson, are notable for taking the time to carefully debunk crackpots (and offer them the benefit of the doubt in case they are in fact correct.)  Is this activity a great ROI for a brilliant scientist, from a narrowly selfish perspective?  No. But it’s praiseworthy, because it contributes to a truly open discussion. If scientists take the time to investigate weird claims from randos, they’re doing the work of proving that science is a universal and systematic way of thinking, not just an elite club of insiders.  In the long run, it’s very important that somebody be doing that groundwork.

Talking about interesting things, with friendly strangers, in a spirit of welcoming open discussion and accountability rather than fleeing from it, seems really underappreciated today, and I think it’s time to make an explicit push towards building places online that have that quality.

In that spirit, I’d like to recommend LessWrong to my readers. For those not familiar with it, it’s a discussion forum devoted to things like cognitive science, AI, and related topics, and, back in its heyday a few years ago, it was suffused with the nerdy-discussion-nature. It had all the enthusiasm of late-night dorm-room philosophy discussions — except that some of the people you’d be having the discussions with were among the most creative people of our generation.  These days, posting and commenting is a lot sparser, and the energy is gone, but I and some other old-timers are trying to rekindle it. I’m crossposting all my blog posts there from now on, and I encourage everyone to check out and join the discussions there.

I agree that we need to move more discussion back into enduring public media so that we can make stable intellectual progress, and outsiders can catch up if they have something to contribute - especially if we're wrong about something they know about. And Anna Salamon's also suggested that common fora such as LessWrong are an especially important means of creating a single conversation:

We need to think about [...] everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.

One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.

One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

We have lately ceased to have a "single conversation" in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

It seems to me, moreover, that Less Wrong used to be such a locus, and that it is worth seeing whether Less Wrong or some similar such place may be a viable locus again. [...]

I suspect that most of the value generation from having a single shared conversational locus is not captured by the individual generating the value (I suspect there is much distributed value from having "a conversation" with better structural integrity / more coherence, but that the value created thereby is pretty distributed). Insofar as there are "externalized benefits" to be had by blogging/commenting/reading from a common platform, it may make sense to regard oneself as exercising civic virtue by doing so, and to deliberately do so as one of the uses of one's "make the world better" effort. (At least if we can build up toward in fact having a single locus.)

My initial thinking was that we should just move from the ephemeral medium of Facebook to people having their own personal blogs. The nice thing about a blogosphere as a mode of discourse is that community boundaries aren't a huge deal - if you think someone's unhelpful, there's nowhere you have to boot them out from - you just stop reading their stuff and linking to them. But this interferes with the "single canon" approach.

What are canons good for?

When I hear people talk about getting communities to read the same things, they often bring up very different sorts of benefits. So far I count three very different things they mean:

  1. Common basic skills and norms
  2. The shoulders of giants
  3. Synchronized discussions

Common basic skills and norms

Some have pointed to LessWrong's Sequences - a series of blog posts by Eliezer Yudkowsky on the art of human rationality - as an example of the kind of text that should be a community canon. I do think that the extended LessWrong community has benefited from internalizing the insights laid out in the Sequences. Lots of cognitive wasted motions common elsewhere seem less common in this community because we know better.

This kind of canon plays an analogous role to professional training among, say, engineers - or in a liberal arts education. You don't necessarily expect a liberally educated person to know each particular book you reference, but you do expect them to know what math, and music, and literature, and philosophy are like on the inside. Not every "liberal arts college" does this anymore, but some still do, and people who get it can recognize each other and have conversations that are simply unavailable between us and people without that background. Not because the material is impossible to communicate, but because there are way too many steps, and because it's not just about getting across a particular argument. It's about different ways of perceiving the world.

Similarly, if I'm talking to someone who has read and understood the Sequences, there are places we can go, quickly, that it might take a long time to explore with someone else who hasn't stumbled across the same insights elsewhere.

This benefit from having a canon requires a substantial investment. For this reason, taking an existing community and pushing a canon on it seems unlikely to work very well without a very large investment. Traditional Western academia did this for a while, but that depended on the authority of existing elite scholars, and a large, hierarchical system that had a near monopoly on literacy and concomitant employment. Judaism seems in some large part formed by its canon, but the process that knit together a tight literary core ended up interposing layers of commentary beginning with the written Talmud in between Jews and the text of the Bible itself.

Taking a canon and forming a community around it, composed of people who find it compelling, seems more tractable right now. LessWrong coalesced around the Sequences (and later, the CFAR workshops). Objectivism coalesced around The Fountainhead and Atlas Shrugged. My alma mater, St. John's College, reformed as a community around the New (Great Books) Program and the students and tutors it attracted.

This suggests to me that the thing to do is to figure out how to teach the skills and norms you want in your community, and see who joins.

The shoulders of giants

A second reason for a canon is so that we don't have to retread old ground. This is not so much about the unarticulated, holistic seeing that comes from having read a corpus of text together, but about having common knowledge of specific accumulated insights.

This is why academics publish in journals, and typically begin papers by reviewing (and citing) prior work on the subject. For published discourse, the academic journal model does not really depend at all on having a single uniform canon. Instead, it relies on common availability of prior sources. A norm of citing, linking, or otherwise directing the reader to prior work is more adaptable to this purpose, because it does not exclude outsiders as fully as a community where everyone is expected to have already learned the thing.

I've tried to follow something like these practices myself, linking to prior work on a subject that's influenced me when I'm aware of it. Much of the Rationalist and EA blogosphere works on this model at least sometimes. But I can think of one thing that could be useful for the academic journal model that doesn't yet exist in the Rationalist or EA communities: a stable archive of prior work, that brings the different sources together - blog posts, academic papers, and personal web pages that advance the Rationalist project. Right now, there's no search capability here, no Google Scholar equivalent to look up "works citing this one" or "works cited by this one." (If you want to advance this project, let me know if you want me to put you in touch with others interested in making this happen.)
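
As a purely illustrative sketch of the core lookup such an archive would need - the class name and the sample entries below are hypothetical, not an existing tool - a citation graph only has to answer two queries, "works cited by this one" and "works citing this one":

from collections import defaultdict

class CitationIndex:
    # Toy bidirectional citation index: forward ("cites") and reverse ("cited by") lookups.
    def __init__(self):
        self.cites = defaultdict(set)     # work -> works it cites
        self.cited_by = defaultdict(set)  # work -> works that cite it

    def add_citation(self, citing, cited):
        self.cites[citing].add(cited)
        self.cited_by[cited].add(citing)

    def works_cited_by(self, work):
        return sorted(self.cites[work])

    def works_citing(self, work):
        return sorted(self.cited_by[work])

# Example with made-up entries:
index = CitationIndex()
index.add_citation("Canons (What are they good for?)", "Be secretly wrong")
index.add_citation("Canons (What are they good for?)", "On the importance of Less Wrong")
print(index.works_citing("Be secretly wrong"))

The hard part, of course, is not the data structure but collecting and curating the links across blogs, academic papers, and personal pages.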

Synchronized discussions

When a TV show becomes popular, people who like it often come together to watch it, or discuss each episode, sharing their reactions to it and speculating about characters' motivations or what will happen next. This sort of agenda-setting makes it much easier to have large, complex conversations about such things, while the text (in this case, the episode) is still fresh in everyone's mind. Serialized stories such as Harry Potter and the Methods of Rationality, or the original Harry Potter series, or Unsong, have a similar coordinating effect. If your community's largely following the same texts at the same time, then when you meet a stranger at a party, you don't get stuck talking about the weather.

Getting everyone looking at the same thing at the same time can also spark productive disagreement. If something will be out in public forever, you can put off commenting on it. But if now's the time for everyone to talk about it, now's your only chance to speak up if someone is wrong on the internet.

Tiers and volleys

The synchronization of conversations can be a powerful force for extracting additional value from the intellectual labor people are already doing, and getting them to share their perspectives more promptly and publicly. But if an intellectual community doesn't have people going off on their own, doing self-directed work of the kind that can lead to more academic journal style discourse, then it won't produce deep original work of the kind that may be needed to steer the world in a substantially better direction.

Creating a sort of community "TV show" and a forum for people to comment on it is the least expensive way to extract additional public value from the intellectual activity already going on. Slate Star Codex has to some degree taken over LessWrong's role as a community hub, and provides a good starting point - its author, Scott Alexander, was kind enough to link to a few posts in the attempted LessWrong renaissance, and perhaps will do so again if and when that or some similar effort shows substantially more progress. But I don't expect this on its own to lead to the kind of deep intellectual progress we need.

Some people are already doing work at something more like an academic tempo, doing a fair amount of research on their own, and sharing what they find afterwards. Building a better archive/repository of existing work seems like it could substantially increase the impact of the work people are already doing. And if done well, it could lead to an increase in truly generative, deep work - and maybe even more importantly, less progress lost to the sands of time.

I expect building something like academic journals for the community, and persuading more people to do this sort of intellectual labor, to be substantially slower work, at least if done right, and it will only be worth it if done right. It will require many people to invest substantial intellectual effort, though hopefully they'd want to think deeply about things anyway.

Creating a common conceptual vocabulary, skills, and norms, by contrast, can be very expensive. A full-blown liberal arts education is famously pricey. The Sequences took a year of Eliezer Yudkowsky's time, and I don't think he worked on much else. CFAR has several full-time employees who've been working at it for years. This approach - especially the high-touch version where you educate people in person - is to be used sparingly, when you have a strong reason to believe that you can produce a large improvement that way.

(Cross-posted at my personal blog.)

Traditions and Rationality.

7 NatashaRostova 10 December 2016 05:08AM

A couple months ago I read a post on Facebook about how perhaps more young female virgins should sell their virginity, if they receive a lot of money. It was based on this article about a young woman selling her virginity for 120k.


What bugs me is that these cases are often lazy in assuming there aren't incredibly complex systems lurking behind these simple calculations.

 


If you wanted to be rational about this, you could map your perception of this story to dollars, take the situation as well specified, and estimate what a woman ought to do (or at least seriously consider) given those circumstances. For the sake of argument let's assume that the news story is totally accurate, and that it's a real decision that is available to all young women. Given this, would this analysis be robust?


[Edit: Daniel_Burfoot makes a fair point that I shouldn't cite facebook posts as they are supposed to have a semblance of privacy. Since my argument doesn't rely on the specific post made by EY, I abstracted it away. This is why his name is in the comments.]


About 53 years ago Karl Popper wrote about the hostility between tradition and rationality in an essay in Conjectures and Refutations. In a passage that could have come from Less Wrong he wrote “There is a traditional hostility between rationalism and traditionalism. Rationalists are inclined to adopt the attitude: 'I am not interested in tradition. I want to judge everything on its own merits; I want to find out its merits and demerits, and I want to do this quite independently of any tradition. I want to judge it with my own brain, and not with the brains of other people who lived long ago.'”


That's kinda the tone set by some rationalists. Actually, I think more often than not it's the right way to study certain problems in rationality. Does it always work though? I'm skeptical.


Popper framed this problem as rationalists vs. traditionalists. He didn't claim to know the answer or take a side, but did argue that rationalists were sometimes too dismissive of tradition without at least critically examining it. What even is tradition though? Well, about 31 years after Popper's article a computer scientist, R. G. Reynolds, wrote a paper on culture as an algorithm. I'm going to go out on a limb and say it's an accepted model for the crowd reading this argument. Based on my own casual observations of culture, it's easy (or at least feels easy) to intuitively understand why some cultural rules are formed. It's particularly nice when the rule is based on something hidden at the time that we can directly observe now, like how pork is forbidden by some religions, which we now know is due to trichinosis caused by parasites in pork.


Sometimes it's harder. The evolution of sexual norms is complicated. It appears to be the lowest-level code, both genetically and culturally. If you pull on a string you never really know what's going to happen. It seems a reasonable claim that the distributed filtering method of a cultural algorithm could, in theory, optimize over norms and dimensions that are too complicated for us to intuit or hold in our heads. I don't want to come across as too nihilistic, though: once we figure out that female genital mutilation is horrible, we should encourage people to stop (that is its own challenge).
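
For concreteness, here is a minimal sketch in the spirit of Reynolds' cultural-algorithm idea - not his code, and with toy parameters of my own choosing: a population tries candidate "norms", the best performers update a shared belief space, and that belief space then biases what the next generation tries.

import random

def fitness(norm):
    # Toy objective standing in for "how well a norm works in practice";
    # higher is better, with a peak at (1.0, 2.0).
    return -((norm[0] - 1.0) ** 2 + (norm[1] - 2.0) ** 2)

def cultural_algorithm(pop_size=30, dims=2, generations=50, accepted_frac=0.2):
    population = [[random.uniform(-10, 10) for _ in range(dims)] for _ in range(pop_size)]
    belief_space = [(-10.0, 10.0)] * dims  # normative knowledge: per-dimension ranges
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        # Acceptance: only the best individuals get to update the shared norms.
        accepted = population[:max(1, int(accepted_frac * pop_size))]
        belief_space = [(min(ind[d] for ind in accepted), max(ind[d] for ind in accepted))
                        for d in range(dims)]
        # Influence: new individuals are drawn mostly from within the current norms,
        # with some noise so the "culture" can still be revised.
        population = accepted + [[random.uniform(lo, hi) + random.gauss(0, 0.5)
                                  for lo, hi in belief_space]
                                 for _ in range(pop_size - len(accepted))]
    return max(population, key=fitness), belief_space

best, norms = cultural_algorithm()
print("best norm found:", best, "belief space:", norms)

The point of the sketch is only that the filtering is distributed: no individual has to understand why the surviving ranges are good for them to propagate anyway.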


Sometimes these algorithms run crazy weird experiments. I was on vacation last year visiting the Yucatan state in Mexico, and saw the sacrificial wells of Chichen Itza. I don't remember their specific rules, but they'd drown a young virgin to encourage rain for their crops. What is creepy is that it is a very rational and reasonable experiment, even though they weren't acutely aware they were running an experiment. If killing a single person could have a low chance of improving the rain, well, you need to do it or test it, at least until you're sure it doesn't work. And, hey, life is weird enough. If sacrificing people had some impact on the cosmos it wouldn't have been that much weirder than volcanoes. At least at the time. My point is there are some intensely complex dynamics at play that might be hidden to our brains.


Thinking through this stuff is hard sometimes, because we view ourselves as having an unclouded vision of sexuality as we contrast ourselves to basically everyone else who isn't a well educated person living on the West Coast of the U.S. in 2016 (and if not with us geographically, with us in spirit). And again, I'm not trying to take some post-colonialist (I can't believe I used that term) view that 'all cultures are equally valid.' If we view gay marriage or acceptance as an experiment, the prediction that “nothing bad will happen except lots of people will be happier, with some who won't be at first but will eventually move on” seems to have been the right prediction.


If everyone adopted the Less Wrong framework, would sexuality drastically change? Or are we a heterogeneous group who self-selected into this because we are more capable than most of reprogramming our brains? Or a hard-to-predict combination of the two?

I suspect, even if we've never considered it, we all have some hard barriers in our brain that we wouldn't cross. Most people seem to be programmed to find incest repulsive. Obviously some edge cases have existed that can override that programming, but I doubt most people could, even if they wanted to (whatever that means).


The point I'm trying to get to is that we don't fully understand the limits of human rational analysis of sex and other biological constraints. There could be strange unintended societal consequences if there were an experimental shift towards more young women selling their bodies. We don't know how their families would react on an aggregate scale. We can still ask if it's an experiment worth advocating for, as a society, but is it?


We don't know exactly why a cultural algorithm developed for us to want to protect our young daughters from prostitution. If a tradition intuitively sounds outdated and can be overridden by rationalist analysis, maybe it is, or maybe there is a level of complexity in the societal equilibrium that we are completely unable to predict. We shouldn't have hubris when dismissing tradition as clearly outdated, clearly wrong, clearly beneath us.


The true degree of our emotional disconnect

4 siIver 31 October 2016 07:07PM

If I said that human fears are irrational, because you are probably more afraid of sleeping in an abandoned house than of driving to work, I would hardly be covering new ground. I myself thought I had understood this well before finding LessWrong: some threats are programmed by evolution to be scary, so we are greatly afraid of them; some threats aren't, so we are only a little bit afraid of those. Simple enough.

 

But is that actually true? Am I, in fact, afraid of those threats? Am I actually afraid, at all, of dying while traveling, of Climate Change, nuclear war, or unfriendly AI?

 

The answers are no, a little bit, just barely, and nope, and the reason for that 'barely' has nothing to do with the actual scope of the problem, but rather with an ability to roughly visualize (accurately or not) the event, due to its use in the media. As for Climate Change, the sole reason why I am somewhat afraid is that I've been telling myself for the better part of my life that it is by far humanity's biggest problem.

 

In truth, the scope of a problem doesn't seem to have a small impact on our sensitivities; rather, it seems to have none. And this is a symptom of a far more fundamental problem. The inspiration for writing this came when I pondered the causes of Signaling. Kaj_Sotala opens his article The Curse of Identity with the following quote:

 

So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?

 

I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work.

 

The reason for this, I realized, is not that the motivation of Signaling – to appear to be the sort of person who does certain stuff – is larger than I had thought, but that the motivation to do the thing it is based on is virtually non-existent outside the cognitive level. If I visualize a goal I have right now, then I don't seem to feel any emotional drive to be working on it. At all. It is really a bit scary.

 

The common approach to dealing with Signaling seems to be either to overrule emotional instincts with cognitive choices, or to attempt a compromise, finding ways to reward status-seeking instincts with actions that also help pursue the respective cognitive goal. But, if it is true that we are starting from zero, why not instead try to create emotional attachment, as I did with Climate Change?

 

I will briefly raise the question of whether being more afraid of significant threats is actually a good thing. I have heard the argument that it is bad, given that fear causes irrationality and hasty decision making, which I'd assess to be true in a very limited context, but not when applied to life decisions with sufficient time. As with every problem of map and territory, I think it would be nice if the degree to which one is afraid had some kind of correlation to reality, which often enough isn't the case. A higher amount of rational fear may also cause a decrease in irrational fear. Maybe. I don't know. If you have no interest in raising fear of rational threats, I'd advise skipping the final paragraph.

 

Take a moment to try and visualize what will happen in the case of unfriendly AI – or another X-risk of your choice. Do it in a concrete way. Think through the steps that might occur, that would result in your death. Would you have time to notice it? Would there be panic? An uprising? Chaos? You may be noticing now how hard it is to be afraid, even if you are trying, and even if the threat is so real. Or maybe you succeeded. Maybe it can be a source of motivation for you. Because the other way doesn't work. Attempting to establish a connection between a goal's end and an emotional reward fails due to the goal's distance. You want to achieve the goal, not the first step that would lead you there. But fear doesn't have this problem. Fear will motivate you immediately, without caring that the road is long.

[Link] The Non-identity Problem - Another argument in favour of classical utilitarianism

2 casebash 18 October 2016 01:41PM
