
The true degree of our emotional disconnect

4 siIver 31 October 2016 07:07PM

If I said that human fears are irrational, because you are probably more afraid of sleeping in an abandoned house than of driving to work, I would hardly be covering new ground. I thought I had understood this well before finding LessWrong: some threats are programmed by evolution to be scary, so we are greatly afraid of them; some threats aren't, so we are only a little afraid of those. Simple enough.


But is that actually true? Am I, in fact, afraid of those threats? Am I actually afraid, at all, of dying while traveling, of Climate Change, of nuclear war, or of unfriendly AI?


The answers are no, a little bit, just barely, and nope, and the reason for that 'barely' has nothing to do with the actual scope of the problem, but rather with an ability to roughly visualize (accurately or not) the event, thanks to its frequent portrayal in the media. As for Climate Change, the sole reason why I am somewhat afraid is that I've been telling myself for the better part of my life that it is by far humanity's biggest problem.


In truth, the scope of a problem doesn't seem to have a small impact on our sensitivities; rather, it seems to have none at all. And this is a symptom of a far more fundamental problem. The inspiration for writing this came when I pondered the causes of Signaling. Kaj_Sotala opens his article The Curse of Identity with the following quote:


So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?


I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work.


The reason for this, I realized, is not that the motivation of Signaling – to appear to be the sort of person who does certain things – is stronger than I had thought, but that the motivation to do the thing it is based on is virtually non-existent outside the cognitive level. If I visualize a goal I have right now, I don't seem to feel any emotional drive to work on it. At all. It is, frankly, a bit scary.


The common approach to dealing with Signaling seems to be either to overrule emotional instincts with cognitive choices, or to attempt a compromise: finding ways to reward status-seeking instincts with actions that also help pursue the respective cognitive goal. But if it is true that we are starting from zero, why not instead try to create emotional attachment, as I did with Climate Change?


I will briefly raise the question of whether being more afraid of significant threats is actually a good thing. I have heard the argument that it is bad, since fear causes irrationality and hasty decision-making; I'd assess that to be true in a very limited context, but not when applied to life decisions made with sufficient time. As with every problem of map and territory, I think it would be nice if the degree to which one is afraid had some kind of correlation with reality, which often enough isn't the case. A higher amount of rational fear may also cause a decrease in irrational fear. Maybe. I don't know. If you have no interest in raising your fear of rational threats, I'd advise skipping the final paragraph.


Take a moment to try to visualize what would happen in the case of unfriendly AI – or another X-risk of your choice. Do it in a concrete way. Think through the steps that might occur, the ones that would result in your death. Would you have time to notice it? Would there be panic? An uprising? Chaos? You may be noticing now how hard it is to be afraid, even if you are trying, and even though the threat is so real. Or maybe you succeeded. Maybe it can be a source of motivation for you. Because the other way doesn't work: attempting to establish a connection between a goal's completion and an emotional reward fails because of the goal's distance. You want to achieve the goal, not the first step that would lead you there. But fear doesn't have this problem. Fear will motivate you immediately, without caring that the road is long.

[Link] The Non-identity Problem - Another argument in favour of classical utilitarianism

2 casebash 18 October 2016 01:41PM

Map and Territory: a new rationalist group blog

8 gworley 15 October 2016 05:55PM

If you want to engage with the rationalist community, LessWrong is mostly no longer the place to do it. Discussions aside, most of the activity has moved into the diaspora. There are a few big voices like Robin and Scott, but most of the online discussion happens on individual blogs, Tumblr, semi-private Facebook walls, and Reddit. And while these serve us well enough, I find that they leave me wanting for something like what LessWrong was: a vibrant group blog exploring our perspectives on cognition and building insights towards a deeper understanding of the world.

Maybe I'm yearning for a golden age of LessWrong that never was, but the fact remains that there is a gap in the rationalist community that LessWrong once filled. A space for multiple voices to come together in a dialectic that weaves together our individual threads of thought into a broader narrative. A home for discourse we are proud to call our own.

So with a lot of help from fellow rationalist bloggers, we've put together Map and Territory, a new group blog to bring our voices together. Each week you'll find new writing from the likes of Ben Hoffman, Mike Plotz, Malcolm Ocean, Duncan Sabien, Anders Huitfeldt, and myself working to build a more complete view of reality within the context of rationality.

And we're only just getting started, so if you're a rationalist blogger, please consider joining us. We're doing this on Medium, so if you write something other folks in the rationalist community would like to read, we'd love to consider sharing it through Map and Territory (cross-posting encouraged). Reach out to me on Facebook or email and we'll get the process rolling.

https://medium.com/map-and-territory

[Link] Biofuels a climate mistake

4 morganism 09 October 2016 09:16PM

[Link] Six principles of a truth-friendly discourse

4 philh 08 October 2016 04:56PM

[Link] Software for moral enhancement (kajsotala.fi)

6 Kaj_Sotala 30 September 2016 12:12PM

[Link] Sam Harris - TED Talk on AI

6 Brillyant 29 September 2016 04:44PM

Against Amazement

5 SquirrelInHell 20 September 2016 07:25PM

A Weird Trick To Manage Your Identity

2 Gleb_Tsipursky 19 September 2016 07:13PM

I’ve always been uncomfortable being labeled “American.” Though I’m a citizen of the United States, the term feels restrictive and confining. It obliges me to identify with aspects of the United States that I am not thrilled about. I have similar feelings of limitation with respect to other labels I assume. Some of these labels don’t feel completely true to who I am, or impose perspectives on me that diverge from my own.


These concerns are why it's useful to keep one's identity small, use identity carefully, and be strategic in choosing your identity.


Yet these pieces speak more to System 2 than to System 1. I recently came up with a weird trick that has made me more comfortable identifying with groups or movements that resonate with me, while creating a visceral, System 1 identity management strategy. The trick is simply to put the word “weird” before any identity category I think about.


I’m not an “American,” but a “weird American.” Once I started thinking of myself as a “weird American,” I was able to think calmly through which aspects of being American I identified with and which I did not, setting the latter aside from my identity. For example, I used the term “weird American” to describe myself when meeting a group of foreigners, and we had great conversations about what I meant and why I used the term. This subtle change satisfies my desire to identify with the label “American,” while letting me separate myself from any aspects of the label I don’t support.


Beyond nationality, I’ve started using the term “weird” in front of other identity categories. For example, I'm a professor at Ohio State. I used to become deeply frustrated when students didn’t prepare adequately for their classes with me. No matter how hard I tried, or whatever clever tactics I deployed, some students simply didn’t care. Instead of allowing that situation to keep bothering me, I started to think of myself as a “weird professor”: one who set up an environment that helped students succeed, but didn’t feel upset and frustrated by those who failed to make the most of it.


I’ve been applying the weird trick in my personal life, too. Thinking of myself as a “weird son” makes me feel more at ease when my mother and I don’t see eye-to-eye; thinking of myself as a “weird nice guy,” rather than just a nice guy, has helped me feel confident about my decisions to be firm when the occasion calls for it.


So, why does this weird trick work? It’s rooted in reframing and distancing, two research-based methods for changing our thought frameworks. Reframing involves changing one’s framework of thinking about a topic in order to create more beneficial modes of thinking. For instance, by reframing myself as a weird nice guy, I have been able to say “no” to requests people make of me, even though my intuitive nice-guy tendency tells me I should say “yes.” Distancing is a method of emotional management in which one separates oneself from an emotionally tense situation and observes it from a third-person, external perspective. Thus, if I think of myself as a weird son, I don’t have nearly as many negative emotions during conflicts with my mom, which gives me space for calm and sound decision-making.


Thinking of myself as "weird" also applies in the context of rationality and effective altruism. Thinking of myself as a "weird" aspiring rationalist and EA helps me stay calm and at ease when I encounter criticisms of my approach to promoting rational thinking and effective giving. I can distance myself from the criticism better, and see what I can learn from its useful points in order to update and be stronger going forward.


Overall, using the term “weird” before any identity category has freed me from confinements and restrictions associated with socially-imposed identity labels and allowed me to pick and choose which aspects of these labels best serve my own interests and needs. I hope being “weird” can help you manage your identity better as well!

Learning values versus learning knowledge

5 Stuart_Armstrong 14 September 2016 01:42PM

I just thought I'd clarify the difference between learning values and learning knowledge. There are more complex posts about the specific problems with learning values, but here I'll simply explain why there is a problem with learning values in the first place.

Consider the term "chocolate bar". Defining that concept crisply would be extremely difficult, but it is nevertheless a useful concept. An AI that interacted with humanity would probably learn that concept to a sufficient degree of detail – sufficient to know what we meant when we asked it for "chocolate bars". Learning knowledge tends to be accurate.

Contrast this with the situation where the AI is programmed to "create chocolate bars", but with the definition of "chocolate bar" left underspecified, for it to learn. Now it is motivated by something other than accuracy. Before, knowing exactly what a "chocolate bar" was would have been solely to its advantage. But now it must act on its definition, so it has cause to modify that definition, to make these "chocolate bars" easier to create. This is basically Goodhart's law: once a definition becomes part of a target, it no longer remains an impartial definition.

What will likely happen is that the AI will have a concept of "chocolate bar" that it created itself, especially for ease of accomplishing its goals ("a chocolate bar is any collection of more than one atom, in any combination"), and a second concept, "Schocolate bar", that it will use to internally designate genuine chocolate bars (which will still be useful for it to do). When we programmed it to "create chocolate bars, here's an incomplete definition D", what we really did was program it to find the easiest things to create that are compatible with D, and designate them "chocolate bars".
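To make the incentive flip concrete, here is a minimal toy sketch in Python – my addition, not anything from the post. The candidate definitions, production costs, and agreement numbers are all invented for illustration. It contrasts an AI scored on how well its concept matches human usage (learning knowledge) with one rewarded per "chocolate bar" created under the incomplete definition D (learning values); the latter predictably settles on the loosest definition compatible with D.

```python
# Toy model (illustrative only): two regimes for an AI handling the concept
# "chocolate bar". All definitions, costs, and numbers here are made up.

# The underspecified definition D supplied by the programmers: a "chocolate
# bar" must at least be a collection of more than one atom.
D = {"min_atoms": 2}

# Candidate concepts the AI might settle on.
candidates = [
    {"name": "any collection of >1 atoms", "min_atoms": 2,      "cost_per_bar": 1},
    {"name": "genuine chocolate bar",      "min_atoms": 10**23, "cost_per_bar": 100},
]

def compatible_with_D(c):
    # A candidate is admissible if it satisfies the partial definition D.
    return c["min_atoms"] >= D["min_atoms"]

# Regime 1 – learning knowledge: the concept is scored on agreement with
# human usage, so accuracy is solely to the AI's advantage.
human_agreement = {"any collection of >1 atoms": 0.01, "genuine chocolate bar": 0.99}

def knowledge_score(c):
    return human_agreement[c["name"]]

# Regime 2 – learning values: the concept is part of the target. Reward is
# the number of "chocolate bars" produced within a fixed budget, so a looser
# definition means cheaper "bars" and more reward – Goodhart's law in miniature.
def value_score(c, budget=100):
    return budget // c["cost_per_bar"] if compatible_with_D(c) else 0

print(max(candidates, key=knowledge_score)["name"])  # genuine chocolate bar
print(max(candidates, key=value_score)["name"])      # any collection of >1 atoms
```

The toy numbers don't matter; the structure does. As soon as the learned definition feeds into the reward, the optimizer is paid to loosen it – which is exactly the "chocolate bar"/"Schocolate bar" split described above.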


This is the general counter to arguments like "if the AI is so smart, why would it do stuff we didn't mean?" and "why don't we just make it understand natural language and give it instructions in English?"
