Comment author: brazil84 08 May 2014 09:07:40AM 9 points

Ultimately, I think beliefs are inputs for predictions

As Robin Hanson has pointed out, beliefs are also a way of showing something about oneself. Tribal membership, moral superiority, etc. A good Cimmerian believes in Crom, the grim gloomy unforgiving god.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong.

My impression is that most people never admit that their beliefs are contradictory; instead, they either lash out at whoever is bringing the contradictions to the forefront of their mind or start ignoring him.

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map.

Can you give three examples of improvements in your life since your epiphany?

Comment author: BrienneYudkowsky 09 May 2014 02:40:15AM 2 points

Can you give three examples of improvements in your life since your epiphany?

Sure!

1) My conversations with friends are more efficient and illuminating.

2) I learn more quickly from mistakes.

3) I prevent more mistakes before they get the chance to happen.

If I hadn't given those examples, could you have predicted positive changes resulting from having generally more accurate beliefs? It really doesn't seem that surprising to me that someone's life would improve in a zillion different ways if they weren't wrong so much.

Comment author: So8res 09 May 2014 01:12:23AM 10 points

Keep in mind here that I'm steelmanning someone else's argument, perhaps improperly. I don't want to put words in anyone else's mouth. That said, I used the term 'purity' in loose analogy to a 'pure' programming language, wherein one exception is sufficient to remove much of the possible gains.

Continuing the steelmanning, however, I'd say that while no human can achieve epistemic perfection, there's a large class of epistemic failures that you only recognize if you're striving for perfection. Striving for purity, not purity itself, is what gets you the gains.
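(To make the analogy concrete in my own words, not So8res's: here is a minimal Python sketch, with invented function names, of how a single impurity forfeits a guarantee that purity buys — in this case, safe caching. One exception really is enough to remove much of the possible gains.)

```python
from functools import lru_cache

# A pure function: its output depends only on its inputs, so caching is safe.
@lru_cache(maxsize=None)
def pure_tax(price, rate):
    return round(price * (1 + rate), 2)

# One impure escape hatch: the result now also depends on hidden mutable state.
discount = 0.0

@lru_cache(maxsize=None)
def impure_tax(price, rate):
    return round(price * (1 + rate) - discount, 2)

assert pure_tax(100, 0.1) == 110.0
assert impure_tax(100, 0.1) == 110.0

discount = 5.0  # the world changes out from under the cache...

assert pure_tax(100, 0.1) == 110.0    # still correct
assert impure_tax(100, 0.1) == 110.0  # stale: the cache returns the old answer
# A fresh computation would give 105.0, but that single impurity has silently
# invalidated the guarantee the cache relied on -- for every cached entry.
```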

Comment author: BrienneYudkowsky 09 May 2014 02:34:10AM 9 points

So8res, you're completely accurate in your interpretation of my argument. I'm going to read some more of your previous posts before responding much to your first comment here.

Comment author: [deleted] 08 May 2014 04:00:11PM 12 points

You'd better remove Scott's real last name from your post before search engines index it, because he doesn't want it to be easy to find his blog given his full name.

In response to comment by [deleted] on A Dialogue On Doublethink
Comment author: BrienneYudkowsky 08 May 2014 06:15:41PM 11 points

done. sorry, didn't know.

Comment author: ChrisHallquist 06 May 2014 09:31:32PM 0 points

...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but who you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?

Comment author: BrienneYudkowsky 06 May 2014 11:40:29PM * 7 points

It is indeed a cue to look for motivated reasoning. I am not neglecting to do that. I have scrutinized extensively. It is possible to be motivated by very simple emotions while constraining the actions you take to the set endorsed by deliberative reasoning.

The observation that something fits the status-seeking patterns you've cached is not strong evidence that nothing else is going on. If you can write off everything anybody does by saying "status" and "signaling" without making predictions about their future behavior--or even looking into their past behavior to see whether they usually fit the patterns--then you're trapped in a paradigm that's only good for protecting your current set of beliefs.

Yes, I do have good reasons to think the things I'm imitating are causes of their success. Yes, they're more successful on average than people who don't do the things, and indeed I think they're probably more successful with respect to my values than literally everybody who doesn't do the things. And I don't "happen" to be in close proximity to them; I sought them out and became close to them specifically so I could learn from them more efficiently.

I am annoyed by vague, fully general criticisms that don't engage meaningfully with any of my arguments or musings, let alone steelman them.

Comment author: BrienneYudkowsky 05 May 2014 04:03:47AM * 13 points

I was not signaling. Making it a footnote instead of just editing it outright was signaling. Revering truth, and stating that I do so, was not.

Now that I've introspected some more, I notice that my inclination to prioritize the accuracy of information I attend to above its competing features comes from the slow accumulation of evidence that excellent practical epistemology is the strongest possible foundation for instrumental success. To be perfectly honest, deep down, my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques that I double-thought myself into believing accuracy was not so great.

But I was wrong, you see. Having accurate beliefs is a ridiculously convergent incentive, so whatever my goal structure, it was only a matter of time before I'd recognize that. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Recognizing only abstractly that map-territory correspondence is useful does not produce the same results. Cultivating a deep dedication to ensuring every motion precisely engages reality with unfailing authenticity prevents real-world mistakes that noting the utility of information, just sort of in passing, will miss.

For some people, dedication to epistemic rationality may most effectively manifest as excitement or simply diligence. For me, it is reverence. Reverence works in my psychology better than anything else. So I revere the truth. Not for the sake of the people watching me do so, but for the sake of accomplishing whatever it is I happen to want to accomplish.

"Being truth-seeking" does not mean "wanting to know ALL THE THINGS". It means exhibiting patterns of thought and behavior that consistently increase calibration. I daresay that is, in fact, necessary for being well-calibrated.
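(For concreteness: "increasing calibration" is the kind of thing a proper scoring rule rewards. A toy Python sketch with invented forecasts, using the Brier score:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what actually
    happened. Lower is better; a maximally uninformed 50% forecast scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Four events, three of which actually occurred.
outcomes = [1, 1, 0, 1]

calibrated    = [0.8, 0.7, 0.2, 0.9]   # probabilities that track reality
overconfident = [1.0, 1.0, 1.0, 1.0]   # certain about everything

print(round(brier_score(calibrated, outcomes), 3))     # → 0.045
print(round(brier_score(overconfident, outcomes), 3))  # → 0.25
```

The overconfident forecaster, despite being right three times out of four, does no better than a coin flip; habits that shrink this score are what "consistently increase calibration" cashes out to.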

Rational Communication

0 BrienneYudkowsky 10 April 2014 08:06PM

Back when I got paid to dance around in my underwear, I relied heavily on conversational hypnosis techniques to make my living. Perhaps the most powerful was the double bind: "I wouldn't mind giving you a dance out here with everybody watching, but I'd feel a lot less inhibited if I could have my way with you in a private VIP room. Which one would you like?" Never mind that both options are frightfully expensive. Never mind that the customer was hoping to end the night with his bank account intact. Never mind that he's had his eye on another dancer the entire time. My framing doesn't include the option "no lap dance", so his salient choices are whittled down to "give me lots of money" and "give me lots and lots of money". Whichever he chooses, I get what I want. But from the perspective of his System 1, he made the choice himself, so surely it is I who have satisfied his desires. Furthermore, since the club can be overstimulating and there are uncomfortably many and complicated options (especially if you've been drinking), I've done him the favor of distilling the confusion into a single simple distinction. Floor dance, or VIP?

Just as good prison guards become evil prison guards given the right environment, strip clubs incentivize recklessness in the application of Slitherskills. The double bind is ruthless to its victims, and promotes terrible failures of rationality. I would never knowingly curse an ally with this technique.

Unfortunately, I fear I've accidentally done just that. Tell me, which approach to communication should rationalists strive for: Guess Culture or Tell Culture?

Communication is really complicated and difficult. Optimizing it is an overwhelming undertaking. If it stops feeling that way--if it's suddenly become clear that cultivating tell culture is the elegant solution, and that you're firmly on the side of the tells--then you may have gone temporarily blind to most of the real problem. Policy debates should not appear one-sided.

I do think that adopting more direct and open communication practices is an improvement. The evidence is not perfectly even between ask and tell, even after weighing the drawbacks of being direct--which totally exist, by the way. But tell culture as I've described it is definitely not the final art of rational communication (or at least I hope it isn't). There are more considerations to address; many valuable skills to appropriate from indirect culture and elsewhere; and problems the direct/indirect distinction completely ignores.

At least in meatspace, I've seen an awful lot of arguing for a side on this issue. I was hoping to start a community-wide discussion orbiting the questions "What would ideal communication look like?" and "What would we have to do to make that happen?" Instead, I've mostly seen arguments in which the lone guess culture defender points out the weaknesses of ask and tell cultures, and everybody else comes up with ways to dismiss their concerns. Indeed, I've participated in them. This makes me unhappy.

If you'd never heard of communication cultures, what would be your strategy for approaching the problem of rational communication? What would your goals be? Which principles from your more general art of rationality would guide you? What have been your most harmful failure modes in communication previously?

What does rational communication really look like?

Comment author: BrienneYudkowsky 19 February 2014 05:33:21AM 5 points

The reason I'm excited about this: I read quite slowly. But there's plenty of software available for speeding up audio, so I get a lot more read by listening at around 3x.

My Mysterious Light Side

0 BrienneYudkowsky 11 February 2014 11:13AM

In my last post, Keeping Your Identity in View, I talked about my discovery that I was suffering from some form of social anxiety. For my whole life, I had never thought of myself as having an illness, as being constrained by some condition that could be cured. I just thought of myself as extremely introverted. It was part of my identity, more like being obsessed with books than like having a paralyzed limb. As a result, all the techniques I'd learned for navigating social situations assumed the constraint. I framed questions as, "Given that my brain works this way..." rather than as, "In order to make my brain work differently...".

I struggled with this realization. When it happened, I was in the middle of an enormous paradigm shift that was leading me to consider suddenly changing course and devoting my life to existential risk reduction. Existential risk reduction, rather than academia — and this after having just received a five-year fellowship from my top-choice philosophy program. That was a frightening dilemma in itself, but on top of that I was now coming to realize that I had a serious psychological disorder that I could only survive from inside the academy.

The discussion in my head went something like this.


System 1: We've finally gotten really good at the academia thing. We're about to start getting paid to study philosophy. Charging into the chaotic outside world is completely insane!

System 2: The future of humanity is probably in extreme danger, and you're proposing we do nothing about it... because we're scared. You think that's not insane?

System 1: Since when do we care about other people? We study logic because it's pretty, remember? Humans are so ugly.

System 2: Chapter 45 of Harry Potter and the Methods of Rationality made us cry. Lots. Given that we have social anxiety, that seems like pretty good evidence that we've been lying to ourselves about hating people to protect ourselves from having to change.

System 1: So, what, you think we're the good guys now? You've gone soft, System 2. When are you going to get over this humanitarian benevolence hobby? For god's sake, it's distracting us from math.

System 2: I don't know what you mean by 'good'. There are lots of ways we could try to make sense of our complicated feelings toward the humans. There's definitely a part of me that is cold, dark, and no small measure twisted. But there's also... this other thing. Some part of me that I'm really afraid of these days, this part that seems to care about the well-being of other people. Y'know, for their own sake, and not just because they're useful to me. The part that makes me suddenly drop my entire life plan, run off to California, and do I-don't-even-know-what-but-SOMETHING because it's become unimaginable that I could possibly live in this broken world and not try to make it better.

System 1: OK. Fine. We want to save the world. Whatever, 2. Look, we specialize in an unusual kind of logic very few people study. It's really likely AI researchers will eventually need it — they'll definitely need it — and if we're the world's top expert they'll come to us, and we'll have advanced the field enough to meet their needs. So we can study philosophy and still save the world. Obviously.

System 2: That is the worst bit of motivated reasoning we have ever attempted. What are the real odds, based on our current knowledge, that Friendly Artificial Intelligence requires advances in intuitionism specifically? Pretty damn small. Especially compared to the things we know it needs, like funding. Look, I just emailed the FAI guy and he agrees with me on this. Be silent and calculate.

System 1: I don't wanna you can't make me la la la not listening. *falls down and throws a fit*

System 2: Calm down! This is really simple. All we have to do is cure our social anxiety.

System 1: NO NO NO IF WE DO THAT THEN WE DON'T GET TO HIDE FROM THE SCARY PEOPLE WHAT ARE YOU THINKING HELP HELP SYSTEM 2 IS TRYING TO KILL ME!!!

System 2: Woah. I... think you’re a little confused. Listen. We won't want to not interact with people after we cure our social anxiety. It won't be scary. That is the point.

System 1: Um... I... but...

System 2: Yes?

System 1: I know there's got to be something wrong with this. Just gimme a minute...

System 2: Sigh. You know, to be honest, I'm not sure we could pull this off even if we tried.

System 1: Hey. You take that back. We can do anything.

System 2: No, I don't think so. We don't even have a plan.

System 1: What??? Since when does that stop us?

System 2: I don't think we can cure social anxiety. We'll just have to hide in academia forever and never save the world, let alone achieve our full potential.

System 1: Oh HELL no. We can totally cure social anxiety. That's not even close to impossible.

System 2: Oh yeah? Prove it.

System 1: WELL OK THEN LET'S DO THIS.


And so it began.

Keeping Your Identity in View

0 BrienneYudkowsky 11 February 2014 11:12AM

My life was so much simpler when I hated you.

I swear it isn't personal. You're just so completely a person. I kind of have a thing about people, you see.

Altruism is about how you act, not how you feel. 'Keep your identity small' has been my mantra for years, and years, and years. So it shouldn't be that shocking if my goals and personality fail to hang together in cute little narratively satisfying ways. I'm a cold and aloof misanthrope who just happens to want to save the world. I can totally like My Little Pony and not particularly care for friendship. And have it say nothing deeper about me than that I'm capable of distinguishing fiction from reality.

Sorry. Compartmentalization is magic.

I've felt this way for as long as I can remember. And the older I got, the more evidence I accumulated: People are dumb. They're loud. They're cruel. They're boring. Beneath me. Not worth my time. I loathe them. In high school, as in grade school, my best friends were books. My only friends were books — just as I wanted it.

By age 20, I was routinely having panic attacks at social outings.

Unfortunately, being an ambitious hermit is way harder than it looks. I knew my goals required I be able to deal with people, so when I started college I decided to learn to socialize. I didn't have to like it, but I had to be good at it.

My understanding of how to learn things wasn't very sophisticated back then, so I just threw myself into the thick of it. I — I who hated people, I who was terrified of them — joined clubs. Volunteered for clubs. Made friends. Went dancing on the weekends. I set out to improve every group I ran into; and, by and large, I succeeded. By junior year, I was running two student groups officially, one surreptitiously, tutoring five philosophy students while studying pedagogy, and working as a resident assistant (meaning I was caretaker of a floor of 50 freshman girls). I learned how to avoid chaotic gatherings, and how to steer unavoidable socialization into the fixed scripts I felt most comfortable with.

People looked up to me. They saw me as bold, as charismatic, as authoritative. I spent much of my free time huddled in my room exhausted and crying. But I gained many skills very quickly.

On my first visit to the San Francisco Bay Area, I attended a workshop with the Center for Applied Rationality. One of the workshop activities was called "Comfort Zone Expansion", or CoZE for short, and it was basically exposure therapy. They took everyone to a crowded mall and told them to get a little outside their comfort zone. Some of the men had their makeup done, for example, and others were pushing their boundaries just by shaking hands with a few strangers.

The night before CoZE, I couldn't sleep. I was already way outside my comfort zone, spending nearly every moment of every day surrounded by strangers I had to interact with in relatively unstructured ways. During dinner and other break times, I would hide in my room instead of getting to know the extraordinarily intelligent and fascinating participants and instructors. I felt like I was on the edge of a panic attack the entire day leading up to the CoZE exercise. When the time came, I simply couldn't do it. I couldn't even go and sit silently in a crowded area reading a book. The thought of being trapped with other people in a car on the way there made it hard to breathe. I stayed behind.

During the following week, I thought about all the networking opportunities I'd missed. CFAR selects their participants carefully in order to create a certain culture, to build a community that can have the largest impact on the rest of the world. Thus, the people at their workshops are invariably extraordinary. And I'd more or less failed to make friends with a single one of them. Without the familiar structure of academic settings, my hard-earned coping mechanisms hadn't been enough.

It was not because of my failure that this was a tipping point. I'd failed before to accomplish social goals I'd set for myself. But I'd only wanted to want to do those things, on the meta level. They'd seemed like a good idea, but I'd felt no visceral motivation to do them, so I wasn't surprised, or really even disappointed, when they didn't work out. The difference this time was that I really wanted to interact with these people, on the object level. I wanted it, and I couldn't do it.

It was then I noticed I was confused.

If the source of my social difficulties was a deep desire to not interact with other humans, then why, with that desire absent, did the problems remain?

The answer was incredibly obvious when I finally asked myself the question.

My main symptoms: Intense fear of interacting with strangers, especially in unstructured ways. Fear of situations in which I may be judged. Worrying about embarrassing or humiliating myself (mostly by looking stupid). Fear that others will notice that I’m anxious. Having to fight to make eye contact. Intense fear of tests and being tested. Massively inconveniencing myself to avoid socialization. Panic attacks in which I experience trouble breathing, tachycardia, shaking, derealization, and belief that I am dying.

Misanthropy does not cause things like this. Phobias do.

My 'I'm good at manipulating my self-narrative' self-narrative was suddenly falling apart. Without a hell of a lot of introspection and self-honesty, you don't really see your identity as your identity. You see it as the way life is. An invisible backdrop. And all the time I'd been self-modifying, I'd just been lying to myself that I knew which bits of me were 'my identity', and which weren't.

I was even lying to myself about which things I wanted to change about myself — convincing myself that people weren't worth being around, that I liked it this way, that I wouldn't change a thing if I miraculously gained the power to interact with them without feeling my pulse quicken and my throat tighten. That the problem wasn't inside of me.

My identity wasn't small. It was just hiding out of view. And that had given it an awful power over me.

So it turns out I wasn't this awesome distant gleaming badass shunning the humans out of haughty contempt. I wasn't in control. I was scared, and disoriented, and amazingly unhappy. Maybe I was still a sassy curmudgeon of some sort, deep down. But mainly I was just ill.

My story is still a work in progress. I'll have more to tell soon. For now I just want to say: If you're struggling with your own identity, or with a disease of the mind, I'll be rooting for you from where I'm at now. Misanthrope or not, I really do want you to make it through this, and see you find peace with your self, see you start to see yourself more clearly. You can stand what is true.

Because you are already enduring it.

Social Anxiety: Resolution

0 BrienneYudkowsky 11 February 2014 09:11AM

This is the second post in a sequence on my experiences fighting social anxiety. The first post is here, and the third post is here.


The discussion in my head went something like this.

System 1: We've finally gotten really good at the academia thing. We're about to start getting paid to study philosophy. Charging into the chaotic outside world is completely insane!

System 2: The future of humanity is probably in extreme danger, and you're proposing we do nothing about it... because we're scared. You think that's not insane?

System 1: Since when do we care about other people? We study logic because it's pretty, remember? Humans are so ugly.

System 2: Chapter 45 of Harry Potter and the Methods of Rationality made us cry. Lots. Given that we have social anxiety, that seems like pretty good evidence that we've been lying to ourselves about hating people to protect ourselves from having to change.

System 1: Are you telling me you think we’re actually good? Sadface, System 2. I’m hurt.

System 2: No, I think there are several possible interpretations of our complicated feelings toward the humans. I don’t even know what you mean by “good”.

System 1: Ok fine. We want to save the world. Whatever. Look, we specialize in an unusual kind of logic very few people study. It's really likely AI researchers will eventually need it - they'll definitely need it - and if we're the world's top expert they'll come to us, and we'll have advanced the field enough to meet their needs. So we can study philosophy and still save the world. Obviously.

System 2: That is the worst bit of motivated reasoning we have ever attempted. What are the real odds based on our current knowledge that Friendly Artificial Intelligence requires advances in intuitionism specifically? Pretty damn small. Especially compared to the things we know it needs, like funding. Look, I just emailed the FAI guy and he agrees with me on this. Shut up and calculate.

System 1: I don't wanna you can't make me la la la not listening. *falls down and throws a fit*

System 2: Calm down, this is really simple. All we have to do is cure our social anxiety.

System 1: NO NO NO IF WE DO THAT THEN WE DON'T GET TO HIDE FROM THE SCARY PEOPLE WHAT ARE YOU THINKING HELP HELP SYSTEM 2 IS TRYING TO KILL ME!!!

System 2: Woah. I... think you’re a little confused. Listen. We won't want to not interact with people after we cure our social anxiety. It won't be scary. That is the point.

System 1: Um... I... but...

System 2: Yes?

System 1: I know there's got to be something wrong with this. Just gimme a minute...

System 2: *sigh* You know, to be honest, I'm not sure we could do this even if we tried.

System 1: Hey. You take that back. We can do anything.

System 2: No, I don't think so. We don't even have a plan.

System 1: What are you talking about? Since when does that stop us?

System 2: I don't think we can cure social anxiety. We'll just have to hide in academia forever and never save the world, let alone achieve our full potential.

System 1: Oh HELL no. We can totally cure social anxiety. That's not even close to impossible.

System 2: Oh yeah? Prove it.

System 1: WELL OK THEN LET'S DO THIS.

System 2: Really?

System 1: FUCK YEAH THIS IS SPAAARTAAAAAA!!!!!
