
Comment author: JoshuaZ 04 July 2014 03:58:22PM 11 points

Question: Daenarys rarely posts now, and by her description part of that was due to the systematic downvoting. Has someone contacted her outside LW to let her know this has happened?

Comment author: Swimmer963 04 July 2014 04:04:51PM 5 points

I believe that she is aware of it thanks to someone sharing the link to this post on Facebook.

Comment author: Swimmer963 28 June 2014 11:13:38PM 1 point

This seems to describe the exact kind of expertise that I'm developing as a critical care nurse. Cool! Someone's studying that!

Meetup : Upper Canada LW Megameetup: Ottawa, Toronto, Montreal, Waterloo, London

1 Swimmer963 28 June 2014 10:48PM

Discussion article for the meetup : Upper Canada LW Megameetup: Ottawa, Toronto, Montreal, Waterloo, London

WHEN: 18 July 2014 07:00:00PM (-0400)

WHERE: Ottawa, Canada

Hi all LWers and CFAR alumni in the eastern Canada region! We'll be hosting a megameetup in Ottawa, Canada, running from 7:00 pm on Friday, July 18th, until early afternoon on Sunday, July 20th. We have a house available and enough space for everyone to sleep on site for the duration. We'll be eating communally, and there will be lots of snacks stocked up at the house, but please plan on contributing some money to cover food costs.

Friday night will be a fun social. Saturday will have a schedule of talks, activities, and CFAR-style classes. Sunday, we will most likely have an outing to a park or beach, depending on weather.

If you would like to come to this meetup, please fill out the following Google Form for logistics purposes: https://docs.google.com/forms/d/1zAFz-2nFUfQ31aVW6nFl61gsmnsER7PVCSvlJjwU__E/viewform?usp=send_form

If you have any questions, you can message Swimmer963 and I will try to answer them.


Comment author: efalken 19 June 2014 09:51:12PM 1 point

Ever notice that sci-fi/fantasy books written by young people have not just little humor, but absolutely zero humor (e.g., Divergent, Eragon)?

Comment author: Swimmer963 19 June 2014 10:36:10PM 0 points

I actually haven't read either Divergent or Eragon. I've been told that the fantasy book I wrote recently is funny, and I'm pretty sure I qualify as "young person."

On Terminal Goals and Virtue Ethics

58 Swimmer963 18 June 2014 04:00AM

Introduction

A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.


Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued. 

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.
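To make this concrete, here is a minimal sketch (in Python, with virtue names, candidate actions, and weights that I'm inventing purely for illustration) of what this kind of decision rule might look like: score each candidate action by how well it expresses the virtues you're cultivating, and never try to model outcomes at all.

    # A toy sketch of the "act in character" rule described above.
    # The virtues, actions, and numbers are invented for illustration only;
    # the point is the shape of the procedure, not the specific values.

    VIRTUE_WEIGHTS = {"bravery": 0.7, "loyalty": 0.3}

    # How strongly each candidate action expresses each virtue (0.0 to 1.0).
    ACTIONS = {
        "stay and help": {"bravery": 0.9, "loyalty": 0.8},
        "wait for instructions": {"bravery": 0.2, "loyalty": 0.5},
        "leave quietly": {"bravery": 0.1, "loyalty": 0.1},
    }

    def choose_in_character(actions, virtue_weights):
        """Pick the action that best expresses the chosen virtues; no outcome model."""
        def score(expression):
            return sum(w * expression.get(v, 0.0) for v, w in virtue_weights.items())
        return max(actions, key=lambda name: score(actions[name]))

    print(choose_in_character(ACTIONS, VIRTUE_WEIGHTS))  # -> "stay and help"

The numbers don't matter; what matters is that the procedure always returns an answer, even when I'm rushed or tired, because it only has to consult a short, stable list of virtues rather than a model of the whole world.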

 

Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work. 

There are some bad reasons why it might feel wrong–e.g. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences for things like whether my plans actually worked.

 

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some. 

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

 

Why write this post?

It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.

Comment author: Qiaochu_Yuan 17 June 2014 04:00:49AM *  9 points

+1! I too am skeptical about whether I or most of the people I know really have terminal goals (or, even if they really have them, whether they're right about what they are). One of the many virtues (!) of a virtue ethics-based approach is that you can cultivate "convergent instrumental virtues" even in the face of a lot of uncertainty about what you'll end up doing, if anything, with them.

Comment author: Swimmer963 17 June 2014 05:22:00AM 2 points

I'm not sure I'm prepared to make the stronger claim that I don't believe other people have terminal goals. Maybe they do. They know more about their brains than I do. I'm definitely willing to make the claim that people trying to help me rewrite my brain is not going to prove to be useful.

Comment author: Swimmer963 22 May 2014 03:23:35PM 4 points

Awesome awesome awesome! This sounds super cool and I am noticing myself being actually sad that I wasn't there.

“Given your intelligence, I am surprised by your career choice. Can you tell me about that?”

It amuses me that someone who wasn't me was asked this question, and now I'm super curious as to who.

Comment author: Houshalter 17 May 2014 02:48:23AM 1 point

I understand the difference. Perhaps I wasn't clear. You can't just call feelings "pointless" because they don't change anything.

Comment author: Swimmer963 17 May 2014 03:50:55PM 1 point

You could argue that some feelings do change things and have an effect on actions; sometimes in a negative direction (e.g. anger leading to vengeance and war), sometimes in a positive direction (e.g. gratitude resulting in kindness and help). Anger in this example can be considered "pointless" not because it has no effect upon the world, but because its effect is negative and not endorsed intellectually. I think that's the sense in which despair is pointless in the original example. It does have an effect on the world; it results in people NOT taking actions to make things better.

You could argue with the use of the word "pointless", I suppose.

Comment author: Swimmer963 10 May 2014 10:37:36AM *  3 points

Thoughts on this:

Obviously it's possible to want multiple things and believe multiple things. My mind, at least, is better approximated as a society of sub-agents than as a single unified self. I think "System 1 vs System 2" is already too much of an approximation–my System 1 definitely isn't unified, and even my System 2 doesn't agree on a single set of beliefs.

Can you simultaneously want sex and not want it?

Yes, and even large amounts of luminosity haven't made this divide go away. I used to not want sex because it was unpleasant, but want to want it because it was a way to profess love and, damn it, I wanted to do that. The not-wanting-sex happened on a more basic, less endorsed level, leading to weird mental resistance and frustration whenever I overrode it and had sex anyway because it was a thing I ought to do. I now do almost the opposite–I listen to my System 1 instincts and don't have sex, but I'm not totally happy with this state of affairs. There's good evidence that humans can't change their sexual orientations, so I've accepted it for now, but if that status quo changed, I would have some rethinking to do, and might press a button to make it different. These are different 'file formats' of belief–System 2 verbal beliefs don't automatically propagate into System 1 visceral urges–but they're nevertheless contradictory, and years of thinking about and paying a lot of attention to the issue hasn't allowed me to resolve that.

Another example: I want kids. By that, I mean that seeing a baby makes me feel all warm and fuzzy inside; that I daydream about it; that the first thought that comes when I see or learn many things is "I'm going to teach this to my kids!" I'm also fairly sure that having kids now is not the correct thing to do. It may not be the correct thing to do for a few years. In this case, System 2 rules win out, while System 1 whispers quietly in the background that why don't I have a baby already, and hey, you could put up with some unpleasantness and have a baby in nine months. I'm sure as hell not going to change my System 1, but there is or is not an instrumentally rational thing to do, and what my System 1 wants is only a small part of the calculation. So, if all the other variables push me in the other direction, I might end up not having kids for a long time–and having a mental contradiction for the same length of time.

Is this inevitable? Maybe, maybe not. But it certainly seems to be the default, even for people who spend a lot of time thinking about their beliefs.

Ottawa meetup: Applied Rationality Series, Value of Information

3 Swimmer963 05 May 2014 03:48PM

The sixth talk in the Ottawa Applied Rationality series will take place on Tuesday, May 20th at 7:00 pm, at the Canal Royal Oak in Ottawa, Canada. These events are run through the Ottawa Skeptics meetup group. See link here: http://www.meetup.com/Ottawa-Skeptics/events/181263842/

The usual format consists of an approximately 15 minute talk on the topic of the day, followed by semi-structured exercises, followed by beers and unstructured discussion. Previous topics have included "Rational Debating", "Bayes", "Calibration", "Rationality Dojo" (a review session), and "Goal Factoring." 

If you are not from Ottawa, but are interested in running meetups in your area, send me a PM and I can give you the PowerPoints that I use for these talks.

 
