All of Calien's Comments + Replies

Calien30

Would move into one if it were where I wanted to live, but I'm tied to Canberra for the next couple of years. If Melbourne did this I'd be really tempted.

Calien00

I like the sound of that strategy, although here I must admit I'm inexperienced in actually using it.

Calien30

I'm reminded of Ozy's posts on radical acceptance, specifically this one.

Calien10

gjm's interpretation is what I was going for. Chronological age only! (Warning: link to TVTropes) I wasn't sure how to keep the same form and still have it flow nicely.

Calien20

I want to grow old and not die with you.

0Lumifer
/blinks I don't want to grow old.
Calien00

Want children in maybe ten years, might work on me.

Calien10

Some things that may or may not be obvious:

There may well be a few rationalists in your area you don't know about, who would likely turn up to a meetup if you announced one on LessWrong. I fit that description when some random people I'd never met started a regular meetup in my city. (A second, borderline case: the guy in my math tutorial who noticed I was reading Thinking, Fast and Slow turned out to read LessWrong and HPMOR, so I mentioned the local meetups to him and dragged him along.)

If there's an established group in a nearish area, such that you're not... (read more)

0[anonymous]
I think if I put a LW meetup on, well, Meetup, I might draw some people out of the woodwork. In the description, I'd give a brief explanation of the word "rationalist" (as "Boise Rationalists" will probably catch more eyes than "Boise LessWrong Meetup"). I'd also include links to this site, HPMOR, and Kahneman's books, and a blurb about eventual goals. What's your opinion on that strategy? I know that when I moved to the area, I browsed Meetup for interesting groups and attended several. Talking to others, I've learned that they did the same. If I'd seen "Boise Rationalists," I would have been interested. There's a group that's only a seven-hour drive from my location, so I could make that a few times a year. I hadn't thought of the EA angle! That might be more palatable to some newcomers.
Calien10

Scenario 2 sounds like it would be bad for me, as well as scenario 1. I'm fairly uncomfortable talking about weight goals with most people - it feels like it would be saying I'm too fat or something negative like that, so unless they've revealed a similar problem to me I don't go there. So in that situation I'd expect to feel insulted. It's not a failure mode that I fall into any more, but where I was expecting that scenario to go is "When you read all the posts, your brain goes: yeah, this is too hard, I feel bad, I want chocolate. And at the end of the ... (read more)

0Elo
There is probably gender variability on this issue. The paper seems to suggest a specific hypothesis as to why: your brain does the "I got all the congratulations, I must be done" process, and this causes you to not try as hard on your goal. I don't know how true that theory is, but it seems reasonable. I am keen for future research in the area.
Calien00

Thank you for the insight.

I just have to become the person they would do that thing for - and my self is flexible in ways most people couldn't imagine.

For those who've read some HPMoR: I find it interesting that that's basically how Quirrell describes his and Harry's... differences from most people.

Calien00

From the title of the post, I thought it would be about how not signing up gives you certainty. I've read someone who doesn't want to sign up say that dying in a normal way would give their family peace of mind.

In terms of whether it's a benefit, if it does motivate you then it's a good Dark Arts way to stop putting off signing up. However, cryonics companies changing their image to take advantage of it strikes me as a really bad idea for the reasons in Ander's post.

Calien00

You'd have to want to signal very strongly to overcome the inconvenience of doing the paperwork and forking over cold hard cash. Self-signalling seems to be a plausible motivation, but I'm not sure how much benefit you'd get from being able to tell other people about it. Not to mention the opposite pressure that most people have because they have to convince their close family members to respect their wishes.

Calien70

Today, I was using someone else's computer and typed "lesswrong" into the search/address bar. Apparently the next most popular search is "lesswrong cult". I started shrieking with laughter, getting a concerned reaction from the owner, which doesn't help our image much.

9IlyaShpitser
Eliezer wants to be a guru. No one calls him on it. There is an enormous amount of unhealthy hero worship. What did you expect, exactly? -- Yvain on EY.
0[anonymous]
I am completely unsurprised.
Calien00

Evan - I am also involved in effective altruism, and am not a utilitarian. I am a consequentialist and often agree with the utilitarians in mundane situations, though.

drethelin - What would be an example of a better alternative?

Calien00

Proponents of both have the same attitude of "this is a thing that people occasionally give lip service to, that we're going to follow to a more logical conclusion and actually act on".

Calien00

Is your rule about distances actually a base part of your ethics, or is it a heuristic based on you not having much to do with people who are far away? I'm assuming that you take it somewhat figuratively, e.g. if you have family in another country you're still invested in what happens to them.

Do you care whether the unknown people are suffering more? If donating $X does more than donating Y hours of your time, does that concern you?

0Elo
It's more of a heuristic. Any ethic that used a specific measurement of distance in its raw calculation would be odd. There might somewhere be a line where on one side I might care about a person, and on the other I might not - where someone could stand exactly on the line. That would be mostly silly. Most of my family lives within a few suburbs of me. I have a few cousins who have been living in England for a few years; I barely even know what they are doing with their lives any more. (I wouldn't excommunicate someone for being far away, but I wouldn't try as hard as for someone living in the same city as me.) My grandmother keeps in touch with the cousins far away, but I don't think it's a requirement for me to do so, and I am sure they also don't feel like they have to keep up with my life either. There is also the case of warm fuzzy utilons, where I can know that my intended impact hit the nail on the head, whereas I might otherwise find it difficult to know if $X made the intended impact. It's kinda like outsourcing making an impact to someone else, by letting them use that money for what they feel is right. I don't necessarily feel like I can trust others with my effectiveness desires. Does this make sense? I can try to explain it again if you point out what isn't making sense...
Calien00

If everyone did that, there's a non-negligible chance the human race would die out before bringing about a Singularity. I care about the continued existence of a reasonably nice society with nebulous traits that I value, so I consider that a bad outcome. But I do worry about whether it's right to have children who may well possess my far-higher-than-average (or simply higher than most people are willing to admit?) aversion to death.

(If under reflection, someone would prefer not to become immortal if they had the chance, then their preference is by far the most important consideration. So if I knew my future kids wouldn't be too fazed by their own future deaths, I'd be fine with bringing them into the world.)

1Fivehundred
I'm not saying everyone should do it. I'm maybe saying that there are too many people in the world already who are in senseless danger. On the other hand, it might be ethical to have children that will be more rational and useful than 99% of the rest.
Calien10

Data point: Assuming there are any gendered pronouns in the examples, I find it weirder when the same one is used consistently for the entire article.

Calien20

Has anyone gotten their parents into LessWrong yet? (High confidence that some have, but I haven't actually observed it.)

Calien30

This reminds me of a CBT technique for reducing anxiety: when you're worried about what will happen in some situation, make a prediction, and then test it.

Calien670

In-group fuzzes acquired, for science!

Calien00

I've also used the "think of yourself as multiple agents" trick at least since my first read of HPMOR, and noticed some parallels. In stressful situations it takes the form of rational!Calien telling me what to do, and I identify with her and know she's probably right so I go along with it. Although if I'm under too much pressure I end up paralysed as Brienne describes, and there may be hidden negative consequences as usual.

Calien10

Also, two redundant sentences:

I have a few ideas so far. The aim of these techniques is to limit the influence motivators have on our selection of altruistic projects, even if we allow or welcome them once we're onto implementing our plans.

The aim of these techniques is to limit the influence of motivators have when we are deciding which actions to take, even if we allow or welcome then once we’re onto implementing our plans.

Calien10

Hi, I'm another former lurker. I will be there!

Calien20

Hi LW. I'm a longtime lurker and a first-year student at ANU, studying physics and mathematics. I arrived at Less Wrong three years ago through what seems to be one of the more common routes: being a nerd (math, science, SF, reputation as weird, etc.), having fellow nerds (from a tiny US-based forum) recommend HPMOR, and following EY's link to Less Wrong.