
Ritual 2012: A Moment of Darkness

Post author: Raemon | 28 December 2012 09:09AM | 36 points

This is the second post of the 2012 Ritual Sequence. The Introduction post is here.


This is... the extended version, I suppose, of a speech I gave at the Solstice.

The NYC Solstice celebration begins bright and loud, and gradually becomes somber and poignant. Our opening songs are about the end of the world, but in a funny, boisterous manner that gets people excited and ready to sing. We gradually wind down, dimming lights, extinguishing flames. We turn to songs that aren’t sad but are more quiet and pretty.

And then things get grim. We read Beyond the Reach of God. We sing songs about a world where we are alone, where there is nothing protecting us, and where we somehow need to survive and thrive, even when it looks like the light is failing.

We extinguish all but a single candle, and read an abridged version of the Gift We Give to Tomorrow, which ends like this:



Once upon a time,
far away and long ago,
there were intelligent beings who were not themselves intelligently designed.

Once upon a time,
there were lovers, created by something that did not love.

Once upon a time,
when all of civilization was a single galaxy,

A single star.
A single planet.
A place called Earth.

Once upon a time.



And then we extinguish that candle, and sit for a moment in the darkness.

This year, I took that time to tell a story.

It’s included in the 2012 Ritual Book. I was going to post it at the end of the sequence. But I realized that it’s actually pretty important to the “What Exactly is the Point of Ritual?” discussion. So I’m writing a more fleshed out version now, both for easy reference and for people who don’t feel like hunting through a large pdf to find it.

It’s a bit longer in this version - it’s what I might have said if time weren’t a constraint during the ceremony.



 

A year ago, I started planning for tonight. In particular, for this moment, after the last candle is snuffed out and we’re left alone in the dark with the knowledge that our world is unfair and that we have nobody to help us but each other.

I wanted to talk about death.

My grandmother died two years ago. The years leading up to her death were painful. She slowly lost her mobility, until all she could do was sit in her living room and hope her family would come by to visit and talk to her.

Then she started losing her memory, so she had a hard time even having conversations at all. We tried to humor her, but there are only so many times you can repeat the same thought in a five-minute interval before your patience wears thin, and it shows, no matter how hard you try.

She lost her rationality, regressing into a child who would argue petulantly with my mother about what to eat, when to exercise, and when to visit her friends. She was a nutritionist; she knew what she was supposed to eat and why. She knew how to be healthy. And she wanted to be healthy. But she lost her ability to negotiate her near-term and long-term desires on her own.

Eventually even deciding to eat at all became painful. Eventually even forming words became exhausting.

Eventually she lost not just her rationality, but her agency. She stopped making decisions. She lay on her bed in the hospital, not even having the strength to complain anymore. My mother got so excited on days when she argued petulantly because at least she was doing *something*.

She lost everything that I thought made a person a person, and I stopped thinking of her as one.

Towards the end of her life, I was visiting her at the hospital. I was sitting next to her, being a dutiful grandson. Holding her hand because I knew she liked that. But she seemed like she was asleep, and after 10 minutes or so I got bored and said “alright, I’m going to go find Mom now. I’ll be back soon.”

And she squeezed my hand, and said “No, stay.”

Those two words were one of the last decisions she ever made. One of the last times she had a desire about how her future should be. She made an exhausting effort to turn those desires into words and then breathe those words into sounds so that her grandson would spend a little more time with her.

And I was so humiliated that I had stopped believing that inside of this broken body and broken mind was a person who still desperately wanted to be loved.

She died a week or two later.

Her funeral was a Catholic Mass. My mom had made me go to Mass as a child. It always annoyed me. But in that moment, I was so grateful to be able to hold hands with a hundred people, for all of us to speak in unison, without having to think about it, and say:

“Our father, who art in heaven, hallowed be thy name. Thy kingdom come, thy will be done, on earth as it is in heaven. Give us this day our daily bread, and forgive us our trespasses, as we forgive those who trespass against us. And lead us not into temptation, but deliver us from evil.”

I’m not sure if having that one moment of comforting unity was worth 10 years of attending Catholic mass.

It’s a legitimately hard question. I don’t know the answer.

But I was still so frustrated that this comforting ritual was all based on falsehoods. There’s plenty of material out there you can use to create a beautiful secular funeral, but it’s not just about having pretty or powerful words to say. It’s about knowing the words already, having them already be part of you and your culture and your community.

Because when somebody dies, you don’t have time or energy for novelty. You don’t want to deal with new ideas that will grate slightly against you just because they’re new. You want cached wisdom that is simple and beautiful and true, that you share with others, so that when something as awful as death happens to you, you have tools to face it, and you don’t have to face it alone.

I was thinking about all that, as I prepared for this moment.

But my Grandmother’s death was a long time ago. I wanted the opportunity to process it in my own way, in a community that shared my values. But it wasn’t really a pressing issue that bore down on me. Dealing with death felt important, but it was a sort of abstract importance.

And then, the second half of this year happened.

A few months ago, an aspiring rationalist friend of mine e-mailed me to tell me that a relative died. They described the experience of the funeral, ways in which it was surprisingly straightforward, and other ways in which it was very intense. My friend had always considered themselves an anti-deathist, but it was suddenly very real to them. And it sort of sank in for me too - death is still a part of this world, and our community doesn’t really have ways to deal with it.

And then, while I was still in the middle of the conversation with that friend, I learned that another friend had lost somebody, that same day.

Later, I would learn that a coworker of mine had also lost somebody that day.

Death was no longer abstract. It was real, painfully real, even if I myself didn’t know the people who died. My friends were hurting, and I felt their pain.

I wandered off into the night to sing my Stonehenge song by myself. It’s not quite good enough at what I needed it for - I’m not a skilled enough songwriter to write that song, yet. But it’s the only song I know of that attempts to do what I needed. To grimly acknowledge this specific adversary, to not offer any false hope about the inevitability of our victory, but to nonetheless march onward, bitterly determined that not quite so many people will die tomorrow as today.

I came back inside. I chatted with another friend about the experience. She offered me what comfort she could. She attempted to offer some words to the effect of “well, death has a purpose sometimes. It helps you see the good things -”

Gah, I thought.

What’s interesting is that I’m not actually that much of an anti-deathist. I think our community’s obsession with eliminating death without regard for the consequences is potentially harmful. I think there are, quite frankly, worse things in the world. If I had to choose between my Grandmother not dying, and my Grandmother not having to gradually lose everything she thought made her her until her own grandson forgot that she was a person, spending her days wracked with pain, I would probably choose the latter.

But still, I’ve come to accept that death is bad, unequivocally bad, even if some things are worse. And I had sort of forgotten, since I’m often at odds with other Less Wrongers about this, how big the gulf was between us and the rest of the world.

I didn’t hold it against my friend. She meant well, and having someone to talk to helped.

A week later, a friend of hers died.

A week after that, another friend of mine lost somebody.

A week after that, the person who died wasn’t a friend or even a friend of a friend: a local activist was murdered a few blocks from someone’s house, and they cancelled plans with me because they were so upset.

Then a hurricane hit New York. Half the city went dark. While it was unrelated, at least one of my friends experienced a death, of sorts, that week. And even if none of my friends were directly hurt by Hurricane Sandy, you couldn’t escape the knowledge that there were people who weren’t so lucky.

And I went back to the notes I had written for this moment and stared at them and thought...



...fuck.

Winter was coming and I didn’t know what to do. Death is coming, and our community isn’t ready. I set out to create a holiday about death and... it turns out that’s a lot of responsibility, actually.

This was important, this was incredibly important and so incredibly hard to handle correctly. We as a community - the New York community, at least - need a way to process what happened to us this year, but what happened to each of us is personal and even though most of us share the same values we all deal with death in our own way and… and... and somehow after all of that, after taking a moment to process it, we need to climb back out of that darkness and end the evening feeling joyful and triumphant and proud to be human, without resorting to lies.





…there’s a lot I don’t know yet, about what to do, or what to say.

But here’s what I do know:

My grandmother died. But she lived to her late eighties. She had a family of 5 children who loved her. She had a life full of not just fun and travel and adventure but of scientific discovery. She was a dietitian. She helped do research on diabetes. She was an inspiration to women at a time when a woman being a researcher was weird and a big deal. When I say she had a long, full life, I’m not just saying something nice sounding.

My grandmother won at life, by any reasonable standard.

Not everyone gets to have that, but my grandmother did. She was the matriarch of a huge extended family that all came home for Christmas eve each year, and sang songs and shared food and loved each other. She died a few weeks after Christmas, and that year, everyone came to visit, and honestly it was one of the best experiences of my life.

In the dead of winter, each year, two dozen people came to Poughkeepsie, to a big house sheltered by a giant cottonwood tree, and were able to celebrate *without* worrying about running out of food in the spring. At the darkest time of the year, my mother ran lights up a hundred-foot-tall pine tree that you could see for miles.

We were able to eat because hundreds of miles away, mechanical plows tilled fields in different climates, producing so much food that we literally could feed the entire world if we could solve some infrastructure and economic problems.

We were able to drive to my grandmother’s house because other mechanical plows crawled through the streets all night, clearing the ice and snow away.

Some of us were able to come to my grandmother’s house from a thousand miles away, flying through the sky, higher than ancient humans even imagined angels might live.

And my Grandmother died in her late eighties, but she also *didn’t* die when she was in her seventies and the cancer first struck her. Because we had chemotherapy, and a host of other tools to deal with it.

And the most miraculous amazing thing is that this isn’t a miracle. This isn’t a mystery. We know how it came to be, and we have the power to learn to understand it even better, and do more.

In this room, right now, are people who take this all seriously. Dead seriously, who don’t just shout “Hurrah humanity” because shouting things together in a group is fun.

We have people in this room, right now, who are working on fixing big problems in the medical industry. We have people in this room who are trying to understand and help fix the criminal justice system. We have people in this room who are dedicating their lives to eradicating global poverty. We have people in this room who are literally working to set in motion plans to optimize *everything ever*. We have people in this room who are working to make sure that the human race doesn’t destroy itself before we have a chance to become the people we really want to be.

And while they aren’t in this room, there are people we know who would be here if they could, who are doing their part to try and solve this whole death problem once and for all.

And I don’t know whether and how well any of us are going to succeed at any of these things, but...

God damn, people. You people are amazing, and even if only one of you made a dent in some of the problems you’re working on, that... that would just be incredible.

And there are people in this room who aren’t working on anything that grandiose. People who aren’t trying to solve death or save the world from annihilation or alleviate suffering on a societal level. But who spend their lives making art. Music. Writing things sometimes.

People who fill their world with beauty and joy and enthusiasm, and pies and hugs and games and… and I don’t have time to give a shout out to everyone in the room but you all know who you are.

This room is full of people who spend their lives making this world less ugly, less a sea of blood and violence and mindless replication. People who are working to make tomorrow brighter than today, in one way or another.

And I am so proud to know all of you, to have you be a part of my life, and to be a part of yours.

I love you.

You make this world the sort of place I’d want to keep living, forever, if I could.

The sort of world I’d want to take to the stars.

 


Comments (136)

Comment author: pleeppleep 24 December 2012 08:07:55PM *  15 points [-]

God, I hope I'm not the only one who cried at this.

Comment author: Raemon 24 December 2012 08:26:23PM *  3 points [-]

You are not.

By now I've started to lose track - did you attend the actual event or not? I wasn't sure how well the speech would translate to text, even with some additional polishing. I deliberately didn't rehearse it much so that it was particularly raw that evening. Curious how it holds up here.

Comment author: pleeppleep 24 December 2012 10:01:55PM 1 point [-]

Didn't come this year I'm afraid, but the text was pretty touching. It was very well written and excellently conveyed the pride you place in humanity.

Comment author: bbleeker 29 December 2012 03:10:10PM 2 points [-]

I did too. *hugs*

Comment author: Ezekiel 25 December 2012 11:39:08AM 14 points [-]

Correct me if I'm wrong, but it looks like you're talking about anti-deathism (weak or strong) as if it was a defining value of the LessWrong community. This bothers me.

If you're successful, these rituals will become part of the community identity, and I personally would rather LW tried to be about rationality and just that as much as it can. Everything else that correlates with membership - transhumanism, nerdiness, thinking Eliezer is awesome - I would urge you not to include in the rituals. It's inevitable that they'd turn up, but I wouldn't give them extra weight by including them in codified documents.

As an analogy, one of the things that bugged me about Orthodox Judaism was that it claims to be about keeping the Commandments, but there's a huge pile of stuff that's done just for tradition's sake, that isn't commanded anywhere (no, not even in the Oral Lore or by rabbinical decree).

Comment author: AdeleneDawner 25 December 2012 05:38:49PM 9 points [-]

What would a ritual that's just about rationality and more complex than a group recitation of the Litany of Tarski look like?

Comment author: RobbBB 31 December 2012 02:11:04AM 3 points [-]

Religious groups confess their sins. A ritual of rational confession might involve people going around a circle raising, examining, and discussing errors they've made and intend to better combat in the future (perhaps with a theme, like a specific family of biases).

You can also sing songs that are generically about decision theory, metaethics, and epistemology, rather than about specific doctrines. You'd have to write 'em first, though.

Comment author: arborealhominid 31 December 2012 03:22:08AM *  2 points [-]

I genuinely don't know how I feel about the "rational confession" idea. On the one hand, the idea of "confession of sins" squicks me out a bit, even though I enjoy other rituals; it reminds me too much of highly authoritarian/groupthink-y religions. On the other hand, having a place to discuss one's own biases and plan ways to avoid them sounds seriously useful, and would probably be a helpful tradition to have.

Comment author: RobbBB 31 December 2012 03:44:14AM 1 point [-]

It sounds like you like the content, but not the way I framed it. That's fine. I only framed it like a religious ritual to better fit Adelene's question; in practice we may not even want to think of it as a 'ritual,' i.e., we may not want to adorn or rigidify it beyond its recurrence as a practice.

Comment author: MugaSofer 25 December 2012 10:41:18PM 7 points [-]

Correct me if I'm wrong, but it looks like you're talking about anti-deathism (weak or strong) as if it was a defining value of the LessWrong community. This bothers me.

Well, it depends what you mean by "defining value". The LW community includes all sorts of stuff that simply becomes much more convincing/obvious/likely when you're, well, more rational. Atheism, polyamory, cryonics ... there's quite a few of these beliefs floating around. That seems like it's as it should be; if rationality didn't cause you to change your beliefs, it would be meaningless, and if those beliefs weren't better correlated with reality, it would be useless.

Comment author: Ezekiel 26 December 2012 12:17:33AM 6 points [-]

As of now, there is no evidence that the average LessWronger is more rational than the average smart, educated person (see the LW poll). Therefore, a lot of LWers thinking something is not any stronger evidence for its truth than any other similarly-sized group of smart, educated people thinking it. Therefore, until we get way better at this, I think we should be humble in our certainty estimates, and not do mindhacky things to cement the beliefs we currently hold.

Comment author: MugaSofer 26 December 2012 01:26:20AM -1 points [-]

Who said anything about mindhacking? I'm just saying that we should expect rationalists to believe some of the same things, even if nonrationalists generally don't believe these things. Considering the whole point of this site is to help people become more rational, recognize and overcome their biases etc. I'm not sure what you're doing here if you don't think that actually, y'know, happens.

Comment author: Ezekiel 26 December 2012 01:33:10AM 3 points [-]

Who said anything about mindhacking?

Raemon did. It's a ritual, deliberately styled after religious rituals, some of the most powerful mindhacks known.

Comment author: MugaSofer 26 December 2012 01:48:17AM 1 point [-]

I ... didn't get the impression that this was intended to mindhack people into moving closer to LessWrong consensus.

Comment author: Ezekiel 26 December 2012 11:07:41AM 5 points [-]

Oh, sorry, neither did I. I'm not trying to accuse Raemon of deliberate brainwashing. But getting together every year to sing songs about, say, existential risk will make people more likely to disregard evidence showing that X-risk is lower than previously thought. Same for every other subject.

Comment author: MugaSofer 26 December 2012 03:07:05PM -1 points [-]

Ah, I guess it was the use of "deliberately" that confused me. Now I come to think of it, this is mentioned as a possible risk in the article, and dismissed as much less powerful than, y'know, talking about it all the damn time.

Comment author: TheOtherDave 26 December 2012 01:04:44AM 8 points [-]

I'm especially intrigued that you list polyamory among the beliefs that become more "convincing/obvious/likely" with greater rationality. Just to clarify: on your view, is the fact that I have no particular desire to have more than one lover in my life evidence that I am less rational than I would be if I desired more lovers? Why ought I believe that?

Comment author: Raemon 26 December 2012 01:07:53AM 13 points [-]

Not speaking for them, but what I do actually think is that there is some portion of the population that would gravitate towards polyamory but doesn't, because of cached thinking, so increasing rationality would increase the number of polyamorous people.

Comment author: Alicorn 26 December 2012 01:14:19AM *  15 points [-]

It came up in conversation with my second cousin today that I have four boyfriends who all know about each other and get along and can have as many girlfriends as they want. My second cousin had never heard of anything like this, but it sounded immediately sensible and like a better way of doing things to him. Just being in a position to learn that an option exists will increase your odds of doing it.

Comment author: TheOtherDave 26 December 2012 04:41:39AM 1 point [-]

Well, sure; I agree.

Let me put it this way: it's one thing to say "some people like X, some people don't like X, and rationalists are more likely to consider what they actually want and how to achieve it without giving undue weight to social convention." It's a different thing to say "rational people like X, and someone's stance towards X is significant evidence of their rationality."

This community says the second thing rather unambiguously about X=cryonics and X=atheism. So when cryonics, atheism, and polyamory are grouped together, that seems like significant evidence that the second thing is also being said about X=polyamory.

So I figured it was worth clarifying.

Comment author: RobbBB 31 December 2012 09:31:15AM 2 points [-]

Two more points:

  1. It's possible for a trait to be strong evidence both for extreme rationality and for extreme irrationality. (Some traits are much more commonly held among the extremely reasonable and the extremely unreasonable than among 'normals;' seriously preparing for apocalyptic scenarios, for instance. Perhaps polyamory is one of these polarizing traits.)

  2. Sometimes purely irrational behaviors are extremely strong evidence for an agent's overall rationality.

Comment author: RobbBB 31 December 2012 02:21:14AM *  1 point [-]

It's a different thing to say "rational people like X, and someone's stance towards X is significant evidence of their rationality."

But those are only different if your 'significant' qualifier in 'significant evidence' is much stronger than your 'more likely' threshold. In other words, the difference is only quantitative. If the rate of polyamory is significantly higher among rationalists than among non-rationalists, then that's it; the question is resolved; polyamory just is evidence of rationality. This is so even if nearly all polyamorous people are relatively irrational. It's also so even if polyamory is never itself a rational choice; all that's required is a correlation.

EDIT: Suppose, for instance, that there are 20 rationalists in a community of 10,020; and 2 of the rationalists are polyamorous; and 800 of the non-rationalists are polyamorous. Then, all else being equal, upon meeting a poly person P a perfect Bayesian who knew the aforementioned facts would need to update in favor of P being a rationalist, even knowing that only 2 of the 802 poly people in the community are rationalists.
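
A quick check of the arithmetic in this hypothetical, as a minimal Python sketch; it assumes nothing beyond the made-up population figures stipulated above, not real survey data:

    # Hypothetical community from the comment above (all figures are made up there).
    total_people = 10_020
    rationalists = 20
    poly_rationalists = 2
    poly_non_rationalists = 800

    non_rationalists = total_people - rationalists                 # 10,000

    prior = rationalists / total_people                             # P(R)    ~= 0.0020
    p_poly_given_r = poly_rationalists / rationalists               # P(O|R)   = 0.10
    p_poly_given_not_r = poly_non_rationalists / non_rationalists   # P(O|~R)  = 0.08

    # Bayes' theorem: P(R|O) = P(O|R)P(R) / [P(O|R)P(R) + P(O|~R)P(~R)]
    posterior = (p_poly_given_r * prior) / (
        p_poly_given_r * prior + p_poly_given_not_r * (1 - prior)
    )

    print(f"P(R)   = {prior:.4f}")      # ~0.0020
    print(f"P(R|O) = {posterior:.4f}")  # 2/802 ~= 0.0025

The posterior matches the 2-in-802 figure: conditioning on polyamory moves the probability of 'rationalist' from about 0.20% to about 0.25%, an update in favor even though nearly all poly people in this toy community are non-rationalists.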

Comment author: TheOtherDave 31 December 2012 11:00:25AM -1 points [-]

Yup, all of that is certainly true.
Similarly, there is likely some number N such that my weight being in or above the Nth percentile of the population is evidence of rationality (or of being a rationalist; the terms seem to be being used interchangeably here).

So, I started out by observing that there seemed to be a property that cryonics and atheism shared with respect to this community, which I wasn't sure polyamory also shared, which is why I made the initial comment.
I was in error to describe the property I was asking about as being primarily about evidence, and I appreciate you pointing that out.

In retrospect, I think what I'm observing is that within this community atheism and cryonics have become group markers of virtue, in a way that having a weight above the abovementioned Nth percentile is not a group marker of virtue (though it may be very strong evidence of group membership). And what I was really asking was whether polyamory was also considered a group marker of virtue.

Looking at the flow of this discussion (not just in this branch) and the voting patterns on it, I conclude that yes, it is.

Comment author: RobbBB 31 December 2012 07:06:19PM *  -1 points [-]

We also have to be careful again about whether by 'mark of virtue' we mean an indicator of virtue (because polyamory might correlate with virtue without being itself virtuous), or whether by 'mark of virtue' we mean an instance of virtue.

In other words, all of this talk is being needlessly roundabout: What we really want to know, I think, is whether polyamory is a good thing. Does it improve most people's lives? How many non-polyamorous people would benefit from polyamory? How many non-polyamorous people should rationally switch to polyamory, given their present evidence? And do people (or rationalists) tend to accept polyamory for good reasons? Those four questions are logically distinct.

Perhaps the last two questions are the most relevant, since we're trying to determine not just whether polyamorous people happen to win more or be rationalists more often, but whether their polyamory is itself rationally motivated (and whether their reasons scale to the rest of the community). So I think the question you intend to ask is whether polyamorous people (within the LessWrong community, at a minimum) have good reason to be polyamorous, and whether the non-polyamorous people have good reason to be non-polyamorous.

This question is very analogous to the sort of question we could ask about cryonics. Are the LessWrongers who don't want to be frozen being irrational -- succumbing to self-deception, say? Or are the more cryonics-happy LessWrongers being irrational? Or are they both being rational, and they just happen to have different core preferences?

Comment author: TheOtherDave 31 December 2012 08:17:37PM 2 points [-]

I agree that "whether polyamory (or cryonics, or whatever) is a good thing" is a thing we want to know. Possibly even the thing we really want to know, as you suggest.

When you unpack the question in terms of improving lives, benefiting people, etc. you're implicitly adopting a consequentialist stance, where "is polyamory a good thing" equates to "does polyamory have the highest expected value"? I endorse this completely.

In my experience, it has a high positive expected value for some people and a high negative expected value for others, and the highest EV strategy is figure out which sort of person I am and act accordingly.

This is very similar to asking whether a homosexual sex life has the highest expected value, actually, or (equivalently) whether a homosexual sex life is a good thing: it definitely is for some people, and definitely is not for others, and the highest-EV strategy is to pick a sex life that corresponds to the sort of person I am.

All of that said, I do think there's a difference here between unpacking "is polyamory a good thing?" as "does polyamory have the highest expected value?" (the consequentialist stance) and unpacking it as "is polyamory a characteristic practice of virtuous people?" (the virtue-ethicist stance).

Perhaps what I mean, when I talk about markers of virtue, is that this community seems to be adopting a virtue-ethics rather than a consequentialist stance on the subject.

Comment author: RobbBB 01 January 2013 12:27:15AM *  1 point [-]

We agree on the higher-level points, so as we pivot toward object-level discussion and actually discuss polyamory, I insist that we begin by tabooing 'polyamory,' or stipulating exactly what we mean by it. For instance, by 'Polyamory is better than monamory for most people.' we might mean:

  • Most people have a preference for having multiple simultaneous romantic/sexual partners.
  • Most people have such a preference, and would live more fulfilling lives if they acknowledged it.
  • Most people would live more fulfilling lives if they attempted to have multiple romantic/sexual partners.
  • Most people would live more fulfilling lives if they actually had multiple romantic/sexual partners.
  • Most people are capable of having multiple romantic/sexual partners if they try, and would live more fulfilling lives in that event.
  • Most people would live more fulfilling lives if they at least experimented once with having multiple romantic/sexual partners.
  • Most people would live more fulfilling lives if they were sometimes willing to have multiple romantic/sexual partners.
  • Some conjunction or disjunction of the above statements.

More generally, we can distinguish between 'preference polyamory' (which I like to call polyphilia: the preference for, or openness to, having multiple partners, whether or not one actually has multiple partners currently) and 'behavioral polyamory' (which I call multamory: the actual act of being in a relationship with multiple people). We can then cut it even finer, since dispositions and behaviors can change over time. Suppose I have a slight preference for monamory, but am happy to be in poly relationships too. And, even more vexingly, maybe I've been in poly relationships for most of my life, but I'm currently in a mono relationship (or single). Am I 'polyamorous'? It's just an issue of word choice, but it's a complex one, and it needs to be resolved before we can evaluate any of these semantic candidates utilitarianly.

And even this is too coarse-grained, because it isn't clear what exactly it takes to qualify as a 'romantic/sexual' partner as opposed to an intimate friend. Nor is it clear what it takes to be a 'partner;' it doesn't help that 'sexual partner' has an episodic character in English, while 'romantic partner' has a continuous character.

As for virtue ethics: In my experience, ideas like 'deontology,' 'consequentialism,' and 'virtue ethics' are hopeless confusions. The specific kinds of arguments characteristic of those three traditions are generally fine, and generally perfectly compatible with one another. There's nothing utilitarianly unacceptable about seriously debating whether polyamory produces good character traits and dispositions.

Comment author: BerryPick6 31 December 2012 09:07:06PM 0 points [-]

I know how a consequentialist (at least, one operating with the intention of maximizing 'human values') would unpack these questions, and I know how we could theoretically look at facts and give answers to ze's questions.

But how, on earth, would "is polyamory the characteristic of virtuous people" get unpacked? What does "virtuous" mean here and what would it look like for something or someone to be "virtuous"?

I know you probably didn't mean to get dragged into a conversation about Virtue Ethics, but I've seen it mentioned on LW a few times and have always been very curious about its local version.

Comment author: MugaSofer 31 December 2012 08:45:11PM 0 points [-]

Of course, if polyamory turns out to be the best thing for almost all people, or at least lesswrongers, then a consequentialist would behave the same way.

Comment author: loup-vaillant 01 January 2013 01:14:00PM 1 point [-]

Nitpick: while a significant fraction of rational people are not polyamorous, polyamory could still be better evidence for rationality than atheism. That's because there are so many atheists around, many of whom became atheists for the wrong reasons (being raised as such, rebellion…).

Let's try some math with a false-dichotomy approximation: someone could be Rational (or not), pOlyamorous (or not), and Atheist (or not). We want to measure how much evidence pOlyamory and Atheism each provide for Rationality, given Background information B. Those are:

  • Atheism gives 10 log(P(A|RB)÷P(A|¬RB)) decibels of evidence for Rationality
  • pOlyamory gives 10 log(P(O|RB)÷P(O|¬RB)) decibels of evidence for rationality

Now imagine that B tells us the following: "Among the 6 billion people on Earth, about 1 billion are atheists, 10 million are rational, and 1 million are polyamorous. All rational people are atheists, and 5% of them are polyamorous".

So:

  • P(A|RB) = 1
  • P(A|¬RB) = 990,000,000÷5,990,000,000 = 99÷599 ~= 0.17
  • P(O|RB) = 0.05
  • P(O|¬RB) = 500,000÷5,990,000,000 = 5÷59900 ~= 8.3×10⁻⁵

Applying the two formulas above, Atheism gives about 8 decibels of evidence for Rationality. Polyamory, on the other hand, gives about 28. And the prior for Rationality itself, P(R|B), starts at about -28 decibels. pOlyamory is enough to seriously doubt the irRationality of someone, while Atheism doesn't even raise it above the "should think about it" threshold.

If this is not intuitive, keep in mind that according to B, only 1% of Atheists are Rational, while a whopping 50% of pOlyamorous people are. Well, with those made-up numbers anyway. Real numbers are most probably less extreme than that. But I still expect to find more rationalists among a polyamorous sample than among an atheist sample.
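
For anyone who wants to reproduce these figures, here is a minimal Python sketch of the same calculation; it assumes nothing beyond the made-up numbers in the comment above:

    import math

    def decibels(p, q):
        """Evidence in decibels: 10 * log10 of the likelihood ratio p/q."""
        return 10 * math.log10(p / q)

    # Made-up background information B from the comment.
    total    = 6_000_000_000
    atheists = 1_000_000_000
    rational = 10_000_000
    poly     = 1_000_000

    non_rational  = total - rational          # 5,990,000,000
    rational_poly = 0.05 * rational           # 500,000 (5% of rational people are poly)

    p_a_given_r     = 1.0                                        # every rational person is an atheist
    p_a_given_not_r = (atheists - rational) / non_rational       # ~0.17
    p_o_given_r     = 0.05
    p_o_given_not_r = (poly - rational_poly) / non_rational      # ~8.3e-5

    prior_db = decibels(rational / total, non_rational / total)  # prior log-odds of Rationality

    print(f"Atheism:   {decibels(p_a_given_r, p_a_given_not_r):6.1f} dB")  # ~ +8
    print(f"Polyamory: {decibels(p_o_given_r, p_o_given_not_r):6.1f} dB")  # ~ +28
    print(f"Prior:     {prior_db:6.1f} dB")                                # ~ -28

With these invented numbers, polyamory roughly cancels the -28 dB prior (about even odds, i.e. the 50% figure above), while atheism only brings it up to about -20 dB, i.e. roughly 1%.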

Comment author: TheOtherDave 01 January 2013 06:48:20PM 0 points [-]

Yes, that's true.
My reply to Robb elsewhere in this thread when he made a similar point is relevant here as well.

Comment author: Raemon 26 December 2012 03:50:11PM 0 points [-]

I agree. I wouldn't have worded the original comment that way.

Comment author: someonewrongonthenet 27 December 2012 07:50:52PM *  -2 points [-]

They really ought not to, though. Living forever, like polyamory, is a preference which hinges strictly on a person's utility function. It's perfectly possible for a rational agent to not want to live forever, or be polyamorous.

Even if someone considers polyamory and cryonics morally wrong... in this community we often use rational and bayesian interchangeably, but let's revert to the regular definition for a moment. People who condemn polyamory or cryonics based on cached thoughts are not rational in the true English sense of the word (rational - having reason or justification for belief) but they are not any less epistemically bayesian...it's not like they have a twisted view of reality itself.

Atheism...well that's a proposition about the truth, so you could argue that it says something about the individual's rationality. Trouble is, since God is so ill defined, atheism is poorly defined by extension. So you'd get someone like Einstein claiming not to be an atheist on mostly aesthetic grounds.

Because of our semantic idiocy atheism implies adeism as well, even though deists, atheists, and pantheists have otherwise identical models about observable reality...so I'd hesitate to say that deism/pantheism imply irrationality.

Edit: Also, let's not confuse intelligence with bayesian-ness. Intelligence correlates with all the beliefs mentioned above largely because it confers resistance to conformity, and that's the real reason that polyamory and atheism are over-represented at LessWrong. Cryonics...I think that's a cultural artifact of the close affiliation with the Singularity Institute.

Comment author: RobbBB 31 December 2012 02:27:14AM *  2 points [-]

They really ought not to, though. Living forever, like polyamory, is a preference which hinges strictly on a person's utility function. It's perfectly possible for a rational agent to not want to live forever, or be polyamorous.

But we're talking about probability, not possibility. It's possible for a mammal to be bipedal; but evidence for quadrupedalism is still evidence for being a mammal. Similarly, it's possible to be irrational and polyamorous; but if the rate of polyamory is greater among rationalists than among non-rationalists, then polyamory is evidence of rationality, regardless of whether it directly causally arises from any rationality-skill. The same would be true if hat-wearing were more common among rationalists than among non-rationalists. It sounds like you're criticizing a different attitude than is TheOtherDave.

Comment author: fubarobfusco 27 December 2012 09:43:45PM 2 points [-]

Intelligence correlates with all the beliefs mentioned above largely because it confers resistance to conformity, and that's the real reason that polyamory and atheism are over-represented at LessWrong.

Regarding polyamory, it could also be a founder effect — given that several of the top contributors are openly poly, that both men and women are among them, and so on.

Comment author: army1987 27 December 2012 11:28:39PM 1 point [-]

Alicorn used to be mono, and I think so did Eliezer; and the fraction of poly respondents was about the same in the last two surveys, which... some part of my brain tells me is evidence against your hypothesis, but now that I think about it I'm not sure why.

Comment author: MugaSofer 26 December 2012 01:29:54AM *  2 points [-]

Well, I'm not especially poly myself, but it seems to me rationalists are more likely to look at monogamy and seriously consider the possibility it's suboptimal.

Comment author: army1987 26 December 2012 11:52:14PM 4 points [-]

BTW, there are plenty of monogamists who think it's immoral for anyone to have a sexual relationship with someone without also committing to not have sex with anyone else, whereas I'd guess there aren't many poly people who think it's immoral for other people to have monogamous relationships.

Comment author: TheOtherDave 27 December 2012 01:02:10AM 3 points [-]

I suspect it depends somewhat on how I phrase the question.

Even in my own American urban poly-friendly subculture, I expect a significant percentage of poly folk would agree that there exist a great many monogamous relationships right now that are immoral, which would not be immoral were they polygamous, because they involve people who ought to be/would be happier if they were/are naturally polygamous. I'm not sure what numbers they'd put around "many", though. I know several who would put it upwards of 50%, but I don't know how representative they are.

I therefore suspect that some (but I don't know how many) of them would, if they were coherent about their understanding of evidence, reluctantly agree that being in a monogamous relationship is evidence of immorality.

But I agree that there are few if any poly folk who would agree (other than as a signaling move) that monogamous relationships are definitionally immoral.

Comment author: someonewrongonthenet 27 December 2012 08:06:45PM *  -2 points [-]

That's pretty silly. The suffering from jealousy and the stress of having to think through all those difficult issues would make polyamory a net loss for many people.

If you wanted to put them down, you might have a case for calling such people weak or stupid for being unable to deal with emotions or think about these issues...or you might say that they are wise, and they are picking their battles and investing those emotional/intellectual resources into things that matter more to them.

Of course, I think you'd be completely justified in calling the belief that polyamory is immoral a utilitarian net evil.

Comment author: SaidAchmiz 27 December 2012 12:17:13AM 2 points [-]

How many monogamists hold such opinions but not due to religiosity (or the unexamined remnants of former religiosity)?

Comment author: Desrtopa 27 December 2012 01:22:01AM 3 points [-]

Well, quite a lot aren't aware of the existence of polyamory at all. If they think that a person who's in a sexual relationship with someone would necessarily feel betrayed if they knew that person was also having sex with someone else, they would be likely to consider it immoral even without a religious basis.

Numerically, though, I have no idea.

Comment author: army1987 27 December 2012 01:24:08PM *  1 point [-]

I dunno -- but if you mean “the unexamined remnants of former religiosity” on a societal level¹ rather than on an individual level, then I guess that's the main reason for the overwhelming majority of such people to hold such opinions. There might also be a few people who know that monogamy can curb the spread of STDs and lack a clear distinction between terminal and instrumental values, and/or (possibly incorrectly²) believe that monogamy is “natural” (i.e. it was the norm in the EEA) and commit the naturalistic fallacy, though.


  1. i.e., a society used to have a memeplex, originating from religion, which included the idea that “one can only (romantically) love one person at a time”; that society has since shed most of that memeplex, but not that particular idea, which is still part of the intersubjective truth -- even among individuals who were never religious in the first place.

  2. “Possibly” meaning that I don't know myself, because I haven't looked into that yet -- not that I've seen all the available evidence and concluded it doesn't definitely point one way or another.

Comment author: MugaSofer 27 December 2012 02:15:58AM *  -1 points [-]

This sounds true, but I'm not sure how it's relevant to my comment beyond my use of the word "polyamory".

Comment author: army1987 27 December 2012 01:46:13PM 2 points [-]

I think I wanted to show how people who are monogamous usually are because of a cached belief, whereas people who are polyamorous usually are because they've thought about both possibilities and concluded one is better.

Comment author: V_V 31 December 2012 12:31:58AM *  7 points [-]

Then you failed. Consider the following variant of your argument:

"there are plenty of non child molesters who think it's immoral for any adult to have a sexual relationship with a child, whereas I'd guess there aren't many child molesters who think it's immoral for other adults to have relationships exclusively with adults."

"I think I wanted to show that people who are not child molesters usually are because of a cached belief, whereas people who are child molesters usually are because they've thought about both possibilities and concluded one is better."

Comment author: [deleted] 31 December 2012 01:08:53AM 3 points [-]

That's distressingly convincing.

Comment author: army1987 31 December 2012 01:32:33AM 1 point [-]

Why was that downvoted to -2? Technically that's correct (though by “show” I didn't mean ‘rigorously prove’, I meant ‘provide one more piece of evidence’ -- but yeah, the second paragraph of your comment is evidence for the third, though priors are different in the two cases).

Comment author: V_V 31 December 2012 01:52:27AM 3 points [-]

Why was that downvoted to -2?

"Let us not speak of them, but look, and pass."

the second paragraph of your comment is evidence for the third, though priors are different in the two cases

I don't think so. The existence of a widespread moral prohibition against some uncommon behavior, which is not matched by a claim of immorality of the typical behavior by those who defend the uncommon behavior, is not evidence that the widespread moral prohibition is a "cached belief" (that is, a meme maintained only due to tradition and intellectual laziness). People in the majority group could well have pondered the uncommon behavior and decided they had good reason to consider it immoral.

Comment author: MugaSofer 27 December 2012 03:18:42PM -1 points [-]

Ah. Very true.

Comment author: TheOtherDave 26 December 2012 04:32:59AM 1 point [-]

More likely than the typical person on the street? Sure, agreed. As are contrarians, I'd expect.

Comment author: MugaSofer 26 December 2012 03:31:00PM -1 points [-]

Yup. Just like technophiles are more likely to embrace the Singularity.

Comment author: Qiaochu_Yuan 26 December 2012 01:47:31AM *  -1 points [-]

Yes, for reasons that have already been described, but it's weak evidence, and other things you know about yourself presumably screen it off.

Comment author: NancyLebovitz 25 December 2012 06:38:12PM 0 points [-]

I thought there was a rule about not breaking tradition, even if the tradition isn't otherwise supported. No?

Comment author: Ezekiel 25 December 2012 08:04:24PM 0 points [-]

The line that people tend to quote there is "מנהג ישראל דין הוא" (the custom of Israel is law), but most people have never looked up its formal definition. Its actual halachic bearing is much too narrow to justify (for example) making kids sing Shabbat meal songs.

Comment author: homunq 30 December 2012 10:50:07PM 3 points [-]

"well, death has a purpose sometimes. It helps you see the good things..."

I don't find this repugnant. Your friend clearly would not kill their grandmother in order to learn a lesson. I think they are simply looking for the silver lining, and looking away from the horror. This is a fair strategy in this case, because the fact is that nobody was able to prevent your grandmother's death. Being rational means being able to look at the world as it is, but it doesn't mean you're never allowed to stop staring at the worst parts.

Comment author: Raemon 31 December 2012 07:29:48PM 0 points [-]

I don't find it repugnant and I'm actually fine with people who deal with death that way, in many cases. But it really wasn't what I needed to hear at that moment.

Comment author: Ritalin 26 December 2012 02:11:54AM 3 points [-]

We have people in this room, right now, who are working on fixing big problems in the medical industry. We have people in this room who are trying to understand and help fix the criminal justice system. We have people in this room who are dedicating their lives to eradicating global poverty. We have people in this room who are literally working to set in motion plans to optimize everything ever. We have people in this room who are working to make sure that the human race doesn’t destroy itself before we have a chance to become the people we really want to be.

Ah. So it's not all about existential risk, it's also about making existence itself more worthwhile... I'm going to be shameless here and ask directly; do any of you guys work on something where you could use the help of a Junior Engineer in... well, it's very generic; we do a bit of jack-of-all-trades-master-of-none here... Because I'm about to finish uni/college, and the job market is monstrously tight over here, and, well, where would there be better opportunities to learn proper engineering than under rationalists? I'm especially interested in things related to developing "developing countries" (especially those with governments and societies hostile to freethinking) and things related to sustainable energy and infrastructure (all at the same time would be glorious, but one out of three ain't bad).

Comment author: [deleted] 29 December 2012 12:53:32AM 2 points [-]

By default, this should be your approach.

I am also a junior engineer. Working a normal job and donating what I can to SI.

Took me a long-ass time to find a job, I know that feel. Keep looking.

Comment author: Raemon 27 December 2012 04:39:15PM 0 points [-]

I think it may be best to post this to the general discussion (especially if you don't have a local meetup group where you can be collaborating with people you know more personally). I don't have a good answer to your question off the bat, but I think it's an important question and hope someone here can help you more specifically.

Comment author: Ritalin 27 December 2012 07:01:13PM 0 points [-]

You mean the Open Thread?

Comment author: jsalvatier 28 December 2012 09:58:44PM 0 points [-]

The discussion section is probably better.

Comment author: jbash 28 December 2012 02:51:39PM 2 points [-]

Does anybody in your group have children? It doesn't seem to me that what you have in your ritual book would serve them very well. Even ignoring any possible desire to "recruit" the children themselves, that means that adults who have kids will have an incentive to leave the community.

Maybe it's just that I personally was raised with zero attendance at anything remotely that structured, but it's hard for me to imagine kids sitting through all those highly abstract stories, many of which rely on lots of background concepts, and being anything but bored stiff (and probably annoyed). Am I wrong?

Even if they could sit through it happily, there's the question of whether having them chant things they don't understand respects their agency or promotes their own growth toward reasoned examination of the world and their beliefs about it. Especially when, as somebody else has mentioned, the ritual includes stuff that's not just "rationalism". Could there be more to help them understand how to get to the concepts, so that they could have a reasonable claim not to just be repeating "scripture"?

Or am I just worrying about something unreal?

Comment author: TheOtherDave 28 December 2012 04:51:49PM 3 points [-]

Maybe it's just that I personally was raised with zero attendance at anything remotely that structured, but it's hard for me to imagine kids sitting through all those highly abstract stories, many of which rely on lots of background concepts, and being anything but bored stiff (and probably annoyed). Am I wrong?

(shrugs) You're not wrong, but I'm not sure you're right either.

In my own case, growing up as an Orthodox Jew involved sitting through lots of highly abstract ritual observances that relied on lots of background concepts (and frequently being bored stiff and annoyed). And if a rationalist group is only as successful at retaining the involvement of parents and their kids as Orthodox Judaism is, dayenu. (Which is to say: that would be sufficient.)

More generally, I suspect that it's perfectly possible to involve kids in something that structured, it just requires giving the kids roles they can engage with in that structure.

Comment author: Raemon 28 December 2012 03:39:49PM 1 point [-]

A comment I made in the introduction article:

There are people in the NYC community who expect to have kids soon, and friends in San Francisco with kids who might potentially come to one in the future.

This whole experience was inspired by my family's Christmas Eve celebration, which was inherently designed for children. I customized it for the people who make up our community now, but will definitely be revising things as kids become part of the equation.

This may well involve splitting off into multiple events that kids don't participate in. (For example, we might have the "fun" part of the evening end with some activity for kids, and then they go to bed, and then older people do the more serious sections). How exactly to handle it depends on how many kids are coming, how old, etc. We'll cross the bridges when we come to them, but yeah, they're coming.

More generally, each community that wants this will need to customize it for their own needs. Daenerys' event in Ohio didn't end up having singing or litanies at all, instead being built around custom vows and affirmations.

Comment author: shminux 24 December 2012 10:39:02PM 2 points [-]

Having lost parents and grandparents in the last several years, I appreciate your sentiment. But, as much as I would want to live forever, I am not sure that eternal individual life is good for humanity as a whole, at least without some serious mind hacking first. Many other species, like, say, salmon, have a fixed lifespan, so intelligent salmon would probably not worry about individual immortality. It seems to me that associating natural death of an individual with evil is one of those side effects of evolution humans could do without. That said, I agree that suffering and premature death probably have no advantage for the species as a whole and ought to be eliminated, but I cannot decide for sure if a fixed lifespan is such a bad idea.

Comment author: Raemon 24 December 2012 10:44:11PM 6 points [-]

I actually mostly agree with you. Or at least, that the answer is not terribly obvious. I didn't expound upon it during the ceremony (partly due to time, and partly because one of the most important aspects of the moment was to give a time for anti-deathists to grieve for people they lost, whose death they were unable to process among peers who shared their beliefs).

But in the written-up version here, I thought it was important that I make my views clear, and included the bit about me not actually being that much of an anti-deathist. I think the current way people die is clearly suboptimal, and once you remove it as an anchor I'm not sure if people should die after 100 years, a thousand years, or longer or at all. But I don't think it's as simple an idea as "everybody gets to live forever."

Comment author: SaidAchmiz 25 December 2012 01:47:35AM *  8 points [-]

The obvious answer is "Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that's fine too; totally up to you."

In any case that seems to me to be much more obvious than "we (for some value of 'we') decide, for all of humanity, how long everyone gets to live".

In other words, I don't think there's a fact of the matter about "if people should die after 100 years, a thousand years, or longer or at all". The question assumes that there's some single answer that works for everyone. That seems unlikely. And the idea that it's OK to impose a fixed lifespan on someone who doesn't want it is abhorrent.

Additionally — this is re: shminux's comment, but is related to the overall point — "Good for humanity as a whole" and "advantage for the species as a whole" seem like nonsensical concepts in this context. Humanity is just the set of all humans. There's no such thing as a nebulous "good for humanity" that's somehow divorced from what's good for any or every individual human.

Comment author: homunq 30 December 2012 10:34:53PM 0 points [-]

If resources are limited and population has reached carrying capacity — even if those numbers are many orders of magnitude larger than today — then each living entity would get to have one full measure of participating in the creation of a new living entity, and then enough time after that such that the average time of participating in life-creation was the same as the average of birth and death. So with sexual reproduction, you'd get to have two kids, and then when your second kid is as old as you were when your first kid was born, it would be your turn to die. I suspect in that world I would decide to have my second kid eventually and thus I'd end up dying when my age was somewhere in the 3 digits.

Obviously, that solution is "fair and stable", not "optimal". I'm not arguing that that's how things should work — and I can easily imagine ways to change it that I'd view as improvements — but it's a nice simple model of how things could be stable.

Comment author: SaidAchmiz 31 December 2012 12:17:28AM 0 points [-]

Well, that model may be stable (I haven't actually thought it through sufficiently to judge, but let's grant that it is) — but how exactly is it "fair"? I mean, you're assuming a set of values which is nowhere near universal in humanity, even. I'm really not even sure what your criteria here are for fairness (or, for that matter, optimality).

My problem with what you describe is the same as my problem with what shminux says in some of his comments, and with a sort of comment that people often make in similar discussions about immortality and human lifespan. Someone will describe a set of rules, which, if they were descriptive of how the universe worked, would satisfy some criteria under discussion (e.g. stability), or lack some problem under discussion (e.g. overpopulation).

Ok. But:

  • Those rules are not, in fact, descriptive of how the universe works (or else we wouldn't be having this discussion). Do you think they should be?
  • If so, how do we get from here to there? Are we modifying the physical laws of the universe somehow? Are we putting enforced restrictions in place?
  • Who enforces these restrictions? Who decides what they are in the first place? Why those people? What if I disagree? (i.e. are you just handwaving away all the sociopolitical issues inherent in attempts to institute a system?)

For instance, you say that "each living entity would get to have" so-and-so in terms of lifespan. What does that mean? Are you suggesting that the DNA of every human be modified to cause spontaneous death at some predetermined age? Aside from the scientific challenge, there are... a few... moral issues here. Perhaps we'll just kill people at some age?

What I am getting at is that you can't just specify a set of rules that would describe the ideal system when, in reality, getting from our current situation to one where those rules are in place would require a) massive amounts of improbable scientific work and social engineering, and b) rewriting human terminal values. We might not be able to do the former, and I (and, I suspect, most people, at least in this community) would strongly object to the latter.

Comment author: pleeppleep 25 December 2012 02:48:18AM *  0 points [-]

In other words, I don't think there's a fact of the matter about "if people should die after 100 years, a thousand years, or longer or at all". The question assumes that there's some single answer that works for everyone. That seems unlikely.

Not necessarily true. The question posits the existence of an optimal outcome; it just neglects to mention what, exactly, that outcome would be optimal with respect to. Before we start coming up with solutions, we would probably need to determine what criteria a system that accommodates immortality has to meet to satisfy us.

The obvious answer is "Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that's fine too; totally up to you."

Limited resources complicate the issue somewhat, and even with nanotechnology and fusion power there would still be the problem of organizing a system that isn't inherently self-destructive.

I think I agree with the spirit of your answer: "We can't possibly figure out how to do that, and in any case doing so wouldn't feel right, so we'll let the people involved sort it out amongst themselves." But there are a lot of problems that can arise from that. There would probably need to be some sort of system of checks and balances, but that would likely deteriorate over time and could itself turn the whole thing upside down. I doubt you'll ever really be able to design a single system for all of humanity.

And the idea that it's OK to impose a fixed lifespan on someone who doesn't want it is abhorrent.

To you, perhaps. Well, and to me. Your intuitions on the matter are not universal, however. Far from it, as our friends' comments show.

My main problems with such an idea (read: the ones that don't rest entirely on feelings of moral sacredness) would be the dangerous vulnerability of the system it describes to power grabs, its capacity to threaten my ambitions, and the fact that, if implemented, it would lead to a world that's boring all around. (I mean, if you can fix the lifespans, then you already know the ending: the person dies. Why not just save yourself the trouble and leave them dead to begin with?)

Comment author: pleeppleep 24 December 2012 11:50:40PM *  1 point [-]

Note: Not trying to attack your position, just curious.

but I cannot decide for sure if fixed lifespan is such a bad idea.

Fixed by whom, might I ask?

It seems to me that associating natural death of an individual with evil is one of those side effects of evolution humans could do without.

You seem to be implying that designed death is worse. How do you figure?

Comment author: shminux 25 December 2012 01:34:07AM 1 point [-]

Fixed by whom, might I ask?

Superhappy aliens, FAI, United Nations... There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans' terminal values are adjusted in a way that they don't strive for infinite individual lifespan.

You seem to be implying that designed death is worse. How do you figure?

I don't. Suffering is bad, finite individual existence is not necessarily so.

Comment author: Multiheaded 25 December 2012 11:58:22AM *  8 points [-]

Humans' terminal values are adjusted in a way...

No proposal that includes these words is worth considering. There's no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone's souls. That's literally what terminal values are all about; you can only trade off between them, not optimize them away whenever it would seem expedient to!

If it's a terminal value for most people to suffer and grieve over the loss of individual life - and they want to suffer and grieve, and want to want to - a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we'd be messing with the very complexity of human values, period.

Comment author: shminux 25 December 2012 06:22:46PM 1 point [-]

There's no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone's souls.

A statement like that needs a mathematical proof.

If it's a terminal value for most people to suffer and grieve over the loss of individual life

"If" indeed. There is little "evolution-spawned" about it (not that it's a good argument to begin with, trusting the "blind idiot god"), a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values. Why don't you give it a go.

Comment author: arborealhominid 31 December 2012 03:29:59AM 1 point [-]

I agree with what you're saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don't know what I think.)

Comment author: Multiheaded 31 December 2012 03:44:23AM *  1 point [-]

(I honestly don't know what I think.)

Ah... If you or I knew what to think, we'd be working on CEV right now, and we'd all be much less fucked than we currently are.

Comment author: MugaSofer 25 December 2012 01:23:59PM 2 points [-]

Humans' terminal values are adjusted in a way that they don't strive for infinite individual lifespan.

If human terminal values need to be adjusted for this to be acceptable to them, then it is immoral by definition.

Comment author: shminux 25 December 2012 06:14:47PM 1 point [-]

Looks like you and I have different terminal meta-values.

Comment author: MugaSofer 25 December 2012 10:32:42PM -1 points [-]

Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you're a neurotypical human, then you terminally value not killing people. Mindraping them into doing it themselves continues to violate this preference, unless all you actually care about is people's distress when you kill them, in which case remind me never to drink anything you give me.

Comment author: shminux 26 December 2012 12:00:34AM *  2 points [-]

Typical mind fallacy?

Comment author: MugaSofer 26 December 2012 01:21:31AM *  -1 points [-]

... are you saying I'm foolish to assume that you value human life? Would you, in fact, object to killing someone if they would never realize it? Yes? Congratulations, you're not a psychopath.

Comment author: Kaj_Sotala 31 December 2012 10:09:06AM 0 points [-]

Everyone who voluntarily joins the military is a psychopath?

Comment author: MugaSofer 31 December 2012 04:11:49PM -3 points [-]

Tell you what. Instead of typing out the answer to that,* I'm going to respond with a question: how do you think people who join the military justify the fact that they will probably either kill or aid others in killing?

*(I do have an answer in mind, and I will post it, even if your response refutes it.)

Comment author: wedrifid 26 December 2012 07:14:45AM 0 points [-]

Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you're a neurotypical human, then you terminally value not killing people.

"Neurotypical"... almost as powerful as True!

Comment author: MugaSofer 26 December 2012 03:36:49PM *  -1 points [-]

Seems like a perfectly functional Schelling point to me. Besides, I needed a disclaimer for the possibility that he's actually a psychopath or, indeed, an actual salmon-person (those are still technically "human", I assume).

Comment author: Kawoomba 26 December 2012 07:26:57AM -3 points [-]

Neurotypical: the tyranny of some supposedly existing, elusive majority that has always terminally valued (ever since living in trees), and always will terminally value (when colonizing the Canis Major Dwarf Galaxy), essentially the same things (such as strawberry ice cream, or not killing people).

If your utility function differs, it is wrong, while theirs is right. (I'd throw in some reference to a divine calibration, but that would be overly sarcastic.)

Comment author: MugaSofer 26 December 2012 03:44:37PM 1 point [-]

I may be confused by the sarcasm here. Could you state your objection more clearly? Are you arguing "neurotypical" is not a useful concept? Are you accusing me of somehow discriminating against agents that implement other utility functions? Are you objecting to my assertion that creating an agent with a different utility function is usually instrumentally bad, because it is likely to attempt to implement that utility function to the exclusion of yours?

Comment author: Kawoomba 26 December 2012 06:37:24PM 0 points [-]

Are you accusing me of somehow discriminating against agents that implement other utility functions?

Yes, here's your last reply to me on just that topic:

Except that humans share a utility function, which doesn't change. (...) Cached thoughts can result in actions that, objectively, are wrong. They are not wrong because this is some essential property of these actions, morality is in our minds, but we can still meaningfully say "this is wrong" just as we can say "this is a chair" or "there are five apples".

Also:

The fact that morality is acted upon in different ways (due to your "layers" or simply mistaken beliefs about the world) doesn't change the fact that it is there, underneath [emphasis mine], and that this is the standard we work by to declare something "good" or "bad". We aren't perfect at it, but we can make a reasonable attempt.

It is bizarre to me how you believe there is some shared objective morality - "underneath" - that is correct because it is "typical" (hello fallacious appeal to majority), and that outliers that have a different utility function have false values.

Even if there are shared elements (even across large, vague categories such as Chinese values and Western values), such as those surmised by CEV_humankind (probably an almost empty set), that does not make anyone's own morality/value function wrong; it merely makes it incongruent with the current cultural majority's views. Hence the "tyranny of some supposedly existing elusive majority".

Comment author: BerryPick6 25 December 2012 06:35:47PM 1 point [-]

I'm really curious to know what you mean by 'terminal meta-values'. Would you mind expanding a bit, or pointing me in the direction of a post which deals with these things?

Comment author: shminux 26 December 2012 01:59:34AM 1 point [-]

Say, whether it is ever acceptable to adjust someone's terminal values.

Comment author: MugaSofer 26 December 2012 02:09:00AM *  4 points [-]

No, I'm perfectly OK with adjusting terminal values in certain circumstances. For example, turning a Paperclipper into an FAI is obviously a good thing.

EDIT: Of course, turning an FAI into a Paperclipper is obviously a bad thing, because instead of having another agent working towards the greater good, we have an agent working towards paperclips, which is likely to get in the way at some point. Also, it's likely to feel sad when we have to stop it turning people into paperclips, which is a shame.

Comment author: pleeppleep 25 December 2012 02:25:12AM *  1 point [-]

Superhappy aliens, FAI, United Nations... There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans' terminal values are adjusted in a way that they don't strive for infinite individual lifespan.

Possible outcome; better than most; boring. I don't think that's really something to strive for, but my values are not yours, I guess. Also, I'm assuming we're only taking into account whether an outcome is desirable, not its probability of actually coming about.

I don't. Suffering is bad, finite individual existence is not necessarily so.

Did you arrive at this from logical extrapolation of your moral intuitions, or is this the root intuition? At this point I'm just curious to see how your moral values differ from mine.

Comment author: shminux 25 December 2012 03:09:36AM 1 point [-]

Good question. I'm just looking at some possible worlds where eternal individual life is less optimal than finite life for the purposes of species survival, yet where personal death is not a cause of individual anguish and suffering.

Comment author: Yvain 25 December 2012 09:46:37PM *  1 point [-]

Sorry, read this wrong.

Comment author: Kawoomba 25 December 2012 09:57:30PM 0 points [-]

In general: Depends on your terminal values.

In particular: Probably not much of a decision (advanced dementia versus legal death). As you know, in terms of preserving the functional capacity of the original human's cognition, both lead to the same result, albeit at different speeds. (In addition, even for cryonic purposes, it would be vastly better to conserve a non-degenerated brain.)