MatthewB comments on The Fundamental Question - Less Wrong

43 Post author: MBlume 19 April 2010 04:09PM


Comment author: MatthewB 21 April 2010 02:22:57AM 6 points [-]

It may just be me, but why do you need to find someone to follow?

I have always found that forging my own path through the wilderness is far more enjoyable and yields far greater rewards than following a path, no matter how small or large that path may be.

Comment author: PeerInfinity 21 April 2010 02:36:51PM 10 points [-]

Well, one reason why I feel that I need someone to follow is... severe underconfidence in my ability to make decisions on my own. I'm still working on that. Choosing a person to follow, and then following them, feels a whole lot easier than forging my own path.

I should mention again that I'm not actually "following" Eliezer in the traditional sense. I used his value system to bootstrap my own value system, greatly simplifying the process of recovering from christianity. But now that I've mostly finished with that (or maybe I'm still far from finished?), I am, in fact, starting to think independently. It's taking a long time for me to do this, but I am constantly looking for things that I'm doing or believing just because someone else told me to, and then reconsidering whether these things are a good idea, according to my current values and beliefs. And yes, there are some things I disagree with Eliezer about (the "true ending" to TWC, for example), and things that I disagree with SIAI about ("we're the only place worth donating to", for example). I'll probably start writing more about this, now that I'm starting to get over my irrational fear of posting comments here.

Though part of me is still worried about making SIAI look bad. And I'm still worried that the stuff I've already posted may end up harming SIAI's mission (and my mission) more than it could possibly have helped. Though of course it would be a bad idea to try to hide problems that need to be examined and dealt with. And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts. I should also mention that the idea of deliberately not saying things, in order to avoid making the group look bad, isn't actually something I was told by anyone from SIAI, I think it was a bad habit I brought with me from christianity.

Comment author: Nick_Tarleton 21 April 2010 02:58:20PM *  4 points [-]

And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts.

If by 'dark arts' you mean 'non-rational methods of persuasion', such things may be ethically questionable (in general; not volunteering information you aren't obligated to provide almost certainly isn't) but are not (categorically) wrong. Rational agents win.

Comment author: khafra 21 April 2010 03:46:58PM 12 points [-]

I like the way steven0461 put it:

...promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever. Specifically, don’t do it to yourself.

Comment author: PeerInfinity 21 April 2010 04:59:52PM 1 point [-]

I think I agree with both khafra and Nick.

I like this quote, and I've used it before in conversations with other people.

Comment author: RobinZ 21 April 2010 02:45:32PM 2 points [-]

I think it's worth distinguishing between "underconfidence" and "lack of confidence" - the former implies the latter (although not absolutely), but under some circumstances you are justified in questioning your competence. Either way, it sounds like you're working on both ends of that balance, which is good.

Though part of me is still worried about making SIAI look bad. And I'm still worried that the stuff I've already posted may end up harming SIAI's mission (and my mission) more than it could possibly have helped. Though of course it would be a bad idea to try to hide problems that need to be examined and dealt with. And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts. I should also mention that the idea of deliberately not saying things, in order to avoid making the group look bad, isn't actually something I was told by anyone from SIAI, I think it was a bad habit I brought with me from christianity.

I think this is good thinking.

Comment author: PeerInfinity 21 April 2010 04:57:13PM 2 points [-]

good point about underconfidence versus lack of confidence, thanks

Comment author: MatthewB 22 April 2010 07:17:39AM 0 points [-]

That puts it into an understandable context... I can't quite understand what it's like to have to shake off Christian beliefs. I was raised with a tremendously religious mother, but at about the age of 6 I began to question her beliefs and by 14 was sure that she was stark raving mad to believe what she did. So, I managed to keep from being brainwashed to begin with.

I've seen the results of people who have been brainwashed and who have not managed to break completely free from their old beliefs. Most of them swung back and forth between the extremes of bad belief systems (From born-again Christian to Satanist, and back, many times)... So, what you are doing is probably best for the time being, until you learn the tools needed to step off into the wilderness by yourself.

Comment author: PeerInfinity 22 April 2010 03:18:19PM 11 points [-]

In my case, I knew pretty much from the beginning that something was seriously wrong. But since every single person I had ever met was a christian (with a couple of exceptions I didn't realize until later), I assumed that the problem was with me. The most obvious problem, at least for me, was that none of the so-called christians was able to clearly explain what a christian is, and what it is that I need to do in order to not go to hell. And the people who came closest to being able to give a clear explanation, they were all different from each other, and the answer changed if I asked different questions. So I guess I was... partly brainwashed. I knew that there was something really important I was supposed to do, and that people's souls were at stake (a matter of infinite utility/anti-utility!) but no one was able to clearly explain what it was that I was supposed to do. But they expected me to do it anyway, and made it sound like there was something wrong with me for not instinctively knowing what it was that I was supposed to do. There's lots more I could complain about, but I guess I had better stop now.

So it was pretty obvious that I wasn't going to be able to save anyone's soul by converting them to christianity by talking to them. And I was also similarly unqualified for most of the other things that christians are supposed to do. But there was still one thing I saw that I could do: living as cheaply as possible, and donating as much money as possible to the church so that the people who claim to actually know what they're doing can just get on with doing it. And just being generally helpful when there was some simple everyday thing I could be helpful with.

Anyway, it wasn't until I went to university that I actually met any atheists who openly admitted to being atheists. Before then, I had heard that there was such a thing as an atheist, and that these were the people whose souls we were supposed to save by converting them to christianity, but Pascal's Wager prevented me from seriously considering becoming an atheist myself. Even if you assign a really tiny probability to christianity being true, converting to atheism seemed like an action with an expected utility of negative infinity. But then I overheard a conversation in the Computer Science students' lounge. That-guy-who-isn't-all-that-smart-but-likes-to-sound-smart-by-quoting-really-smart-people was quoting Eliezer Yudkowsky. Almost immediately after that conversation, I googled the things he was talking about. I discovered Singularitarianism. An atheistic belief system, based entirely on a rational, scientific worldview, to which Pascal's Wager could be applied. (there is an unknown probability that this universe can support an infinite amount of computation, therefore there is an unknown probability that actions can have infinite positive or negative utility.) I immediately realized that I wanted to convert to this belief system. But it took me a few weeks of swinging back and forth before I finally settled on Singularitarianism. And since then I haven't had any desire at all to switch back to christianity. Though I was afraid that, because of my inability to stand up to authority figures, someone might end up convincing me to convert back to christianity against my will. Even now, years later, in scary situations when dealing with an authority figure who is a christian, part of me still sometimes thinks "OMG maybe I really was wrong about all this!"
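The expected-utility structure of that wager, as described above, can be sketched in a few lines. All the numbers here are illustrative assumptions, not anyone's actual estimates; the sketch only shows why the argument felt compelling at the time, not that it is sound.

```python
import math

# Hedged sketch of the naive wager: assign any nonzero probability to the
# belief system being true, and pair it with an infinite negative payoff
# for rejecting it. Both numbers are made up for illustration.
p_true = 1e-9                    # tiny assumed probability
payoff_if_wrong = -math.inf      # "infinite negative utility" for converting away
expected_utility = p_true * payoff_if_wrong

# Any nonzero probability times an infinite payoff still dominates the
# calculation, which is the whole (flawed) force of the wager.
print(expected_utility)          # -inf
```

This is exactly the step rejected later in the same comment thread, once the problems with Pascal's Wager become clear: with infinities in the payoff column, the arithmetic swamps every other consideration.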

Anyway, I'm still noticing bad habits from christianity that I'm still doing, and I'm still working on fixing this. Also, I might be oversensitive to noticing things that are similar between christianity and Singularitarianism. For example, the expected utility of "converting" someone to Singularitarianism. Though in this case you're not guaranteeing that one soul is saved, you're slightly increasing the probability that everyone gets "saved", because there is now one more person helping the efforts to help us achieve a positive Singularity.

Oh, and now, after reading LW, I realize what's wrong with Pascal's Wager, and even if I found out for certain that this universe isn't capable of supporting an infinite amount of computation, I still wouldn't be tempted to convert back to christianity.

Random trivia: I sometimes have dreams where a demon, or some entirely natural thing that for some reason is trying to look like a demon, is trying to trick or scare me into converting back to christianity. And then I discover that the "demon" was somehow sent by someone I know, and end up not falling for it. I find this amusingly ironic.

As usual, there's lots more I could write about, but I guess I had better stop writing for now.

Comment author: cousin_it 23 April 2010 08:26:44AM *  19 points [-]

But it took me a few weeks of swinging back and forth before I finally settled on Singularitarianism.

Here's a quote from an old revision of Wikipedia's entry on The True Believer that may be relevant here:

A core principle in the book is Hoffer's insight that mass movements are interchangeable; he notes fanatical Nazis later becoming fanatical Communists, fanatical Communists later becoming fanatical anti-Communists, and Saul, persecutor of Christians, becoming Paul, a fanatical Christian. For the true believer the substance of the mass movement isn't so important as that he or she is part of that movement.

And from the current revision of the same article:

Hoffer quotes extensively from leaders of the Nazi and communist parties in the early part of the 20th Century, to demonstrate, among other things, that they were competing for adherents from the same pool of people predisposed to support mass movements. Despite the two parties' fierce antagonism, they were more likely to gain recruits from their opposing party than from moderates with no affiliation to either.

Can't recommend this book enough, by the way.

Comment author: PeerInfinity 23 April 2010 06:17:26PM *  13 points [-]

Thanks for the link, and the summary. Somehow I don't find that at all surprising... but I still haven't found any other cause that I consider worth converting to.

At the time I converted, Singularitarianism was nowhere near a mass movement. It consisted almost entirely of the few of us in the SL4 mailing list. But maybe the size of the movement doesn't actually matter.

And it's not "being part of a movement" that I value, it's actually accomplishing something important. There is a difference between a general pool of people who want to be fanatical about a cause, just for the emotional high, and the people who are seriously dedicated to the cause itself, even if the emotions they get from their involvement are mostly negative. This second group is capable of seriously examining their own beliefs, and if they realize that they were wrong, they will change their beliefs. Though as you just explained, the first group is also capable of changing their minds, but only if they have another group to switch to, and they do this mostly for social reasons.

Seriously though, the emotions I had towards christianity were mostly negative. I just didn't fit in with the other christians. Or with anyone else, for that matter. And when I converted to Singularitarianism, I didn't exactly get a warm welcome. And when I converted, I earned the disapproval of all the christians I know. Which is pretty much everyone I have ever met in person. I still have not met any Singularitarian, or even any transhumanist, in person. And I've only met a few atheists. I didn't even have much online interaction with other transhumanists or Singularitarians until very recently. I tried to hang out in the SL4 chatroom a few years ago, but they were openly hostile to the way I treated Singularitarianism as another belief system to convert to, another group to be part of, rather than... whatever it is that they thought they were doing instead. And they didn't seem to have a high opinion of social interaction in general. Or maybe I'm misremembering this.

Anyway, I spent my first approximately 7 years as a Singularitarian in almost complete isolation. I was afraid to request social interaction for the sake of social interaction, because somehow I got the idea that every other Singularitarian was so totally focused on the mission that they didn't have any time at all to spare to help me feel less lonely, and so I should either just put up with the loneliness or deal with it on my own, without bothering any of the other Singularitarians for help. The occasional attempt I made to contact some of the other Singularitarians only further confirmed this theory. I chose the option of just putting up with the loneliness. That may have been a bad decision.

And just a few weeks ago, I found out that I'm "a valued donor" to SIAI. Though I'm still not sure what this means. And I found out that other Singularitarians do, in fact, socialize just for the sake of socializing. And I found out that most of them spend several hours a day "goofing off". And that they spend a significant percentage of their budget on luxuries that technically they could do without, without having a significant effect on their productivity. And that most of them live generally happy, productive, and satisfying lives. And that it was silly of me to feel guilty for every second and every penny that I wasted on anything that wasn't optimally useful for the mission. In addition to the usual reasons why feeling guilty is counterproductive.

Anyway, things are finally starting to get better now, and I don't think I'll accomplish anything by complaining more.

Also, most of this was probably my own fault. It turns out that everyone living at the SIAI house was totally unaware of my situation. And this is mostly my fault, because I was deliberately avoiding contacting them, because I was afraid to waste their time. And wasting the time of someone who's trying to save the universe is a big no-no. I was also afraid that if I tried to contact them, then they would ask me to do things that I wasn't actually able to do, but wouldn't know for sure that I wasn't able to do, and would try anyway because I felt like giving up wasn't an option. And it turns out this is exactly what happened. A few months ago I contacted Michael Vassar, and he started giving me things to help with. I made a terrible mess out of trying to arrange the flights for the speakers at the 2009 Singularity Summit. And then I went back to avoiding any contact with SIAI. Until Adelene Dawner talked to them for me, without me asking her to. Thanks Ade :)

Um... one other thing I just realized... well, actually Adelene Dawner just mentioned it in Wave, where I was writing a draft of this post... the reason why I haven't been trying to socialize with people other than Singularitarians is... I was afraid that anyone who isn't a Singularitarian would just write off my fanaticism as general insanity, and therefore any attempt to socialize with non-Singularitarians would just end up making the Singularitarian movement look bad... I already wrote about how this is a bad habit I carried with me from christianity. It's strange that I hadn't actually spent much time thinking about this, I just somehow wrote it off as not an option, to try to socialize with non-Singularitarians, and ended up just not thinking about it after that. I still made a few careful attempts at socializing with non-Singularitarians, but the results of these experiments only confirmed my suspicions.

Oh, and another thing I just realized: Confirmation Bias. These experiments were mostly invalid, because they were set up to detect confirming evidence of my suspicions, but not set up to be able to falsify them. oops. I made the same mistake with my suspicions that normal people wouldn't be able to accept my fanatical Singularitarianism, my suspicions that the other Singularitarians are all so totally focused on the mission that they don't have any time at all for socializing, and also my suspicions that my parents wouldn't be able to accept my atheism. yeah, um, oops. So I guess it would be really silly of me to continue blaming this situation on other people. Yes, it may have been theoretically possible for someone else to notice and fix these problems, but I was deliberately taking actions that ended up preventing them from having a chance to do so.

There's probably more I could say, but I'll stop writing now.

Comment author: PeerInfinity 23 April 2010 08:10:25PM 8 points [-]

um... after reviewing this comment, I realize that the stuff I wrote here doesn't actually count as evidence that I don't have True Believer Syndrome. Or at least not conclusive evidence.

oh, and did I mention yet that I also seem to have some form of Saviour Complex? Of course I don't actually believe that I'm saving the world through my own actions, but I seem to be assigning at least some probability that my actions may end up making the difference between whether our efforts to achieve a positive Singularity succeed or fail.

but... if I didn't believe this, then I wouldn't bother donating, would I?

Do other people manage to believe that their actions might result in making the difference between whether the world is saved or not, without it becoming a Saviour Complex?

Comment author: cousin_it 24 April 2010 05:17:31AM *  5 points [-]

PeerInfinity, I don't know you personally and can't tell whether you have True Believer Syndrome. I'm very sorry for provoking so many painful thoughts... Still. Hoffer claims that the syndrome stems from lack of self-esteem. Judging from what you wrote, I'd advise you to value yourself more for yourself, not only for the faraway goals that you may someday help fulfill.

Comment author: PeerInfinity 24 April 2010 10:18:06PM *  3 points [-]

no need to apologise, and thanks for pointing out this potential problem.

(random trivia: I misread your comment three times, thinking it said "I know you personally can't tell whether you have True Believe Syndrome")

as for the painful thoughts... It was a relief to finally get them written down, and posted, and sanity-checked. I made a couple attempts before to write this stuff down, but it sounded way too angry, and I didn't dare post it. And it turns out that the problem was mostly my fault after all.

oh, and yeah, I am already well aware that I have dangerously low self-esteem. but if I try to ignore these faraway goals, then I have trouble seeing myself as anything more valuable than "just another person". Actually I often have trouble even recognizing that I qualify as a person...

also, an obvious question: are we sure that True Believer Syndrome is a bad thing? or that a Saviour Complex is a bad thing?

random trivia: now that I've been using the City of Lights technique for so long, I have trouble remembering not to use a plural first-person pronoun when I'm talking about introspective stuff... I caught myself doing that again as I checked over this comment.

Comment author: cousin_it 26 April 2010 05:52:17AM 4 points [-]

also, an obvious question: are we sure that True Believer Syndrome is a bad thing? or that a Saviour Complex is a bad thing?

I'm pretty sure of that. Not because of what it does to your goals, but because of what it does to you.

Comment author: Jack 24 April 2010 10:24:46PM *  4 points [-]

also, an obvious question: are we sure that True Believer Syndrome is a bad thing?

Say it was the case that promoting a singularity was a bad idea and that, in particular, SIAI did more harm than good. If someone had compelling evidence of this and presented it to you, would you be capable of altering your beliefs and behavior in accordance with this new data? I take it the True Believer would not, and I think we can all agree that would be a bad thing.

Comment author: cupholder 23 April 2010 08:35:18PM 3 points [-]

Maybe instead of imagining your actions as having some probability of 'making the difference,' try thinking of them as slightly boosting the probability of a positive singularity?

At any rate, the survival of someone wheeled in through the doors of a hospital might depend on the EMTs, the nurses, the surgeons, the lab techs, the pharmacists, the janitors and so on and so on. I'd say they're all entitled to take a little credit without being accused of having a savior complex!

Comment author: PeerInfinity 23 April 2010 08:43:41PM 1 point [-]

um... can you please explain what the difference is, between "having some probability X of making the difference between success and failure, of achieving a positive Singularity" and "boosting the probability of a positive Singularity, by some amount Y"? To me, these two statements seem logically equivalent. Though I guess they focus on different details...

oh, I just noticed one obvious difference: X is not equal to Y
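For what it's worth, under one simple toy model the two framings coincide: if the worlds with and without your action are coupled through the same underlying "luck", then the probability that your action makes the difference is exactly the amount by which it boosts the success probability. A hedged sketch with made-up numbers:

```python
import random

random.seed(0)
p_without = 0.010   # illustrative baseline probability of a positive outcome
p_with    = 0.012   # illustrative probability after your contribution

trials = 200_000
made_difference = 0
for _ in range(trials):
    u = random.random()              # shared "state of the world" in both cases
    success_with = u < p_with
    success_without = u < p_without
    # "Made the difference" = success happens with your action AND would
    # not have happened without it.
    if success_with and not success_without:
        made_difference += 1

# The observed frequency approximates p_with - p_without = 0.002.
print(made_difference / trials)
```

In this model X equals Y, so the two statements differ only in emphasis, which may be why one framing feels like a savior complex and the other doesn't.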

Comment author: cupholder 23 April 2010 09:27:33PM 1 point [-]

Though I guess they focus on different details...

Yeah, what I wrote was intended as an alternative way of thinking about the situation that might make you feel better, rather than an accusation of wrongness.

Comment author: AdeleneDawner 23 April 2010 06:24:38PM 2 points [-]

Yes, it may have been theoretically possible for someone else to notice and fix these problems, but I was deliberately taking actions that ended up preventing them from having a chance to do so.

Nitpick for clarity's sake: I've seen no evidence that this was deliberate in the sense implied, and I would expect to have seen such evidence if it did exist. It may have been deliberate or quasi-deliberate for some other reason, such as social anxiety (which I have seen evidence of).

Comment author: PeerInfinity 23 April 2010 06:28:14PM 2 points [-]

er, yes, that's what I meant. sorry for the confusion. I wasn't deliberately trying to prevent anyone from helping, I was deliberately trying to avoid wasting their time, by having no contact with them, which prevented them from being able to help.

Comment author: NancyLebovitz 23 April 2010 12:45:47PM 4 points [-]

I've heard from an ex-fundamentalist that for some people, conversion is a high in itself (I don't know if this is mostly true for Christians, or applies to movements in general). In any case, he said the high lasts for about two years, and then wears off, so that those people then convert to something else.

Comment author: juliawise 24 September 2011 07:25:20PM 2 points [-]

Huh. I knew this was true of me, but didn't realize it was common. I went from being an extreme Christian at 11 to an extreme utilitarian by about 14 (despite not knowing people who were extreme about either thing).

Comment author: Utilitarian 23 April 2010 06:02:12AM *  5 points [-]

PeerInfinity, I'm rather struck by a number of similarities between us:

  • I, too, am a programmer making money and trying to live frugally in order to donate to high-expected-value projects, currently SIAI.
  • I share your skepticism about the cause and am not uncomfortable with your 1% probability of positive Singularity. I agree SIAI is a good option from an expected-value perspective even if the mainline-probability scenario is that these concerns won't materialize.
  • As you might guess from my user name, I'm also a Utilitronium-supporting hedonistic utilitarian who is somewhat alarmed by Eliezer's change of values but who feels that SIAI's values are sufficiently similar to mine that it would be unwise to attempt an alternative friendly-AI organization.
  • I share the seriousness with which you regard Pascal's wager, although in my case, I was pushed toward religion from atheism rather than the other way around, and I resisted Christian thinking the whole time I tried to subscribe to it. I think we largely agree in our current opinions on the subject. I do sometimes have dreams about going to the Christian hell, though.

I'm not sure if you share my focus on animal suffering (since animals outnumber current humans by orders of magnitude) or my concerns about the implications of CEV for wild-animal suffering. Because of these concerns, I think a serious alternative to SIAI in cost-effectiveness is to donate toward promoting good memes like concern about wild animals (possibly including insects) so that, should positive Singularity occur, our descendants will do the right sorts of things according to our values.

Comment author: PeerInfinity 23 April 2010 04:26:19PM 3 points [-]

Hi Utilitarian!

um... are you the same guy who wrote those essays at utilitarian-essays.com? If you are, we have already talked about these topics before. I'm the same Peer Infinity who wrote that "interesting contribution" on Singularitarianism in that essay about Pascal's Wager, the one that tried to compare the different religions to examine which of them would be the best to Wager on.

And, um... I used to have some really nasty nightmares about going to the christian hell. But then, surprisingly, these nightmares somehow got replaced with nightmares of a hell caused by an Evil AI. And then these nightmares somehow got replaced with nightmares about the other hells that modal realism says must already exist in other universes.

I totally agree with you that the suffering of humans is massively outweighed by the suffering of other animals, and possibly insects, by a few orders of magnitude; I forget how many exactly, but I think it was less than 10 orders of magnitude. But I also believe that the amount of positive utility that could be achieved through a positive Singularity is... I think it was about 35 orders of magnitude more than all of the positive or negative utility that has been experienced so far in the entire history of Earth. But I don't remember the details of the math. For a few years now I was planning to write about that, but somehow never got around to it. Well, actually, I did make one feeble attempt to do the math, but that post didn't actually make any attempt to estimate how many orders of magnitude were involved.

Oh, and I totally share your concerns about the possible implications of CEV. Specifically, that it might end up generating so much negative utility that it outweighs the positive utility, which would mean that a universe completely empty of life would be preferable.

Oh, and I know one other person who shares your belief that promoting good memes like concern about wild animals would be more cost effective than donating to Friendly AI research. He goes by the name MetaFire Horsley in Second Life, and by the name MetaHorse in Google Wave. I have spent lots of time discussing this exact topic with him. I agree that spreading good memes is totally a good idea, but I remain skeptical about how much leverage we could get out of this plan, and I suspect that donating to Friendly AI research would be a lot more leveraged. But it's still totally a good idea to spread positive memes in your spare time, whenever you're in a situation that gives you an opportunity to do some positive meme spreading. MetaHorse is currently working on some sci-fi stories that he hopes will be useful for spreading these positive memes. He writes these stories in Google Wave, which means that you can see him writing the stories in real-time, and give instant feedback. I really think it would be a good idea for you to get in contact with him. If you don't already have a Google Wave account, please send me your gmail address in a private email, and I'll send you a Wave invite.

Oh, and I'm still really confused about how CEV is supposed to work. It seems like it's supposed to take into account our beliefs that the suffering of animals, or any sentient creatures, is unacceptable, and consider that as a source of decoherence if someone else advocates an action that would result in suffering. And apparently it's not supposed to just average out everyone's preferences, it's supposed to... I don't know what, exactly, but it's supposed to have the same or better results than if we spent lots and lots of time talking with the people who would advocate suffering, and we all learned more, were smarter, and "grew up further together", whatever that means, and other stuff. And that sounds nice in theory, but I'm still waiting for a more detailed specification. It's been a few years since the original CEV document was published, and there haven't been any updates at all. Well, other than Eliezer's posts to LW.

Oh, and I read all of your essays (yes, all of them, though I only skimmed that really huge one that listed lots of numbers for the amount of suffering of animals) a few months ago, and we chatted about them briefly. Though that was long enough ago that it would probably be a good idea for me to review them.

Anyway, um... keep up the good work, I guess, and thanks for the feedback. :)

Comment author: Utilitarian 25 April 2010 11:04:41AM *  5 points [-]

Bostrom's estimate in "Astronomical Waste" is "10^38 human lives [...] lost every century that colonization of our local supercluster is delayed," given various assumptions. Of course, there's reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.

Still, I'm concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans might actually increase the spread of wild-animal suffering through directed panspermia or lab-universe creation or various other means. The point of spreading the meme that wild-animal suffering matters and that "pristine wilderness" is not sacred would largely be to ensure that our post-human descendants place high ethical weight on the suffering that they might create by doing such things. (By comparison, environmental preservationists and physicists today never give a second thought to how many painful experiences are or would be caused by their actions.)

As far as CEV, the set of minds whose volitions are extrapolated clearly does make a difference. The space of ethical positions includes those who care deeply about sorting pebbles into correct heaps, as well as minds whose overriding ethical goal is to create as much suffering as possible. It's not enough to "be smarter" and "more the people we wished we were"; the fundamental beliefs that you start with also matter. Some claim that all human volitions will converge (unlike, say, the volitions of humans and the volitions of suffering-maximizers); I'm curious to see an argument for this.

Comment author: Nick_Tarleton 25 April 2010 10:18:16PM 3 points [-]

Some claim that all human volitions will converge

Who are you thinking of? (Eliezer is frequently accused of this, but has disclaimed it. Note the distinction between total convergence, and sufficient coherence for an FAI to act on.)

Comment author: PeerInfinity 25 April 2010 08:20:17PM *  3 points [-]

(edit: The version of utilitarianism I'm talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don't bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)

I totally agree!!!

Astronomical waste is bad! (or at least, severely suboptimal)

Wild-animal suffering is bad! (no, there is nothing "sacred" or "beautiful" about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)

Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, "This way is more fun", or "This way would generate a wider variety of possible outcomes" are not acceptable answers, at least not according to utilitarianism.)

Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

I also agree with your concerns about CEV.

Though of course we're talking about all this as if there is some objective validity to Utilitarianism, and as Eliezer explained: (warning! the following sentence is almost certainly a misinterpretation!) You can't explain Utilitarianism to a rock, therefore Utilitarianism is not objectively valid.

Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it's a fact about the universe, because these beliefs were generated by a process that involves observing the universe. We observe that pleasure really does feel good, and that pain really does feel bad, and therefore we want to maximize pleasure and minimize pain. But not everyone agrees with us. Eliezer himself doesn't even agree with us anymore, even though some of his previous writing implied that he did before. (I still can't get over the idea that he would consider it a good idea to kill a whole planet just to PREVENT an alien species from removing the human ability to feel pain, and a few other minor aesthetic preferences. Yeah, I'm so totally over any desire to treat Eliezer as an Ultimate Source of Wisdom...)

Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with. I still don't see how this could be possible, but maybe that's just a result of my own ignorance. And then there's the extreme difficulty of actually implementing CEV...

And no, I still don't claim to have a better plan. And I'm not at all comfortable with advocating the creation of a purely Utilitarian AI.

Your plan of trying to spread good memes before the CEV extrapolates everyone's volition really does feel like a good idea, but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation. I suspect that if you can't incorporate this process into CEV somehow, then any other possible strategy must involve cheating somehow.

Oh, I had another conversation recently on the topic of whether it's possible to convince a rational agent to change its core values through rational discussion alone. I may be misinterpreting this, but I think the conversation was inconclusive. The other person believed that... er, wait, I think we actually agreed on the conclusion, but didn't notice at the time. The conclusion was that if an agent's core values are inconsistent, then rational discussion can cause the agent to resolve this inconsistency. But if two agents have different core values, and neither agent has internally inconsistent core values, then neither agent can convince the other, without cheating. There's also the option of trading utilons with the other agent, but that's not the same as changing the other agent's values.

Anyway, I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I'm estimating the probability that this is the case at... significantly less than 50%. Not because I have any specific evidence about this, but as a result of applying the Pessimistic Prior. (Is that a standard term?)

Anyway, if this is the case, then the CEV algorithm will end up resulting in the outcome that you wanted. Specifically, an end to all suffering, and some form of utilitronium shockwave.

Oh, and I should point out that the utilitronium shockwave doesn't actually require the murder of everyone now living. Surely even we hardcore utilitarians should be able to afford to leave one planet's worth of computronium for the people now living. Or one solar system's worth. Or one galaxy's worth. It's a big universe, after all.

Oh, and if it turns out that some people's value systems would make them terribly unsatisfied to live without the ability to feel pain, or with any of the other brain modifications that a utilitarian might recommend... then maybe we could even afford to leave their brains unmodified. Just so long as they don't force any other minds to experience pain. Though the ethics of who is allowed to create new minds, and what sorts of new minds they're allowed to create... is kinda complicated and controversial.

Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world's population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it's a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It's a big universe, plenty of room for everyone. Just so long as they don't force any other mind to suffer.

Oh, and maybe there should also be rules against creating a mind that's forced to be wireheaded. There will be some complex and controversial issues involved in the design of the optimally efficient form of utilitronium that doesn't involve any ethical violations. One strategy that might work is a cross between the utilitronium scenario and the Solipsist Nation scenario. That is, anyone who wants to retreat entirely into solipsism, let them do their own experiments with what experiences generate the most utility. There's no need to fill the whole universe with boring, uniform bricks of utilitronium that contain minds that consist entirely of an extremely simple pleasure center, endlessly repeating the same optimally pleasurable experience. After all, what if you missed something when you originally designed the utilitronium that you were planning to fill the universe with? What if you were wrong about what sorts of experiences generate the most utility? You would need to allocate at least some resources to researching new forms of utilitronium, why not let actual people do the research? And why not let them do the research on their own minds?

I've been thinking about these concepts for a long time now. And this scenario is really fun for a solipsist utilitarian like me to fantasize about. These concepts have even found their way into my dreams. One of these dreams was even long, interesting, and detailed enough to make into a short story. Too bad I'm no good at writing. Actually, that story I just linked to is an example of this scenario going bad...

Anyway, these are just my thoughts on these topics. I have spent lots of time thinking about them, but I'm still not confident enough about this scenario to advocate it too seriously.

Comment author: thomblake 27 April 2010 01:40:52PM 5 points [-]

Your comments are tending to be a bit too long.

Comment author: PeerInfinity 27 April 2010 02:13:57PM *  1 point [-]

Thanks for the feedback. I kinda suspected that my comments were too long.

So, um... what would you prefer for me to do instead?

  • split them into multiple comments?
  • post them somewhere else (the Transhumanist Wiki?) and link to them from here?
  • refrain from posting the long comments entirely?
  • find some way to cut them down?
  • stick to a single topic per comment, and create multiple comments if I want to discuss multiple topics?
  • wait longer between posting these comments?
  • something else I haven't thought of?
Comment author: Utilitarian 27 April 2010 05:49:28AM 3 points [-]

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, "Bambi Lovers versus Tree Huggers: A Critique of Rolston's Environmental Ethics": "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support."

Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe.

Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.

Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with.

Yes, that's the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that's precisely why we're having this conversation, as well as why SIAI's research is so important. :)

but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation.

I hope so. Of course, it's not as though the only two possibilities are "CEV" or "extinction." There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political "realist" scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.

I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I'm estimating the probability that this is the case at... significantly less than 50%.

If you include paperclippers or suffering-maximizers in your definition of "anyone," then I'd put the probability close to 0%. If "anyone" just includes humans, I'd still put it less than, say, 10^-3.

Just so long as they don't force any other minds to experience pain.

Yeah, although if we take the perspective that individuals are different people over time (a "person" is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to "forcing someone" to feel pain....

Comment author: thomblake 27 April 2010 02:13:12PM 0 points [-]

Like many others here, I subscribe to emotivism as well as utilitarianism.

That is inconsistent. Utilitarianism has to assume there's a fact about the good; otherwise, what are you maximizing? Emotivism insists that there is not a fact about the good. For example, for an emotivist, "You should not have stolen the bread." expresses the exact same factual content as "You stole the bread." (On this view, presumably, indicating "mere disapproval" doesn't count as factual information).

Comment author: PeerInfinity 27 April 2010 02:06:17PM *  0 points [-]

Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.

checking out the wikipedia article... hmm... I think I agree with emotivism too, to some degree. I already have a habit of saying "but that's just my opinion", and being uncertain enough about the validity (validity according to what?) of my preferences, to not dare to enforce them if other people disagree. And emotivism seems like a formalization of the "but that's just my opinion". That could be useful.

Yes, that's the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that's precisely why we're having this conversation, as well as why SIAI's research is so important. :)

good point. and yeah, that's one of the main issues that's causing me to doubt whether SIAI has any hope of achieving their mission.

I hope so. Of course, it's not as though the only two possibilities are "CEV" or "extinction." There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political "realist" scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.

good point. Have you had any contact with Metafire yet? He strongly agrees with you on this. Just recently he started posting to LW.

oh, and "quixotic", that's the word I was looking for, thanks :)

If you include paperclippers or suffering-maximizers in your definition of "anyone," then I'd put the probability close to 0%. If "anyone" just includes humans, I'd still put it less than, say, 10^-3.

heh, yeah, that "significantly less than 50%" was actually meant as an extremely sarcastic understatement. I need to learn how to express stuff like this more clearly.

Yeah, although if we take the perspective that individuals are different people over time (a "person" is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to "forcing someone" to feel pain....

good point! This suggests the possibility of requiring people to go through regular mental health checkups after the Singularity. Preferably as unobtrusively as possible. Giving them a chance to release themselves from any restrictions they tried to place on their future selves. Though the question of what qualifies as "mentally healthy" is... complex and controversial.

Comment author: Jack 27 April 2010 06:11:17AM 2 points [-]

When discussing utilitarianism it is important to indicate whether you're talking about preference utilitarianism or hedonistic utilitarianism, especially in this context.

Comment author: PeerInfinity 27 April 2010 01:10:07PM *  0 points [-]

Right, sorry. I'm referring to total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don't bother keeping track of which entity experiences the pleasure or pain.

A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.

Comment author: Utilitarian 27 April 2010 07:16:14AM 0 points [-]

Indeed. While still a bit muddled on the matter, I lean toward hedonistic utilitarianism, at least in the sense that the only preferences I care about are preferences regarding one's own emotions, rather than arbitrary external events.

Comment author: Strange7 27 April 2010 02:48:43AM 1 point [-]

Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world's population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it's a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It's a big universe, plenty of room for everyone. Just so long as they don't force any other mind to suffer.

You could also almost certainly convert a considerable percentage of the planet's mass to computronium without impacting the planet's ability to support life. A planet isn't a very mass-efficient habitat, and I doubt many people would even notice if most of the core was removed, provided it was replaced with something structurally and electrodynamically equivalent.

Comment author: NancyLebovitz 27 April 2010 09:04:53AM 2 points [-]

You need the mass of the core to maintain the gravity. What sort of physics do you have in mind?

Comment author: PeerInfinity 27 April 2010 04:50:46AM 1 point [-]

good point, thanks for mentioning that.

heh, that's actually what I meant by leaving the planet "mostly intact", but I should have made that clearer.

Comment author: cupholder 22 April 2010 10:47:43PM 2 points [-]

That-guy-who-isn't-all-that-smart-but-likes-to-sound-smart-by-quoting-really-smart-people was quoting Eliezer Yudkowsky. Almost immediately after that conversation, I googled the things he was talking about. I discovered Singularitarianism.

Guess there's a use for that-guy after all!

Comment author: MatthewB 23 April 2010 09:30:00AM 2 points [-]

A couple of points:

I could not tell from your post if you understood that Pascal's Wager is a flawed argument for believing in ANY belief system. You do understand this, don't you (that Pascal's Wager is horribly flawed as an argument for believing in anything)?

Also, as cousin_it seems to be implying (and I would suspect as well), you seem to be exhibiting signs of the True Believer complex.

This is what I alluded to when I discussed friends of mine who would swing back and forth between Born-Again Christian and Satanists. Don't make the same mistake with a belief in the Singularity. One needn't have "Faith" in the Singularity as one would God in a religious setting, as there are clear and predictable signs that a Singularity is possible (highly possible), yet there exists NO SUCH EVIDENCE for any supernatural God figure.

Forming beliefs is about evidence, not about blindly following something due to a feel good that one gets from a belief.

Comment author: byrnema 23 April 2010 06:48:37PM *  0 points [-]

Pascal's wager is not such a horribly flawed argument. In fact, I wager we can't even agree on why it's flawed.

Later edit: I assume I am getting voted down for trolling (that is, disrupting the flow of conversation), and I agree with that. An argument about Pascal's wager is not really relevant in this thread. However, especially in the context of being a 'true believer', it is interesting to me that statements are often made that something is 'obvious', when there are many difficult steps in the argument, or 'horribly flawed', when it's actually just a little bit flawed or even controversially flawed. If anyone wants to comment in a thread dedicated to Pascal's wager, we can move this to the open thread, which I hope ultimately makes this comment less trollish of me.

Comment author: Nick_Tarleton 24 April 2010 03:17:03AM *  3 points [-]

Partially seconded. (I think most people agree that the primary flaw is the symmetry argument, but I don't think that argument does what they think it does, and I do see people holding up other, minority flaws. I do think the classic wager is horribly flawed for other, related but less commonly mentioned, reasons.)

I'll write a top-level post about this today or tomorrow. (In the meantime, see Where Does Pascal's Wager Fail? and Carl Shulman's comments on The Pascal's Wager Fallacy Fallacy.)

Comment author: byrnema 24 April 2010 04:17:14AM *  1 point [-]

Thanks for the link to the Overcoming Bias post. I read that and it clarified some things for me. If I had known about that post, above I would have just linked to it when I wrote that the fallacy behind Pascal's wager is probably actually unclear, minor or controversial.

Comment author: SilasBarta 23 April 2010 07:34:02PM *  1 point [-]

There aren't many difficult steps in refuting Pascal's wager, and I don't think there'd be much disagreement on it here.

The refutation of PW, in short, is this: it infers high utility based on a very complex (and thus highly-penalized) hypothesis, when you can find equally complex (and equally well-supported) hypotheses that imply the opposite (or worse) utility.

(Btw, I was one of those who voted you down.)

Comment author: byrnema 23 April 2010 07:37:16PM *  1 point [-]

Again, is it the argument that is wrong, or Pascal's application of it?

(Can you confirm whether you down-voted me because it's off-topic and inflammatory, or because I'm wrong?)

Comment author: SilasBarta 23 April 2010 07:42:12PM *  0 points [-]

Again, is it the argument that is wrong, or Pascal's application of it?

It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior). Pascal's application is a special case of this error, and his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.

(Can you confirm whether you down-voted me because it's off-topic and inflammatory, or just because I'm wrong?)

I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.

Comment author: byrnema 23 April 2010 08:10:26PM 0 points [-]

It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior).

Are you saying, here, that it is wrong to factor in the utility of the hypothesis when giving weight to the hypothesis?

his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.

If he didn't consider all the cases, his particular application of the argument was bad, not the argument itself, right?

I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.

I have read the material, but I disagreed with it, and it's often not clear -- especially when the posts are old -- how I can jump in and chime in that I don't agree. Often it's just the subtext I disagree with, so I wait for someone to make it more explicit (or at least more immediate) and then I bring it up.

Thanks for your explanation about the down-voting.

Comment author: JGWeissman 23 April 2010 07:12:56PM 1 point [-]

The reason I believe Pascal's wager is flawed is that it is a false dichotomy. It looks at only one high utility impact, low probability scenario, while excluding others that cancel out its effect on expected utility.

Is there anyone who disagrees with this reason, but still believes it is flawed for a different reason?
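The cancellation point above can be sketched numerically. A minimal sketch, with entirely made-up numbers (the probabilities and utilities here are illustrative assumptions, not estimates of anything):

```python
# Hypothetical numbers only: a speculative "reward for belief" scenario and an
# equally well-supported mirror scenario that punishes the same belief.
# Considering only the first term is the false dichotomy.
p = 1e-9         # tiny probability assigned to each speculative scenario
u_reward = 1e6   # large payoff if the favorable scenario is true
u_punish = -1e6  # equally large penalty under the mirror scenario

eu_dichotomy = p * u_reward            # considers one scenario only
eu_both = p * u_reward + p * u_punish  # mirror scenario included

print(eu_dichotomy)  # 0.001 -- looks like a worthwhile bet
print(eu_both)       # 0.0   -- the high-utility terms cancel
```

Once the neglected mirror hypothesis is given comparable probability, the huge payoff contributes nothing net to the expected utility.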

Comment author: byrnema 23 April 2010 07:22:00PM 1 point [-]

This is an argument for why the argument doesn't work for theism, it doesn't mean the argument itself is flawed. If you would be willing to multiply the utility of each belief times the probability of each belief and proceed in choosing your belief in this way, then that is an acceptance of the general form of the argument.

Comment author: JGWeissman 23 April 2010 07:41:34PM 0 points [-]

If you assume that changing your belief is an available action (which is also questionable), then the idealized form is just expected utility maximization. The criticism is that Pascal incorrectly calculated the expected utility.

Comment author: byrnema 23 April 2010 07:56:22PM 0 points [-]

Right, one flaw in the idealized form is that it's not clear that you can simply choose the belief that maximizes utility. But in some cases a person can, and does.

I think that an incorrect calculation, because one person considered 2 cases instead of N cases, is very different from being flawed as an argument.

PeerInfinity was writing about applying Pascal's wager to atheism -- so he must have been referring to the general form of the argument, not a particular application. MatthewB wrote that "Pascal's Wager is a flawed argument for believing in ANY belief system". Well, what about a belief system in which there are exactly two beliefs to choose from and the relative probabilities are (.4, .6) and the relative utilities of having the beliefs if they are true are (1000, 100)? I would say the conclusion of the idealized form of Pascal's wager is that you should pick the belief that maximizes expected utility, even though it is lower probability.
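The arithmetic behind that conclusion can be checked directly. A minimal sketch, using only the illustrative probabilities and utilities from this comment:

```python
# The two-belief example from the comment above: expected utility of holding
# each belief is probability-it's-true times utility-of-holding-it-if-true.
beliefs = {
    "X": {"p": 0.4, "u": 1000},  # lower probability, higher utility if true
    "Y": {"p": 0.6, "u": 100},   # higher probability, lower utility if true
}

expected = {name: b["p"] * b["u"] for name, b in beliefs.items()}
best = max(expected, key=expected.get)

print(expected)  # {'X': 400.0, 'Y': 60.0}
print(best)      # X -- the lower-probability belief wins on expected utility
```

So under this form of the argument, belief X is chosen despite being the less probable one, which is exactly the structure of the wager.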

Comment author: RobinZ 23 April 2010 07:44:20PM 0 points [-]

Taboo "Pascal's wager", please.

Comment author: byrnema 23 April 2010 08:22:50PM *  0 points [-]

Sure.

Here's an argument:

Suppose there is a dichotomy of beliefs, X and Y, their probabilities are Px and Py, and the utilities of having each belief are Ux and Uy. Then the average utility of having belief X is Px*Ux and the average utility of having belief Y is Py*Uy. You "should" choose to have the belief (or set of beliefs) that maximizes average utility, because holding beliefs is an action and you should choose actions that maximize utility.

What is the flaw in this argument?

For me, the flaw that you should identify is that you should choose beliefs that are most likely to be true, rather than those which maximize average utility. But this is a normative argument, rather than a logical flaw in the argument.

Comment author: Vladimir_Nesov 23 April 2010 08:37:40PM 3 points [-]

Normally, you should keep many competing beliefs with associated levels of belief in them. The mindset of choosing the action with estimated best expected utility doesn't apply, as actions are mutually exclusive, while mutually contradictory beliefs can be maintained concurrently. Even when you consider which action to carry out, all promising candidates should be kept in mind until the moment of execution.

Comment author: mattnewport 23 April 2010 08:46:00PM 1 point [-]

This is complicated in the case of religious beliefs where the deity will judge you by your beliefs and not just your actions.

Comment author: byrnema 23 April 2010 08:51:31PM *  0 points [-]

Good point, I edited my form of the argument to include 'sets of beliefs'. If having a set of beliefs maximizes your utility, then having the set is what you "should" do, I think, in the spirit of the argument.

Comment author: RobinZ 23 April 2010 08:27:49PM 0 points [-]

Comment author: khafra 23 April 2010 02:30:19PM 1 point [-]

In chapter five of Jaynes, "Queer Uses for Probability Theory," he explains that although a claimed telepath tested 25.8 standard deviations away from chance guessing, the correspondingly tiny p-value isn't the probability we should assign to the hypothesis that she's actually a telepath, because there are many simpler hypotheses that fit the data (for instance, various forms of cheating).

This example is instructive when using Pascal's Wager to minimax expected utility. Pascal's Wager is a losing bet for a Christian, because even though expecting positive infinity utility with infinitesimal probability seems like a good bet, there are many likelier ways of getting negative infinity utility from that choice. Doing what you can to promote a friendly singularity can still be called "Pascal's Wager" because it's betting on a very good outcome with a low probability, but the low probability is so many orders of magnitude better than Christianity's that it's actually a rather good bet.

Obviously, you don't want to let wishful thinking guide your epistemology, but I don't think that's what PI's talking about.

Comment author: Unknowns 05 August 2010 12:06:40PM 1 point [-]

I haven't yet seen an answer to Pascal's Wager on LW that wasn't just wishful thinking. In order to validly answer the Wager, you would also have to answer Eliezer's Lifespan Dilemma, and no one has done that.

Comment author: PeerInfinity 06 August 2010 04:32:00AM 1 point [-]

Can you please remind me what the question is, that you're looking for an answer to?

And can you please provide a link to an explanation of what Eliezer's Lifespan Dilemma is?

Comment author: Unknowns 06 August 2010 05:38:20AM 1 point [-]

http://lesswrong.com/lw/17h/the_lifespan_dilemma/

If you read the article and the comments, you will see that no one really gave an answer.

As far as I can see, it absolutely requires either a bounded utility function (which Eliezer would consider scope insensitivity), or it requires accepting an indefinitely small probability of something extremely good (e.g. Pascal's Wager).

Comment author: Blueberry 06 August 2010 09:25:51AM *  3 points [-]

If you believe that there is something with arbitrarily high utility, then by definition, you will accept an indefinitely small probability of it.

Assume my life has a utility of 10 right now. My preferences are such that there is absolutely nothing I would take a 99% chance of dying for. Then, by definition, there's nothing with a utility of 1000 or more. The problem comes from assuming that there is such a thing when there isn't. I don't see how this is scope insensitivity; it's just how my preferences are.

Someone who really had an unbounded utility function would really take as many steps down the Lifespan Dilemma path as Omega allowed. That's really what they'd prefer. Most of us just don't have a utility function like that.

Comment author: Unknowns 06 August 2010 10:26:44AM 0 points [-]

So you wouldn't die to save the world? Or do you mean hypothetically if you had those preferences?

I agree with the basic argument, it is the same thing I said. But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.

Comment author: Blueberry 06 August 2010 12:37:04PM *  1 point [-]

So you wouldn't die to save the world? Or do you mean hypothetically if you had those preferences?

If the world is doomed immediately unless I die for it, I have a 100% chance of dying immediately, so I might as well die to save the world. But if it's a choice between living another 50 years and then the world ending, or dying right now and saving the world, and no one would know, I wouldn't die to save the world. I'm too selfish for that.

But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.

Then he should keep taking Omega's offers, and any discomfort he has with that is faulty intuition, like the discomfort from choosing TORTURE over SPECKS.

Comment author: Blueberry 06 August 2010 09:12:37AM 1 point [-]

I'm pretty sure Peer meant the original version of Pascal's Wager, the argument for Christianity, which has the obvious answer, "What if the Muslims are right?" or "What if God punishes us for believing?"

Comment author: Unknowns 06 August 2010 10:25:26AM 0 points [-]

That's not an answer, because the probabilities of those things are not equal.

"God punishes us for believing" has a much lower probability, because no one believes it, while many people believe in Christianity.

"Muslims are right" could easily be more probable, but then there is a new Wager for becoming Muslim.

The probabilities simply do not balance perfectly. That is basically impossible.

Comment author: Blueberry 06 August 2010 12:34:23PM 2 points [-]

"God punishes us for believing" has a much lower probability, because no one believes it, while many people believe in Christianity.

Why does the probability have anything to do with the number of people who believe it?

"Muslims are right" could easily be more probable, but then there is a new Wager for becoming Muslim.

There's then the problem that the expected value involves adding multiples of positive infinity (if you choose the right religion) to multiples of negative infinity (if you choose the wrong one), which gives you an undefined result.

The probabilities simply do not balance perfectly. That is basically impossible.

The probability of any kind of God existing is extremely low, and it's not clear we have any information on what kind of God would exist conditioned on some God existing.

There's also the problem that if you know the probability that God exists is very small, you can't believe, you can only believe in belief, which may not be enough for the wager.

Comment author: Unknowns 06 August 2010 01:13:05PM *  1 point [-]

The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small.) But for positions that no one believes, this probability is lacking.

That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another. And it is wishful thinking to say that it is just as good to choose the less probable way as the more probable way. For example, there are two doors. One has a 99% chance of giving negative infinite utility, and a 1% chance of positive infinite. The second door has a 1% chance of negative infinite utility, and a 99% chance of positive infinite utility. Defined or not, it is perfectly obvious that you should choose the second door.
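The "undefined" point can be seen concretely by running the two-door calculation with IEEE floats, which follow the same convention (infinity minus infinity is undefined):

```python
import math

# Naive expected utility for each door, with infinities as floats.
door1 = 0.99 * (-math.inf) + 0.01 * math.inf
door2 = 0.01 * (-math.inf) + 0.99 * math.inf
print(door1, door2)  # nan nan -- both sums are undefined

# The intuition that door 2 is "perfectly obvious" must therefore
# come from outside this calculation, e.g. from comparing the
# probabilities of the good outcome directly:
print(0.99 > 0.01)  # True
```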

We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist. Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.

It may be true that some people couldn't make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.

Comment author: TobyBartels 07 August 2010 07:39:34PM *  2 points [-]

That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another.

Right; or if you don't decide exactly, at least you have to do (believe or not believe) one or the other.

I would say that the model breaks down. Mathematics (or at least the particular mathematical model being used) is not capable of describing this situation, but that doesn't make the situation itself meaningless. (That would be a version of the map/territory fallacy.)

Defined or not, it is perfectly obvious that you should choose the second door.

Here I disagree with you. I would say that you have not given enough information. It is as if you gave the same problem statement but with the word ‘infinite’ removed (so that we only know whether the utilities are positive or negative). It may seem as if you have given all of the information: the probabilities and the utilities. But the mathematics which we use to calculate everything else out of those values breaks down, so in fact you have not given all of the information.

One important missing piece of information is the ratio of the first positive utility to the second. That and two other independent ratios would be enough information, if they're all finite. (If not, then we might need more information.)

And don't tell me that these ratios are undefined; the mathematical model that calculates the ratios from the information given breaks down, that's all. In fact, there is an alternative mathematical model of decision which deals only in ratios between utilities; if you'd followed that model from the beginning, then you would never have tried to state the actual utilities themselves at all. (For mathematicians: instead of trying to plot these 4 utilities in a 4-dimensional affine space, plot them in a 3-dimensional projective space.)
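In the finite case, the point that only relative utility information matters is standard: expected-utility decisions are invariant under any positive affine rescaling u → a·u + b. A quick sketch with arbitrary numbers (this is the finite analogue of the point, not the projective construction itself):

```python
# Preference between two lotteries over three outcomes, before and
# after rescaling every utility by u -> 3*u + 7. The ordering cannot
# change, which is why stating "actual" utilities adds no information
# beyond the relative structure of the utility differences.
def expected(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

utils = [0.0, 10.0, 1000.0]
lottery_a = [0.5, 0.5, 0.0]   # coin flip between the two low outcomes
lottery_b = [0.9, 0.0, 0.1]   # small shot at the big outcome

before = expected(lottery_a, utils) > expected(lottery_b, utils)
rescaled = [3.0 * u + 7.0 for u in utils]
after = expected(lottery_a, rescaled) > expected(lottery_b, rescaled)
print(before, after)  # same answer both times
```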

It may be true that some people couldn't make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.

Right; the proper conclusion of the argument is not to believe, but to try to believe. And if you buy the argument, then you should try very hard!

Comment author: Blueberry 10 August 2010 08:28:35PM 1 point [-]

The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small.) But for positions that no one believes, this probability is lacking.

This can't be right. The number of people who follow any one religion is affected by how people were raised, by cultural and historical trends, by birth rates, and by the geographic and social isolation of the people involved. None of these things have anything to do with truth. Currently Christianity has twice as many people as any other religion because of historical and political facts; you think this makes it more likely than Islam to be true?

Suppose that in 50 years, because of predicted demographic trends, there are twice as many Muslims as Christians. You then seem to be in the strange position of thinking (a) Christianity is more likely to be true now, but (b) because of changing demographics, you will be likely to think Islam is more likely to be true in 50 years.

We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist.

How do people's claims give you that information? Religions are human cultural inventions. At most one could be true, which means the others have to be made up anyway. If a God did exist, why is it more likely that one of them is true than that they were all made up and humanity never came close to guessing the nature of the God that did exist?

Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.

My intuition tells me that if a God of some sort does exist, the probabilities end up favoring a God that rewards looking at the evidence and believing only what you have reason to be true, but that may just be my bias showing.

Intuition about what religion is true is likely to reflect your upbringing and your culture more than the actual truth. Given that there's currently no evidence of any kind of God or afterlife, I can't see how there is any evidence that God X is more likely to exist than God Y.

It may be true that some people couldn't make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.

It's also worth noticing that Pascal's Wager uses a spherical cow version of religion. Some religious traditions might require actual belief for infinite utility, others just belief in belief, others just certain behavior or words independent of belief.

Comment deleted 06 August 2010 08:32:28AM [-]
Comment author: Oscar_Cunningham 06 August 2010 08:38:49AM 0 points [-]

All this does is show that the dilemma must have a flaw somewhere, but it doesn't explicitly show that flaw. The same problem occurs with finding the flaws in proposed perpetual motion machines: you know there must be a flaw somewhere, but it's often tricky to find it.

I think the flaw in Pascal's wager is allowing "Heaven" to have infinite utility. Unbounded utilities, fine; infinite utilities, no.

Comment author: Nisan 06 August 2010 12:25:13PM 0 points [-]
Comment author: XiXiDu 06 August 2010 12:30:19PM 2 points [-]
Comment author: Oscar_Cunningham 06 August 2010 01:19:48PM 1 point [-]

That's a great video.

Comment author: Unknowns 06 August 2010 02:50:41PM 0 points [-]

Eliezer in that article:

"The original problem with Pascal's Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God). "

This is just wishful thinking, as I said in another reply. The probabilities do not balance.

Comment author: Unknowns 06 August 2010 10:29:23AM 0 points [-]

What about "living forever"? According to Eliezer, this has infinite utility. I agree that if you assign it a finite utility, then the lifespan dilemma fails (at some point), and similarly, if you assign "heaven" a finite utility, then Pascal's Wager will fail, if you make the utility of heaven low enough.

Comment author: Jack 22 April 2010 05:23:58PM 1 point [-]

Your story and perspective are very interesting. You don't need to self-censor.

Comment author: PeerInfinity 22 April 2010 05:25:26PM 1 point [-]

Thanks. Actually, the reason why I said "I guess I had better stop writing now" is because this comment was already getting too long.

Comment author: thomblake 22 April 2010 05:36:21PM 3 points [-]

Just a note - don't take Jack's advice to not self-censor too literally. There is much weirdness in you, and even the borders of this place would groan under its weight.

Not that there's anything wrong with that.

Comment author: AdeleneDawner 22 April 2010 06:46:44PM *  3 points [-]

The above (below? Depends on your settings, I guess) comment, which is now hidden, involves a poll, and would not (I predict) have otherwise become hidden.

Comment author: Blueberry 22 April 2010 08:11:20PM 2 points [-]

It's also hidden depending on your settings: you can change the threshold for hiding comments as well. I don't hide any comments, because seeing a hidden comment makes me so curious I have to click it, and just draws more attention to it for me.

Comment author: TraderJoe 02 November 2012 01:03:43PM *  0 points [-]

[comment deleted]

Comment author: PhilGoetz 23 April 2010 07:29:32PM 3 points [-]

Can you write a post about satanism? I'd love to know whether there are any actual satanists, and what they believe/do.

Comment author: AdeleneDawner 23 April 2010 07:39:44PM 4 points [-]

I used to know one, and have done a bit of reading about it. It struck me as a reversed-stupidity version of Christianity, though there were a few interesting memes in the literature.

Comment author: MatthewB 24 April 2010 02:16:32AM 2 points [-]

Depending upon the type of Satanist, yes, they are often just people looking for a high "Boo-Factor" (a term made up by many of the early followers of a musical genre called "Deathrock"; its more public name is now Goth, although that is like comparing a chain saw to a kitchen peeling knife - the "Goths" are the kitchen knife).

Many Satanists, especially those who hadn't really read much of the published Satanic literature, would just make something up themselves, and it was almost always based in Christian motifs and archetypes. The two institutions that have publicly claimed the title of "Satanist" (the Church of Satan and the Temple of Set) both reject any and all Christian theology, motifs, archetypes, symbolism and characters as disingenuous and twisted versions of older, healthier god archetypes. (If you read Jung and Joseph Campbell, it is not uncommon for a rising religious paradigm to hijack an older competing paradigm as its bad guys.)

As Phil has suggested, maybe a front page post would be worthwhile. It should be recognized that some Satanists happen to be very rational people. They are just using the symbolism to manipulate their environment (although most of the more mature ones have since found more sophisticated symbols with which to influence their environment, peers and subordinates).

The types to which I was referring in my post were the Christian Satanists (people who are worshiping the Christian version of Satan), which is just as bad as worshiping the Christian God. Both the Christian God and the Christian Satan are required for that mythology to be complete.

Comment author: wedrifid 24 April 2010 08:37:02AM 7 points [-]

which is just as bad as worshipping the Christian God

Wow! We make worshipping the devil sound bad around here by comparing him to God! Excuse me if I take a hint of pleasure at the irony. ;)

Comment author: MatthewB 25 April 2010 05:17:02AM 1 point [-]

Well, they both (according to Christian Myth) are truly bad characters.

It is unfortunate for God that Satan (Lucifer) had such a reasonable request: "Gee, Jehovah, it would certainly be nice if you let us try out that chair every once in a while." Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order).

It was this act upon which modern Satanists seized to create a new mythology for Satanism, where it was reason rebelling against an order that was corrupt and tyrannical.

Comment author: Jack 25 April 2010 07:00:46AM *  5 points [-]

It is unfortunate for God that Satan (Lucifer) had such a reasonable request "Gee, Jehovah, It would certainly be nice if you let us try out that chair every once in a while." Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order)

To be fair this stuff isn't Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It's just religious fiction.

...

Unless someone has declared John Milton a prophet and possessor of divine revelation. Which would be hilarious.

Comment author: MatthewB 25 April 2010 07:37:04AM 1 point [-]

It isn't stuff that made it into the modern canon, but in the early Christian Church, myths of this type appeared all over the place, drawn from Jewish sources, in attempts to integrate them into various Christian sects.

To be fair this stuff isn't Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It's just religious fiction.

Isn't it ALL just religious fiction?

Comment author: wedrifid 25 April 2010 08:16:55AM 2 points [-]

Well, they both (according to Christian Myth) are truly bad characters.

The Christian Myth includes a quite specific definition of bad so according to the Christian myth only one of them is bad. Is what you mean that according to you the characters as described in the Christian Myth were both truly bad?

Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule

That description loses something when the ruler is, in fact, God. One of the bad things about claiming that the king is king because God says so is that it is not the case that any god said any such thing. When the ruler is God then yes, God does say so. The objection that remains is "Who gives a @$@# what God says?" I agree with what (I think) you are saying about the implications of claims of authority but don't like the loaded language. It confuses the issue and well, I would say that technically (that counterfactual) God does have the divine authority to rule. It's just that divine authority doesn't count for squat in my book.

Comment author: Blueberry 24 April 2010 04:30:54AM 4 points [-]

There are Christian Satanists? Correct me if I'm wrong, but I thought Satanism was a religion founded around Rand-like rational selfishness, and explicitly denied any supernatural entities.

Comment author: MatthewB 25 April 2010 05:11:31AM 3 points [-]

Yes, they are "Christian" in the sense that all of the mythology and practices for their worship of Satan are derived from Christianity, and they still believe in a Christian God.

It is just that these people believe that they are defying and opposing the Christian God (Fighting for the other team). They still believe in this God, just no longer have it as the object of their worship and devotion.

This is also the more traditional form of Satanist in our society, and one which the more modern Satanist tends to oppose. The modern Satanist is a self-worshiping atheist, and as has been pointed out, tends to place everything in the context of self-interest. It is a highly utilitarian philosophy, but often marred in actual practice by ignorant fools who don't seem to understand the difference between just acting like a selfish dick and acting out of self-interest (doing things which improve one's condition in life, not things which worsen it).

Comment author: NancyLebovitz 25 April 2010 09:38:39AM 1 point [-]

There's an Ayn Rand quote I don't have handy to the effect that if the virtues needed for life are considered evil, people are apt to embrace actual evils in response.

Comment author: wedrifid 24 April 2010 08:34:57AM 1 point [-]

Nope, worshipping the devil is right up there as far as meanings for 'Satanism' go.

Comment author: MatthewB 24 April 2010 02:07:41AM 0 points [-]

You mean, like a main page post? I'd love to.

You would be surprised at how rational the real Satanists (and their various offshoots and schisms) are (as the non-Christian-based Satanist is an atheist).

In fact, the very first schism of the Church of Satan gave birth to the Temple of Set (founded by the then head of the Army's Psychological Warfare Division), which was described as a "Hyper-Rational Belief System" (although in reality it still had some rather unfortunately insane beliefs among its constituents). The founder was very rational, though. He even had quite a bit of science behind his position... It's just that his job caused him to be a rather creepy and scary guy.

Comment author: PhilGoetz 25 April 2010 08:11:51PM 0 points [-]

Has today's Satanism retained any connections to Aleister Crowley?

Comment author: Document 20 January 2011 11:19:38PM 1 point [-]

Most of them swung back and forth between the extremes of bad belief systems (From born-again Christian to Satanist, and back, many times)...

At least they're maintaining lightness.