hairyfigment comments on Welcome to Less Wrong! (7th thread, December 2014) - Less Wrong
Wow, I'm so glad I stumbled onto slatestarcodex, and from there, here!!! You guys are all like smarter, cooler versions of me! It's great to have a label for the way my brain is naturally wired and know there are other people in the world besides Peter Singer who think similarly. I'm really excited, so my "intro" might get a little long...
Part 1-Look at me, I'm just like you!
I'm Ellen, a 22 year old Spanish major and world traveling nanny from Wisconsin, so maybe not your typical LWer, but actually quite typical in other, more important ways. :)
I grew up in a Christian home/bubble, was super religious (Wisconsin Evangelical Lutheran Synod), truly respected/admired the Christians in my life, but even while believing, never liked what I believed. I actually just shared my story plus some interesting studies on correlations between personality, intelligence, and religiosity, if anyone is interested: http://magicalbananatree.blogspot.com/2015/02/christian-friends-do-you-ever-feel.html The post is based almost entirely on what I've come to learn is called "consequentialism" which I'm happy to see is pretty popular over here. I subscribe to this line of thinking so much that I used to pray for a calamity to strengthen my faith. I chose a small Lutheran school despite having great credentials to get into an Ivy, because with an eye on eternity, I wanted to avoid any environment that would foster doubt. My friends suggested I become a missionary, but to me, it made far more sense to become a high profile lawyer and donate 90% of my salary to fund a dozen other missionaries. (A Christian version of effective altruism?) No one ever understood!
Some people might deconvert because they can't believe in miracles, or they can't get over the problem of evil. These are bad reasons, I think, and based on the presupposition that God doesn't exist. Personally, the hardest thing for me was believing that God was all-powerful. Like, if God were portrayed as good, but weak, struggling against an evil god and just doing the best he could to make a just universe and make his existence known, I probably would never have left the faith. It took me long enough as it is!
Part 2-A noob atheist's plea for help
Anyway, now I've "cleared my mind" of all that and am starting fresh, but my friends have a lot of questions for me that I'm not able to answer yet, and I have a lot of my own, too. I'm starting by reading about science (not once had I even been exposed to evolution!) but have a lot of other concerns on the back burner, and maybe you guys can point me in the right direction:
Who was the historical Jesus? As a history source, why is the Bible unreliable?
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong?
Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that's enough, right?)
What about all the really smart people I know and respect, like my sister and Grandma, who have had their share of doubts but ultimately credit their faith to having experienced extraordinary, miraculous answers to prayer? Like obviously, their experiences don't convince ME to believe, but I hate to dismiss them as delusional and call it a wild coincidence...
Are rationalists just as guilty of circular reasoning as Christians are? (Why do I trust human reason? My human reason tells me it's great. Why do Christians trust God? The Bible tells them he's great.)
Part 3-Embarrassingly enthusiastic fan mail
Yay curiosity! Yay strategic thinking! Yay honesty! Yay open-mindedness! Yay opportunity cost analyses! Yay common sense! Yay tolerance of ambiguity! Yay utilitarianism! Yay acknowledging inconsistency in following utilitarianism! Yay intelligence! Yay every single slatestarcodex post! Yay self-improvement! Yay others-improvement! Yay effective altruism!
Ahhh this is all so cool! You guys are so cool. I can't wait to read the sequences and more posts around this site! Maybe someday I'll even meet a real life rationalist or two, it seems like the Bay Area has a lot. :)
This puzzled me, since it sounds a lot like the problem of evil. I take it you were describing the argument you lay out at the link?
For completeness - since I'm about to bash Christianity - I should note that Paul does not write like he has even an imagined revelation on the subject of Hell. He writes as if people in the Roman Empire often talked about everyone going to Hades when they died, and therefore he could count on people receiving as "good news" the claim that belief in Jesus would definitely send you to Heaven. (Later, the Gospels implied that your actions could send you to Heaven or Hell regardless of what you believed. Early Christians might have split the difference by reserving baptism for those they saw as living a 'Christian' life.) Clearly one can be a Christian in Paul's sense without believing in Hell.
We don't know. I have some qualms about Richard Carrier's argument (e.g. in On the Historicity of Jesus: Why We Might Have Reason for Doubt). But plugging different numbers into his calculations, I come out with no more than a 54% chance Jesus even existed. We can't answer every factual question; some information is almost certainly lost to us forever.
This one seems fundamental enough that if people insist on the truth of miracles - and reports that you can move mountains if you have faith the size of a mustard seed - I don't know what to tell them. But besides directing people to mainstream scholarship (which by the way places the date of Mark after the destruction of the Temple), I can note that Mark intercuts the story of the fig tree with Jesus expelling the money-changers from the Temple. The tree seems like a straightforward metaphor. Then we have later Gospels openly changing the narrative for their own purposes. Mark says Jesus could give no sign to those who did not believe, and they would not have believed (says Jesus in a parable) even if some guy named Lazarus had returned from the dead. John says Jesus performed signs all the time, and as you would expect this led many people to believe in him, especially when he brought Lazarus back from the dead. Though the resurrected disciple whom Jesus loved disappears from the narrative after the period John depicts, and even Acts shows no awareness of this important witness.
If you want to have morality, you can just do it. By this I mean that any function assigning utility to outcomes in a physically meaningful way appears consistent. But yes, I've come to agree that simple utility functions like maximizing pleasure in the Universe technically fail to capture what I would call moral. For more practical advice, see a lot of this site and perhaps the CFAR link at the top of the page.
This depends. I would normally use the term "nihilism" to mean a uniform utility function, which does not distinguish between actions. This is equivalent to assigning every outcome zero utility. As the previous link shows, plenty of non-uniform utility functions can exist whether Yahweh does or not.
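To make this definition concrete, here is a minimal Python sketch (the action names and numbers are invented for illustration, not taken from anything above): a uniform utility function assigns every outcome the same value, so it cannot prefer any action, while a non-uniform one can.

```python
# Three possible actions; the names are purely illustrative.
outcomes = ["help_stranger", "rob_stranger", "do_nothing"]

# "Nihilist" utility function in the sense above: every outcome gets
# the same value (here zero), so nothing is distinguished from anything.
uniform_u = {o: 0.0 for o in outcomes}

# A non-uniform utility function does distinguish between actions.
nonuniform_u = {"help_stranger": 1.0,
                "rob_stranger": -1.0,
                "do_nothing": 0.0}

# Under the uniform function every action ties, so any "choice" is
# arbitrary; under the non-uniform one there is a definite best action.
print(len(set(uniform_u.values())) == 1)        # True: no distinctions made
print(max(nonuniform_u, key=nonuniform_u.get))  # help_stranger
```

The point of the sketch is just that assigning zero utility everywhere is itself a perfectly consistent utility function; it simply never recommends anything over anything else.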
If you mean the lack of a moral authority you can trust absolutely, or that will force you to behave morally, then I would basically say yes. There is no authority anywhere.
Do they seem smarter and more worthy of respect than Gandhi? Perhaps he's not the best example, but putting him next to the many people from non-Christian religions who have made similar claims to religious experience may get the point across. (Aleister Crowley made a detailed study of mystical experience and how to produce it, but you may find him abrasive at best.)
That also depends on what you mean.
Oh, oops, I can see why that would be puzzling. But yeah, you figured it out. Do you really think my link was an argument though? A lot of people have accused me of trying to deconvert my friends, but I really don't think I was making an argument so much as sharing my own personal thoughts and journey of what led me away from the faith.
You correctly point out that not all Christians believe in hell, but I didn't want to just tweak my belief until I liked it. If I was going to reject what I grew up with, I figured I might as well start with a totally clean slate.
I'm really glad you and other atheists on here have bothered looking into Historical Jesus. Atheists have a stereotype of being ignorant about this, which actually, for those who weren't raised Christians, I kind of understand, since now that I consider myself atheist, it's not like I'm suddenly going to become an expert on all the other religions just so I can thoughtfully reject them. But now that my friends have failed to convince me atheism is hopeless, they're insisting it's hallucinogenic, that atheists are out of touch with reality, and it's nice (though unsurprising) to see that isn't the case.
Okay, I know that I personally can have morality, no problem! But are you trying to say it's not just intuition? Or if I use that Von Neumann–Morgenstern utility theorem you linked, I'm a little confused, maybe you can simplify for me, but whose preferences would I be valuing? Only my own? Everyone's equally? If I value everyone's equally and say each human is born with equal intrinsic value, that's back to intuition again, right? Anyway, yeah, I'll look around and maybe check out CFAR too if you think that would be useful.
Oh! I like that definition of nihilism, thanks. Personally, I think I could actually tolerate accepting nihilism defined as meaninglessness (whatever that means), but since most people I know wouldn't, your definition will come in handy.
Also, good point about Gandhi. I had actually planned on researching whether people from other religions claimed to have answered prayers like Christians do, but bringing up the other alleged "religious experiences" of people of other faiths seems like a good start for when my sister and I talk about this. Now I'm curious about Crowley too. I almost never really get offended, so even if he is abrasive, I'm sure I can focus on the facts and pick out a few things to share, even if I wouldn't share him directly.
Thanks for your reply! Hopefully you can follow this easily enough; next time I'll add in quotes like you did...
The theorem shows that if one adopts a simple utility function - or let's say if an Artificial Intelligence has as its goal maximizing the computing power in existence, even if that means killing us and using us for parts - this yields a consistent set of preferences. It doesn't seem like we could argue the AI into adopting a different goal unless that (implausibly) served the original goal better than just working at it directly. We could picture the AI as a physical process that first calculates the expected value of various actions in terms of computing power (this would have to be approximate, but we've found approximations very useful in practical contexts) and then automatically takes the action with the highest calculated expected value.
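The "physical process" picture in that last sentence can be sketched in a few lines of Python. Everything here is invented for illustration (the action names, probabilities, and utilities are not from any real AI design): the agent computes an expected value for each action and mechanically takes the maximum.

```python
def expected_value(action_outcomes):
    """Expected utility of an action, given (probability, utility) pairs."""
    return sum(p * u for p, u in action_outcomes)

# Hypothetical actions for an agent whose only goal is maximizing
# computing power. Each maps to possible (probability, utility) outcomes.
actions = {
    "build_datacenter": [(0.5, 200.0), (0.5, 0.0)],  # risky but high-value
    "argue_philosophy": [(1.0, 1.0)],  # persuasion adds almost nothing
    "do_nothing":       [(1.0, 0.0)],
}

# The agent just takes the argmax; no argument changes its behavior
# unless the argument changes the numbers it computes.
best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # build_datacenter (0.5 * 200 = 100 beats the alternatives)
```

This is the sense in which arguing the agent into a different goal would have to route through its existing goal: persuasion is just another action, scored by the same expected-value calculation as everything else.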
Now in a sense, this shows your problem has no solution. We have no apparent way to argue morality into an agent that doesn't already have it, on some level. In fact this appears mathematically impossible. (Also, the Universe does not love you and will kill you if the math of physics happens to work out that way.)
But if you already have moral preferences, there shouldn't be any way to argue you out of them by showing the non-existence of Vishnu. Any desires that correspond to a utility function would yield consistent preferences. If you follow them then nobody can raise any logical objection. God would have to do the same, if he existed. He would just have more strength and knowledge with which to impose his will (to the point of creating a logical contradiction - but we can charitably assume theologians meant something else.) When it comes to consistent moral foundations, the theorem gives no special place to his imaginary desires relative to yours.
I mentioned above that a simple utility function does not seem to capture my moral preferences, though it could be a good rule of thumb. There's probably no simple way to find out what you value if you don't already know. CFAR does not address the abstract problem; possibly they could help you figure out what you actually value, if you want practical guidance.
Note that he doesn't believe in making anything easy for the reader. The second half of this essay might perhaps have what you want, starting with section XI. Crowley wrote it under a pseudonym and at least once refers to himself in the third person; be warned.
Thanks a lot for explaining the utility theorem. So just to be sure, if moral preferences for my personal values (I'll check CFAR for help on this, eventually) are the basis of morality, is morality necessarily subjective?
I'll get to Crowley eventually too, thanks for the link. I've just started the Rationality e-book and I feel like it will give me a lot of the background knowledge to understand other articles and stuff people talk about here.
If "subjective" means "a completely different alien species would likely care about different things than humans", then yes. You also can't expect that a rock would have the same morality as you.
If "subjective" means "a different human would care about completely different things than me" then probably not much. It should be possible to define a morality of an "average human" that most humans would consider correct. The reason it appears otherwise is that for tribal reasons we are prone to assume that our enemies are psychologically nonhuman, and our reasoning is often based on factual errors, and we are actually not good enough at consistently following our own values. (Thus the definition of CEV as "if we knew more, thought faster, were more the people we wished we were, had grown up farther together"; it refers to the assumption of having correct beliefs, being more consistent, and not being divided by factional conflicts.)
Of course, both of these answers are disputed by many people.
There is a set of reasonably objective facts about what values people have, and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation. However, they are only locally objective... what value-based ethics removes is globally objective answers, in the sense that you should always do X or refrain from Y irrespective of the context.
It's a bit like the difference between small g and big G in physics.
Nope. It leads to reasonably objective descriptive answers about what the consequences of your actions will be. It does not lead to normative answers about what you should or should not do.
Okay, I guess I'm still confused. So far I've loved everything I've read on this site and have been able to understand; I've appreciated/agreed with the first 110 pages of the Rationality ebook, felt a little skeptical for liking it so completely, and then reassured myself with the Aumann's agreement theorem it mentions. So I feel like if this utility theorem which bases morality on preferences is commonly accepted around here, I'll probably like it once I fully understand it. So bear with me as I ask more questions...
Whose preferences am I valuing? Only my own? Everyone's equally? Those of an "average human"? What about future humans?
Yeah, by subjective, I meant that different humans would care about different things. I'm not really worried about basic morality, like not beating people up and stuff, but...
I have a feeling the hardest part of morality will now be determining where to strike a balance between individual human freedom and concern for the future of humanity.
Like, to what extent is it permissible to harm the environment? If something, like eating sugar for example, makes people dumber, should it be limited? Is population control like China's a good thing?
Can you really say that most humans agree on where this line between individual freedom and concern for the future of humanity should be drawn? It seems unlikely...
I'm the wrong person to ask about "this utility theorem which bases morality on preferences" since I don't really subscribe to this point of view.
I use the word "morality" as a synonym for "system of values" and I think that these values are multiple, somewhat hierarchical, and are NOT coherent. Moral decisions are generally taken on the basis of a weighted balance between several conflicting values.
By definition, you can only care about your own preferences. That being said, it's certainly possible for you to have a preference for other people's preferences to be satisfied, in which case you would be (indirectly) caring about the preferences of others.
The question of whether humans all value the same thing is a controversial one. Most Friendly AI theorists believe, however, that the answer is "yes", at least if you extrapolate their preferences far enough. For more details, take a look at Coherent Extrapolated Volition.
Is that a fact? It's true that the theories often discussed here, utilitarianism and so on, don't solve the motivation problem, but that doesn't mean no theory does.
Not necessarily subjective, in the sense that "what should I do in situation X" necessarily lacks an objective answer.
Even if you treat all value as morally relevant, and you certainly don't have to, there is a set of reasonably objective facts about what values people have, and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation. However, they are only locally objective...