A few notes about the site mechanics
To post your first comment, you must confirm your e-mail address: when you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow. You must do this before you can post!
Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.
A few notes about the community
If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)
Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask, provide your draft, or summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! There is also the unofficial Less Wrong IRC chat room, and you might like to take a look at some of the other regular special threads; they're a great way to get involved with the community!
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes under 180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone who helped write this post via its predecessors!
Comments (635)
Hi, Curt Doolittle here. I follow LW via Feedly, but today someone asked me to comment on a LW article. I write analytic philosophy in epistemology (specifically truth), ethics, law, politics, and science. I'm reasonably well known and easy to find on the web.
Here is my response to the recent post on Signaling by Outliers (Hipster analogy). You can use it as a test of worthiness.
All, thank you for asking me to respond. I'll convert the post from signaling language (the author's criticism and somewhat humorous demonstration of signaling) and moral justification into scientific language, and I think it will be clearer:
1) Radicals, by definition, do not fit into the center of the distribution - the statement is tautological, not insightful.
2) We all signal, and signaling is necessary for evolutionary reproductive selection.
3) The presumption of not fitting into some locus of the median of the distribution is a democratic one - that we are equal rather than (as I argue) constituting a division of cognitive labor: perception, evaluation, knowledge, and advocacy. (Humans divide cognition more than other creatures do because we specialize in cognition.)
4) Our theories do tend to justify our social positions (signaling) but then, we would not have information necessary to theorize about any other set of interests, now would we?
5) The origin of theories is irrelevant (justification is false), and therefore a theory produced by any subset of a polity can be judged only by criticism - it's irrelevant who comes up with a theory.
The vast difference between pseudoscience and science in ethics, law, politics, and economics is captured in those few words.
Now, to state the positive version: the solution to the fallacy of the enlightenment hypothesis of equality of ability, interest, and value is captured in these additional points:
6) Economic velocity (wealth) is determined by the degree of suppression of parasitism (free riding/imposed costs). This eliminates transaction costs.
7) Central power originates to centralize parasitism and increase material costs by suppressing local parasitism and transaction costs. Once centralized, these can be incrementally eliminated, if and only if an institutional means of following rules can replace personal judgement.
8) The only means of producing institutional rules to replace personal judgement (provision of 'decidability') is in the independent, common, evolutionary law resting upon a prohibition on parasitism/free-riding/imposed costs (negatives), codified as property rights (positives): productive, warrantied, fully informed, voluntary transfer(exchange), free of negative externalities.
9) Language evolved to justify (morality), negotiate (deceive), and rally and shame (gossip), and only tangentially and late to describe (truth). Truth as we understand it is an invention, and an unnatural one - which is why it is unique to the west, and why it has taken philosophers so long to understand it. However, westerners evolved a military epistemology because they relied upon self-financing warriors participating voluntarily, as well as the jury and truth-telling. (The marginal difference in intellectual ability apparently did not matter - they were all smart enough, and such testimony was in itself 'training'.)
10) We cannot expect or demand truth from people unless they know how to produce it, i.e. education in what I would consider the religion of the west: "the true, the moral and the beautiful". So I consider this education 'sacred', not just utilitarian.
11) We cannot demand truth and law from people if it is against their interests; i.e. the only universal political system is nationalism, because groups can act truthfully internally and externally, and can use trade negotiations to neutralize competitive differences. And with nationalism, individuals cannot escape paying the cost of transforming their own societies and themselves, or lay that burden upon other societies.
12) Commons are a profound competitive advantage. Territorial, institutional, normative, genetic, physical, and economic (industrial) commons are a profound advantage to any group. The west is the most successful producer of commons, so they are even more important to the west, and we must provide a means of producing them. The difference between the market for private goods and services (where competition in production is a good incentive) and corporate (public) goods, where we must prevent privatization of gains and socialization of losses, requires that we provide monopoly protection of those goods from consumption - but it does not require that we provide monopoly contribution to them. Commons require only that the people willing to pay for them do so; otherwise there is no demonstrated preference for that commons. Insurance is a commons, and I will leave that for another time. Returns on investment (dividends) are the product of commons; I will leave that for another time as well. The central point is that we can produce a market for common goods using government, just as we do in the market for private goods. But law and commons are two different things, and there is no reason whatsoever, knowing how to construct the common law, to think that government should be capable of producing law. It cannot. Law is; it cannot be created, only identified.
(This is also probably the most profound 1000 words on politics that you will be able to find at this moment in time)
propertarianism
Curt Doolittle The Propertarian Institute
Which one?
Been looking for this for a few moments. I don't see much to expand on myself. I found out about LW when someone pointed me to the 1000-year old vampire post which I really liked.
And that's almost enough for now. I tried using the search but I didn't get the thing I wanted. All or fucking nothing I guess: What's the best way to ask a girl out?
"Best" means a lot of things that I'm naturally not aware of otherwise I wouldn't be asking this :) But true, I feel like there's a lot of things to account for in "best" that I might not be realistically able to do in different situations.
If you're asking why I'm asking this, it's just because although I manage a conversation (I do have an almost severe aversion to inane conversations/topics so sometimes I really have nothing to say, and in the case I do I always think "this is stupid but.. fucking conversation") at a level I consider okayish (could work on this too, but that's an entirely different topic) I always feel like "now's not the time". Not sure why. Maybe I'm not getting the right signal or maybe I'm missing it, but I always have this feeling that even though I'd like to do it, I'd probably mess up. Instinctively (or in some cached way) I think I should lead the conversation there but.. well, this is dragging on. So guys (I guess girls too), what's the best way to ask a girl out?
The person behind this account is not at all new to the Less Wrong community. He has read all of the sequences multiple times, as well as much of the output of many non-Eliezer figures associated with or influenced by LW, and has been around for more than half the time the site has existed. Suffice it to say he knows his stuff. He used to comment and then stopped for reasons which remain unclear.
The obvious question is, why the new account, especially since I'm not trying to hide who I was? I decline to answer.
Less Wrong is important to me. Reading the sequences caused in me a serious upgrade. LW inspired a lot of meetup groups, one of which I attend every week. It's not the group I wish I was attending, but it's better than the alternative: none. Things fall apart. Roko exploded. Vladimir_M vanished, Yvain seceded; many others of import including Eliezer have abandoned LW. They all have their reasons, some common and others not. There are forces, it seems, driving the best away, leaving behind a smattering of dunces.
I aim to turn the tide. Nate Soares didn't show up until 2013; Less Wrong is still at least theoretically a place that can attract good people. Less Wrong has been navel-gazing about its own demise for a long time, and the wails have gotten stronger while nothing else has. What is more, the widespread perception that "X is dead" is a self-fulfilling prophecy. But I think it can be done; I think I can throw down a gauntlet, for myself and others: the Less Wrong Rejuvenation Project. Why do I think it can be done? Wei Dai is still here. He is my benchmark. The day he goes off to greener pastures is the day I give up.
The name refers to inferential distance, something I want myself and my audiences to keep in mind.
New to this site... Have studied very little about logic and philosophy starting with some big famous papers that talk about how we know nothing for certain (thanks, Descartes), going through whether All Ravens are Black, studying the Perfect Island argument, learning about Famine, Affluence, and Morality, and ending somewhere along the lines of whether justified true belief is knowledge. That is to say, I'm not that educated on logic or rationality, but entertaining ideas is a great hobby of mine.
I came to Less Wrong because I found it through Harry Potter and the Methods of Rationality (I haven't read HPMOR, or HP for that matter, but I find both interesting nonetheless), and I just got really excited when I found that a site like this existed.
My beliefs: I am a theist, and I do not affiliate with a religion or political party. Of course, that is to say, the mark of an educated mind is to be able to entertain ideas without fully accepting them. :) I also like to assume that the majority of the population is evil and has ulterior motives, but that's just me... I'm a high school student who's just looking for something to write about and something to learn about. Just a new perspective altogether.
Nice to be here.
Wow, I'm so glad I stumbled onto slatestarcodex, and from there, here!!! You guys are all like smarter, cooler versions of me! It's great to have a label for the way my brain is naturally wired and know there other people in the world besides Peter Singer who think similarly. I'm really excited, so my "intro" might get a little long...
Part 1-Look at me, I'm just like you!
I'm Ellen, a 22 year old Spanish major and world traveling nanny from Wisconsin, so maybe not your typical LWer, but actually quite typical in other, more important ways. :)
I grew up in a Christian home/bubble, was super religious (Wisconsin Evangelical Lutheran Synod), truly respected/admired the Christians in my life, but even while believing, never liked what I believed. I actually just shared my story plus some interesting studies on correlations between personality, intelligence, and religiosity, if anyone is interested: http://magicalbananatree.blogspot.com/2015/02/christian-friends-do-you-ever-feel.html The post is based almost entirely on what I've come to learn is called "consequentialism" which I'm happy to see is pretty popular over here. I subscribe to this line of thinking so much that I used to pray for a calamity to strengthen my faith. I chose a small Lutheran school despite having great credentials to get into an Ivy, because with an eye on eternity, I wanted to avoid any environment that would foster doubt. My friends suggested I become a missionary, but to me, it made far more sense to become a high profile lawyer and donate 90% of my salary to fund a dozen other missionaries. (A Christian version of effective altruism?) No one ever understood!
Some people might deconvert because they can't believe in miracles, or they can't get over the problem of evil. These are bad reasons, I think, and based on the presupposition that God doesn't exist. Personally, the hardest thing for me was believing that God was all-powerful. Like, if God were portrayed as good, but weak, struggling against an evil god and just doing the best he could to make a just universe and make his existence known, I probably would never have left the faith. It took me long enough as it is!
Part 2-A noob atheist's plea for help
Anyway, now I've "cleared my mind" of all that and am starting fresh, but my friends have a lot of questions for me that I'm not able to answer yet, and I have a lot of my own, too. I'm starting by reading about science (not once had I even been exposed to evolution!) but have a lot of other concerns on the back burner, and maybe you guys can point me in the right direction:
Who was the historical Jesus? As a history source, why is the Bible unreliable?
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong?
Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that's enough, right?)
What about all the really smart people I know and respect, like my sister and Grandma, who have had their share of doubts but ultimately credit their faith to having experienced extraordinary, miraculous answers to prayer? Like obviously, their experiences don't convince ME to believe, but I hate to dismiss them as delusional and call it a wild coincidence...
Are rationalists just as guilty of circular reasoning as Christians are? (Why do I trust human reason? My human reason tells me it's great. Why do Christians trust God? The Bible tells them he's great.)
Part 3-Embarrassingly enthusiastic fan mail
Yay curiosity! Yay strategic thinking! Yay honesty! Yay open-mindedness! Yay opportunity cost analyses! Yay common sense! Yay tolerance of ambiguity! Yay utilitarianism! Yay acknowledging inconsistency in following utilitarianism! Yay intelligence! Yay every single slatestarcodex post! Yay self-improvement! Yay others-improvement! Yay effective altruism!
Ahhh this is all so cool! You guys are so cool. I can't wait to read the sequences and more posts around this site! Maybe someday I'll even meet a real life rationalist or two, it seems like the Bay Area has a lot. :)
My two cents:
Who cares? Okay, you obviously do, but why? If the religion is false and reports of miracles are lies, is there really an important difference between a) "Yes, once there was a person called Jesus, but almost everything the Bible attributes to him is completely made up" and b) "No, everything about Jesus is completely made up"?
In other words, if I tell you that my uncle Joe is the true god and performs thousand miracles every thursday, why would you care about whether a) I have a perfectly ordinary, non-divine, non-magical uncle called Joe, and I only lied about his divinity and miracles, or b) actually I lied even about having an uncle called Joe? What difference would it make and why?
Because it was written by people who had an agenda to "prove" that they are the good ones and the divinely chosen ones? Maybe even because it contains magic?
I don't fully trust even historical books written recently. It can be funny to read history textbooks written in two countries that recently had a conflict, and see how each of them describes the events somewhat differently. And today's historical books are much more trustworthy than the old ones, because today people are literate, they are allowed to read and compare the competing books, and they are allowed to criticize without getting killed immediately.
Sorry for the offensive comparison, but trusting the Bible's historical accuracy would be as if, in a parallel universe, Hitler won the war, wrote his own history book about what "really happened", and made it a mandatory textbook for everyone... and then a few thousand years later people trusted his every written word to be honest and accurate.
Exactly. You already know what you care about. Atheism simply means there is no higher boss who could tell you "actually, you should like this and hate that, because I said so", and you would have to shut up and obey.
On the other hand; people can be wrong about their preferences, especially when their decisions are based on wrong assumptions. But "being wrong" is different from "disagreeing with the boss".
I would recommend the PDF version. It is better organized; you can read it from beginning to end instead of jumping through the hyperlinks. And it does not include the comments, which will allow you to focus on the text and finish it faster (the comments below the original articles contain about 10x as much text as the articles themselves; they are often interesting, but that is an enormous amount of extra reading).
Thanks for replying!
Why do I care about Historical Jesus? I actually wouldn't, I guess, except that I absolutely need to have a really well thought out answer to this question in order to maintain the respect of friends and family, some of whom credit Historical Jesus as one of the top reasons for their faith.
Good point about the authors being biased, thanks, no offense taken! I still don't like when people say miracles/magic definitively prove the Bible wrong though, since if a God higher than our understanding were to exist, of course he could do magic when he felt like it. Still, based on our understanding of the world, there is no good reason/evidence at all to believe in such a God.
I got the Rationality ebook, and it is great! Sooo well-written, well-organized, and well thought out! I just started today and am already on the section "Belief in Belief." I love it so much so far that it's a page-turner for me as much as my favorite suspense/fantasy novels. Definitely worth sharing and going back to read and re-read :)
Yep. On the social level I get it, but on another level, it's a trap.
The trap works approximately like this: "I will allow you not to believe in my bullshit, but only if you give me a blank check to bother you with as many questions as I want about my bullshit, and you have to explore all of these questions seriously, give me a satisfactory answer, and of course I am allowed to respond by giving you even more questions".
If you agree to this, you have de facto agreed that the other side is allowed to waste unlimited amounts of your time and attention, as a de facto punishment for not believing their bullshit. -- Today you are asked to form a well-researched opinion about the Historical Jesus, which of course would take a few weeks or months of really serious historical research; and tomorrow it will be something new, e.g. a well-researched opinion about the history of the Church, or the Crusades, or the Inquisition, or whatever. Alternatively, they may point at some part of your answer about the Historical Jesus and say: okay, this part is rather weak, you have to bring me a well-researched opinion about this part. For example, you were quoting Josephus and Tacitus, so now give me full research on both of them: how credible they are, what other claims they made, etc.
Unless the other side gives up (which they have no reason to; this game costs them almost nothing), there are only two ways this can end. First, you might give up, and start pretending to be religious again. Second, after playing a few rounds of this game, you refuse to play yet another round... in which case the other side will declare victory, because it "proves" your atheism is completely irrational.
Well, you might play a round or two of this game just to show some good will... but it is a game constructed so that you cannot win. The real goal is to manipulate you into punishing yourself and feeling guilty. -- Note: The other side may not realize they are actually doing this. They may believe they are playing a fair game.
Good point, thanks!! I can't get too caught up in this; there are things I'd rather be learning about, so I need a limit. I'd like to think I can win, though, but this is probably just self-anchoring fallacy (I'm learning!)
Just because I would have been swayed by an absence of positive evidence doesn't mean everyone will be, even people who seem decently smart and open-minded with a high view of reason, like my old track coach and religion teacher. I just made a deal though, that I would read any book of his choice about the Historical Jesus (something I probably would have done anyway!) if he reads Rationality: AI to Zombies :)
Be careful about distinguishing two very different propositions:
(1) There was a preacher named Jesus of Nazareth who lived in a certain time in a certain place.
(2) Jesus of Nazareth rose from the dead and was the Son of God.
Specifically, evidence in favor of (1) usually has nothing to do with (2).
That doesn't sound quite right to me, at least if you mean "nothing" literally, given that not-(1) logically implies not-(2).
I think the much smaller posterior probability of (2) than (1) has more to do with the much smaller prior than with the evidence.
This puzzled me, since it sounds a lot like the problem of evil. I take it you were describing the argument you lay out at the link?
For completeness - since I'm about to bash Christianity - I should note that Paul does not write like he has even an imagined revelation on the subject of Hell. He writes as if people in the Roman Empire often talked about everyone going to Hades when they died, and therefore he could count on people receiving as "good news" the claim that belief in Jesus would definitely send you to Heaven. (Later, the Gospels implied that your actions could send you to Heaven or Hell regardless of what you believed. Early Christians might have split the difference by reserving baptism for those they saw as living a 'Christian' life.) Clearly one can be a Christian in Paul's sense without believing in Hell.
We don't know. I have some qualms about Richard Carrier's argument (eg in On the historicity of Jesus: Why we might have reason for doubt). But plugging different numbers into his calculations, I come out with no more than a 54% chance Jesus even existed. We can't answer every factual question; some information is almost certainly lost to us forever.
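For what it's worth, the kind of calculation being referred to can be sketched as a simple Bayes update. The prior and likelihoods below are made-up placeholder numbers for illustration, not Carrier's actual figures:

```python
# Toy Bayes update for a binary hypothesis ("Jesus existed" vs. not).
# All numbers here are placeholders, not Carrier's estimates.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) by Bayes' rule for a binary hypothesis H."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# With a 1-in-3 prior and evidence twice as likely if the hypothesis
# is true, the posterior comes out to about 50%:
p = posterior(prior=1/3, likelihood_if_true=0.8, likelihood_if_false=0.4)
```

The point of the exercise is that the answer is only as good as the numbers you plug in, which is exactly why reasonable people get such different posteriors.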
This one seems fundamental enough that if people insist on the truth of miracles - and reports that you can move mountains if you have faith the size of a mustard seed - I don't know what to tell them. But besides directing people to mainstream scholarship (which by the way places the date of Mark after the destruction of the Temple), I can note that Mark inter-cuts the story of the fig tree with Jesus expelling the money-changers from the Temple. The tree seems like a straightforward metaphor. Then we have later Gospels openly changing the narrative for their own purposes. Mark says Jesus could give no sign to those who did not believe, and they would not have believed (says Jesus in a parable) even if some guy named Lazarus had returned from the dead. John says Jesus performed signs all the time, and as you would expect this led many people to believe in him, especially when he brought Lazarus back from the dead. Though the resurrected disciple whom Jesus loved disappears from the narrative after the period John depicts, and even Acts shows no awareness of this important witness.
If you want to have morality, you can just do it. By this I mean that any function assigning utility to outcomes in a physically meaningful way appears consistent. But yes, I've come to agree that simple utility functions like maximizing pleasure in the Universe technically fail to capture what I would call moral. For more practical advice, see a lot of this site and perhaps the CFAR link at the top of the page.
This depends. I would normally use the term "nihilism" to mean a uniform utility function, which does not distinguish between actions. This is equivalent to assigning every outcome zero utility. As the previous link shows, plenty of non-uniform utility functions can exist whether Yahweh does or not.
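That definition can be made concrete with a small sketch (all names here are illustrative): a uniform utility function induces no preference between any two outcomes, while any non-uniform one does.

```python
# Sketch of "nihilism as a uniform utility function": a function that
# assigns every outcome the same utility never prefers anything.

def prefers(utility, a, b):
    """True if the utility function strictly prefers outcome a to b."""
    return utility(a) > utility(b)

def uniform(outcome):
    return 0  # every outcome gets zero utility

def by_length(outcome):
    return len(outcome)  # a toy non-uniform function

# The uniform function distinguishes no pair of outcomes:
assert not prefers(uniform, "peace", "war")
assert not prefers(uniform, "war", "peace")

# Any non-uniform function does induce preferences:
assert prefers(by_length, "peace", "war")
```

So "plenty of non-uniform utility functions can exist" just means there are plenty of ways to rank outcomes that don't collapse into indifference.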
If you mean the lack of a moral authority you can trust absolutely, or that will force you to behave morally, then I would basically say yes. There is no authority anywhere.
Do they seem smarter and more worthy of respect than Gandhi? Perhaps he's not the best example, but putting him next to the many people from non-Christian religions who have made similar claims to religious experience may get the point across. (Aleister Crowley made a detailed study of mystical experience and how to produce it, but you may find him abrasive at best.)
That also depends on what you mean.
Oh, oops, I can see why that would be puzzling. But yeah, you figured it out. Do you really think my link was an argument though? A lot of people have accused me of trying to deconvert my friends, but I really don't think I was making an argument so much as sharing my own personal thoughts and journey of what led me away from the faith.
You correctly point out that not all Christians believe in hell, but I didn't want to just tweak my belief until I liked it. If I was going to reject what I grew up with, I figured I might as well start with a totally clean slate.
I'm really glad you and other atheists on here have bothered looking into the Historical Jesus. Atheists have a stereotype of being ignorant about this, which, for those who weren't raised Christian, I kind of understand: now that I consider myself an atheist, it's not like I'm suddenly going to become an expert on all the other religions just so I can thoughtfully reject them. But now that my friends have failed to convince me atheism is hopeless, they're insisting it's hallucinogenic, that atheists are out of touch with reality, and it's nice (though unsurprising) to see that isn't the case.
Okay, I know that I personally can have morality, no problem! But are you trying to say it's not just intuition? Or if I use that Von Neumann–Morgenstern utility theorem you linked, I'm a little confused, maybe you can simplify for me, but whose preferences would I be valuing? Only my own? Everyone's equally? If I value everyone's equally and say each human is born with equal intrinsic value, that's back to intuition again, right? Anyway, yeah, I'll look around and maybe check out CFAR too if you think that would be useful.
Oh! I like that definition of nihilism, thanks. Personally, I think I could actually tolerate accepting nihilism defined as meaninglessness (whatever that means), but since most people I know wouldn't, your definition will come in handy.
Also, good point about Gandhi. I had actually planned on researching whether people from other religions claimed to have answered prayers like Christians do, but bringing up the other alleged "religious experiences" of people of other faiths seems like a good start for when my sister and I talk about this. Now I'm curious about Crowley too. I almost never really get offended, so even if he is abrasive, I'm sure I can focus on the facts and pick out a few things to share, even if I wouldn't share him directly.
Thanks for your reply! Hopefully you can follow this easily enough; next time I'll add in quotes like you did...
The theorem shows that if one adopts a simple utility function - or let's say if an Artificial Intelligence has as its goal maximizing the computing power in existence, even if that means killing us and using us for parts - this yields a consistent set of preferences. It doesn't seem like we could argue the AI into adopting a different goal unless that (implausibly) served the original goal better than just working at it directly. We could picture the AI as a physical process that first calculates the expected value of various actions in terms of computing power (this would have to be approximate, but we've found approximations very useful in practical contexts) and then automatically takes the action with the highest calculated expected value.
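The process described here can be sketched in a few lines of Python. Everything below is invented for illustration (the actions, probabilities, and utility numbers are not from any real system); it just shows the shape of the argument: a fixed utility function, an expected-value calculation over uncertain outcomes, and an automatic choice of the highest-scoring action.

```python
# Toy sketch of the agent described above. Its utility function is
# "computing power gained"; each action maps to a list of
# (probability, utility) outcome pairs. All numbers are made up.

actions = {
    "build_datacenter": [(0.9, 100.0), (0.1, 0.0)],    # usually works
    "do_nothing":       [(1.0, 0.0)],
    "risky_takeover":   [(0.5, 300.0), (0.5, -50.0)],  # high variance
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities. (In any realistic
    setting this would have to be an approximation; it is exact here
    only because the toy model is tiny.)"""
    return sum(p * u for p, u in outcomes)

# The "can't argue it out of its goal" step: the agent simply takes
# the argmax under its own utility function. No external reasons
# enter the calculation anywhere.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # -> risky_takeover (EU = 125 vs 90 vs 0)
```

The point of the sketch is that persuasion has no handle to grab: to change the output you would have to change either the probabilities or the utility function itself.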
Now in a sense, this shows your problem has no solution. We have no apparent way to argue morality into an agent that doesn't already have it, on some level. In fact this appears mathematically impossible. (Also, the Universe does not love you and will kill you if the math of physics happens to work out that way.)
But if you already have moral preferences, there shouldn't be any way to argue you out of them by showing the non-existence of Vishnu. Any desires that correspond to a utility function would yield consistent preferences. If you follow them then nobody can raise any logical objection. God would have to do the same, if he existed. He would just have more strength and knowledge with which to impose his will (to the point of creating a logical contradiction - but we can charitably assume theologians meant something else.) When it comes to consistent moral foundations, the theorem gives no special place to his imaginary desires relative to yours.
I mentioned above that a simple utility function does not seem to capture my moral preferences, though it could be a good rule of thumb. There's probably no simple way to find out what you value if you don't already know. CFAR does not address the abstract problem; possibly they could help you figure out what you actually value, if you want practical guidance.
Note that he doesn't believe in making anything easy for the reader. The second half of this essay might perhaps have what you want, starting with section XI. Crowley wrote it under a pseudonym and at least once refers to himself in the third person; be warned.
Thanks a lot for explaining the utility theorem. So just to be sure: if my personal values (I'll check CFAR for help identifying them, eventually) are the basis of morality, is morality necessarily subjective?
I'll get to Crowley eventually too, thanks for the link. I've just started the Rationality e-book and I feel like it will give me a lot of the background knowledge to understand other articles and stuff people talk about here.
If "subjective" means "a completely different alien species would likely care about different things than humans", then yes. You also can't expect that a rock would have the same morality as you.
If "subjective" means "a different human would care about completely different things than me" then probably not much. It should be possible to define a morality of an "average human" that most humans would consider correct. The reason it appears otherwise is that for tribal reasons we are prone to assume that our enemies are psychologically nonhuman, and our reasoning is often based on factual errors, and we are actually not good enough at consistently following our own values. (Thus the definition of CEV as "if we knew more, thought faster, were more the people we wished we were, had grown up farther together"; it refers to the assumption of having correct beliefs, being more consistent, and not being divided by factional conflicts.)
Of course, both of these answers are disputed by many people.
There is a set of reasonably objective facts about what values people have, and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation. However, they are only locally objective. What value-based ethics removes is globally objective answers, in the sense that you should always do X or refrain from Y irrespective of the context.
It's a bit like the difference between small g and big G in physics.
Nope. It leads to reasonably objective descriptive answers about what the consequences of your actions will be. It does not lead to normative answers about what you should or should not do.
Okay, I guess I'm still confused. So far I've loved everything I've read on this site and have been able to understand; I've appreciated/agreed with the first 110 pages of the Rationality ebook, felt a little skeptical for liking it so completely, and then reassured myself with the Aumann's agreement theorem it mentions. So I feel like if this utility theorem which bases morality on preferences is commonly accepted around here, I'll probably like it once I fully understand it. So bear with me as I ask more questions...
Whose preferences am I valuing? Only my own? Everyone's equally? Those of an "average human"? What about future humans?
Yeah, by subjective, I meant that different humans would care about different things. I'm not really worried about basic morality, like not beating people up and stuff, but...
I have a feeling the hardest part of morality will now be determining where to strike a balance between individual human freedom and concern for the future of humanity.
Like, to what extent is it permissible to harm the environment? If something, like eating sugar for example, makes people dumber, should it be limited? Is population control like China's a good thing?
Can you really say that most humans agree on where this line between individual freedom and concern for the future of humanity should be drawn? It seems unlikely...
By definition, you can only care about your own preferences. That being said, it's certainly possible for you to have a preference for other people's preferences to be satisfied, in which case you would be (indirectly) caring about the preferences of others.
The question of whether humans all value the same thing is a controversial one. Most Friendly AI theorists believe, however, that the answer is "yes", at least if you extrapolate their preferences far enough. For more details, take a look at Coherent Extrapolated Volition.
I'm the wrong person to ask about "this utility theorem which bases morality on preferences" since I don't really subscribe to this point of view.
I use the word "morality" as a synonym for "system of values" and I think that these values are multiple, somewhat hierarchical, and are NOT coherent. Moral decisions are generally taken on the basis of a weighted balance between several conflicting values.
If these are the questions weighing heavily on your mind, then you would probably enjoy Gary Drescher's Good and Real. I suggest reading the first Amazon review to get a good idea of the topics it covers. It is very similar to some of the content in the Sequences. (By the way, if you purchase the book through that link, 5% goes to Slate Star Codex.)
Also, the Sequences have recently been released as an ebook entitled Rationality: From AI to Zombies. (You can download the book for free in MOBI, EPUB, and PDF format if you follow the 'Buy Now' link at the bottom of that page and enter a price of $0.00. If you do this, it won't request any payment information. If you pay more than that, the money will go to the Machine Intelligence Research Institute.) I have found that Rationality is much, much easier to read than the Sequences.
You may not yet have the background knowledge necessary to understand it, and if that's the case then you can always return to it later, but I think that the most relevant post on this topic is Where Recursive Justification Hits Bottom. It's chapter 264 in Rationality. (That's a daunting number but the chapters are very short. Rationality is Bible-length but you can hack away at it one chapter at a time, or more at a time, if you please.) To be frank, you're asking the Big Questions and you might have to read a bit before you can answer them.
When I read that, I'm reminded of something that Luke Muehlhauser, a prominent LessWrong user and former devout Christian, once wrote:
As you said yourself, "Yay tolerance of ambiguity!" Although their beliefs are false, their experiences can certainly be real. Even if there exists no God, that doesn't mean that the Presence-of-God Quale isn't represented by the patterns of neural impulses of some human brains. It's easy, nay, the default action, to view others with false beliefs in a negative light, but if rationalism were always intuitively obvious, then the world would be a very different place. I try not to make myself feel bad by overestimating my ability to convince others of the value of rationalism. That doesn't mean that I keep my mouth shut all of the time, but I do take it a day at a time, and it seems to work; sometimes I talk about something and it doesn't seem to go anywhere, and then a friend will bring it up days or weeks later and say something like, "You know, I was thinking about that, and I realized it made a lot of sense." And then I privately jump up and down. Sometimes it doesn't work, but for me, there's definitely a middle ground between falling in line and abandoning All I Have Ever Known. I also often see Paul Graham's essay What You Can't Say linked here when new atheists ask about how to maintain ties with religious family members.
EDIT: Oh, and welcome to LessWrong!
Thanks for the welcome!! Good and Real does seem like a good read. I'm going to read Rationality first, which I'm guessing will help me work through some of my questions, but I'll definitely keep that one in mind for later.
Where Recursive Justification Hits Bottom was really relevant, thanks for the link. I'm still digesting Occam's Razor; I think that was the only concept completely new to me.
Thanks for the link to Luke's story. It seems like we went through the same difficult process of desperately wanting to believe, but ultimately just not being able to. I find it super encouraging that his doubts stemmed from researching the Historical Jesus, since that's one thing that my old high school track coach/religion teacher insists I have to look into. He claims no atheist has ever been able to answer any of his questions. The atheists I know all credit a conflict with science as the reason they left Christianity, and I credit...I don't even know, my personal thoughts, I guess... but it's great to know that researching history will also lead there. I'll have to go through the same resources he used so I can better explain myself to Christian friends.
"Although their beliefs are false, their experiences can certainly be real. Even if there exists no God, that doesn't mean that the Presence-of-God Quale isn't represented by the patterns of neural impulses of some human brains." Thanks for that!! It does make me feel better.
Hahaha, wow, I haven't even considered trying to convince others of the value of rationalism yet. Especially after my deconversion, I've been totally on the defensive, almost apologizing for my rationality. ("It's not my fault; it's the personality I was born with. If you guys really believe, you should feel lucky not just for having been born into Christian homes, but also, more importantly, for having been born with the right personalities for faith." and "You think my prayers for a stronger faith weren't answered because my faith wasn't strong enough, but I was doing everything possible to strengthen my faith to no avail." and "Believing isn't a choice, no matter how much I wanted it, I couldn't believe. So if any brand of Christianity is true, Calvinism is your best bet, and I wasn't among the elect.")
So far this strategy is doing remarkably, remarkably well in maintaining ties with friends and family. People understand where I'm coming from, and they feel just awful, sorry for me since they think I'm going to hell, but for the most part, not finding me at fault. Pity is slightly annoying when I'm so happy, but hopefully their pity will eventually lead them to find God unfair, which will lead them to dislike their beliefs, which will lead them to question why they bother believing something they don't like...and then, they won't find much reason at all aside from upbringing/community. Those were actually pretty much the steps of my deconversion process, only I didn't need a personal connection with a particular unbeliever to get there. Anyway, if nothing else, the defensive strategy works wonders for relations. I helped a friend share her doubts with her family in this way, and she said it worked for her too.
I just thought I'd point out that there's going to be a Rationality reading group; basically, it's a planned series of posts about each Part in the book, where you have the opportunity to talk about it and ask questions. You clearly are very curious (it's the only way you could survive so many hyperlinks), so it seems like just the thing for you.
Just to give you words for this, and from what I read in the blog post that you linked to in your first comment (which I found very amusing), I think you're trying to verbalize that Christianity was inconsistent. You don't have to prefer consistency, but most people claim to prefer it, and apparently you do prefer it. (I know I do.) You didn't like it as a system because it was a system that said that God was perfectly benevolent and ridiculously selfish (though the second statement was only implicit) at the same time. You can always look at other subjects like science and history and come to the conclusion that religion conflicts with those things when it shouldn't; but you can also just look at religion and see how it conflicts with itself. I think that's what you did.
I saw some of your other comments about meaning, and meaninglessness in the absence of God, and nihilism. Notice that when you ask "Does life have meaning in the absence of God?", everyone says that it depends on what you mean, offers some possible interpretations, and shares their viewpoints and conclusions on what it means. The simplest way to give you a clue as to some of the problems with the question is something that you wrote yourself:
Vagueness is part of the problem, but there are other parts as well. Even though I've never been religious and therefore don't know what it's like to lose faith, worrying about "meaninglessness" is something that I dealt with. I promise that atheists aren't all secretly dead inside. (I actually used to wonder about that.) Rationality Parts N and P deal with questions like that.
I also want to say that I agree with Viliam_Bur's comments on you doing research to defend your new beliefs: It's a lot cheaper time- and resource-wise to act like a skeptic than it is to do research, and you never have to tolerate that awful feeling that you might be wrong. Even when you return with evidence contrary to their beliefs, their standards of evidence are too high for it to matter. I think it's telling that your coach sat around waiting for unusually knowledgeable, atheistic passersby to tell him about the Historical Jesus instead of doing any research on his own.
Cool, thanks so much for mentioning the Rationality reading group!! I'm probably going to finish each section long before it's discussed, but I'll definitely go back to re-read and chat. I'll bookmark it for sure! So exciting! I will try to bribe my sister and maybe a few other people to participate as well (self-anchoring again, maybe, but I'll call it optimism, haha).
Ooh, I like consistency, and Christianity is inconsistent. Christianity conflicts with itself. A God can't be both perfectly benevolent and ridiculously selfish. That's why I rejected it. Yeah, that sounds nice, thanks for the words. :)
Good point about vagueness. I like this Slate Star Codex post: "The Categories Were Made for Man, Not Man for the Categories". Looking forward to Parts N and P now too!
And yeah, good point about the standards of evidence being too high. Still, right now my only info about Historical Jesus is based on a few articles I've read on the internet, and I just feel like after 22 years learning one thing, I can't just reject it and jump ahead to other things without being able to formulate basic, well-reasoned atheist answers to common Christian questions. I guess it's not just about maintaining my friends' respect, it's also about my own self-respect. I can't go around showing the improbability of every religion, but I want to be able to do so about the one I grew up in (maybe this is a cousin of the sunk-cost fallacy?). Luckily, all of the groundwork here has already been done by other atheists; it should just be a matter of familiarizing myself with basic facts/common arguments.
You are awesome! I wish I could radiate even half as much enthusiasm and happiness. I feel it; I just can't convey it as well. I plan to learn from you in this regard!
You are welcome. I will also try to answer your questions. Some of them I pondered myself and arrived at some answers, but then, I had more time. I have a comparable background and a deep interest in children, so you may also find my resources for parents of interest.
But now to your questions:
Awesome. But it can be explained by the presence of memes in real-life Christian culture that regulate such actions as misguided. See Reason as memetic immune disorder.
The Jesus Seminar may have answers of the kind you desire. If a historical Jesus can be found by taking the Bible as historical evidence instead of sacred text, then look there. The Jesus Seminar has been heavily criticised (in part legitimately so), but it may provide a counter-balance to the facts you already know. See also http://en.wikipedia.org/wiki/Jesus_Seminar
Well, what do you mean by "how"? By which social process does morality exist? Or due to which psychological process? The spiritual process is apparently out of business because it is ungrounded. There was a Main post with nice graphs about it that I can't find.
You might also want to replace the question with "why do I think that I have morality?"
No. Atheism does remove one set of symbol-behavior-chains in your mind, yes. But a complex mind will most likely lock into another better grounded set of symbol-behavior-chains that is not nihilistic but - depending on your emotional setup - somehow connected to terminal values and acting on that. "symbol-behavior-chains" is my ad-hoc term. Ask if it is unclear.
I feel for you. I have the same challenge. See my first link above. I respect them. I know how complex this migration is. I was free to explore; how can I not reciprocate? I don't want to manipulate. I just want the best for them. And then extensions of the simulation argument might actually lead you back to theism (at least a bit).
Good luck and cheers!
Hey! <retracted because I changed my mind about the sensibleness of putting personal info on the internet and more people started recognising my name than I'm happy with>
You seem legit. Also, wait, the #lesswrong IRC channel stopped being dead?
--
Hi Act, welcome!
I will gladly converse with you in Russian if you want to.
Why do you want a united utopia? Don't you think different people prefer different things? Even if we assume the ultimate utopia is uniform, wouldn't we want to experiment with different things to get there?
Would you feel "dwarfed by an FAI" if you had little direct knowledge of what the FAI is up to? Imagine a relatively omniscient and omnipotent god taking care of things on some (mostly invisible) level but never coming down to solve your homework.
--
P.S.
I am dismayed that you were ambushed by the far right crowd, especially on the welcome thread.
My impression is that you are highly intelligent, very decent and admirably enthusiastic. I think you are a perfect example of the values that I love in this community and I very much want you on board. I'm sure that I personally would enjoy interacting with you.
Also, I am confident you will go far in life. Good dragon hunting!
So pointing out flaws in someone's position is now "ambushing" them?
Disagreeing is ok. Disagreeing is often productive. Framing your disagreement as a personal attack is not ok. Let's treat each other with respect.
--
I wouldn't call it an ambush, but in any case Acty emerged from that donnybrook in quite a good shape :-)
I sympathize with your sentiment regarding friendship, community etc. The thing is, when everyone is friends the state is not needed at all. The state is a way of using violence or the threat of violence to resolve conflicts between people in a way which is as good as possible for all parties (in the case of egalitarian states; other states resolve conflicts in favor of the ruling class). Forcing people to obey any given system of law is already an act of coercion. Why magnify this coercion by forcing everyone to obey the same system rather than allowing any sufficiently big group of people to choose their own system?
Moreover, in the search of utopia we can go down many paths. In the spirit of the empirical method, it seems reasonable to allow people to explore different paths if we are to find the best one.
I used "homework" as a figure of speech :)
This might be so. However, you must consider the tradeoff between this sadness and efficiency of dragon-slaying.
The problem is, if you instantly go from human intelligence to far superhuman, it looks like a breach in the continuity of your identity. And such a breach might be tantamount to death. After all, what makes tomorrow's you the same person as today's you, if not the continuity between them? I agree with Eliezer that I want to be upgraded over time, but I want it to happen slowly and gradually.
I do think that some kind of organisational cooperative structure would be needed even if everyone were friends - provided there are dragons left to slay. If people need to work together on dragonfighting, then just being friends won't cut it - there will need to be some kind of team, and some people delegating different tasks to team members and coordinating efforts. Of course, if there aren't dragons to slay, then there's no need for us to work together and people can do whatever they like.
And yeah - the tradeoff would definitely need to be considered. If the AI told me, "Sorry, but I need to solve negentropy and if you try and help me you're just going to slow me down to the point at which it becomes more likely that everyone dies", I guess I would just have to deal with it. Making it more likely that everyone dies in the slow heat death of the universe is a terribly large price to pay for indulging my desire to fight things. It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for - we can explore the galaxy and fight negentropy and this will allow people like me to continue being motivated and fulfilled by our burning desire to fix things. It depends on whether people like me, with aforementioned burning desire, are a minority or a large majority. If a large majority of the human race feels listless and sad unless they have a quest to do, then it may be worthwhile letting us help even if it impedes the effort slightly.
And yeah - I'm not sure that just giving me more processor power and memory without changing my code counts as death, but simultaneously giving a human more processor power and more memory and not increasing their rationality sounds... silly and maybe not safe, so I guess it'll have to be a gradual upgrade process in all of us. I quite like that idea though - it's like having a second childhood, except this time you're learning to remember every book in the library and fly with your jetpack-including robot feet, instead of just learning to walk and talk. I am totally up for that.
We don't need the state to organize. Look at all the private organizations out there.
The cause might be something created artificially by the FAI. One idea I had is a universe with "pseudodeath", which doesn't literally kill you but relocates you to another part of the universe, resulting in the loss of connections with all the people you knew. Like in Border Guards but involuntary, so that human communities have to fight with "nature" to survive.
I think it's a bit of a shame that society seems to funnel our most intelligent, logical people away from social science. I think social science is frequently much more helpful for society than, say, string theory research.
The bigger shame is the kind of BS that passes for humanities/social science these days.
Is that a fact? I've seen social scientists complain that social science is trying too hard to emulate the hard science.
Yes, most social science is cargo cult science. That's perfectly consistent with it being BS.
Look, it may very well be that social science is low-quality. But your comments in this thread are not at all up to LW standards. You need to cite evidence for your positions and stop calling people names.
I think there may be a self-reinforcing spiral where highly logical people aren't impressed by social science, leading them to avoid it, leading to social science being unimpressive to highly logical because it's done by people who aren't highly logical. But I could be wrong--maybe highly logical people are misperceiving.
It's not just a self-reinforcing spiral. There is also a driver, namely since social science has more political implications and there is a lot of political control over science funding, social science selects for people willing to reach the "correct" conclusions even if they have to torture logic and the evidence to do so.
Well that's a self-reinforcing spiral of a different type. In general, I see a number of forces pushing newcomers to a group towards being similar to whoever the folks already in the group are:
The Iron Law of Bureaucracy, insofar as it's accurate.
Self-segregation. It's less aversive to interact with people who agree with you and are similar to you, which nudges people towards forming social circles of similar others.
Reputation effects. If Google has a reputation for having great programmers, other great programmers will want to work there so they can have great coworkers.
This is why it took someone like Snowden to expose NSA spying. The NSA was the butt of jokes in the crypto community for probably doing illicit spying long before Snowden... which meant people who cared about civil liberties didn't apply for jobs there (who wants to work for the evil empire?) (Note: just my guess as someone outside crypto; could be totally wrong on this one.)
Edit: evaporative cooling should probably be considered related to the bullet points above.
You're assuming that "intelligent" == "logical". That just ain't so and especially ain't so in social sciences.
"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." -- F. Scott Fitzgerald
If you consider finance a subset of social science then the U.S. puts a lot of its best and brightest there.
Finance is not social science. I think it's more similar to engineering: you need to have a grasp of the underlying concepts and be able to do the math, but the real world will screw you up on a very regular basis and so you need to be able to deal with that.
Note: I do find it plausible that doing STEM in undergrad is a good way to train oneself to think, and the best combo might be a STEM undergrad and a social science grad degree. You could do your undergrad in statistics, since statistics is key to social science, and try to become the next Andrew Gelman.
As advice for others like me, this is good. For me personally it doesn't work too well; my A level subjects mean that I won't be able to take a STEM subject at a good university. I can't do statistics, because I dropped maths last year. The only STEM A level I'm taking is CompSci, and good universities require maths for CompSci degrees. I could probably get into a good degree course for Linguistics, but it isn't a passionate adoration for linguistics that gets me up in the mornings. I adore human and social sciences.
I don't plan to be completely devoid of STEM education; the subject I actually want to take is quite hard-science-ish for a social science. If I get in, I want to do biological anthropology and archaeology papers, which involve digging up skeletons and chemically analysing them and looking at primate behaviour and early stone tools. It would be pretty cool to do some kind of PhD involving human evolution. From what I've seen, if I get onto the course I want to get onto, it'll teach me a lot of biology and evolutionary psychology and maybe some biochemistry and linguistics.
I agree wholeheartedly. A field like theoretical physics is much more glamorous to large number of intelligent people. I think it's partly signaling, but I'm not sure that explains everything.
What makes the least sense to me are people who seem to believe (or even explicitly confirm!) that they are only interested in things which have no applications. Especially when these people seem to disparage others who work in applied fields. I imagine this teasing might explain a bit of why so many smart people work in less helpful fields.
I think to an extent, physics is more intellectually satisfying to a lot of smart people. It's much easier to prove things for definite in maths and physics. You can take a test and get right answers, and be sure of your right answers, so when you're sufficiently smart it feels like a lot of fun to go around proving things and being sure of yourself. It feels much less satisfying to debate about which economics theories might be better.
Knowing proven facts about high level physics makes you feel like an initiate into the inner circles of secret powerful knowledge, knowing a bunch about different theories of politics (especially at first) just makes you feel confused. So if you're really smart, 'hard' sciences can feel more fun. I know I certainly enjoy learning computer science and feeling the rush of vague superiority when I fix someone's computer for them (and the rush of triumph when my code finally compiles). When I attempt to fix people's sociological opinions for them, there's no rush of vague superiority, just a feeling of intense frustration and a deeply felt desire to bang my head against the wall.
Then there's the Ancient Greek cultural thing where sitting around thinking very hard is obviously superior to going out and doing things - cool people sit inside their mansions and think, leaving your house and mucking around in the real world actually doing things is for peasants - which has somehow survived to this day. The real world is dirty and messy and contains annoying things that mess up your beautiful neat theories. Making a beautiful theory of how mechanics works is very satisfying. Trying to actually use the theory to build a bridge when you have budget constraints and a really big river is frustrating. Trying to apply our built up knowledge about small things (molecules) to bigger things (cells) to even bigger things (brains) to REALLY BIG AND COMPLICATED things (lots and lots of brains together, eg a society) is really intensely frustrating. And the intense frustration and higher difficulty (more difficult to do it right, anyway) means there's more failure and less conclusive results / slower progress, which leads some people to write off social science as a whole. The rewarding rush of success when your beautifully engineered bridge looks shiny and finished is not something you really get in the social sciences, because it will be a very long time before someone feels the rewarding rush of success that their beautiful preference-satisfying society is shiny and perfect.
I do think that the natural sciences are hopelessly lost without the social sciences, but for most super-clever people, is studying natural science more fun than doing social science? Definitely - I mean, while the politics students are busy reading books and banging their heads against walls and yelling at each other, physics students are putting liquid nitrogen in barrels of ping pong balls so that the whole thing explodes! (I loved chemistry in secondary school for years, right up until I finally caught on that coloured flames were the closest we were going to get to scorching our eyebrows off. Something about health and safety, thirteen year olds, and fire. I wish I hadn't stopped loving chemistry, because I hear once you're at university they do actually let you set things on fire sometimes.)
I don't think that something being (more) mathematically rigorous explains all of what we see. Physicists at one time used to study fluid dynamics. Rayleigh, Kelvin, Stokes, Heisenberg, etc., all have published in the field. You can do quite a lot mathematically in fluids, and I have felt like part of some inner circle because of what I know about fluid dynamics.
Now the field has been basically displaced by quantum mechanics, and it's usually not considered part of "physics" in some sense, and is less popular than I think you might expect if a subject being amenable to mathematical treatment is attractive to some folks. Physicists are generally taught only the most basic concepts in the field. My impression is that the majority of physics undergrads couldn't identify the Navier-Stokes equations, which are the most basic equations for the movement of a fluid.
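For readers who haven't met them, the incompressible Navier-Stokes equations mentioned above are essentially Newton's second law applied to a fluid parcel, together with conservation of mass:

```latex
% Momentum: density times acceleration = pressure gradient + viscous forces + body forces
\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f}

% Mass conservation (incompressibility)
\nabla \cdot \mathbf{u} = 0
```

Here \(\mathbf{u}\) is the velocity field, \(p\) the pressure, \(\rho\) the density, \(\mu\) the dynamic viscosity, and \(\mathbf{f}\) any body force such as gravity. The nonlinear \(\mathbf{u} \cdot \nabla \mathbf{u}\) term is what makes the equations so hard to solve in general.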
It could also be that fluids have obvious practical applications (aerodynamics, energy, etc.) and this makes the subject distasteful to pedants. That's just speculation, however. I'm really not sure why fields like physics, etc., are so attractive to some people, though I think you've identified parts of it.
You do make a good point about the sense of completion being different in engineering vs. social science. I suppose the closest you could get in social science is developing some successful self-help book or changing public policy in a good way, but I think these are much harder than building things.
I think there's also definitely a prestige/coolness factor which isn't correlated with difficulty, applicability, or usefulness of the field.
Quantum mechanics is esoteric and alien and weird and COOL and saying you understand it whilst sliding your glasses down your nose makes you into Supergeek. Saying "I understand how wet stuff splashes" is not really so... high status. It's the same thing that makes astrophysics higher status than microbiology even though the latter is probably more useful and saves more lives / helps more people - rockets spew fire and go to the moon, bacteria cells in a petri dish are just kind of icky and slimy. I am quite certain that, if you are smart enough to go for any field you want, there is a definite motivation / social pressure to select a "cool" subject involving rockets and quarks and lasers, rather than a less cool subject involving water and cells or... god forbid... political arguments.
And, hmm, actually, not quite true on the last point - a social scientist could develop an intervention program, like a youth education program, that decreases crime or increases youth achievement/engagement, and it would probably feel awesome and warm and fuzzy to talk to the youths whose lives were improved by it. So you could certainly get closer than "developing some successful self-help book". It is certainly harder, though, I think, and there's certainly a higher rate of failure for crime-preventing youth education programs than for modern bridge-building efforts.
To be honest, I found QM to be the least interesting subject of all physics which I've learned about.
Also, I don't think the features you highlighted work either. Fluid dynamics has loads of counterintuitive findings, perhaps even more so than QM, e.g., streamlining can increase drag at low Reynolds numbers, increasing speed can decrease drag in certain situations ("drag crisis"). Fluids also has plenty of esoteric concepts; very few people reading the previous sentence likely know what the Reynolds number or drag crisis is.
Physicists, even astrophysicists, know little more about how rockets work than educated laymen. Rocketry is part of aerospace engineering, of which the foundation is fluid dynamics. Maybe rocketry is a counterexample, but I don't really think so, as there are a lot more people who think rockets are interesting than who know what a de Laval nozzle is. Even that has some counterintuitive effects; the fluid accelerates in the expansion!
You make me suddenly, intensely curious to find out what a Reynolds number is and why it can make streamlining increase drag. I am also abruptly realising that I know less than I thought about STEM fields, given I just kind of assumed that astrophysicists were the official People Who Know About Space and therefore rocketry must be part of their domain. I don't know whether I want to ask if you can recommend any good fluid dynamics introductions, or whether I don't want to add to the several feet high pile of books next to my bed...
Okay - so why do you think quantum mechanics became more "cool" than fluid dynamics? Was there a time when fluid dynamics held the equivalent prestige and mystery that quantum mechanics has today? It clearly seems to be more useful, and something that you could easily become curious about just from everyday events like carrying a cup of tea upstairs and pondering how near-impossible it is not to spill a few drops if you've overfilled it.
The best non-mathematical introduction I have seen is Shape and Flow: The Fluid Dynamics of Drag. This book is fairly short; it has 186 pages, but each page is small and there are many pictures. It explains some basic concepts of fluid dynamics like the Reynolds number, what controls drag at low and high Reynolds numbers, why golf balls (or roughened spheres in general) have less drag than smooth spheres at high Reynolds number (this does not imply that roughening always reduces drag; it does not on streamlined bodies as is explained in the book), how drag can decrease as you increase speed in certain cases, how wind tunnels and other similar scale modeling works, etc.
You could also watch this series of videos on drag. They were made by the same person who wrote Shape and Flow. There is also a related collection of videos on other topics in fluid dynamics.
Beyond that, the most popular undergraduate textbook by Munson is quite good. I'd suggest buying an old edition if you want to learn more; the newer editions do not add anything of value to an autodidact. I linked to the fifth edition, which is what I own.
I'll offer a few possibilities about why fluids is generally seen as less attractive than QM, but I want to be clear that I think these ideas are all very tentative.
This study suggests that in an artificial music market, the popularity charts are only weakly influenced by the quality of the music. (Note that I haven't read this beyond the abstract.) Social influence had a much stronger effect. One possible application of this idea to different fields is that QM became more attractive for social reasons, e.g., the Matthew effect is likely one reason.
The vast majority of the field of fluid mechanics is based on classical mechanics, i.e., F = m a is one of the fundamental equations used to derive the Navier-Stokes equations. Maybe because the field is largely based on classical effects, it's seen as less interesting. This could be particularly compelling for physicists, as novelty is often valued over everything else.
I've also previously mentioned that fluid dynamics is more useful than quantum mechanics, so people who believe useless things are better might find QM more interesting.
There also is the related issue that a wide variety of physical science is lumped into the category "physics" at the high school level, so someone with a particular interest might get the mistaken impression that physics covers everything. I majored in mechanical engineering in college, and basically did it because my father did. My interest even when I was a teenager was fluids, but I hadn't realized that physicists don't study the subject in any depth. I was lucky to have picked the right major. I suppose this is a social effect of the type mentioned above.
(Also, to be clear, I don't want to give the impression that more people do QM than fluids. I actually think the opposite is more likely to be true. I'm saying that QM is "cooler" than fluids.)
Fluid mechanics used to be "cooler" back in the late 1800s. Physicists like Rayleigh and Kelvin both made seminal contributions to the subject, but neither received their Nobel for fluids research. I recall reading that two very famous fluid dynamicists in the early 20th century, Prandtl and Taylor, were recommended for the prize in physics, but neither received it. These two made foundational contributions to physics in the broadest sense of the word. Taylor speculated that the lack of Nobels for fluid mechanics was due to how the Nobel prize is awarded. I also recall reading that there were indications that the committee found the mathematical approximations used to be distasteful even when they were very accurate. Unfortunately those approximations were necessary at the time, and even today we still use approximations, though they are different. Maybe the lack of Nobels contributes to fluids not being as "cool" today.
Ooh, yay, free knowledge and links! Thank you, you're awesome!
The linked study was a fun read. I was originally a bit skeptical - it feels like songs are sufficiently subjective that you'll just like what your friends like or is 'cool', but what subjects you choose to study ought to be the topic of a little more research and numbers - but after further reflection the dynamics are probably the same, since often the reason you listen to a song at all is because your friend recommended it, and the reason you research a potential career in something is because your careers guidance counselor or your form tutor or someone told you to. And among people who've not encountered 80k hours or EA, career choice is often seen as a subjective thing. It'd be like with Asch's conformity experiments where participants aren't even aware that they're conforming because it's subconscious, except even worse because it's subconscious and seen as subjective...
That seems like a very plausible explanation. There could easily be a kind of self-reinforcing loop, as well, like, "I didn't learn fluid dynamics in school and there aren't any fluid dynamics Nobel prize winners, therefore fluid dynamics isn't very cool, therefore let's not award it any prizes or put it into the curriculum..."
At its heart, this is starting to seem like a sanity-waterline problem like almost everything else. Decrease the amount that people irrationally go for novelty and specific prizes and "application is for peasants" type stuff, and increase the amount they go for saner things like the actual interest level and usefulness of the field, and prestige will start being allocated to fields in a more sensible way. Fluid dynamics sounds really really interesting, by the way.
"I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic." (Horace Lamb)
(Indeed, today quantum electrodynamics makes correct predictions within one part per billion and fluid dynamics has an open million-dollar question.)
I've studied Spanish for some time and would be happy to converse with you. I'm not sure if you only want to converse with native speakers. I've been wanting to learn how to talk about LessWrongian stuff in Spanish.
--
Why do you dream of doing Human, Social and Political Sciences?
--
In other words, you're completely mindkilled about the topics in question and thus your opinions about them are likely to be poorly thought out. For example, when you think about it, most of what is called "racism/sexism/etc." is actually perfectly valid Bayesian inference (frequently leading to true conclusions that some people would prefer not to believe). As for AIDS, are you also angry at people opposing traditional morality, since they also help spread AIDS?
Frankly, given your list, it looks like you merely stumbled upon the causes fashionable where you grew up and implicitly assumed that since everyone is so worked up about them they must be good causes. Consider that if you had grown up differently you would feel just as angry at anyone standing in the way of saving people's souls.
--
Where you live is more than just your immediate family.
Well technically one could define "sexism" and "racism" however one wants; however, in practice that's not how most people who oppose them use the words.
That's because usually the individual does fit the trend. In fact these days people tend to under update for fear of being called "racist" and/or "sexist".
So are you also angry about what happened to Watson?
Are you also angry about people beating people without those psychological issues in dark alleys? The latter is much more common. Are you angry about, say, what happened in Rotherham and the ideology that led to its being covered up? What about all the black-on-black violence in inner cities that no one seems to care about, which cops don't want to stop for fear of being called "racist" when they disproportionately arrest black defendants?
Do you know what the word "hate" means? I've seen it thrown around to apply to lots of situations where there is no actual hate involved. Furthermore, in the rare cases where I've seen actual hate, well, like you yourself said later, "emotion is arational" and hate is sometimes appropriate.
Yet earlier you said "I'm against beatings and murder in general, really." Do you see the contradiction here? Do you support some beatings and killings [your example wasn't murder since it was legal] even if they increase utility?
--
I agree they "seem" that way if you only superficially read the news. If you pay closer attention, one notices that (at least in the US) fear of being perceived as "racist" is a much larger cause of people being beaten up in dark alleys (and occasionally in broad daylight). It is the reason why cops don't want to police high-crime (black) neighborhoods, and why programs that successfully reduce crime (like stop-and-frisk) are terminated.
I would argue the exact opposite. Hatred and anger evolved as methods that let us pre-commit to revenge/punishment by getting around the "once the offense has happened it's no longer in one's interest to carry out the punishment" problem. They do this by sabotaging one's reasoning process to keep one from noticing that carrying out the punishment is not in one's interest. Applied against things, i.e., anything that can't be motivated by fear of punishment, all one gets is the partially sabotaged reasoning process without any countervailing benefits.
In fact, I don't think it's possible to be angry at a 'thing' like a disease. In order to do so one must either anthropomorphize the disease or actually get angry at some people (like say those people who refuse to give enough money to research for curing it).
Ah, so you're a socialist?
Eh, I'm not sure I'm an anything-ist. Socialist ideas make a lot of sense to me, but really I'm a read-a-few-more-books-and-go-to-university-and-then-decide-ist. If I have to stand behind any -ist, it's going to be "scientist". I want to do research to find out which policies most effectively make people happy, and then I want to implement those policies regardless of whether they fall in line with the ideologies that seem attractive to me.
But yeah, I do think that it is morally wrong to let people suffer and morally right to make people happy, and I think you can create a lot of utility by taking money from people who already have a lot (leaving them with enough to buy food and maybe preventing them from going on holiday / buying a nice car) and giving it to people who have nothing (meaning they have enough money for food and education so they can survive and try and change their situation). So I agree with taxing people and using the money to provide universal healthcare, housing, food, etc. Apparently that makes me a socialist.
That would increase utility in the very short term, agreed. Of course, it would destroy the motivation to work, thus leading to a massive drop in utility shortly thereafter.
Well, "providing universal healthcare and welfare will lead to a massive drop in motivation to work" is a scientific prediction. We can find out whether it is true by looking at countries where this already happens - taxes pay for good socialised healthcare and welfare programs - like the UK and the Nordics, and seeing if your prediction has come true.
The UK unemployment rate is 5.6%; the United States' is 5.3%. Not a particularly big difference, nothing indicating that the UK's universal free healthcare has created some kind of horrifying utility drop because there's no motivation to work. We can take another example if you like. Healthcare in Iceland is universal, and Iceland's unemployment rate is 4.3% (it also has the highest life expectancy in Europe).
This is not an ideological dispute. This is a dispute of scientific fact. Does taxing people and providing universal healthcare and welfare lead to a massive drop in utility by destroying the motivation to work (and meaning that people don't work)? This experiment has already been performed - the UK and Iceland have universal healthcare and provide welfare to unemployed citizens - and, um, the results are kind of conclusive. The world hasn't ended over here. Everyone is still motivated to work. Unemployment rates are pretty similar to those in the US where welfare etc isn't very good and there's not universal healthcare. Your prediction didn't come true, so if you're a rationalist, you have to update now.
I wasn't talking about providing people with universal healthcare. (That merely leads to a somewhat dysfunctional healthcare system). I was talking about taking so much from the "haves" that you "[prevent] them from going on holiday / buying a nice car".
Word of advice, try actually reading what I wrote before replying next time. Yes, I realize this is hard to do while one is angry; however, that's an argument for not using anger as your primary motivation.
Scandinavia and the UK are relatively ethnically homogenous, high-trust, and productive populations. Socialized policies are going to work relatively better in these populations. Northwest European populations are not an appropriate reference class to generalize about the rest of the world, and they are often different even from other parts of Europe.
Socialized policies will have poorer results in more heterogenous populations. For example, imagine that a country has multiple tribes that don't like each other; they aren't going to like supporting each other's members through welfare. As another example, imagine that multiple populations in a country have very different economic productivity. The people who are higher in productivity aren't going to enjoy their taxes being siphoned off to support other groups who aren't pulling their weight economically. These situations are a recipe for ethnic conflict.
Icelanders may be happy with their socialized policies now, but imagine if you created a new nation with a combination of Icelanders and Greeks called Icegreekland. The Icelanders would probably be a lot more productive than the Greeks and unhappy about needing to support them through welfare. Icelanders might be more motivated to work and pay taxes if it's creating a social safety net for their own community, but less excited about working to pay taxes to support Greeks. And who can blame them?
There is plenty of valid debate about the likely consequences of socialized policies for populations other than homogenous NW European populations. Whoever told you these issues were a matter of scientific fact was misleading you. This is an excellent example of how the siren's call of politically attractive answers leads people to cut corners during their analysis so it goes in the desired direction, whether they are aware they are doing it or not.
Generalizing what works for one group as appropriate for another is a really common failure mode through history which hurts real people. See the whole "democracy in Iraq" thing as another example.
The correct term is social-democrat, actually. Among the different systems, social democracy has very rarely received full-throated support, but it seems to have done among the best at handling the complexity of the values and value-systems that humans want materially represented in our societies.
(And HAHAHA!, finally I can just come out and say that without feeling the need to explain reams and reams of background material on both value-complexity and left-wing history!)
Oh, that's all well and good. I just tend to bring up socialism because I think that "left-wing politics" is more of a hypothesis space of political programs than a single such program (ie: the USSR), but that "bad vibes" in the West from the USSR (and lots and lots of right-wing propaganda) have tended to succeed in getting people to write off that entire hypothesis space before examining the evidence.
I do think that an ideally rational government would be "more" left-wing than right-wing, as current alignments stand, but I too think it would in fact be mixed.
Have some reading material!
<rolls eyes> ...among the various socio-political systems the one I prefer is the best one because it is the best... X-)
Actually, in voting and activism, I'm a full-throated socialist. Social democracy is weaksauce next to a fully-developed socialism, but we don't have a fully-developed socialism, so you're often stuck with the weaksauce.
And as an object-level defense: social democracy, as far as I can tell, does the best at aggregating value information about diverse domains of life and keeping any one optimization criterion from running roughshod over everything else that people happen to care about.
--
Every system that works is covert or overt meritocracy. Social democracy works, so ....
Because the former is what a lot of other people using your rhetoric mean. And assuming that you mean what a lot of other people using your rhetoric mean is a reasonable assumption.
Also, even interpreting what you said as "I am angry about people beating LBGTQA+ individuals", it sounds like you are angry about it as long as it happens at all, regardless of its prevalence. Terrorism really happens too, but disproportionate anger against terrorism that ignores its prevalence has led to (or has been an excuse for) some pretty awful things.
--
False beliefs in equality are also responsible for millions of people being dead, and in fact have a much higher body count than racism.
--
Actually falsely believing in equality of ability => being willing to kill to make equality happen. The chain of reasoning goes as follows:
1) As we know, all people/groups are of equal ability, but group X is more successful than other groups; thus they must be cheating in some way, and we must pass laws to stop the cheating/level the playing field.
2) We passed laws to level the playing field but group X is still winning, they must be cheating in extremely subtle ways, we must pass more laws to stop/punish this.
3) Group X is still ahead, we must presume members of group X are guilty until proven innocent, etc.
No that's not what I'm saying. In the grandparent you said:
My point is that not being able to read IQ-by-race-and-gender studies is likely to lead to a repeat of Mao/Pol Pot. Thus being extremely concerned about being able to read them is a perfectly rational reaction.
Unfortunately, as we've just established you have very false ideas about how to go about doing that. Furthermore, since these same false ideas are currently extremely popular in academia, going there to study is unlikely to fix this.
An excellent way to stop people from being killed is to make them strong or get them protected by someone who is strong. Strong in a broad sense here, from courage to coolness under pressure etc.
Here is a problem. Being a strong protector correlates with having the long list of anti-social-justice attitudes or bigotry (transphobia and so on), because that list reduces to either disliking weakness or distrusting difference / having strong ingroup loyalty, and there is a relationship between these (a tribal warrior would have all of them).
Here is a solution. Basically moderate, reciprocal bigotocracy. Accept a higher-status, somewhat elevated i.e. clearly un-equal social role of the strong protector type i.e. that of traditional men, in return for them actively protecting all the other groups from coming to serious harm. The other groups will have to accept having lower social status, and it will be hard on their pride, but will be safer. This can be made official and perhaps more palatable by conscripting straight males, everybody claiming genderqueer status getting an exemption, and also after the service expecting some kind of community protection role, in return for higher elevated social status and respect. Note: this would be the basic model of most European countries up to the most recent times, status-patriarchy and male privilege explicitly deriving from the sacrifice of conscription.
This is not easy to swallow. However there seem to be not many other options. You cannot have strong protectors who are 100% PC because then they will have no fighting spirit. Without strong protectors, all you can hope is a utopia and hoping the whole Earth adopts it or else any basic tribe with gusto will take you over.
But I think a compromise model of not-100%-complete equality, with a protector role provided in return, should be able to work, as this has always been the traditional civilized model. In recent years it was abandoned for being oppressive, and perhaps it was, but perhaps there is a way to find a compromise inside it.
Well, right here is a nice example:
Would you care to be explicit about the connection between IQ-by-race studies and genocide..?
There is no connection. I'm not trying to imply a connection. The only connection is that they are both things possibly implied by the word "racism".
I'm trying to say that when I say "I oppose racism", intending to signal "I oppose people beating up minorities", and people misunderstand badly enough that they think I mean "I oppose IQ-by-race studies", it disturbs me. If people know that "I oppose racism" could mean "I oppose genocide", but choose to interpret it as "I oppose IQ-by-race studies", that worries me. Those things are completely different and if you think that I'm more likely to oppose IQ-by-race studies than I am to oppose genocide, or if you think IQ-by-race studies are more important and worthy of being upset about than genocide, something has gone very wrong here.
A sentence like "I oppose racism" could mean a lot of different things. It could mean "I think genocide is wrong", "I think lynchings are wrong", "I think people choosing white people for jobs over black people with equivalent qualifications is wrong", or "I think IQ by race studies should be banned". Automatically leaping to the last one and getting very angry about it is... kind of weird, because it's the one I'm least likely to mean, and the only one we actually disagree about. You seriously want to reply to "I oppose racism" with "but IQ by race studies are valid Bayesian inference!" and not "yes, I agree that lynching people is very wrong"? Why? Are IQ by race studies more important to your values than eliminating genocide and lynchings? Do you genuinely think that I am more likely to oppose IQ-by-race studies than I am to oppose lynchings? The answer to neither of those questions should be yes.
That's because most people who say "I oppose racism" mean the latter, and no one except you means the former. That's largely because most people oppose beating people up for no good reason and thus they don't feel the need to constantly go about saying so.
The same is true for terrorism, but if someone came here saying "I'm really angry at terrorism and we have to do something", you'd be justified in thinking that doing what they want might not turn out well.
I'm sure we can agree that terrorism is bad, too. In fact, I'm sure we can agree that Islamic terrorism specifically is bad. So being really angry at it is likely to produce good results, right?
I am very angry about terrorism. I think terrorism is a very bad thing and we should eliminate it from the world if we can.
Being very angry about terrorism =/= thinking that a good way to solve the problem is to randomly go kill the entire population of the Middle East in the name of freedom (and oil). I hate terrorism and would prevent it if I could. In fact, I hate people killing each other so much, I think we should think rationally about the best way to eliminate it utterly (whilst causing fewer deaths than it causes) and then do that.
Then why wasn't it included along with racism/sexism/etc. in your list of things you're angry about in the ancestor?
You do realize no one thinks that. In particular, that wasn't the position Jiro was arguing against.
If you see someone else very angry about terrorism, though, wouldn't you think there's a good chance that they support (or can be easily led into supporting) anti-terrorism policies with bad consequences? Even if you personally can be angry at terrorism without wanting to do anything questionable, surely you recognize that is commonly not true for other people?
It's the same for racism.
"The more you believe you can create heaven on earth the more likely you are to set up guillotines in the public square to hasten the process." -- James Lileks
--
That thing:
Besides, we're talking about "more likely", not "inevitably".
--
Ask them; I'm not an altruist. But I've heard it may have something to do with the concept of compassion.
Historically, it correlates quite well. You want to help the "good" people and in order to do this you need to kill the "bad" people. The issue, of course, is that definitions of "good" and "bad" in this context... can vary, and rather dramatically too.
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don't like, will necessarily cause suffering.
The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.
--
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
Maybe :-) The reason you've met a certain... lack of enthusiasm about your anger for good causes is because you're not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let's just say, it does not always lead to good outcomes.
Don't mind Lumifer. He's one of our resident Anti-Spirals.
But, here's a question: if you're angry at the Bad, why? Where's your hope for the Good?
Of course, that's something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
There is historical precedent for groups advocating equality, altruism, and other humanitarian causes to do a lot of damage and start guillotining people. You would probably be horrified and step off the train before it got to that point. But it's important to understand the failure modes of egalitarian, altruistic movements.
The French Revolution and the Russian Revolution / Soviet Union ran into these failure modes and started killing lots of people. After slavery was abolished in the US, around one quarter of the freed slaves died.
These events were all horrible disasters from a humanitarian perspective. Yet I doubt that the original French Revolutionaries planned from the start to execute the aristocracy, and then execute many of their own factions for supposedly being counter-revolutionaries. I don't think Marx ever intended for the Russian Revolution and Soviet Union to have a high death toll. I don't think the original abolitionists ever expected the bloody Civil War followed by 25% of the former slaves dying.
Perhaps, once a movement for egalitarianism and altruism got started, an ideological death spiral caused so much polarization that it was impossible to stop people from going overboard and extending the movement's mandate in a violent direction. Perhaps at first, they tried to persuade their opponents to help them towards the better new world. When persuasion failed, they tried suppression. And when suppression failed, someone proposed violence, and nobody could stop them in such a polarized environment.
Somehow, altruism can turn pathological, and well-intentioned interventions have historically resulted in disastrous side-effects or externalities. That's why some people are cynical about altruistic political attitudes.
--
Failure often comes with worse consequences than just an unchanged status quo.
You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn't mean that they started it, or that they supported it every step of the way. But they were part of it.
The French Revolution and guillotines is indeed a rarer event. But if pathological altruism can result in such large disasters, then it's quite likely that it can also backfire in less spectacular ways that are still problematic.
As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well thought out, with a lot of care given to the likely consequences, taking into account the lessons of history for similar interventions.
My model is that these revolutions created a power vacuum that got filled up. Whenever a revolution creates a power vacuum, you're kinda rolling the dice on the quality of the institutions that grow up in that power vacuum. The United States had a revolution, but it got lucky in that the institutions resulting from that revolution turned out to be pretty good, good enough that they put the US on the path to being the world's dominant power a few centuries later. The US could have gotten unlucky if local military hero George Washington had declared himself king.
Insofar as leftist revolutions create worse outcomes, I think it's because since the leftist creed is so anti-power, leftists don't carefully think through the incentives for institutions to manage that power. So the stable equilibrium they tend to drift towards is a sociopathic leader who can talk the talk about egalitarianism while viciously oppressing anyone who contests their power (think Mao or Stalin). Anyone intelligent can see that the sociopathic leader is pushing cartoon egalitarianism, and that's why these leaders are so quick to go for the throats of society's intellectuals. Pervasive propaganda takes care of the rest of the population.
Leftism might work for a different species such as bonobos, but human avarice needs to be managed through carefully designed incentive structures. Sticking your head in the sand and pretending avarice doesn't exist doesn't work. Eliminating it doesn't work because avaricious humans gain control of the elimination process. (Or, to put it another way, almost everyone who likes an idea like "let's kill all the avaricious humans" is themselves avaricious at some level. And by trying to put this plan in to action, they're creating a new "defect/defect" equilibrium where people compete for power through violence, and the winners in this situation tend not to be the sort of people you want in power.)
To me it sounds like you're an intense, inspired person who wants to make a great impact and has a start at a few plans for doing it. Way to go!
You assume that studying politics in university tells you a good answer to that question. To me that doesn't seem true.
If you look at a figure like Julian Assange, who actually plays the game and makes meaningful moves, Assange didn't study politics at university.
Studying politics at Cambridge on the other hand will make it easier to become an elected politician in the UK. But that's not necessarily because of the content of lectures but because of networking.
It quite often happens that young people don't speak to older, more experienced people when making decisions about what to study. As your goal is making a difference in the world, it could be very useful to ask 80,000 Hours for coaching to make that choice: https://80000hours.org/career-advice/ You might still come out of it wanting to go to the same program at Cambridge, but you will likely have better reasons for doing so and will be less naive.
--
Getting elected in the UK is certainly a valid move, but it comes with buying into the status quo to the extent that you hold opinions that make you fit into a major party.
I think the substantial discussion about Liquid Democracy doesn't happen inside the politics departments of universities but outside of them. A lot of 20th century and earlier political philosophy just isn't that important for building something new. It exists to justify the status quo and a place like Cambridge exists to justify the status quo.
Even inside Cambridge, you likely want to spend time in student self-governance and its internal politics.
--
To some degree, the idea of a "Friendship and Science Party" has already been tried. The Mugwumps wanted to get scholars, scientists and learned people more involved in politics to improve its corrupt state. It sounds like a great idea on paper, but this is what happened:
According to this account, the more contact science has with politics, the more corrupted it becomes.
--
I think you missed what I see as the main point in "What they might have considered, however, was that there was no valve in their pipe. Aiming to purify the American state, they succeeded only in corrupting the American mind." Not surprising, because Moldbug (the guy quoted about the Mugwumps) is terribly long-winded and given to rhetorical flourishes. So let me try to rephrase what I see as the central objection in a format more amenable to LW:
The scientific community is not a massive repository of power, nor is it packed to the gills with masters of rhetoric. The political community consists of nothing but. If you try to run your new party by listening to the scientific community without first making the scientific community far more powerful and independent, what's likely to happen is that the political community makes a puppet of the scientific community, and then you wind up running your politics by listening to a puppet of the political community.
To give a concrete, relatable figure: the US National Science Foundation receives about 7.5 billion dollars a year from the US Congress. (According to the NSF, it is the funding source for approximately 24 percent of all federally supported basic research conducted by America's colleges and universities, which suggests roughly 30 billion federal dollars are out there just for basic research.)
The more you promote "Do what the NSF says", the more Congress is going to be interested in using some of those billions of dollars to lean on the NSF and other similar organizations, so that you end up promoting "Do what Congress says" at arm's remove. No overt dishonesty need be involved. Just little things like hiring sympathetic scientists, discouraging controversial research, asking for a survey of a specific metric, etc.
Suppose you make a prediction that a law will decrease the crime rate. You pass the law. You wait a while and see. Did the crime rate go down? Well, how are you measuring crime rate? Which crimes are you counting? To take an example discussed on Less Wrong a while ago, if you use the murder rate as proxy for crime rate over the past few decades, you are going to severely undercount crime because of improvements in medical technology that make worse wounds more survivable.
Obviously you can fix this particular metric now that I've pointed it out. But can you spot and fix such issues in advance faster and better than people throwing around 30 billion dollars and with a massive vested interest in retaining policy control?
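The murder-rate distortion described above can be sketched with entirely hypothetical numbers: if you score "crime" by the murder rate, improvements in trauma care alone make crime appear to fall, even when the number of life-threatening attacks is unchanged.

```python
# Hypothetical numbers only: the same number of life-threatening attacks,
# scored by the murder-rate proxy under two different survival rates.

def murders(attacks, survival_rate):
    # Murders observed among attacks that are fatal unless the victim survives.
    return attacks * (1 - survival_rate)

attacks = 10_000  # life-threatening assaults per year, held constant

print(murders(attacks, survival_rate=0.50))  # 5000.0
print(murders(attacks, survival_rate=0.75))  # 2500.0 -- the proxy "halves" with no change in violence
```

Under this toy model, better medicine alone cuts the measured murder rate in half, so a policy evaluated against that proxy gets credit for a drop in violence that never happened.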
When trying to solve something like whether P=NP, you can throw more and brighter scientists at the problem and trust that the problem will remain the same. But the problem of trying to establish science-based policy, particularly when "advocating loads of funding for science", gets harder as it gets more important and you throw more people at it. This is a Red Queen's Race where you have to keep running just to stay in place, because you're not dealing with a mindless question that has an objective answer floating out there, you're dealing with an opposed social force with lots of minds and money that learns from its own mistakes and figures out how to corrupt better, and with more plausible deniability.
Hi,
I am Falk. I am a PhD student in the computational cognitive science lab at UC Berkeley. I develop and test computational models of bounded rationality in decision making and reasoning. I am particularly interested in how we can learn to be more rational. To answer this question I am developing a computational theory of cognitive plasticity. I am also very interested in self-improvement, and I am hoping to develop strategies, tools and interventions that will help us become more rational.
I have written a blog post on what we can do to accelerate our cognitive growth that I would like to share with the LessWrong community, but it seems that I am not allowed to post it yet.
I look forward to reading your post.
Hi all, I'm new. I've been browsing the forum for two weeks and only now have I come across this welcome thread, so nice to meet you! I'm quite interested in the control problem, mainly because it seems like a very critical thing to get right. My background is a PhD in structural engineering and developing my own HFT algorithms (which for the past few years have been my source of both revenue and spare time). So I'm completely new to all of the topics on the forum, but I'm loving the challenge. At the moment I don't have any karma points so I can't publish, which is probably a good thing given my ignorance, so may I post some doubts and questions here in the hope of being pointed in the right direction? Thanks in advance!
Hello and welcome! Don't be shy about posting; if you're a PhD making money with HFT, I think you are plenty qualified, and external perspectives can be very valuable. Posting in an open thread doesn't require any karma and will get you a much bigger audience than this welcome thread. (For maximum visibility you can post right after a thread's creation.)
Hi John, thanks for the encouragement. One thing that strikes me of this community is how most people make an effort to consider each other's point of view, it's a real indicator of a high level of reasonableness and intellectual honesty. I hope I can practice this too. Thanks for pointing me to the open threads, they are perfect for what I had in mind.
I have been a Less Wrong user with an anonymous account since the Overcoming Bias days. I decided to create this new account using my real name.
Hello.
I'm currently attempting to read through the MIRI research guide in order to contribute to one of the open problems. Starting from Basics. I'm emulating many of Nate's techniques. I'll post reviews of material in the research guide at lesswrong as I work through it.
I'm mostly posting here now just to note this. I can be terse at times.
See you there.
Hello!
I’ve lived in Berkeley for about six years. My girlfriend is going to medical school so we’re going to be moving to Boca Raton, Florida (most likely) or Columbus, Ohio in less than a month. I’m sad to be leaving the Bay Area but thrilled to be with my girlfriend when she starts such an exciting chapter of her life. I’m also very fortunate that I can handle nearly all my business online.
I co-founded a startup devoted to making a web game with an old buddy of mine. This same guy introduced me to LW.
Critical thinking and debate have been a focus of mine since I was quite young, so LW fit right into my interests. I'm very interested in instrumental/practical applications of rationality. I've been lurking for many years and finally decided to make an account to get over my fear of online embarrassment, given my unfamiliarity with a lot of the lexicon and protocol on LW.
Some passions of mine are movies, seeking out novel experiences (examples are shooting an AK-47, judging a singing competition, and visiting Pixar), and martial arts.
I’m also interested in effective altruism and AI research but still have a lot of learning to do, especially in the latter.
Welcome!
You may want to check out some of AnnaSalamon's old posts for some things to try as far as applied rationality goes, if you haven't already.
Have you been / are you interested in connecting with the Bay Area Rationalist or EA community while you're still here?
Thanks for the tip! I've read some of her posts but will look into the ones I haven't.
We're going to be moving in about two weeks and are fairly busy before so probably not going to be able to. I regret not going to a Berkeley Meetup while I had more time.
Hello all =)
I have been reading LW for more than a year. In the past I organized HPMOR book club meetups in Kyiv, Ukraine (https://vk.com/hpmor_meeting and https://vk.com/efficient_reading5)
Now I am starting to organize the first general LW meetup in Kyiv; our Google group: http://groups.google.com/d/forum/LessWrong-Kyiv
At the first meetup we will discuss Daniel Kahneman's "Thinking, Fast and Slow", in addition to deciding what we will do in the future =)
Please, if you can, give any useful suggestions about what the first meetup should cover and how it should be run (I have read the LW PDF on how to organize meetups).
Awesome! Note that you can advertise your meetup further using the LW meetup system.
Thanks,
http://lesswrong.com/meetups/1fd
Done =)
Hello there! I mean, here and there, too! I will do my best to come, although I have not read the book. Good luck!
A Challenger Has Arrived! Hello, yes, I'd like to announce that I am successfully existing for the first time in forever. I've been a lurker for quite some time, and have finished Eliezer's book. As I've stepped up my studies and plan to continue doing so, I've decided that scouting for a party to join would be wise.
Right now I'm finalizing my grasp of Rationality: From A.I. to Zombies, and organizing some notes I have on my personal struggle with willpower depletion. I would really appreciate if anyone knows of any site-external sources I could devour, in service to these goals.
From this basic grasp of rationality technique I will be departing to MIRI's research guide, so if you're currently on a quest to join the best, I certainly could use some companions in case I stumble.
Thanks, PhoenixComplex7
Welcome! Re: willpower stuff, I found this guy's writing very helpful several years ago. You can get his free book by putting your email in at the bottom of this page. (More specifics on the willpower issues you are facing might allow me to give more targeted advice.)
Introduction comment, as requested.
I've been coming back to this site over and over again, for one or two years now I would say, for any number of topics, and today it dawned on me that there's something great about this site, the community / comments, and material, and that - maybe - I would like to become a part of it.
One email confirmation later, and the goal is achieved in its entirety.
Right, guys?
sigh
EDIT: One minor technical question... the comment system seems to be more or less a straight port from reddit, correct? But, unlike reddit, comment score starts at 0, it seems. Or did my other comment immediately receive a negative vote, seconds after going live?
Hail zanglebert.
The comments do indeed start at a null score. I also have noticed a "Powered by Reddit" icon in the lower right. That is the extent of my knowledge.
I joined a while ago but don't think I ever posted here. I'd lurked for quite some time here, and at various blogs a degree or so of separation away, since before that. I've mostly link-hopped my way around the sequences and various pieces of fiction, followed folks on Facebook, and recently realized we had a local LW meetup. I'm happy to answer any questions about me, but I never really know what kind of information is relevant to put in an introductory post, so instead I thought I'd make a proposal:
I've seen (for a while) a lot of activity regarding AI / Singularities / Existential Risk within these groups of people. For my own part, I have pretty much no background knowledge when it comes to that. So I was looking to really dig into the book Superintelligence as a way to get a rudimentary understanding of it all.
That said, I find that I definitely get a lot more out of learning when I have people to discuss it with. So, with a bit of encouragement, since this is the "get-to-know-you" thread, I figured I'd put a call out here to see if anyone might be interested in reading (or re-reading) the book along with me and being Skype buddies for the process.
My current plan is to go through it a chapter at a time and discuss / do further research / etc before moving on. Message me if that sounds like something you might be interested in doing!
~Kim
Finally bit the bullet and made an account-- hi people! I've been "LW adjacent" for a while now (meatspace friends with some prominent LWers, hang around Rationalist Tumblr/ Ozy's blog on the sidelines, seems like everyone I know has read HPMOR but me), and figured I ought to take the plunge.
Call me Vivs. I'm in my early twenties, currently doing odd jobs (temping, restaurant work, etc.) in preparation to start a Master's this fall. I'm a historian, and would loooooove to talk history with any of you! (fans of Anne Boleyn/Thomas Cromwell/Victorian social peculiarities to the front of the line, please) I've always been that girl who pays waaaaay too much attention to whether the magic system in a fantasy novel is internally consistent and gets overly irritated if my questions are brushed off with "But magic isn't real," so I have a feeling I'll like the way this site thinks, even if I'm way out of the median 'round these parts in a lot of ways.
Do you find D&D's cast-and-forget system consistent? It was borrowed from Jack Vance's Dying Earth novels, but those felt really weird novels to me.
No! I actually find D&D's system super-frustrating, but then I hate having luck-based elements in magic systems. :P
Hi!
I just want to say I found Stefan Zweig's The World of Yesterday really insightful about that. I used to think that kind of prudishness came from religion. According to Zweig, it was almost the opposite: it came from Enlightenment values. People tried really, really hard to always act rationally (not 100% in our sense, but in the sense of deliberately, thoughtfully, dispassionately) and considered sexual instincts a far too dangerous, uncontrollable, passionate, "irrational" force. Which suggests that Freud was the last Victorian, so to speak.
Hi back!
Actually, interestingly, some Victorian prudishness was encouraged by Victorian feminists, weirdly enough. Old-timey sexism said that women were too lustful and oozed temptation, which was why they had to be excluded from the cool-headed realms of men (Arthurian legend is FULL of this shit, especially if Sir Galahad is involved). Victorian feminists actually encouraged the view of women as quasi-asexual, to show that no, having women in your university was not akin to inviting a gang of succubi to turn the school into an orgy pit (this was also useful because, back then, there were questions about the morality of women). A lot of modern sexism actually has its roots not in anything ancient, but in a weird backlash of Victoriana.
LOL. To quote Nobel Laureate Tim Hunt as of a couple of weeks ago:
Uggghhhh.... that guy. I may not be a scientist, but I saw red when I read that.
I found that particular piece of stupidity especially amusing, since my field is upwards of 55 percent female (at my level; the old guard of people who have been in it since the 60s or 70s is more male) and I have worked in labs where I was the only man.
This quote seems to have been intended as a joke and was taken out of context. A very flawed accuser: Investigation into the academic who hounded a Nobel Prize winning scientist out of his job reveals troubling questions about her testimony
One therefore wonders at man/man, woman/man and woman/woman troubles, which statistically should account for the majority of academic, er, troubles.
He's asserting that most troubles between men and women fall into a particular category. It might be that man/man troubles rarely fall into that category and, with most of that category missing, are less numerous overall.
Well... Having once been infatuated with my supervisor and more than once reduced by him to tears even when my infatuation wore off, I can say this:
It's not people falling in love with people that really reduces group output. Being in love I worked like I would never do again.
It's people growing disappointed with people/goals, or having an actual life (my colleague quit her PhD when her husband lost his job, + they had a kid), or - God forbid! - competing for money. Now that's what I would call trouble.
Feminists of that era were practically moral guardians. In the USA, they closely allied with the temperance movement and managed the double victory of securing women's right to vote and prohibiting alcohol.
I can't track the reference right now, but I recall reading a transcript of a Parliamentary debate where they decided not to extend anti-homosexuality legislation to women on the grounds that women couldn't help themselves.
Welcome to LW! I suspect you'll find a lot of company here, at least as regards thinking in unwarranted detail about fictional magic systems.
What is this "unwarranted" thing you're talking about?
X-)
Thanks! I actually had a VERY long side discussion in an undergrad history course about whether stabbing a person possessed by a dybbuk creates a second dybbuk...
Hi, I'm new here. I found this site while looking for information about A.I. I read a few articles and couldn't help but smile to myself and think, "Isn't this what the Internet was supposed to be?" I had no idea this site existed and I'm honestly glad to have found stacks of future reading; you know that feeling. I never really post on sites and would usually have lurked myself silly, but I've been prompted into action by a question. I posted this to reddit in the Showerthoughts section because it seemed appropriate, but I'd like to ask you (more).
I was reading about the Orthogonality thesis, and about Oracle A.I.s as warnings and attempted precautions against potentially hostile outcomes. I've recently finished Robots and Empire and couldn't help but think that something like the Zeroth Law could further complicate trying to restrain A.I.s with benign laws like "do no harm" or seemingly innocent tasks like acquiring paper clips. It seemed to me that trying to stop A.I.s from harming us while they also complete another task would always end up with us in the way. So I thought perhaps we should try to give the A.I. a goal that would not benefit from violence in any way. Try to make it Buddha-like: to become all-knowing and one with all things? Would a statement like that even mean anything to a computer? The one criticism I received was "what would be the point of that?" I don't know. But I'm curious.
What do you think?
Thank you for this article. I'm finding it still difficult to navigate the site in terms of comments and posts. Would it be possible to edit some more explanation in the "site mechanics" portion of this article to include an explanation of what open threads are and how to use them?
Open threads are for things that aren't important enough for either a toplevel post or a discussion post. You use them just like you used the welcome thread: leave a comment and let people respond :)
Hello people.
I am brand new to this site and really to the topic of rationality in general. A friend recommended HPMOR to me a few months ago and I loved it. I then read Cialdini's 'Influence' on recommendation from these forums, and I am now reading Rationality: from AI to Zombies.
My background is in science, having studied oceanography at university, graduating about ten years ago. I am currently thinking about training as a science teacher. I look forward to becoming better acquainted with this topic, and being involved in the discussions.
Welcome! :D
Welcome!
Hello, all!
I’ve lurked this site on and off for at least five years, probably longer. I believe I first ran into it while exploring effective altruism. Articles that had a definite impact on my thinking included those on anchoring, priming, akrasia, and Newcomb's problem. Alicorn's Luminosity series is also up there, and I keep perpetual bookmarks to "The Least Convenient Possible World" and "Avoiding Your Belief's Real Weak Points."
I earned a B.A. in history, worked for a couple years in a financial planning office, then ended up on the rather weird track of becoming a professional piano accompanist. It turned out to be a far more financially and logistically feasible career move than the other grand idea I attempted at the time (convincing GiveWell I'd be an awesome hire). So piano is what I'm doing now. (GiveWell is admittedly still my longshot/backburner plan B, but I'm focusing all professional development on the music end of things right now).
Some things I've got more than a passing interest in, which I think fit the LW ethos:
Taubman approach. An approach to keyboard technique (and prevention of repetitive-motion injury) that has earned the recognition and interdisciplinary interest of the scientific and medical communities. My personal experience is, "This shit works: it saved my wrists and my music career," and the data indicates my experience isn't just anecdote or placebo effect.
Evaluating the effectiveness of charitable-giving interventions. I went to a highly conservative/libertarian college, where, if I wanted to donate to or support any poverty-alleviation program, I'd better be ready with a 95-point defense of my choice. Or else. It's been a continuing interest of mine ever since, appealing equally well to both my cynicism and idealism.
Finding secular alternatives to the community-building structures, motivational structures, and self-examination/self-change disciplines of religion.
Classical stoicism. Thus far I've found its framework and mindhacks to be a balanced, practical fit for my personality and temperament. I especially appreciate how it hasn't yet sent me into any extreme, detrimental pitfalls as I've tried to apply it. I'd be interested in meeting other people who are trying to methodically apply it to their lives, but I get the feeling we're probably a pretty quiet and weird bunch.
I likely won't comment here much, but I wanted to at least finally make an account, introduce myself, and let you all know I've found the site valuable over the years. I've been making a more concerted effort recently to seek out and connect with individuals who value things I value, and I figured it was high time to drop by the Less Wrong community, as a part of that.
Hello LW World!
I have been reading the writings of Eliezer Yudkowsky for about 2 years now, ever since a friend of mine introduced me to HPMOR. It continues to blow my mind that there is an entire movement and genre dedicated to reason. It's provided a depth of thought that I've always felt different from others for enjoying, and now I can happily say that there's a community for it.
I am currently an unemployed veteran and college dropout seeking to solve the financial problems which prevent me from currently completing my degree. I am halfway finished with an ultrasound tech school and I am also studying programming as a hobby. I'm proud of a lot of my work so far, from making the beginnings of an awesome game on Scratch to completing an advanced challenge on Hackerrank (technically it's incomplete, but it's only the timeout limit on large inputs that I have yet to find a solution for). I'm also learning web design skills on FreeCodeCamp where I have found very supportive mentors and hope to get a basic foot-in-the-door level of skills to gain employment.
What I REALLY wanted to do, but failed at due to financial hardship, is work in neuroscience research. I'm more interested in the cybernetic side of turning science fiction into real scientific discoveries, but AI research is not something I would turn away from, as I believe it has mutually beneficial connections with neuroscience. Fingers crossed, either I can accomplish my neuroscience goals sooner rather than later, or I'll be lucky enough to survive to the point where aging is cured and widely distributed, giving me more than a lifetime to complete them.
The reason I'm posting today in particular is that I wanted to ask whether the Reason-, Cyberpunk-, and Transhumanism-themed poetry I have created would have a place here. I would like feedback from others who enjoy thinking critically about life. That said, the poetry I've made is an art form, and I would only expect feedback from rationalists to the extent that Reason is an art form. Perhaps any concern of that nature is really a remnant of the fallacious "Hollywood Reason" view that Eliezer described, still clinging to me.
Regardless, what I have created is intended to be thought-provoking and entertaining for people who often think about the intricate concepts discussed on LessWrong. Any feedback that would help make the poems more thought-provoking and entertaining would be a great help. Advice on whether there is an appropriate space for such a thing, and on where to begin, is appreciated in advance.
Welcome!
I don't think there's an official rule about poetry. Speaking as a person with over 9000 karma, my intuition is that it would be well received if it has some novel ideas or perspective and is linked from an open thread.
Hello everyone.
My name is Kabelo Moiloa, and I graduated from the Anglo-American School of Moscow three weeks ago. My deep interests are math, computer science, and physics; in fact, I might consider doing a series of posts here on Homotopy Type Theory, since I've been going through the HoTT Book. I first came to this website about four years ago, so I don't remember well how it was then. As I recall, I came here soon after I deconverted from Catholicism, and I have found the discussions and content here fascinating ever since. For example, although I had already rejected theistic morality before reading the articles here, Fake Explanations allowed me to explain why: the idea that morality is "intrinsic to the nature of God" is no more explanatory than "my confusion about this metal plate is explained by the phrase heat conduction." Additionally, the emphasis here on beating akrasia and on achievement led me to pursue commitment devices, productivity systems, etc., which have improved my ability to achieve my goals, although unfortunately I only pursued these late in my senior year of high school. I was also exposed to Cognito Mentoring, which was quite useful.
I remember you, glad to hear it :-).
I love teaching, especially interacting with my students and their thinking, and I love philosophy, especially ethics. Fittingly, I'm a philosophy teacher. I also enjoy politics, history, biology, and the great outdoors.
Hey everyone!
I'm a long-time lurker of this site, but I haven't posted anything before. I've read all the sequences twice over the past few years, along with almost all non-sequence posts. The list of all posts was really not in an obvious location, but I eventually managed to find it!
So I'm new to the idea of actually communicating with people over the internet; I've never actually been a member of any forum before. Though I have a Reddit account, I've only made about ten posts in the year that I've been there. It's really weird; I often find myself thinking I have a response to something I read, then thinking "too bad I can't communicate with them!", completely forgetting that no, wait, I have an account expressly for that purpose.
I've decided that this pseudo-voyeurism of online communities has gone on long enough, so I've joined. I don't know if I'll have anything to contribute, as I'm pretty critical of the value of my own ideas; I once tried to start a blog, decided that everything I could ever want to say had already been said, and deleted it after one post. Maybe I need to impose a comment quota on myself?
In any case, I'm a physics grad student who mostly works in biophysics. I'm also interested in pure mathematics, philosophy, and computer science / artificial intelligence, though I procrastinate too much and don't really know more than the average CS minor. I plan on changing that at some point (he said, ironically).
Hello, everyone!
LW came to my attention not so long ago, and I've been committed to reading it since that moment about a month ago. I am a 20-year-old linguist from Moscow, finishing my bachelor's. Owing to my age, I've been pondering the usual questions of life for the past few years, searching for my path, my philosophy: essentially, the best way for me to live.
I studied a lot of religions and philosophies, and they all seemed really flat, essentially for the reasons stated in some articles here. I came close to something resembling a nice way to live after I read Atlas Shrugged, but something about it bothered me, and after a thorough analysis of that philosophy I decided to take some good things from it and move on, as I have done many times before.
I found this gem of a site through Reddit and Roko's basilisk (is it okay to mention it here? I heard discussion was banned). I am deeply into the whole idea of rationality and nearly all of the ideas presented on this site, but something really bothers me here, too.
The thing is, it seems to be implied that altruism and rationality go hand in hand; maybe I missed some important articles that could explain to me why?
Let's imagine a hypothetical scenario: there is a guy, Steve, who really does not feel anything when he helps other people or does other generally "good" things; he does them only because his philosophy or religion tells him to. Say this guy is introduced to the ideas of rationality, and thus he is no longer bound by his philosophy/religion. And what if Steve also does not feel bad about other people suffering (or even takes pleasure in it)?
What I wanted to say is that rationality is a gun that can point both ways, and it is a good thing that LessWrong "sells" this gun with a safety mechanism, if it is indeed a safety mechanism. (Once again, maybe I missed something really critical that explains why altruism and "being good" is the most rational strategy.)
In other words, Steve does not really care about humanity; he cares about his own well-being and will utilize all the knowledge he has gained just to meet his ends (people are different, aren't they? And ends are different, too).
Or take another scenario: an average rationalist, Jack, estimates that his own net gain will be significantly bigger if he hurts or kills someone (taking into account his emotions, his feelings about humanity's overall net gain, and all other possible factors). Does that mean he must carry on? Or is that a taboo here? Or maybe it is a problem of this site's demographics, and nobody has even considered this scenario (which I really doubt).
I feel that I am diving too deep into metaphors, but I am not yet a good writer. I hope you understood my thought and can make me less wrong. :)
edit: fixed formatting
Welcome, Ozyrus.
This is moral philosophy you're getting into, so I don't think that there's a community-wide consensus. LessWrong is big, and I've read more of the stuff about psychology and philosophy of language than anything else, rather than the stuff on moral philosophy, but I'll take a swing at this.
It seems that your implicit question is, "If rationality makes people more effective at doing things that I don't value, then should the ideas of rationality be spread?" That depends on how many people there are with values inconsistent with yours, and on how much it makes people do things that you do value. I would contend that a world full of more rational people would still be a better world than this one, even if it means a few sadists become more effective for it. There are murderers who kill people with guns, and this is bad; but there are many, many more soldiers who protect their nations with guns, and the existence of those nations allows much higher standards of living than would otherwise be possible, and this is good. There are more good people than evil people in the world. But it's also true that sometimes people can, for the first time, follow their beliefs to their logical conclusions and, as a result, do things that very few people value.
Jack doesn't have to do anything. If 'rationality' doesn't get you what you want, then you're not being rational. Forget about Jack; put yourself in Jack's situation. If you had already made your choice, and you killed all of those people, would you regret it? I don't mean "Would you feel bad that all of those people had died, but you would still think that you did the right thing?" I mean, if you could go back and do it again, would you do it differently? If you wouldn't change it, then you did the right thing. If you would change it, then you did the wrong thing. Rationality isn't a goal in itself, rationality is the way to get what you want, and if being 'rational' doesn't get you what you want, then you're not being rational.
Excellent answer! Yes, you deduced the implicit question correctly. I also agree that this is a rather abstract field of moral philosophy, though I did not see that at first. I don't think, however, that your argument for the world being a better place with everyone being rational holds up, especially this point:
Even if there are more good people than evil people, there is no proof that after becoming "rational" they will not become "bad" (quotes because "bad" is not defined sufficiently, but that'll do). I can imagine some interesting prospects for experiments in this field, by the way. I also think the result would vary between a subject placed in a society of only rationalists and one placed in ordinary society, with "bad" actions carried out more often in the second case, as there is much less room for cooperation.
But of course that is a pointless discussion, as the situation is not really based on reality in any way, and we can't really tell what would happen. :)
That is not so. There is a certain overlap between the population of rationalists and the population of altruists, and people from that intersection are unusually well represented on LW. But there is no "ought" here: it's perfectly possible to be a non-altruist rationalist, or a non-rational altruist.
Hi all,
I'm a recently graduated aerospace engineer. I first came upon LW via HPMOR a couple of years ago, have been through the Sequences once since then, and am currently going through Rationality: A to Z, mostly as a refresher.
I gravitated toward aerospace as a sort of proto existential-risk-mitigation effort, but after speaking with Nick Beckstead via 80,000 Hours and comparing the potential of various fields to mitigate x-risk within the next ~100 years, I've discounted space development relative to other fields and am currently more open to other avenues.
Very interested in learning more computer science, and applied mathematics more generally, but part of what makes me strongly prefer LW over other communities with the same interests is the strong focus on effective, economical implementation of ideas.
Hi all, I live at the LW Boston house, the Citadel. My undergrad and grad degrees were in Biology, and I am switching into programming. I am interested in psychology and cognitive biases. I value self-improvement and continuous learning. I recently started blogging at https://evolvingwithtechnology.wordpress.com.
Hello, my name is Daniel.
I've wanted to join the rationality community for a little while now, and I finally worked up the courage after a brief but informative discussion with Anna Salamon, CFAR's executive director (who was as kind as I was nervous).
I'm working on finishing up a B.S. in Electrical Engineering, and I plan on continuing to a doctorate in some branch of decision or control theory. I also study philosophy, fiction writing, and computer science.
Since becoming aware of rationality in general, and Eliezer Yudkowsky's way of making everything make sense, I've gotten pretty heavily into cognitive psychology and metacognition.
To be frank, I understand that I'm a rank amateur in the field of rationality in general, but I'm looking forward to trying to get better. So if you're downvoting me, or even upvoting me, explaining why in a comment or message would be extremely helpful, so I can take the time to reinforce my positive cognitive pathways, and prune my negative ones.
See you in the threads!
Well met!
My name is Fox, and I am an actor and magician... well... in actuality, I guess those are both the same thing. I know how you all love concision, so I'll try again... *ahem*
Well met!
My name is Fox, and I am a liar. Empathetic to a fault, highly spiritual, and emotionally driven (still an emo boy at heart), I live about as far from consciously as it gets. My main passions are girls, music, and service to others. Core values: love, kindness, beauty, passion, immersion, and evolution.
For the past year I have studied and practiced magick. It is very real to me and has been the lens through which I view the universe, as well as my primary method for navigating life. I have enjoyed many experiences and even some progress, living as such... for a time.
Lately, I have just been working and preparing to reenter school. I find this "being an adult" business baffling and struggle with finances. Throw intangibles into the mix and it's untenable. Which brings me to why I am here: I need to make rational thinking my default state of mind, to help me with goal-oriented decision making and with getting a grasp on the most elusive concept in the world to me: self-discipline.
-Reverend Mother Gaius Helen Mohiam during the testing of Paul Atreides in Dune.
So LW...will you be my Gom Jabbar??
Prediction: you will have a very confused and maybe fun trip.
Hello everyone!
I'm new to the site. I'm a grad student with a science background hoping to learn more about rationality and science. I've read posts on LW for quite some time (~ 3 years). I'm an atheist and a skeptic with some knowledge about theoretical physics.
See you around! ~dajoker
Hi! I am socially inept... There are many things the standard human was born with the capacity to grasp that I never can. The word "autism" appears to be thrown around a lot lately, mostly as a meaningless way of saying that one thinks another person is simply not normal. When I first noticed, two years ago, how heavily users on the internet threw the word around, I identified as autistic for a bit to make conversation more expedient. I am able to comprehend metaphors and similes and such, for some reason, but things like having the capacity to roleplay, or being able to perceive what I should do in any given scenario to maximize the happiness of the human before me, are incomprehensible to me. I like to think I am a purely logical thinker and was born to be such, but I'd rather not start talking about that right now...
My education is pretty poor: eighth grade. I have read next to no books, and the internet is what taught me to speak English as I do today. My English was very basic before, even though it is my only language. For two years I looked up in the dictionary every word I encountered that I couldn't define, until I decided that refining my expression in English for humans' sake was a waste of time and stopped caring.
I feel like I can't express more about myself without delving right into my philosophy, which I used to contest with every mind I came across indiscriminately, only to have them still disagree with me 99% of the time despite my cornering them in argument; and I don't really want to, because I've had such bad experiences with convincing others to think like me.

The downvote system on this website is kind of intimidating as well... My first post here got downvoted almost immediately, and I'm not sure I can tell by whom. I hate systems that enable passive-aggression like that. Even conversing in real life is awful, because others can use petty tricks to try to emotionally manipulate you instead of just explaining, via argument, why you should think like them. It's just masturbation for them; they have no interest in actually convincing you.

I suppose that is one thing I feel I can safely say about my philosophy: I don't see my opinions as just opinions. I see them as an objective rationalization of this universe, one that cannot be disagreed with without simply being wrong. I want to rationalize everything, too, you know. I used to be indoctrinated to the point where I thought simply asking questions was evil. All I'd ever wanted to do was rationalize my entire understanding of the universe, to objectively minimize people's pain and maximize their pleasure, for the sake of forcing the world to tend toward its most rational end as I perceive it... but whatever. I'm still being impertinent with whatever I'm writing here, since I don't think just up and writing out my opinions would be a good idea.
I have very few interests. I really only care about defining right and wrong, and giving my philosophy to others, which I haven't done for a very long time. One day I hope to start expressing my opinions on what is right and wrong in a formal manner just to have done so in my lifetime. I apologize for the entirely vague post... I still haven't really any idea how this site works but if I ever debate users here or something I won't hesitate to express my opinions in their entirety.
Edit: I misunderstood what you said by "rationalize", sorry.
As Polymath said, rationalization means "to try to justify an irrational position later", basically making excuses.
Anyway, I wouldn't worry about the downvotes; based on this post, the people downvoting you probably weren't being passive-aggressive, but rather misinterpreted what you posted. It can take a little while to learn the local beliefs and jargon.
Hi, Hoofwall. Welcome to LessWrong.
I have considered the label "autistic" to describe myself at some points in the past, but now I'm not sure. I may be at another point in the spectrum, or I may be just imagining things. But I can definitely empathize with anyone who struggles to make themselves understood to humans.
I'm confused about one point: your usage of the verb "to rationalize" suggests that you intend a meaning slightly different from the standard one it has in logical jargon. We usually say that someone is "rationalizing" when they make an irrational decision and then, afterwards, make up an excuse to keep feeling good about it. I suspect that's not what you meant when you used that word; it feels like it would have been clearer to use the verb "to reason."
Of course, this is only my speculation. Please correct me if I'm wrong. (Within the rationalist culture prevalent in this forum, correcting other people when they're wrong is socially accepted as something you can do, but also, accepting corrections when you're wrong is something you're expected to do.)
Hello.
My name is Mikhail, and I have been lurking on LW for several months, mostly reading the Sequences. I discovered this site after reading HPMoR, as no doubt many have.
I'm a practitioner of GTD, and I am looking for:
ways to supplement the low-level practices of getting things done that I already understand with meta-algorithms for decision-making and planning
improved tactics and tricks for handling low-level tasks that don't come naturally to me (such as learning languages) and hence can't be handled efficiently by my regular planning/execution process
A bit of personal info: I am a software engineer (15 years' experience, more if one counts tinkering with software during my school years); I live in Norway and am originally from Siberia.
Hi. I've been lurking here for a couple of months, reading up on some of the sequences and so forth, and I made an account because I wanted to post a few things on the discussion board, mainly to do with why I'm pretty convinced that immortality is already a thing, and how that has badly damaged my belief in a utilitarian system of ethics. Finally, I wanted to ask about something to do with FAI: essentially, why X wouldn't work. I'm curious to see how FAI will reveal itself to be more fiendish than I already thought.
We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist, and how you found us.
Very new here. Hopefully I can learn a lot from all of you.
Welcome!
Hello, everyone! I've been lurking for about a year and I've finally overcome the anxiety I encounter whenever I contemplate posting. More accurately, I'm experiencing enough influences at this very moment to feel pulled strongly to comment.
I've just tumbled to the fact that I may have an instinctive compulsion against the sort of signalling that's often discussed here and by Robin Hanson. In the last several hours alone I've gone far out of my way to avoid signalling membership in an ingroup or adherence to a specific cohort. Is this sort of compulsion common amongst LWers? (I'm aware that declaring myself an anti-signaller runs the risk of an accusation of signalling itself but whadayagonnado.)
I'm also very interested in how pragmatism, pragmaticism, and Charles Sanders Peirce form (if at all) the philosophical underpinnings of the sort of rationality that LW centers on. It seems like Peirce doesn't get nearly as much attention here as he should, but maybe there are good reasons for that.
Speaking for myself, (a) I am not good at playing social games, therefore I hate environments where things like signalling are the only important thing, and (b) joining any faction feels to me like indirectly supporting all their mistakes, which I would rather avoid.
Hello, everyone!
I am a long-time lurker and reader of LessWrong, and I have finally worked myself up to making an account and writing some comments. I am looking forward to participating in the discussions more, and hopefully to writing some posts and contributing to the thought-bank here. So far, LessWrong has been a great resource for me, helping me build a sturdier basis for my ideological framework and exposing me to some good new ideas to think about.
For a little bit about myself: I am 29 years old, Russian, with a bachelor's degree in Chemistry and Math and a master's in Nuclear Chemistry from an American university. Currently I live in Russia, working as an instructor in IT / software development for a business analytics software company. The job is pretty much another step of school, only going into the "job experience" slot on the resume instead of the "education" one: we study a topic for a month, then go and teach it to our developers. My first year was our company's software applications, then development and coding; now I am on the databases part. Eventually, though, I am hoping to return to a sciencier sort of work.
Religion-wise, I am an atheist, having formerly gone through all kinds of interesting religious searches (maybe I will make a separate comment about that on the rationalist-origins thread). Politics-wise, I find it hard to classify myself under any traditional views (call me an effective anarchist, maybe?). Or maybe I am hoping for a better set of political ideas to emerge someday.
My interests are the following:
Reading everything I can get my hands on, preferably science and pop-science literature, fiction, and science fiction.
Science and self-education. When I found Less Wrong, it once again sparked my interest in the more arcane parts of IT, and I am currently working through the basics part of the MIRI research guide posted here, while also keeping up with my job-related applied IT studies. In the past, I found myself sometimes venturing into evolutionary theory (I'm still hoping to find the time some day to study evolutionary algorithms and maybe program some fun simulation with evolving pseudo-life), the basics of quantum mechanics (well, that was in my school program), biology, and occasionally philosophy, religion, and applied ethics.
As for less science- and reading-related interests: I enjoy camping, rafting, and the general summery outdoors stuff. In my city, summer is short, so we try to squeeze as much goodness as possible out of it.
Anyways, I am looking forward to having some fun discussions here. Nice to meet you, guys!
Hello,
I am a month long lurker who finally decided to make an account.
I'm 24, and am living as a US expat in Beijing right now. I have a BA in Economics from a top 5 university, where the most important thing I learned was just how little that actually meant. I got pretty disillusioned with academia, and I've only been able to start enjoying intellectual pursuits again in the last year or so; hence, it is nice to find a non-university community where I might be able to discuss interesting ideas without all of the self-important swagger.
I would say the other important thing my econ background has influenced is my rational decision making: I do not vote; I was involved in effective altruism (until I became an ethical nihilist); etc. I think I've experienced some significant emotional blunting from this, and I have mixed feelings about it. Hopefully, being in a community of similarly oriented people (and getting more information about typical outcomes) will help me work through whether this is something I need to address.
I lean somewhat classical-liberal (or pro-market left of center, with significant room for government provisioning for market failure) at the moment, but lately I've fallen into a more libertarian heuristic, which I want to become more aware of and counteract, as I disagree with that political philosophy on several formal issues. Hopefully I can use the resources at LW to recalibrate on this issue in particular.
My interests are pretty broad:
- Public finance / policy
- Game theory / auction theory / voting theory (especially w.r.t. collective decision-making / policy)
- Epistemology (especially regress / the Munchausen Trilemma)
- Dynamics of social identity (especially the ethics of statistical discrimination)
- Aesthetics (especially w.r.t. visual art)
- Psychology and personal identity (especially antipsychiatry)
- Consciousness, continuity of experience, and personhood
- Literature (especially Latin American)
Additionally, I enjoy learning math, though I am not very talented at it (I was a single Algebra/Galois Theory class away from a math degree though). Recently, I've been going back through some old analysis / algebra / number theory books to give it another shot; I'm still bad at it, but it's nonetheless rewarding.
One of the things about LW that seems really awesome is the deep programming knowledge. I enjoyed the few programming classes I took, and look forward to learning more about its applications to modelling decision making.
Anyways, I look forward to engaging with you, and if anyone has anything they want to point me toward here, I'd love the tip.
See Vaniver on decision theory!