A few notes about the site mechanics
To post your first comment, you must have confirmed your e-mail address: when you signed up to create your account, an e-mail was sent to the address you provided, with a link that you need to follow to confirm your address. You must do this before you can post!
Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.
A few notes about the community
If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)
Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask, provide your draft, or summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! There is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes less than 180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone that helped write this post via its predecessors!
Comments (635)
New to the site. LW came to my attention today in a Harper's Magazine article, "Come With Us If You Want To Live (Among the apocalyptic libertarians of Silicon Valley)," January 2015. I hope to learn about rationalism. My background includes psychology, psychometrics, mechanics, and history, but my interests are best described as eclectic. I value clarity of expression but also like creativity and humor. I view the world skeptically, sometimes cynically. For amusement I often speak ironically, and this, at times, offends my listeners when I fail to adequately signal it. I do not hesitate to apologize when I see that I have offended someone. Hello.
Welcome!
Welcome to Less Wrong!
(Wow. So you came here after reading the Harper's article, huh? That's actually pretty surprising to me. It's only one data point, but I feel as though I should significantly weaken what I said here about the article. Color me impressed.)
Hello all. My name's Tom and I'm a second-year undergraduate mathematics student in Adelaide, Australia. I rediscovered LessWrong a few months back after a conversation with friends about charitable donations where I referenced a post here about effective altruism. I had previously read only a few of the Sequences posts, having been directed here by Eliezer's fanfiction, but since signing up I've made my way through about 80% of the major sequences.
If anyone has any questions about my background or interests, please feel free to ask.
Hi everyone!
My name is Rick, and I'm 29. I've been lurking on LW for a few years, casually at first, but now much more consistently. I did finally post a stupid question last week, and I've been going to the Austin Meetup for about a month, so I feel it's time to introduce myself.
I'm a physics PhD student in Austin. I'm an experimentalist, and I work on practical-ish stuff with high-intensity lasers, so I'm not much good answering questions about string theory, cosmology, or the foundations of quantum mechanics. I will say that I think the measurement problem (as physicists usually refer to the question which "many worlds" is intended to answer) is interesting, but it's not clear to me why it gets so much attention.
I come from a town where (it seems like) everybody's dad has a PhD, and many people's moms have them as well. Getting a PhD in physics or engineering just seemed like the thing to do. I remember thinking as a teenager that if you didn't go to grad school, you were probably an uneducated yokel. More importantly, I learned very early that a person can have a PhD and still make terrible decisions or have terrible beliefs. I also formed weird beliefs like "chemistry is for girls" and "engineers ride mountain bikes; physicists ride road bikes". I think I still associate educational attainment too strongly with status.
I've been involved in the atheist and secular humanism communities for close to ten years now. I gradually transitioned from viewing these communities as a source of intellectual stimulation to sources of interesting and relatable people. I'm still involved in the secular humanism club that I started a few years back at UT.
I was vaguely aware of Less Wrong for a while before my roommate showed me HPMOR. After reading through all of that (which had been released at the time), I got more into the site and quickly read all the core sequences. I found all of it to be much more intellectually satisfying than all of the atheist apologetics I'd read in college, and I realized how much better it was for actually accomplishing something other than winning an argument. Realizing how toxic most political arguments are and understanding why I could win an argument and still feel icky about it were pretty huge revelations for me. In the last six months, I've been able to use things that I learned here and made some seriously positive changes in my life. It's been pretty great.
I'm also interested in backpacking, rock climbing, and competitive cycling. A bike race is a competition in which knowing what your opponent knows about you can be a decisive advantage. It's very much a Newcomb-like problem. Maybe I'll start a thread about that sometime.
Hi, I'm Harsh Gupta, an undergraduate student studying Mathematics and Computing at IIT Kharagpur, India. I became interested in rationality when I came across the Wikipedia article on confirmation bias around two years ago. That was pretty intriguing; I searched more and read Dan Ariely's book Predictably Irrational, then his other book The Upside of Irrationality, and now I'm reading HPMOR and Kahneman's Thinking, Fast and Slow. I also read The Art of Strategy around the same time as Ariely's books, and that was a life changer too. The basic background in game theory that I got from The Art of Strategy helped me learn to analyze complex real-life situations from a mathematical perspective. I came to know about Less Wrong from gwern.net, which was suggested by a friend who is learning functional programming. I want to get more involved with the community, and I would like to contribute some articles in the future. BTW, is there any community to-do list?
Hello LessWrong community,
I came to this site after having read the Harper's Magazine article "Come With Us If You Want To Live" by LW member @swfrank (@vernvernvern and I have this in common!). I am 21 years old and a percussionist living in Omaha, Nebraska.
The first rational thought I can recall occurred in Kearney, NE. I was about 8 years old, walking across a soccer pitch on my way home from school. I was singing a modern Christian worship song, looking into the sky. As I stared into space, I realized how meaningless my words were. I was alone and I sang to no one (time seemed to slow; it was a surreal experience). I began questioning the existence of a watchful god (this was a hard thing to do in my highly Christian family). After that I struggled to involve myself in worship. This was a cornerstone event for me, leading to a more rational way of life.
I am now a junior at University of Nebraska at Omaha working toward a percussion performance degree. My diet consists of about 60% Soylent. I look forward to the connections I will make on LessWrong.
I have compiled some individuals who have played a large role in my rationality and progress: Bjork (musician), Omar Rodriguez Lopez (of The Mars Volta), Stanley Kubrick, C.S. Lewis, Ralph Ellison, Friedrich Nietzsche, George Orwell, Ludwig Van Beethoven, György Ligeti (composer), David Lang (composer), Elon Musk, and Steven Schick (percussionist).
Philip Kolbo
I can relate to having musicians in my list of intellectual inspirations. Greg Graffin of Bad Religion was certainly an influence in my developing aspirations to rationality.
Yeah, punk is an inspiration to me as well. You can see that with Omar.
Hello. I’m Mark. I’m a 24-year-old software engineer in Michigan.
I found LessWrong a little over a year ago via HPMOR. I’m working through the books listed on MIRI’s Research Guide. I finished Bostrom’s Superintelligence earlier this week, and I’m currently working through the Sequences and Naive Set Theory. I’m not quite sure what I want to do after I complete the Research Guide; but AI is challenging and interesting, so I’m excited to learn more.
P.S. I’m a SuperLurker™. I find it very difficult to post in public forums. I only visualize the futures where future!Me looks back at his old posts and cringes. If you suffer similarly, I hope you will follow my lead and introduce yourself. Throw caution to the wind! Or, you know, just send me a private message (a simple “hey” will suffice) and maybe we can help each other.
Instead of cringing you can think "wow, I made a lot of progress since". It did the trick for me, but well, YMMV.
You say you anticipate cringing... is that a correct anticipation? Do you currently find yourself frequently beating yourself up for things you've said or written? If so, maybe that's the bug you want to fix first. Reinterpretation can be a good strategy; maybe try to frame your past post differently. For example, despite whatever factors you might find make a post of yours cringeworthy, it seems likely that at least one person found it valuable, interesting, or at least amusing.
Anyway, welcome!
Be honest, do you really actually fear cringing when you re-read your stuff months or years from now? Sounds to me like an invented reason to mask a much more plausible fear: looking foolish in front of others by saying foolish things. Well, in case you do make a fool of yourself, you always have the option of admitting "back then I was foolish in saying that, and I have changed my mind because of X". In this community, being able to do that usually comes with a slight status gain rather than severe status punishment and ridicule, so no need to worry about that.
Hello. My name is Tom. I'm 27 and currently working on a PhD in mathematics. I came to this site by following a chain of links that started with TVTropes of all things.
I have been a fan of rational thinking as long as I can remember. I'd always had the habit of asking questions and trying to see things from every point of view. I devoured all sorts of books growing up and shifted my viewpoints often enough that I became willing to accept the notion that everything I currently believe is wrong. That's what pushed me to constantly question my own beliefs. I have read enough of this site to satisfy myself that it would be worthwhile to make an account and perhaps participate in the community that built it.
Welcome to LW. At one point in my life I would read a randomly selected passage from the Enchiridion before going to sleep every night.
Which of them all?
Presumably that of Epictetus, the ancient Stoic.
Greetings, y’all. I’m very excited to take the plunge into the LW community proper. I spent the last six months plowing through the sequences and testing the limits of my friends’ patience when I tried to engage them in it. Besides looking for people to talk to, I am beginning to feel a profound restlessness at not doing anything with all the new ideas in my head. At 27, I’m not a “level 1 adult” yet. I don’t really have something to protect or a purpose I’m dedicated to. I hope that being active in the community will at least get me in the habit of being active.
My name is Jacob. I was born in the Soviet Union and grew up in Israel. My parents are scientists; my dad is probably top 10 worldwide in his field. I grew up playing soccer and sitting at dinner with students and scientists from around the world, and I hope I actually did realize even as a teenager how awesome it was. I did my Bar Mitzvah at a reform synagogue, but God was never really part of our family conversation; I don’t think that I’ve said a prayer and actually meant it since I was 12 or 13. There are just enough Russian-speaking math geeks in Israel to form a robust subculture, and I was at the top of it: winning national competitions in math and getting drunk the next day on cheap vodka. I had a very strange four-year service in the IDF. I sweated blood for a degree in math and physics that got me a minimum-wage job in the Israeli desert, and then effortlessly breezed my way through a top 20 MBA in the US that suddenly made me a middle-class New Yorker. I work an easy job that leaves me with plenty of energy at the end of the day to play sports, perform stand-up, date, and improve my skills as a rationalist by considering my intellectual biases.
I stumbled on LW after reading an article about Roko’s #$&%!@ of all things, and the last few months were what I saw someone here describe as “epiphany porn”. Even before that, I read a lot on similar themes and took it all very seriously: “Fooled by Randomness” made me quit my job as a day-trader for a hedge fund and “Thinking Fast and Slow” changed my life in several ways, including the choice of car I bought. I’m very happy to start noticing changes in my brain after LW too. For example, I spent a lot of my time in the US arguing with anti-zionists. I just recently realized that the hypocrisy and stupidity I usually find arrayed against me has pushed me into a pro-Israel affective death spiral of my own, that I’m now trying to climb out of. In general, I argue less about politics now and don’t ever plan to vote anymore. I just went to my first OB-New York meetup and hung out at the solstice concert, I hope to become more and more engaged with LWers offline going forward.
The main results of my business school days are several entrepreneurial fantasies about “Moneyballing” things. One recent idea is to set up a personal philanthropy investment fund: people put in X% of their salary, which can be used only for emergencies or charity. This eliminates the psychological pain of giving money, increases giving, makes personal altruism much more focused and effective, and saves on taxes. I also came up with a better matching algorithm for dating websites. Dating in general is at the very top of my interests. While a rigorous model of Bayesian dating seems as unattainable as quantum relativity, I do find that my open-minded approach has gotten me into relationships that I didn’t even believe were an option a few years ago (that’s a discussion I’d love to get to somewhere else on this site).
And finally: where I hope to end up. Perhaps even a year ago I imagined I could be perfectly satisfied living a content middle-class life with a decent job, good relationships and fun hobbies. I realized that the world doesn’t care too much that I was always the smartest person in the room as a teenager, and that I’d do well to dedicate myself to humility. Unfortunately, LW changed that. I see now that things are changing and going to change unpredictably, and that smart people occasionally do make a very non-humble impact. I’m not in a rush to plunge myself into some grand project (like FAI) just for the sake of it, but I do feel that my life is getting too comfortable for comfort. When the waves come, I want to have built a rad surfboard.
Wow, just... wow. *salutes*
Welcome, Jacob!
Hello LessWrongers! After discovering the blog and MIRI research papers through a friend (Gyrodiot) a few weeks ago, I finally decided to register here: I keep seeing fascinating discussions I want to be part of, and I would also like to share my ideas about AI and rationalism.
Currently, I am a first-year student at a French engineering school in computer science and applied mathematics. Before that, I was in "Classes Préparatoires" for two years, an intensive program in mathematics and physics to prepare for engineering school entrance exams. Even if it was quite harsh (basically 30 hours of classes + a 5-hour exam + homework impossible to finish, every week), it gave me some kicks toward becoming a post-rigorous mathematics student ("post-rigorous" here in Terence Tao's sense: http://terrytao.wordpress.com/career-advice/there%E2%80%99s-more-to-mathematics-than-rigour-and-proofs/).
As for my interests, I am currently working with one of my teachers on an online handwriting OCR system based on a model of oscillatory handwriting he developed. But we also explore the cognitive implications of the model, mostly Piaget's idea of assimilation, which can be linked to modern discoveries about mirror neurons. I also self-study quantum computation, even more now that there is a high probability I will be on a summer research internship in quantum information theory.
Of the topics I have seen here on LW and on the MIRI website, I think corrigibility is the one that interests me the most.
That's all folks. ;)
Welcome :D Glad to see you there.
I'm Matt, 32, living in Los Angeles. I first read Less Wrong sometime in 2012, attended the CFAR workshop in February 2014, and am finally now getting around to signing up for an account, because while I am not as wrong as I used to be, I'm still mostly wrong much of the time; I'm working on fixing that. Sometimes I make overly complicated jokes that misuse mathematical language, because I'm a programmer, not a mathematician. Sometimes I host rationalist rap battles, which in practice are a bit more like rationalist group hugs than the thing you saw in 8 Mile. I'm an atheist who will gladly debate educated theists. I like board games and short walks on the beach. I'm @matt2000 on Twitter.
Welcome Matt. :) Can you explain a little more what you mean by rationalist rap battle? Seems fun.
Hello. I am new to this site as well. My background includes physics, mathematics, and philosophy at graduate level, which I am studying now.
I do not identify myself as a "rationalist", but that does not mean that I may not be a rationalist or that I am not trying to follow some of the advice that is given here to be a rationalist. I discovered LW after reading the story "Three Worlds Collide", which I discovered thanks to tvtropes.org. Lately I have been thinking and writing a lot about my own goals, and when I took a look around LW I was surprised to discover that many of the conclusions that I have arrived at independently appear in the sequences and other posts here. Thus I find myself agreeing with many of the things said here, but without having ever considered myself a "rationalist" explicitly. Still now, I'm not sure if "rationalism" is the right label to identify the kind of aspirations that I have and that I have found in this site. But it may be.
Anyway, to me that is unimportant. I think I am likely to find people here with the kind of interests that are very difficult to find in people you meet in person. I hope that I will be able to discuss here some topics that I cannot talk about anywhere else. Thus I have decided to sign up :)
Welcome!
Maybe we can invent a new label for people like you and me who aren't sure if they identify as "rationalists" but nonetheless find themselves agreeing with lots of what's written on Less Wrong anyway :P Quasirationalist or semirationalist, perhaps?
Thanks!
Actually, even though I said it is unimportant, I would like to explore further this particular question at some point. I would like to know: 1) How does my thought differ, if it does, from the major current of thought in LW. 2) Does this difference, if there is any, amount to the fact that I am not as rational as the average LWer is? Or is it due to factors that are neutral from the point of view of rationality (if there are such things)?
I'll write about it when I find the time.
Hello, everybody, and happy belated solstice.
I used to post here from a different account until some time ago, then I decided it was not anonymous enough (also, the username was quite silly) so I deleted it. Here I am again, but this time I'll be more careful about privacy.
BTW, the only reason for the underscores in my username is that the software won't let me use spaces, so don't bother with them. Also, in case you need to refer to me with a gendered pronoun, I'm a "he".
Hi there everyone, happy mid-winter festive period.
I'm V (not from the film), 33, and living in the wilds of the UK, for now. I became very sick when I was 16 and essentially slept through my late teens and 20s so I'm playing catch up with a vengeance. I found the site through a friend and I've been a (silent, shadowy) member for a while but hadn't been able to carve out the time to get through the sequences, until now.
I'm a final year Applied Maths and Computer Science student but I'm also really interested in cognitive science, rationality, philosophy and their applications. I detest being wrong and not understanding things I consider to be important. Rationality is the best tool I've found for helping me get out of my own way and for protecting myself from myself and others. Having lost so much time and having had a generally strange life, I care a lot about getting the most out of the time that I do have, having opinions that reflect reality as closely as possible and making the best quality decisions I can.
At the moment, degree work, trying to move house, and preparing for post-graduation life are swallowing my life, but I do have a couple of side projects on the go: a couple of app ideas which may or may not be useful enough to make; gaining basic programming proficiency (for some value of all three words) and a portfolio of work; a blog about my later-stage recovery and the process of becoming "well"; and a few other bits and pieces.
I have embarrassingly poor grammar and spelling which I'm trying to improve so I'm happy to be corrected if I start spewing word salad. I'm aware I've just invited replies consisting entirely of corrections to this comment and that's o.k.
Been looking for this for a few moments. I don't see much to expand on myself. I found out about LW when someone pointed me to the 1000-year old vampire post which I really liked.
And that's almost enough for now. I tried using the search but I didn't get the thing I wanted. All or fucking nothing I guess: What's the best way to ask a girl out?
"Best" means a lot of things that I'm naturally not aware of otherwise I wouldn't be asking this :) But true, I feel like there's a lot of things to account for in "best" that I might not be realistically able to do in different situations.
If you're asking why I'm asking this, it's just because although I manage a conversation (I do have an almost severe aversion to inane conversations/topics so sometimes I really have nothing to say, and in the case I do I always think "this is stupid but.. fucking conversation") at a level I consider okayish (could work on this too, but that's an entirely different topic) I always feel like "now's not the time". Not sure why. Maybe I'm not getting the right signal or maybe I'm missing it, but I always have this feeling that even though I'd like to do it, I'd probably mess up. Instinctively (or in some cached way) I think I should lead the conversation there but.. well, this is dragging on. So guys (I guess girls too), what's the best way to ask a girl out?
Do you have a girl in mind or do you mean generally speaking?
A specific one in mind? I actually have a few girls I'd like to ask out.
But I'd suppose a general solution would probably serve me better than a specifically optimized one.
I'd like to be greedy and ask for both, as I assume the answer will be different depending on how I answer. So "yes" and "no". :)
I'm guessing here, but it sounds like you have a very common problem, which people usually call "fear of rejection" but I think should be called "no plan for rejection". We instinctively avoid situations we don't feel able to handle, and in anyone able to think ahead, this includes situations that might lead into situations we don't feel able to handle. And that can feel like now's not the time.
A popular method for fixing this is The Rejection Game. Ask for something and get rejected, once per day, for a month. Your requests should be somewhat ridiculous, so you'll get rejected even though you're super polite and respectful. (Ask salespeople for discounts, for example.) After rejection, don't give up immediately, but negotiate a bit - this gives you something to do and should get you rejected more firmly.
Also, it might help to pretend they're boys.
Bonus prize: If you handle rejection really well, you get additional attempts later. Magic!
What makes you draw a line from what I've said to a fear of rejection? I have a philosophy of always trying to stretch my limits, but I know the difference between reckless foolishness and planning ahead. The main plan is to do it; the smaller details are basically the steps. I was rejected today for a position I'd really have liked to have. I'll try to negotiate next time (not on dates, I guess, because that really feels like I'm kissing her ass). I do need something, though. Also great in case the person rejected me for some devious reason. (I'm looking for another job now. No reason to dwell on a no.)
But here's another question in addition to the line-drawing one: assuming I get this rejection thing done and I'm not fearful of rejections, how does that improve my chances? How much am I going to get other than the bonus prize?
It also seems this rejection thing is heading towards quantity and not quality. Also, it sounds like the thing being rejected doesn't have much weight. You'd definitely feel worse if you'd been rejected for something that's important to you. Naturally, that's no reason to dwell too much on it, but sometimes I honestly wonder: if I did a few things better, would I have a better outcome?
EDIT: Also I'm going to try this rejection thing for the laughs of it. Let's see how funny it can get.
The quality of your query isn't entirely unimportant - you can lose a chance with poor quality - but the person asked will usually have lots of other reasons that play into their decision, and most of them you'll never know. In the absence of this information, what you have is an opinion on the quality of your request, so naturally that's what you focus on to optimize; but that doesn't mean this is the decisive variable in the average case.
It makes it easier to actually try. As long as you still feel "now's not the time", worrying about the quality of what you'd say if you actually did is not an efficient use of your attention.
You're right, the rejection game is about quantity not quality, and that's because people have found quantity makes more of a difference.
You're saying that I'm dwelling too much on avoiding rejection even though I'm thinking I'm optimizing my chances, right?
Oh fucking hell. Maybe I did miss a few chances now that I think about it.
I'm Sam, 22. Lurked here for two years after first stumbling upon the Sequences. Since then, I've been trying to curb inaccurate or dishonest thought patterns or behaviors I've noticed about myself, and am trying to live my life more optimally. I'm making an account to try to hold myself more accountable.
The person behind this account is not at all new to the Less Wrong community. He has read all of the sequences multiple times, as well as much of the output of many non-Eliezer figures associated with or influenced by LW, and has been around for more than half the time the site has existed. Suffice it to say he knows his stuff. He used to comment and then stopped for reasons which remain unclear.
The obvious question is, why the new account, especially since I'm not trying to hide who I was? I decline to answer.
Less Wrong is important to me. Reading the sequences caused in me a serious upgrade. LW inspired a lot of meetup groups, one of which I attend every week. It's not the group I wish I was attending, but it's better than the alternative: none. Things fall apart. Roko exploded. Vladimir_M vanished, Yvain seceded; many others of import including Eliezer have abandoned LW. They all have their reasons, some common and others not. There are forces, it seems, driving the best away, leaving behind a smattering of dunces.
I aim to turn the tide. Nate Soares didn't show up until 2013; Less Wrong is still at least theoretically a place that can attract good people. Less Wrong has been navel-gazing about its own demise for a long time, and the wails have gotten stronger while nothing else has. What is more, the widespread perception that "X is dead" is a self-fulfilling prophecy. But I think it can be done. I think I can throw down a gauntlet, for myself and others: the Less Wrong Rejuvenation Project. Why do I think it can be done? Wei Dai is still here. He is my benchmark. The day he goes off to greener pastures is the day I give up.
The name refers to inferential distance, something I want myself and my audiences to keep in mind.
Hello, I am Connor (18) from Victoria, Australia. I have been at LW a few times before but usually only as a brief look after being drawn into it from a link. As of today, I have decided to actually stay and properly look into it all (The Sequences, discussions, etc) and learn.
I am a student of economics and business management. I got interested in rationalism for two fundamental reasons. The first is my upbringing and, by extension, my personality: my father taught me to be highly sceptical of assumptions and claims made by any person or organization without first thinking them over myself (ironically, he himself is relatively irrational, as his beliefs tend toward the overly cynical and paranoid). My questioning of baseless or fallacious assumptions (including from the person who taught me to question them) and my desire to adapt my mindset to the evidence led me to see rationalism as something worth inspecting to improve my thinking process.
Secondly, I am here because of my interest in learning how the world and its many systems work, particularly societal systems (which I am studying) and natural/scientific ones (which I am sadly limited in). While I am not a scientist and have little knowledge of the (hard) sciences, I put high value on those fields and would like to talk to (or at least quietly learn from) people who actually ARE knowledgeable in those areas.
I am a fan of technology (particularly cybernetics, robotics, space technology and energy technology), literature, history, military strategy and art. I also occasionally dabble in philosophy. On a less serious note, I love video games, watch anime and occasionally read fanfiction (HPMoR did not bring me here, but I have read it). Finally, I am a futurist and transhumanist eagerly awaiting the singularity, and an ardent advocate of renewable energy (my father and I plan on starting up an energy company built around algae bio-fuel once we have enough investors).
And that's me, off to continue reading the Sequences.
Hi. I'm a long time lurker (a few years now), and I finally joined so that I could participate in the community and the discussions. This was borne partly out of a sense that I'm at a place in my life where I could really benefit from this community (and it could benefit from me), and partly out of a specific interest in some of the things that have been posted recently: the MIRI technical research agenda.
In particular, once I've had more time to digest it, I want to post comments and questions about Reasoning Under Logical Uncertainty.
More about me: I'm currently working as a postdoctoral fellow in mathematics. My professional work is in physics-y differential geometry, so it's only connected to the LW material indirectly, via things like quantum mechanics. I practice Buddhist meditation, without definitively endorsing any of the doctrines. I'm surprised meditation hasn't gotten more airtime in the rationalist community.
My IRL exposure to the LWverse is limited (hi Critch!), but I gather there's a meetup group in Utrecht, where I'm living now.
Anyway, I look forward to good discussions. Hello everyone!
Hi everyone! I've been a lurker for a while now, this is my first real interaction. Found LessWrong through HPMOR (read the whole thing over a single weekend; read it again a month later).
I'm sixteen and have just graduated from a high school in India (I'm a US citizen, though). Currently applying to American universities, working through some online college courses and Gödel, Escher, Bach; teaching myself Python, writing a novel, and continuing to teach myself Japanese (my 5th language). Also partying shamelessly.
I'm very undecided about my future, but to generalize, I'm probably going to go into either the film industry or physics, while writing fiction on the side. I have no doubt LessWrong can help immensely in each of my pursuits, and I aim to finish reading all the sequences by the end of the year (currently halfway through How to Actually Change Your Mind).
I love this site. At times while reading the articles I have a feeling of obscure deja vu, almost outright indignation. Like someone has stolen MY personal insights, expanded them exhaustively, and posted them online. (Yes, I realize the actual research is decades old and not solely by EY.) I find my own thought patterns in these articles. Some just click instantly, and I understand every aspect. Others I have to reread a few times to really get. Anyone else know this feeling, or does everyone just understand it with ease?
Can't thank my lucky stars enough that a site like this actually exists: it's a veritable compendium for ascending to godhood.
Newcomer; mathematician by species; freethinker, secularist and rationalist by nature. Abrasive and irreverent: if I haven't annoyed at least five pompous people in any given day, it's a day utterly wasted.
Hey... I'm Babblefish. Having posted elsewhere I've been directed to this helpful Welcome thread.
How I got here? friends->HPMOR->Lesswrong blogs-> Project suggestion-> Forum.
Much as I'd love to claim I'm here to meet all you lovely folks, the truth is, I'm mainly here for one reason: I was recently re-reading the original blogs (e-reader form and all that), and noticed a comment by Eliezer something to the effect of "Someone should really write 'The simple mathematics of everything' ". I would like to write that thing.
I'm currently starting my PhD in mathematics (which appears common here), with several relevant side interests (physics, computing, evolutionary biology, storytelling), and the intention of teaching/lecturing one day.
Now... If someone's already got this project sorted out (it has been a few years), great... however I notice that the wiki originally started for it is looking a little sad, (diffusion of responsibility perhaps), and various websearches have turned up nothing solid.
So... if the project has NOT been sorted out yet, then I'd be interested in taking a crack at it. It'll be good writing/teaching practice for me, give me an excuse to read up on the subjects I HAVEN'T got yet, and hopefully end up being a useful resource for other people by the time I'm finished (and hopefully even while I'm under way).
I am here because I figure this is probably a pretty good place to get additional information. In particular: 1) Has "the simple mathematics of everything" already been taken care of? If so, where? 2) Does anyone know what wiki/blog formats/providers might be useful (and maybe free?) and ABLE TO SUPPORT EQUATIONS? 3) Any other comments/advice/whatever?
Cheers, Babblefish.
Hello everyone – I’m a new member of LessWrong. I consider myself to be a rationalist and humanist. I’m interested in applying rational analysis to help the general public understand complex problems. To help achieve this goal, I’ve been working on a wiki-style website to explain the key nuances of various controversial issues.
The concept is designed to provide meaning and clarity to a wide variety of complex issues, rather than simply enumerating the facts as Wikipedia already does decently well.
I’m wondering if: 1) Anyone in the LessWrong community has thought about something like this; and 2) If there is any interest in learning more about this project
Best, WS
I'm not new to the site, but new to actually posting. Long time reader, first time poster, etc. I am a somewhat-regular member of the Los Angeles Less Wrong meetup, and I'm excited to keep learning more about rationality in general and Bayesian probability in particular.
Welcome from the depths of lurking! What made you decide to start posting?
(I'm curious partially because there seem to be a few people who lurk and go to meetups and I don't fully understand the psychology of that.)
Well, I'm actually helping to plan an event for this LA Meetup, and I can't post the Meetup Event Topic Thingy without having Karma, so that's basically what pushed me towards actually posting. Which is funny, because I've been a regular meetup attendee for almost a year at this point.
I discovered lesswrong.com because someone left a printout of an article on the elliptical machine in my gym. I started reading it and have become hooked.
I'm a formally uneducated computer expert. The lack of formal education makes me a bit insecure, so I obsess over improving my thinking through literature on cognitive dissonance and biases, such as books from the library and also sites like this.
Nowadays I get paid to be a middle-manager at technology companies. Most of my career has been in Linux system administration as well as functional programming.
I'm a bit of a health nut. I adopted a whole-food plant-based diet (the "China Study" diet) because it seems most well supported in the literature, although a broad consensus on the topic has not emerged. I base this decision in part on my trust of experts with titles after their names, since I'm too out of my element to read and interpret most of the literature on my own. At the same time I have a personal anecdote that this works well, so those two are enough to convince me for now.
There are times when I find reading about rational thinking rather sobering. It's clear that we were born with irrational, "defective" brains, and that we'd be lucky to make even a small dent in improving our decision making. Improvements seem very hard to come by; I worry that all I'm really doing is learning to distrust my beliefs.
So that's a nutshell full. How's everyone else? :)
Are you aware of Denise Minger's dissection of the China Study?
Yes. I spent a lot of time reviewing critiques of The China Study (TCS), including Minger's. At the end of it I came to the following conclusions.
So, those are my reasons. I admit they're not very satisfying. I'm spoiled by fields where, once you grok the formal proof you can be highly confident that the claim is correct.
No such luck with something as squishy as nutrition, it would seem.
I disagree with your approach (basically, trust authority), but that's just me.
Sounds to me that you're trusting authority that just happens to be of a different sort.
No, I do not. I actually read the papers and see if they make sense. One of my long-standing complaints is that in medical research no one releases the data -- it would be very useful to reanalyze it in a bit less brain-dead fashion.
Then why'd you recommend Minger's criticism? Because as far as I can tell it doesn't make sense.
Makes a lot of sense to me. What is it that doesn't make sense to you?
Let's start with the Sturm und Drang over Tuoli, I suppose. Why aren't they an obvious outlier?
Um, it is.
To quote Minger
Also, to continue quoting Minger,
When you know next to nothing about the topic at hand and the only choice is to trust authority or to rely on your own, almost certainly flawed judgment, I'd go with authority.
When the topic is an important one, like health and nutrition, I'll go learn about the topic.
By "learn", I assume you mean read existing literature on the topic. In the case of health and nutrition (and most other medical topics), high-quality literature is rather sparse, both because of frequently bad statistical analyses and the fact that practically no one releases their raw data--only the results. (Seriously, what's up with that?)
Also around the topic, not to mention that learning necessarily involves a fair amount of one's own thinking.
I agree, which makes relying on authority (and, usually, on mass-media reinterpretations of authority) particularly suspect.
I think the usual explanation is privacy and medical ethics, but my cynical mind readily suggests that it's much harder to critique a study if you can't see the data...
Given that the experts in the field are precisely those learning from and producing that same literature, the fact that the literature is generally low-quality doesn't make me more inclined to trust them. (Though, as bad as academic nutrition science is, conventional wisdom and pop nutrition science seem to be worse.)
It does make it exceptionally hard to gain a good understanding of the field yourself, though. Unlike Lumifer, I'd say the correct move, unless you are yourself a nutritionist or a fitness nerd or otherwise inclined to spend a large portion of your life on this, is to reserve judgment.
You can't -- you've got to eat each day :-/
Ah, the old "choosing not to choose is itself a choice" move. Never was too convinced by that.
You can reserve judgment on the theory while taking some default stance on the practical issue. Depending on where you're standing this might mean the standard diet for your culture (probably suboptimal, but arguably less suboptimal than whatever random permutations you might apply to it), or "common sense" (which I'm skeptical of in some ways, but it probably picks some low-hanging fruit), or imitating people or populations with empirically good results (the "Mediterranean diet" is a persistently popular target), or adopting a cautious stance toward dietary innovations from the last forty years or so (about when the obesity epidemic started taking off).
In terms of statistics and data, yes, the papers they produce are fairly low-quality. In terms of domain-specific knowledge, however, I'd trust an expert over pretty much anyone else. That being said, I do agree with you here:
Although I prefer trusting expert authority to making my own judgments on unfamiliar topics, gaining a good-enough understanding to figure out which experts to trust is still hard, especially with so many conflicting conclusions out there. This being the case, the strategy you propose--reserve judgment--is precisely what I do.
That doesn't help you when experts disagree.
I'm skeptical this is a great strategy for topics in general.
Nutrition, for example, doesn't appear to be the kind of topic where you can just learn its axioms and build up an optimal human diet from first principles. It's far too complicated.
Instead you need substantial education, training, experience and access, as well as a community that can help you support and refine your ideas. You need to gather evidence, you need to learn how to determine the quality of the evidence you've gathered, and you need to propose reasonable stories that fit the evidence.
Since I haven't made health and nutrition my career most of these things will be hard or even impossible for me to come by. As such, my confidence in the quality of any amateur conclusions I come to must necessarily be low.
So, the most reasonable thing for me to do is trust authorities when it comes to nutrition.
And rightly so :-) This is an approach that should be reserved for important topics.
I think you're setting the bar too high. What you describe will allow one to produce new research and that's not the goal here. All you need to be able to do is to pass a judgement on conflicting claims -- that's much easier than gathering evidence and proposing stories.
In nutrition, for example, a lot of claims are contested and not by crackpots. Highly qualified people strongly disagree about basic issues, for example, the effects of dietary saturated fat. I am saying that you should read the arguments of both sides and form your opinion about them -- not that you should apply to the NIH for a grant to do a definitive study.
Of course that means reading the actual papers, not dumbed down advice for hoi polloi.
General advice: learn causal inference. Getting strong causal claims empirically is not so simple...
What article was that?
http://lesswrong.com/lw/xk/continuous_improvement/
I do economics, working on an interesting problem that might involve computer logic and recursion, but I am not a computer logic and recursion man. Thought to write a series of articles on economics aimed at building up to my current confusion, then thought to post them somewhere, would be convenient if audience with some knowledge of computer logic and recursion...
...oh.
~12 articles in, should be fun....
New to this site... Have studied very little about logic and philosophy starting with some big famous papers that talk about how we know nothing for certain (thanks, Descartes), going through whether All Ravens are Black, studying the Perfect Island argument, learning about Famine, Affluence, and Morality, and ending somewhere along the lines of whether justified true belief is knowledge. That is to say, I'm not that educated on logic or rationality, but entertaining ideas is a great hobby of mine.
I came to Less Wrong because I found it on Harry Potter MOR (I haven't read HPMOR, or HP for that matter, but I find both interesting nonetheless, and I just got really excited when I found that a site like this existed.).
My beliefs: I am a theist, and I do not affiliate with a religion or political party. Of course, that is to say, the mark of an educated mind is to be able to entertain ideas without fully accepting them. :) I also like to assume that the majority of the population is evil and has ulterior motives, but that's just me... I'm a high school student who's just looking for something to write about and something to learn about. Just a new perspective altogether.
Nice to be here.
Welcome! I just want to comment on the "everyone is evil" idea - "Never attribute to malice that which is adequately explained by stupidity." Or broken incentive systems. Or something in that vein. :p
There seem to be a lot of other high school students on this site lately. If you like this stuff, you may also like the International Baccalaureate class Theory of Knowledge, which you can often take as an elective even if you're not an IB student.
Kind of curious about your theism, don't feel required to answer: A lot of nonreligious people who believe in a god are deists or pantheists. Are you either of those? If not, would you be willing to give more detail about your beliefs?
Also, I'm kind of starting to wonder if some people don't really like classifying themselves into groups. Is the reason you don't affiliate with a political party because you want one that better matches your positions on policy, or because you wouldn't associate with one even if you agreed with them on all policy proposals?
Most people define "evil" as "wants evil things", not "has evil revealed preferences". If you're looking at social behavior, we all have ulterior motives (I want to talk about things regardless of how annoyed a listener is, I want a strong support structure so that if something goes wrong I can get help, I want people to entertain me), but the actions those motives lead to are pretty low on the scale of bad stuff, somewhere close to EY's dust speck.
See, as far as my beliefs go, I have a strong religious background... Catholic elementary and middle school (I go to a nonsectarian public high school now), Hindu dad, Protestant (Lutheran) mom... I mean, I generally end up changing my mind every year or so, but right now I believe that God exists as the Universe working within itself... and that as each of us lives, we each experience God... I don't know, I can't seem to get my head wrapped around the idea of a nonexistent god because of my strong religious background. Not very "rational", I guess, but that's just me personally, and there's really no should or shouldn't as far as faith goes, so I've just been rolling with it. So I've sort of just been changing my perspective based on what I learn and hear about the world.
I don't know if that really affiliates with deism or pantheism, really, but if what I explained above affiliates with one of them, would you (or anyone) explain how?
And as far as political parties go, there was a time when I tried to identify as Republican (though I really would be more of a conservative Democrat) because I was tired of saying "No affiliation." It also kind of seemed like a fun little experiment, because then I would be going against pretty much everyone else (most of the people I know tend to be Democrats). I couldn't really hold out that long because, I don't know, being affiliated with Republicans--or Democrats, for that matter--makes people regard you as some political freak and not merely a person who just agrees with one party more. Another thing: when I found myself affiliating with Republicans, I found that I began to care more about which party supports which position, and I feel like that's something that just shouldn't matter.
In the end, I'm also somewhat ignorant and not very confident about my positions just yet either.
And as far as ulterior motives, saying that I don't trust people could be seen as my ulterior motive to not have to be generous and charitable (it's a pretty lame excuse to not empathize with charities sometimes.).
That matches my interpretation of your stated beliefs.
Most of the atheism stuff on this site has more to do with a god that is a discrete being with supernatural capabilities than the thing you describe. However, if the main reason that you're not an atheist is that you have trouble picturing a godless universe, and you change beliefs based on what you learn and hear about the world (good work, by the way), chances are good that you'll end up being an atheist if you spend enough time on this site. ;)
If you actually want to clarify your beliefs, it could help to imagine some different worlds and see whether they count as having God in them or not, in order to consider what constitutes the absence of God. If there's no scenario that counts as God not existing, then I'm not sure what your belief that "God exists" is supposed to represent, and what information about the world someone could derive from that belief, given that it was true.
Thanks so much for the data about party affiliation!
Also, if you count subconscious desires to act in one's own interest as "ulterior motives", you may like what Robin Hanson on Overcoming Bias has to say about signaling.
I don't like classifying myself into groups. You try to crawl into a pigeonhole and you get scrapes and bruises, and sometimes things get torn off...
As a 2001 IB Diploma Graduate, I have to disagree very strongly with this advice (unless the curriculum for the Theory of Knowledge course has changed substantially over the last 15 years).
I remember taking this course and being immensely frustrated by how almost every discussion was obviously just disagreement about semantics. This completely killed my interest in epistemology and philosophy, it was only when I read the "Human's Guide to Words" sequence several years later that I realized there were people who were thinking seriously about these issues without getting into pointless discussions about whether items are rubes or bleggs.
Courses in mainstream philosophy that get stuck on confusion about the meaning of words have the effect of turning rigorous thinkers away from thinking about philosophical questions. As for myself, if it hadn't been for reading Overcoming Bias years later, the IB course on Theory of Knowledge could have permanently killed my interest in epistemology.
It's been better than that so far (first few weeks). We haven't argued much over meanings of things yet.
The one disappointment is that I get really defensive every time we discuss whether doing whatever empathy tells you to do is moral, because that's half of the argument that says autistics are evil mass murderers (not actually the position of anyone in the class), and I get mildly annoyed when people mischaracterize utilitarianism or have clearly never heard of it before. (The situation in which all the available options are rule-violating and you choose the utility-maximizing one is different from the situation in which all the high-utility options are rule-violating, and you violate the rules and then choose a low-utility action.)
Hi! Lesswrong first came to my attention when I read HPMOR. I took a 2-year course in Knowledge and Inquiry - which includes critical thinking and epistemology (also includes philosophy of science). I was a Christian but reading some articles on Lesswrong and reading counter-arguments to Christianity convinced me otherwise (trying to reduce confirmation bias and trying to falsify the belief of Christianity).
Pardon me for taking this opportunity to express one concern I've had for more than a year. I'm a college student, and I am concerned that I am not smart enough to expect a net gain in utility from aspiring to rationality (added in edit).
I don't do well in Math (about 60th percentile for multivariable calculus), but consistently do relatively well in Physics, Chemistry, Engineering and Programming modules. (consistently in top 8 percent of students in top university in Asia). I'm in a double degree in Chemical Engineering and Business and on track to receive at least Second Upper in both (First Class in one and Second Upper in the other seems to be the most likely outcome, though of course I am striving for First Class in both. I am usually too pessimistic when it comes to grades and honours).
Yet I find it difficult to multiply two 2-digit numbers in my head. I always forget what numbers I was trying to multiply and the progress of the multiplication so far. I tried Dual N-Back and had to work for half an hour to pass n=2. I can't remember numbers and always make tons of errors in my mathematics work (not switching signs for one or more terms when factoring out a negative number, for example, or just plain getting stuck).
I'm worried that my fluid intelligence just isn't enough. I'm also quite sad at the expectation that my fluid intelligence will decrease throughout my adulthood. I can't find any convincing evidence (maybe a study or something) that fluid intelligence cannot be fully described by mathematical ability (if effort exists). Should I aspire for rationality or am I too stupid?
Thanks!
Edit: I'm also in another predicament - I am no longer a Christian, but I still go to church every week. I treasure the friendship and companionship of my Christian friends. They are really nice and caring people. I cannot predict reliably what their reactions would be to me revealing the current status of my beliefs. I meet them only once a week in church and if I were to stop going to church, our friendship would most likely perish.
There are other benefits to going to church as well: Here, church is a marketplace for contacts and relationships. I believe going to church would help me in the future if I were to go into business.
However, my parents and all my relatives descended from my paternal and maternal grandparents are Christians, and most of my extended family beyond that are Christians too. My parents are devoted Christians, and it would break their hearts to find out that I am no longer one. My relatives would judge me and proclaim me a failure. Most of our Asian community would do the same (where I'm from, it is considered odd, or mad, not to have a mainstream religious belief; we are categorized by religion as much as we are categorized by race). Even if I were to succeed financially, they would say that I am not to be trusted, because I am somehow immoral for rejecting Christianity.
Would anyone care to offer my some advice?
...
You are not too stupid.
You are really, really, seriously, not too stupid.
That's something that you might want to work on, but it's not a general intelligence failure. There are some tricks that can be learned (or discovered) and employed to multiply by specific numbers more quickly; alternatively, practice will help to speed up your mental multiplication.
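For example (my own illustration, not something from this thread), one such trick is the difference-of-squares shortcut: when two numbers sit equally far from a convenient round midpoint, their product is the midpoint squared minus the distance squared, which is often much easier to do in your head. A quick sketch of the idea:

```python
# Difference-of-squares trick: (m - d) * (m + d) = m^2 - d^2,
# where m is the midpoint of a and b, and d is the distance from m.
# Handy mentally when the midpoint is round, e.g. 47 * 53 = 50^2 - 3^2.

def multiply_via_squares(a, b):
    """Multiply a and b using the identity (m - d)(m + d) = m^2 - d^2."""
    if (a + b) % 2 != 0:
        raise ValueError("midpoint trick needs a + b to be even")
    m = (a + b) // 2      # midpoint of a and b
    d = abs(a - b) // 2   # distance of each from the midpoint
    return m * m - d * d

print(multiply_via_squares(47, 53))  # 2491 (= 2500 - 9)
print(multiply_via_squares(18, 22))  # 396  (= 400 - 4)
```

The point isn't the code, of course; it's that once you know m and d, you've replaced a two-digit multiplication with one easy square and one tiny subtraction, which is far gentler on working memory.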
Thanks for the encouragement!
I will try my best to work through the sequences. I have just finished map and territory and mysterious answers to mysterious questions. I noticed that many articles in the sequences confuse me at times because I can think of multiple interpretations of a particular paragraph but have no idea which was intended. Also, many actions/thoughts of Harry in HPMOR confuse me. I might have interpretations of the events but I don't think those interpretations are likely to be correct. Is this normal?
I have edited the post though, I think that saying that I am on track to receive First Class Honours in both is too optimistic. I can say with quite a high degree of certainty that I am on track to receive at least Second Upper in both. But then again, I tend to be too pessimistic when it comes to grades and honours.
I just really don't get why I don't do well in math, which I assume would be the best measure of one's fluid intelligence. Things such as why dividing by zero doesn't work confuse me, and I often wonder at things such as the Fundamental Theorem of Calculus. It seems that my mind lights up with too many questions when I learn math, many of which are difficult to answer. (My professor does not have much time to meet students for consultations, and I don't think I want to waste his time.) It seems that I need to undergo a suspension of disbelief just to do math, which doesn't seem right given that a lot of it has been rigorously proven by loads of people much smarter than me. (But yes, I understand there is Gödel's theorem as well.) Is this normal too?
The thing is, I can't find any convincing evidence (maybe a study or something) that fluid intelligence cannot be fully described by mathematical ability (if effort exists).
Thanks again for your encouragement!
This seems normal to me. What is intended is very often not an easy question to answer.
The mere fact that you have been accepted for and expect to pass a double degree tells me that you are really not too stupid. (I'm not actually sure what the difference between Second Upper and First Class Honours is - I assume that's because you're referring to the education system of a country with which I am not familiar).
Theory: You had a poor teacher in primary-school level maths, and failed to learn something integral to the subject way back there. Something really basic and fundamental. Despite this severe handicap, you have managed to get to the point where you're going to pass a double degree (which implies good things about your intelligence).
I... don't actually know. Throughout my entire school career, I was the guy for whom maths came easily. I don't know what's normal there.
Actually, it may be possible to narrow down what you're missing in mathematics. (If we do find it, it won't solve all your math problems immediately, but it'll be a good first step)
Let's start here:
Define "division".
Wow! Thank you so much for your time and effort in typing out that reply!
Well, about 3-5 percent of the best students in a cohort can expect to get First Class Honours. It basically means the 97th percentile, or the 95th percentile, depending on the quality of the students. The 75th to 95th percentile can expect to get Second Class Honours.
I must admit that this question stunned me. I don't actually know. What first came to my mind is that it is some sort of algorithm (case 1: two integers that divide cleanly, case 2: two integers that divide to make a fraction, case 3: an unknown ...) that has useful applications (e.g. it is useful to know that you can divide 6 apples by 3 people to allocate 2 to each person). This is my shot at a definition: division is an operation that gives the ratio of number/function F and another number/function G. The ratio can be determined by seeing how many of G can be added together to comprise F. It can be a fraction/real number/complex number/function. Argh. I am stumped. This definition seems like Swiss cheese.
Which implies that I can, tentatively, estimate you to be in the top 10% of people who are accepted for a degree. That's really good.
...I think we've found the start of the problem. Your foundations have a few holes.
Dividing X by Y, at its core, means: I have X objects and I want to place them in Y exactly equal piles; how many objects do I place per pile? (At least, that's the definition I'd use.) In this way, the usefulness of the operation is immediately apparent; if I have six apples, and I want to divide them among three people, I can give each person two apples.
I can use the same definition if I have five apples and three people; then I give each person one and two-thirds apples.
This also works for negative numbers; if I have negative-six apples (i.e. a debt of six apples) I can divide that into three piles by placing negative-two apples in each pile.
Division by zero then becomes a matter of taking (say) six apples, and trying to put them into zero piles. (I hope that makes the problem with division by zero clear).
And yes, there is a fancy algorithm that I can put X and Y in and get the quotient out... but that algorithm is not a particularly good basic definition of division. (Interestingly, I note that your definition jumps straight to setting out separate cases and then trying to apply a different algorithm to each individual case. This would make it very hard to work with in practice; I've worked with division algorithms on computers, and they're far simpler, conceptually, than what you had there. If that's what you've been working with, then I am really not surprised that you've been having trouble with maths).
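The piles picture can be written out as a short sketch (the function name is my own illustration, and I've used Python's `fractions` module so the five-apples case comes out exact rather than as a rounded decimal):

```python
from fractions import Fraction

def divide_into_piles(objects, piles):
    """Dividing X by Y: place X objects into Y exactly equal piles;
    return how many objects land in each pile."""
    if piles == 0:
        # Six apples into zero piles: there is no sensible per-pile count.
        raise ValueError("cannot divide into zero piles")
    return Fraction(objects, piles)

print(divide_into_piles(6, 3))   # 2 apples each
print(divide_into_piles(5, 3))   # 5/3, i.e. one and two-thirds each
print(divide_into_piles(-6, 3))  # -2, a debt of two apples per pile
```

The zero-piles branch is exactly the division-by-zero problem described above: the question itself has no answer, so the function refuses rather than inventing one.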
Now let's see how far this goes...
Define "multiplication", "addition", and "subtraction".
Thank you so much for this, CCC. You really made my day.
I think I overcomplicate things. When I read your answer, I was thinking, (seriously no offense because I know you are really smart) I don't know for sure that this definition works for complex numbers. I was wondering how I could conceptualize it.
And then I was thinking that mathematics relies on definitions and deductive reasoning and intuition cannot give the certainty of deductive reasoning, thus it might be a fallacy to think that something simple and intuitive is an accurate model of mathematical reality... then I remembered that it was taught in kindergartens even... Sucks to have my mindset, doesn't it?
I also keep thinking that I can't be sure that I covered all possible cases with my definition - another major problem of mine.
X*Y : I have Y sets of X objects, how many objects do I have? Works with fractions, and negative numbers (thinking in terms of debts).
X+Y : I have X objects. I am given Y objects. How many objects do I have? Again works with fractions and negative numbers. It's easy to visualize imaginary numbers as another type of object 'x', and I am given y objects. So I have x + y imaginary objects and X + Y real objects.
X-Y : I have X objects. Y objects are taken away from me. Again, same question, and works with fractions and negative numbers, and having 'x' and 'y' objects helps me deal with imaginary numbers.
What I've been wondering is why y - y1 = m(x - x1) works but m = (y - y1)/(x - x1) does not include the point when x = x1. After learning what you've taught me, it is intuitive that these two equations are very different (in terms of giving and taking apples).
But before today, it shocked me to think that we can't always manipulate algebra by dividing both sides by something, and I have to be extremely careful. Then it makes me wonder what other exceptions to manipulation there are, and what kind of deductive reasoning is in use here, if there are exceptions.
I also worry that teachers and professors have not been telling me all the exceptions to different types of manipulations, and that I don't know the limits of my mathematical knowledge, or whether any of it is even completely true. I guess it's similar to how a rotten apple makes the entire basket go bad.
It does; complex numbers are just another type of number. We'll get to them shortly.
To be fair, sometimes the intuitive answer is wrong; one does have to take care. But sometimes, as in these cases, the intuitive model does work.
Exactly.
Perfect.
You could do it that way, and it leads to the correct answers, but I think it's fundamentally problematic to see complex numbers as intrinsically different to real numbers. (For one thing, real numbers are a subset of complex numbers in any case).
Right.
There's only one that I can think of off the top of my head; if x^z=y^z, this does not mean that x=y (i.e. we can't just take the z'th root on both sides of the equation). This can be clearly demonstrated with x=2, y=-2 and z=2. Two squared is four, which is equal to (negative two) squared, but two is not equal to negative two.
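That counterexample is easy to check directly; here is the x=2, y=-2, z=2 case written out:

```python
# x**z == y**z does not imply x == y: the z'th root cannot
# simply be taken on both sides of an equation.
x, y, z = 2, -2, 2
assert x**z == y**z == 4   # both squares are four
assert x != y              # ...but two is not negative two
```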
Now, as to complex numbers. Let me start by asking you to define a "complex number".
My best guess: A ball with a radius X and a rotation Y. Inflate it when multiplying with a real number. Rotate it when multiplying with an imaginary part.
//My thoughts: Rotation of objects? another type of object that interacts with ordinary numbers in multiplication and division? i is a number that can be visualised running perpendicular to a real number line. Euler formula?
//I have Y objects. I can allocate them to X sets and get X objects in each set. X is the root of Y. If I owed Y objects, then I can allocate ... Ok I don't know where to go from here.
//A complex number is a number of objects, where some or none of those objects are roots of debts.
Okay, those are all - well, I think I can kind of see some relation to complex numbers in there, but it's very vague.
So, let me describe how I understand complex numbers. To do that, we'll have to go right back to the very basics of mathematics; numbers.
Imagine, for a moment, an infinite piece of paper. (Or you can get a piece of paper and draw this, if you like; you won't need to draw the whole, infinite thing, just enough to get the idea)
Take a point, nice and central. Mark it "zero".
Select a second point (traditionally, this point is chosen to the right of zero, but the location doesn't matter). Mark it "one".
Now, let us call the distance between zero and one a "jump". You start from zero, you move a jump in a particular direction, you get to "one". You move another jump in the same direction, you get to "two". Another jump, "three". Another jump, "four". And so on, to infinity. These are the positive integers.
Now, consider an operation; addition. If I apply addition to any pair of positive integers, I get another positive integer. Adding any two of these numbers gives me a number I already have; I can create no new numbers with addition alone.
However, I can also invert the addition operation, to get subtraction. If I want to find X+Y, I hop X jumps from the zero point, then Y more jumps. But if I want to find X-Y, I must jump X jumps to the right, then Y jumps to the left; and this gives me the negative integers. Add them to the mental numberline.
At this point, multiplication gives us no new numbers. Division, however, does.
You will now notice, there are still gaps between the numbers. To fill these gaps, we turn to division; X/Y gives us a plethora of new numbers (1/2, 2/3, 3/4, 4/5, so on and so forth), hundreds and millions and billions of little dots between each point on the numberline. These are the rational numbers.
Is the numberline full yet? Hardly; it turns out that the rational numbers are so small a proportion of the numberline that it's still more empty space than marked point. I could say that there's billions of irrational numbers for every rational number, but that severely underestimates the number of irrational numbers that there are.
But let's add all the irrational numbers as well. (If you're actually drawing this, just take a ruler and draw a line across the page, such that all your integers fall on the line).
This line, then, is the famous numberline. I'm sure you've seen it before, on classroom walls and similar. It contains all the real numbers and, now that we've added the irrational numbers, it is full; there is no space on the line where another number can be added.
Now, let's consider squaring. The square of one is one. The square of any positive number greater than one is an even greater positive number (for example, two squared is four). The square of any positive number between zero and one is a positive number closer to zero (0.5 squared is 0.25).
The square of zero is zero.
The square of any negative number is equal to the square of the corresponding positive number; thus the square of negative two is four.
Therefore, four has two square roots; 2 and -2. Similarly, one's square roots are one and minus one.
So, a question then emerges; where are the square roots of minus one?
They cannot be on the numberline. There is no space for new numbers on the line, and the square of every number on the line is a positive number (or zero).
Let us call the square roots of minus one i and -i (somewhat arbitrary notation that was used once and stuck). Where do we put them on the line?
Since the line is full, we cannot put them on the line. If you place the line such that the zero is in front of you, the positive numbers head off to the right, and the negative numbers go to the left, then i is found one jump directly up from zero. Similarly, -i is one jump directly down from zero.
So, they are numbers, but they are not on the number line.
And now that we have placed i and -i, we can apply the same operations as we used earlier.
Addition: adding 1 is a jump to the right. Similarly, adding i is a jump upwards. There is a 2i two jumps above zero; a 3i three jumps above zero, and so on.
In fact, by following the same steps as were used to construct the original, real number line, we can create an imaginary number line at right angles to it; so that we can point to, say, 2.5i, or even pi i.
Then, if we want to find the point where (say) 3+4i is, we first jump three jumps to the right, then we jump four jumps up; adding the numbers 3 and 4i. 3+4i is thus a clearly defined point on the numberplane (since it's no longer one-dimensional, "numberline" is not exactly accurate anymore).
Adding and subtracting complex numbers on this plane is perfectly straightforward (though actually describing what i apples look like is beyond me). Multiplication follows the rules for multiplying additive expressions; that is, (a+b)*(c+d) = ac+ad+bc+bd. So, therefore:
(3+4i)*(2+5i) = (3*2)+(3*5i) + (4i*2) + (4i*5i) = 6 + 15i + 8i + 20*i*i
But since i is defined such that i*i=-1, that means:
(3+4i)*(2+5i) = 6 + 15i + 8i + 20(-1) = -14 + 23i
Voila, multiplication.
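The product above can be double-checked with Python's built-in complex type, where j plays the role of i (an illustration, not part of the original comment):

```python
# (3+4i)*(2+5i) expanded term by term: 6 + 15i + 8i + 20*i*i = 6 + 23i - 20
a = 3 + 4j
b = 2 + 5j
assert a * b == -14 + 23j  # matches the hand computation above

# The general pattern: (p+qi)*(r+si) = (pr - qs) + (ps + qr)i
assert a * b == (3*2 - 4*5) + (3*5 + 4*2) * 1j
```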
I recommend chapter 22 ("Algebra") of volume 1 of The Feynman Lectures on Physics. Here's a PDF.
My summary (intended as an incentive to read the Feynman, not a replacement for reading it):
We start with addition of discrete objects ("I have two apples; you have three apples. How many apples do we have between us?"). No fractions, no negative numbers, no problem.
We get other operations by repetition -- multiplication is repeated addition, exponentiation is repeated multiplication.
We get yet more operations by reversal -- subtraction is reversed addition, division is reversed multiplication, roots and logarithms are reversed exponentiation. These operations also let us define new kinds of numbers (fractions, negative numbers, reals, complex numbers) that are not necessarily useful for counting apples or sheep or pebbles but are useful in other contexts.
Rules for how to work with these new kinds of numbers are motivated by keeping things as consistent as possible with already-existing rules.
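The repetition idea in that summary can be made concrete for positive integers (a sketch under the assumption that we start, as Feynman does, from counting whole objects; the function names are mine):

```python
def mul(x, y):
    """Multiplication as repeated addition: y copies of x, added up."""
    total = 0
    for _ in range(y):
        total += x
    return total

def power(x, y):
    """Exponentiation as repeated multiplication: y copies of x, multiplied."""
    result = 1
    for _ in range(y):
        result *= x
    return result

assert mul(4, 3) == 12    # 4 + 4 + 4
assert power(2, 5) == 32  # 2 * 2 * 2 * 2 * 2
```

Reversing these loops is exactly where the new kinds of numbers force their way in: running `mul` backwards (division) demands fractions, and running addition backwards (subtraction) demands negatives.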
Scholastic math is a different beast. I can say that a lot of professors have issues with the "standard" math curriculum. I have taught university calculus myself and I don't think that the curriculum and textbook I had to work with had much to do with "fluid intelligence".
Sounds like one source for your troubles. It's a lot harder to succeed at school math and go through the motions if you have unanswered questions about why the method works (and aren't willing to blindly follow formulas). By all means bring your questions up to the professor. If he's teaching, there's probably some university policy that he be available to students for a certain amount of hours outside of class (i.e. it's part of his job). You lose nothing by trying. Even an e-mail wouldn't be a bad idea in the last resort. In my experience, professors tend to complain about students who never seek help until they show up the day before the final at their wits' end (or, worse still, after the final to ask why they failed). By that point it's too late.
We like our multiplication rules to work nicely and division by zero causes problems. There's no consistent way to define something like 0/0 (you could say that since 1 x 0 = 0, 0/0 should be 1, but this argument works for any number). With something like 1/0, you could say "infinity", but does that then mean 0 x infinity = 1? What's 2/0 then?
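A quick sketch of why no single value works for 0/0 (illustrative, in Python):

```python
# Every candidate q satisfies q * 0 == 0, so "the number that,
# times zero, gives zero" picks out no unique answer.
candidates = [0, 1, 2, 3.5, -7]
assert all(q * 0 == 0 for q in candidates)

# ...which is why division by zero is simply left undefined:
try:
    1 / 0
except ZeroDivisionError:
    print("1/0 is undefined")
```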
A very easy way to improve your writing would be to separate your text into paragraphs. It doesn't take any intelligence but just awareness of norms.
Math.stackexchange exists for that purpose.
Not everybody is good at math. That's okay. Scott Alexander who's an influential person in this community writes on his blog:
Math is about abstract thinking. That means "common sense" often doesn't work. One has to let go of naive assumptions and accept answers that don't seem obvious.
In many cases the ability to trust that established mathematical findings are correct, even if you can't follow the proofs that establish them, is a useful ability. It makes life easier.
In addition to what CCC wrote http://math.stackexchange.com/questions/26445/division-by-0 is a good explanation of the case.
I hope you don't mind that I have now separated my comment into paragraphs. It's such an obvious problem in hindsight.
Thank you for your reply! It encouraged me a lot!
Accepting feedback and directly applying it is great :)
While yes, that can make life easier, it also means that if the reason why you can't follow the proof is because you're misunderstanding the finding in question, then you're not applying any error checking and anything that you do that depends on that misunderstanding is going to potentially be incorrect. So, if you're going into any field where mathematics is important, it can also make life significantly harder.
It's hard to put into words what I mean. There is a certain ability to think in abstract concepts that you need in math. Wanting things to feel like you "understand" them can be the wrong mode of engaging with complex math.
That doesn't mean that understanding math isn't useful, but it's abstract understanding, and trying to seek a feeling of common sense can hold people back.
On the contrary, failing to feel common sense is usually a sign that you don't really understand what's going on. Your understanding of an abstract concept is only as good as that of your best example. The abstract method in mathematics is just a way of taking features common to several examples and formulating a theory that can be applied in many cases. With that said, it is a useful skill in math to be able to play the game and proceed formally.
There's an anecdote about a famous math professor who had to teach a class. The first time, the students didn't understand. A year later, he taught it again. Learning from experience, he made it simpler. The students still didn't understand. When he taught it a third time, he made it simple enough that even he finally understood it.
I will concede that in practice it can be expedient to trust the experts with the complications and use ready-made formulas.
This doesn't seem to be true for anything that's normally analyzed statistically: the stock market, for example, or large-scale meteorology.
I... think I learnt math in a very different way to you. If I didn't feel that I understood something, I went back until I felt that I did.
I do not understand the difference between an "abstract understanding" and a "feeling of common sense". Is a feeling of common sense not a subtype of an abstract understanding (in the same way that a "square" is a subtype of a "rectangle")?
For folks who post here morale and akrasia are usually much bigger problems than brain hardware.
No one is smart enough.
But if you mean, specifically, smart enough to
then I think the question is kinda backwards. "Am I too stupid to try to improve my thinking?" -- it's like "am I too sick to try to improve my health?" or "am I too weak to try to improve my strength?" or "am I too poor to try to get more money?".
Now, no doubt all those things are possible. If you really can't reason at all, maybe you'd be wasting your time trying to reason better. And there are such things as hospices, and maybe some people are so far in debt that nothing they do will get them out of poverty.
But those are unusual situations, and someone who is headed for a good result in a challenging subject at a good university is absolutely not in that sort of situation, and if the stuff on Less Wrong is too hard for you to understand the fault is probably in the material, not in you.
A fine example of the kind of "easy" task human brains (even good ones) are shockingly bad at. I just attempted a randomly-chosen 2-digit multiplication in my head. I got the wrong answer. Am I just not very intelligent? Well, I represented the UK at two International Mathematical Olympiads, have a PhD in mathematics from the University of Cambridge, and have been gainfully employed as a mathematician in academia and industry for most of my career. So far as I can tell from online testing, my IQ is distinctly higher than the (already quite impressive) Less Wrong average. It is OK not to be very good at mental arithmetic.
(Having said which: If there were something important riding on it, I'd be more careful and I'm pretty sure I could do it reliably. I did a few more to check this and it looks like it's true. So I may well in fact be better at multiplying 2-digit numbers than you are. But the point is: this is not something you should expect to be easy, even if it seems like it should be. And the other point is: Even if you are, in some possibly-useful sense, less intelligent than you would like to be, that is no reason not to aspire to rationality. And the other other point is: It's clear that your intelligence is, at the very least, perfectly respectable.)
Thank you for the reminder that precision in language is very important. I learnt that in the Knowledge and Inquiry course I was enrolled in. Thank you also for taking time and effort to type out that reply. It is deeply comforting and a great encouragement to me.
Dear All (or whatever is the appropriate way to address the community here),
Reading Slate Star Codex kindled my interest in this community. I do not (yet) consider myself a Rationalist, largely because I don't put a disproportionately high value on the truth value of statements as opposed to their other uses, but I might be something of a fellow traveller, because I think we have one thing in common: curiosity and the desire to investigate and analyze everything.
About me: not actually Dutch (although European, never been to the USA), my nickname is a bit of an in-joke I cannot explain without compromising my privacy. ESL, but hopefully fluent enough.
Things I would like to discuss and please guide me to the right places for this:
1) Why do you place such a high value on the truth value of statements as opposed to their other uses? For example when you are grieving for a loved one, don't you rather want to hear some comforting, soothing half-truths?
2) Same, with a focus on religion. Why do you care so much about whether they are true, as opposed to caring about whether they are socially useful or harmful, for a huge variety of purposes and optimization goals?
2/B) Shouldn't a species with a generally Low Sanity Waterline rather construct something along the lines of a least harmful / most useful Designer Religion (parallel: designer drugs), as opposed to trying to overcome it entirely? What would be the ideal features, goals, and deliverables of a proper Designer Religion?
3) How can we approach the problem of ego-centrism / narcissism rationally, which is NOT the same problem as selfishness or egoism? It is rather the problem of a disproportionate focus / attention on the self, which can be entirely coupled with unselfish altruism, for example giving to charity but focusing not on the recipient but on your own virtue. This is a problem, and I think a growing one; in politics, narcissism or ego-centrism has traditionally been a problem of the Left, and the most intelligent conservatives and religious writers (Chesterton, Burke, Oakeshott, Lewis etc.) can be seen as anti-narcissists, but they were not systematic or principled enough - and ignored narcissism on their own side, of course. This deserves a rational analysis, but I don't even know where to begin! Is there something like a narcissism test, for example?
4) Value judgements and personal choices. Is the Future You always right? You face the choice between going to the gym to lose weight or stay in comfortably and read. Your short term goals conflict with your long term ones. Your time preference conflicts with your other preferences. Current You would feel better staying in, Future You prefers to not be overweight. Generally it is said wise people who have self-control and whatnot, respected people choose the preferences of Future You. But if you keep pleasing Future You, you will very literally never be happy. And if you keep pleasing Current You, you end up an unhealthy addicted trainwreck. What is the rational strategy?
5) Testosterone and masculinity. I used to be the typical intellectual "gamma rabbit" man who dislikes it; see Carl Sagan on testosterone poisoning. I used to be influenced by Redpillers to the opposite, then I realized they are, how to put it, not the kind of people I want to take my advice from. Vox Day does a "great job" of inadvertently convincing people like me to not want to have ANYTHING to do with people like them. Now I stand confused in the middle. Right now I try to play both sides of the game, be a good husband and dad at home and a fierce fighter in the boxing gym (the keyword is "try", as in, fiercely trying not to collapse from exhaustion during sandbag work). I don't know if anyone has tried to analyze this rationally, what is best, etc.
6) Discuss Jack Donovan. Dude be crazy. Also, intelligent and writing well-researched stuff. Also, he is evil. What not to like?
7) Thomas Aquinas. Theist or not theist, he was a genius. Even if you see theology as a form of fantasy fiction, he was leaps and bounds the best, most structured, most logical fantasy writer. You want superhuman machine intelligence? It will probably have to cross through the phases of very high human intelligence. One phase of your AI will be "AIquinas".
8) Pet topic: how to un-fuckup Eastern Europe? I intend to live there, so quite motivated. Example: how to convince people that thinking in categories of players and suckers is not such a good idea or cooperation is a good one? Is there such a thing as escaping the corruption spiral?
This is a great question, I think about this a lot too. My intuitions are: a bit of reaction, e.g. getting in touch with the glorious past. This might work w/ e.g. Poland/Lithuania, may work even on Russia, if Russia remembers how the Novgorod republic worked. But Russia is a hard nut to crack.
But yes once there is a society-wide defection norm, it is hard to get out of.
Isn't that what Putin is doing? I am not sure this is a great idea. The past glories tend to be associated with nationalistic wars.
Another issue is what would unfucking entail -- turning East Europeans into Scandinavians? National cultural characteristics tend to be pretty persistent :-/ Otherwise, the canonical answer seems to be a long period of civil society, rule of law, etc. I am not holding my breath.
Russia is a super interesting special case. An interesting alternative history to ponder, re: Russia, is what would have happened had Novgorod predominated and not Moskva. Novgorod was sort of "the Lowlands of the East" in terms of the way they did things. Moskva was quite culturally nasty, and they got ahead by being basically the tax collectors for the Mongols.
One solution to this is to develop, through force if necessary, a small group of people where cooperation is enforced, then expand that group. For example, anarchy advances to despotism when a single powerful despot dominates and prevents anyone but him from using force. City-states advance to empire when a single city (e.g. Rome) conquers them and forces cooperation within its borders (Pax Romana). The analogy might be for a rich, powerful Russian with a clean reputation to make lots of friends who also have a clean reputation and go found a city somewhere in unincorporated Russian land with an able, honest police force and strongly enforced cooperation norms. Of course, in this age you win with industry, so maybe you'd also want lots of smart people starting software companies.
(Or why start it on Russian land, even? Russia is one of the coldest places on Earth, right? Is just moving everyone who doesn't like corruption out of Russia a viable solution?)
Are there anonymous online forums where Russians can discuss corruption?
It is, and is in fact what happened once the Iron Curtain fell. (This is an oversimplification, obviously).
Are you referring to the collectivization of agriculture in Russia? X-D
Ain't no such animal.
Anonymity is on the speaker's end, not on the forum's end. But you might be interested in Alexei Navalny, who is politically active on the anti-corruption platform.
On the subject of AIquinas, there's a story: The Quest for St. Aquin.
And while I'm thinking about Aquinas, I remember I once wrote this pastiche of the method:
Whether the composition of the Summa Theologiae was an act of bizarre monomania divorced from reality?
Objection 1: The Angelic Doctor was learned in all of the theology and scripture that preceded him, and drew it into a single coherent work that has not been superseded. Therefore, this was a valuable and mighty deed, and not an act of bizarre monomania divorced from reality.
Objection 2: The Church has blessed his work and canonised its author. Therefore, etc.
On the contrary, It is written that the author himself, after seven years' labour, cast his work aside, saying that it was of straw, and did not pick up his pen again before he died soon after.
I answer that, It was an act of bizarre monomania divorced from reality. For it is written that there is only One Holy Book, the manuscript of nature, the only scripture which can enlighten the reader. And the Summa makes no reference to anything but the writings and philosophical speculations of the past. Therefore, it fails to read of that Book which alone can enlighten the reader.
Furthermore, the form in which the Summa is written, listing for each point of doctrine objections, contrary objection, verdict, and refutation of the opposing objections, lends itself to argument in favour of any view whatever; in contrast to the method of logic and experiment, which does not lend itself to argument in favour of any view whatever, but only (save for our fallible natures), in favour of that which is true and can be tested. Therefore the Summa proves no point of doctrine, but rather provides only a form of catechism to be recited in favour of the official doctrine.
Reply to Objection 1. The writings of the past are valuable as a source of truth, only in so far as they ultimately rest on observation of nature. Neither theology nor scripture rest upon observation of nature.
Reply to Objection 2. Those who themselves value a work, do not by that act prove the value of that work.
I can handle my feelings on my own, and I don't need someone to lie to me to comfort me. Fully accepting reality allows processing of emotions much better.
Being conscious about ideology is useful but I don't think it's very useful to think in terms of religion. For Muslims rules about how inheritance works are part of their religion. For Christians that's not true.
Effective Altruism does fulfill some social functions of religion. It doesn't need a God to do so, nor does it forbid its followers to believe in Gods. If you want spiritual experiences, there are various practices that don't require any decision to believe in Gods, and they might even be better at providing spiritual experiences than Christian religion.
Of course. Academic psychology attempts to measure a variety of traits.
Hello all, I'm new to this site. I've stumbled across this website a few times, and have been interested in its implications on philosophy. I am here in a position of scepticism about the claims and projects this site wishes to advance. I suspect most of my posts in the recent future will be critiques of other things found on this website. I hope I make some friends, and not too many enemies.
What do you understand those to be?
I do not fully know yet.
What do you mean when you say you are skeptic of ideas that you don't know?
You do not need to fully understand something to approach it with skeptically.
Yes, but then it says more about your general approach to things you don't understand then it says something about the subject.
You also didn't answer the question. What do you actually mean when you say that you are skeptical?
No; it tells you about my approach to LessWrong based on what I know of LessWrong. I hope you are familiar with the word skeptic. If not, I recommend you read a dictionary entry on it, and perhaps look up its usage in literature. If you mean "what precisely do I mean when I say I am approaching LessWrong skeptically", I mean that I will be reading carefully through articles on LessWrong, looking for potential flaws and failings, and generally maintaining a high degree of doubt over anything said or implied.
I have to add that this welcoming thread isn't very welcoming.
This is generally referred to around here as "maintaining good epistemic hygiene", and it's considered a fairly normal practice. There's no particular need to give it a special name like "skepticism", especially when that word already has a philosophical meaning.
Moreover, if you come onto any website (not just LessWrong) and say something like "I am here in a position of scepticism about the claims and projects this site wishes to advance," naturally people will think you are referring to specific claims. If they then ask you which claims you are referring to, and you say "I don't know," it's only expected that people will react with confusion and (probably) will not warm up all that much to you. It's almost like a sort of bait-and-switch; you start off (seemingly) claiming one thing (either explicitly or implicitly) and then reveal that you were talking about something else all along. We have a name for that on this site as well: logical rudeness.
In general, saying (or implying, at least) in your first comment on a new site you are joining that you disagree with many of its claims is not likely to lead to welcoming responses. This is not because residents are trying to be unpleasant; rather, it is because they are simply following the flow of the conversation. Consider the following exchange:
B's response is something of a non sequitur, and in fact does not address what most people would perceive to be the meat of A's comment: that A is skeptical of many of the claims this site has to offer. More realistic would be the following conversation:
And if you look closely at the first two comments in this thread, you'll see that this is exactly what happened. Nothing hostile going on. If A then goes on to reply "I don't know", well, then people might start to find A's position slightly strange. But there's no "unwelcoming" vibe going on here, I don't think.
(But since you are correct that no one actually welcomed you, let me be the first: Welcome to LessWrong!)
I am aware of the philosophical meaning. If you don't mind, I'd prefer to just use regular terminology rather than your site-specific terminology. I've been around the block of debating sites, and none of them have gotten so defensive when I've simply stated I'm approaching their claims skeptically. Stating that you wish to approach something skeptically without stating exactly what you are approaching seems sensible to me.
Also, it seems rather silly to me that your response to me effectively saying "I feel unwelcome" is "Every reply has been legitimate!!!!". I didn't say anyone had been unreasonable; I just said I feel unwelcome. And your reply certainly hasn't changed that.
This is no debating site. It's a site for rational discourse about how to reason. As such, we talk about the subject of how to reason, not to defend something, but because we care about how to reason, and about your particular way of reasoning.
You said that you don't understand what the website is about, and people have tried to explain it to you. If you don't want to understand the local terminology, you won't understand LW.
You didn't; you said people acted unwelcoming. That's different from saying you feel unwelcome, even by conventional standards of language.
Then you should be aware that the way in which you used the term is not in line with its philosophical meaning.
This was not, in fact, your original wording. From your original comment:
Specifically, you singled out "this site", i.e. LessWrong, as the one whose claims you were approaching skeptically, suggesting that there was something in particular about LessWrong which you found disagreeable. The connotations of your original comment and the ones you are offering now are radically different, even if they may be denotatively similar. The practice of picking up on (and sending) said connotations is a crucial element of any social interaction, so if people are apparently interpreting your words incorrectly, you should take that as evidence that you were unclear and seek to be more clear in the future, rather than waste time defending your original wording. A simple "Sorry, you misunderstood me; this is what I actually meant" would have sufficed.
Again, your original wording:
This is not a statement about your own state of mind; rather it is a claim of what (presumably) you regard as an objective aspect of this thread (whether it is "welcoming" or not). Again, your time could better be spent simply providing a clarification rather than arguing that said clarification is what you said in the first place. No need to bring up "I didn't say this; I said that"; instead, just say "I meant to say that".
As a more general statement: LessWrong as a community places extremely great emphasis on clear communication. Often, we find that a good majority of disagreements can be avoided simply by having all participants state their position clearly in the beginning, rather than having said position remain unclear or nebulously defined, eventually devolving into arguments about the definition of a word, or some such. If you view this thread in light of this, you'll see that none of this is intended as an attack, as you (seem to) have been perceiving it as. We are simply trying to encourage clear communication, and clean up misunderstandings.
I'm familiar enough to know that different people use it to mean different things. Asking people to explain in detail what they mean is called "tabooing" on LW. It helps with rational thinking.
Of course you are skeptical about the value of explaining what you mean. That's alright. It takes mental effort to value clear thinking, and most people are not used to engaging in that effort.
This might seem unwelcoming because I don't let you easily get away with a vague statement and instead confront you on an intellectual level. But that's not the point; I welcome you by engaging with you.
Yeah, you would not make a good host if you welcomed your guests by interrogating them. 'Of course you are skeptical about the value of explaining what you mean' - what on earth does this mean? 'It takes mental effort to value clear thinking, and most people are not used to engaging in that effort' - great concealed insult. Not quite obvious enough to make you look bad, but with enough "I'm superior to you"-ness to put me down. 'This might seem unwelcoming because I don't let you easily get away with a vague statement and instead confront you on an intellectual level' - nope, it's unwelcoming because you are excessively pedantic, and because you aren't very nice (e.g. the concealed insult).
As a note, I have neither the time nor the patience to look through everything linked to me. Also, how do you quote on this website?
Being a good host means creating an environment in which the right people feel welcome. On LW the right people happen to be people who like to explain how they reason.
You started by saying you are skeptical about this website's way of handling things.
I answered in this website's standard way of handling things: asking you to taboo a term you used, without specifically using the word "taboo", because it's internal jargon.
As you said at the beginning, you are indeed skeptical of this website's ideas. Tabooing happens to be one of them. It's a new concept for you, and for you, being skeptical is not about philosophical skepticism but about having a high bar for adopting new concepts.
What you call "pedantry", some people call "clear communication".
I don't want to sound condescending, but to understand the discussions, you may have to. This is not an absolute rule, but it is a good rule of thumb that when someone links you somewhere, it's a good idea to at least click on that link.
Quotes are written by prefacing whatever you want to quote with a "greater-than" character: ">". For instance, "> Hello." would appear as

> Hello.
EDIT: Also, note that this notation only works if you begin your quote on a new line. Using a ">" symbol in the middle of a paragraph, for instance, won't do anything.
Hey everyone! I'm a longtime lurker, but I've never gotten around to making an account before now. I think my introduction to this site was actually someone linking to the Baby-Eating Aliens story a few years ago, which I guess isn't a common way to find this site. I've since read all of the sequences twice, and most of the other posts. Recent (unfounded, I hope) discussions about the site dying have made me finally get an account.
I'm a physics PhD student working in biophysics and computer simulations, and I also read philosophy and psychology in my free time. Hopefully I'll have some interesting things to contribute; maybe a few posts or mini-sequences about just how useful learning a programming language is to your ability to think and plan, or about the gulf between the scientific methods employed by physics versus biology. Or maybe some clarifications on the interpretations of quantum mechanics. Hopefully there's something I can say that hasn't been said already and much better by someone else, even if it's just links to interesting articles I find as I scour the net.
In any case, Less Wrong has been insanely useful to me over the past few years. Reading it is how I was introduced to Anki, sleep hygiene, methods of avoiding procrastination, and all sorts of useful information I have successfully employed in my daily life.
Hi all,
I've been following EY and LW for about four years now. I'm fairly new to posting, though. I started out as a "republican" in elementary school, then turned into a "libertarian" in high school because I didn't care for many conservative positions. Then an "objectivist" in college, because I didn't care for the fact that libertarianism only extended to politics and not ethics. Then I became frustrated with the Objectivist community and their inability to adapt to the real world, so I became an "all the people I've met who self-identify as one of these labels have turned out to be really obnoxious, so I really don't want to convolute discussions by using a label"-ist. It wasn't until recently that I discovered Rationalism, and so far it has been the most accurate label and also the most complete system.
My end-game is to end death (and, if entropically possible, reverse it). Which is a pretty big practical problem. As such, I don't have a ton of interest in many of the ethical questions, because more often than not my answer is: "If we can end or reverse death, it doesn't matter." Short-term, my goal is to become rich enough to retire fairly early and have a significant amount of money that can be used to fund various worthy causes and allow me to continue this path full-time. I'm probably 75% of the way there. When I'm not trying to build wealth, most of my free time is spent tinkering with various AI algorithms, exploring number theory, or building prototypes of various gadgets (my latest one is a hard drive that stores data using energy rather than matter. Never mind the fact that it can only store about 16 bytes.).
Hey everyone!
My name is Tim Cohen and I wanted to say hello! I am new to lesswrong and I am excited to be here.
Did you spell your screenname like that on purpose?
Hello everyone,
I'm Xavier, a 20 year old student from France. I've known about this site for a while. A week ago, I finally decided to start digging into the sequences for some useful insights. I'm interested in various topics such as philosophy, futurology, history and science. However, I'm almost certain my understanding of the world is seriously lacking compared to the average poster here. For example, I have no STEM background at all aside from the most basic knowledge, which is likely to become a problem in the future.
I've been obsessed with the idea of living a rational life for years. I've failed spectacularly to achieve this lofty goal, instead falling prey to what some of the sequences have described as akrasia. I've also been "dunning-krugered" many times due to a tendency to overestimate my abilities. I hope that by reading more Less Wrong and following the discussions here I will be able to eventually correct some of these issues and become a bit less amateurish in the process! Who knows?
Looking forward to meeting you guys.
Hello, Xavier, and welcome to LessWrong!
Hello to all. Although I am quite new to this site, I have been exploring it ever since I first found it. I am an undergraduate mathematics and physics student with the goal of getting a PhD in mathematics with a specialization in game theory and/or decision theory. Throughout my schooling I have constantly been bored with the lackluster mathematics that has been shoved in my face, so I have constantly been doing extra studying and research on my own. During one of my information binges I came across what is known as 'timeless decision theory', which I found on this website, and after reading the article I was hooked on the plethora of talking points I found here. Though I have done much research on my own on topics such as behavior analysis, the game theory of popular board games, and group theory, I do not plan on trying to contribute right away; though I hope I will end up posting some great arguments, I feel I need to learn the jargon and protocol before I can sufficiently contribute. As for the more personal side of things, my hobbies include a very healthy dose of board games and math (yeah, I count it as a hobby). I have what I think is a good sense of humor, and my philosophy is that offence is taken, not given, meaning there is no such thing as an intrinsically offensive statement. If anyone has a desire to chat about any of the previously mentioned topics I would be happy to indulge (especially board games, which, if you couldn't already tell, are a favorite of mine). Thank you, and have a nice day to all.
Good [insert-time-of-day-here]! My name is Tighe, I'm 16 years old, and I found this site through one of my friends at school. I'm not the most intelligent person, but I am interested in becoming less wrong. I don't expect myself to compare very well to most people on this site, but hey, that's what the point of being an "aspiring" rationalist is, right? Some of my interests in life so far have been writing, programming, math, and science (though I'm not very good at the last two). I've been told that this site helps to improve one's thinking skills, ones that aren't offered in most high schools (or any high schools, really), and I think that could really help me improve in the aforementioned areas. Well, hello.
Hello everyone!
I just registered and I don't quite know how this works, but the HPMOR Wrap Party Organizers Handbook said to post here, and so here I am.
Venue: Griffith Observatory front lawn
2800 E Observatory Rd, Los Angeles, CA 90027
Date/Time: March 14, 2015: 6:00pm
Cost: Free access to the complex, planetarium shows are $7
Facebook event page: https://www.facebook.com/events/1585754024996915/
Contact email: ladyastralis at gmail youknowtherest
Please bring: A (picnic) blanket, some snacks/food, some way to read HPMOR that has its own light source (I called the observatory -- they turn off the lights pretty early), and a thermos of hot cocoa. Don't forget a coat!
Notes: The final planetarium show is at 8:45pm. A fitting tribute.
The complex closes at 10:00pm.
I will be wearing my Ravenclaw scarf.
Looking forward to finally meeting other HPMOR fans!
Amanda
Welcome, Amanda!
You might want to post your event info as a comment here so it gets attention from the wrap party coordinator. Or you could send it to the coordinator as a private message.
Thanks Gondolinian! I took your advice. Also, Oliver is definitely aware of this party.
Hello folks! I'm new to your site here and still trying to get my bearings. :) The navigation is pretty nonstandard, hence somewhat confusing to me. I found this website from a link my friend posted on a Facebook discussion we had. Since then I've got one question that keeps bugging me, so I decided to ask it here. As I understand, this thread (is this the equivalent of a forum thread?) is a good place to do it. :)
The question is this: I've got a theory which seems (to me) so simple and obvious and able to explain all human behavior that I'm surprised it hasn't already been accepted as the gold standard. In fact, when browsing Wikipedia it seems there are dozens of different competing theories about human motivation, and some of the more popular ones (like the one Daniel Pink is promoting) are really skirting around the truth (according to my theory). So, obviously, I'm full of doubts about how correct I am. There must be something I'm missing here.
Furthermore the idea isn't exactly mine - it's just a slightly modified (or maybe not even modified, depending how you look at it) totally classical idea dating back to Freud himself. I tried to find counterexamples on this site but couldn't find any that I couldn't explain with my theory.
So, the theory is this: humans will always choose to do the action which they think will bring them the most pleasure / the least pain. As I said, totally classical. The "modification", however, is the "they think" part. We cannot see into the future, so we cannot choose with absolute certainty the actions that will bring us the maximum enjoyment. Instead we try to predict the likely outcomes of our choices, and quite often we get it totally wrong. Many times every day, in fact.
The reasons for getting it wrong are many. We don't have complete information (or our memory didn't recall it in time; or recalled it incorrectly); we value consequences that arrive sooner as more important than those that arrive later; we can only correlate a limited number of items (memory limitation); etc.
Also we don't only take external things into account but also try to predict our own emotions, because those are quite real pleasure/pain sources too. For example, when I decide to organize my desk, I do it because I anticipate the sense of accomplishment and order (everything in its place and a place for everything) when I've completed the task.
But at the end of the day when all is said and done, the decision mechanism will just sum up all the predicted positive outcomes (and their magnitudes) and all the negative ones, and choose the option with the greatest value.
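The summing-up mechanism described here can be sketched as a toy calculation (this is an editorial illustration, not part of the original comment; the option names and payoff values are hypothetical):

```python
# Toy sketch of the proposed decision rule: each option has a list of
# predicted outcomes with signed magnitudes, and the decision mechanism
# picks the option whose predicted outcomes sum to the greatest value.

def choose(predicted_outcomes):
    """predicted_outcomes: dict mapping option name -> list of signed payoffs."""
    return max(predicted_outcomes, key=lambda option: sum(predicted_outcomes[option]))

# Hypothetical values for the desk-organizing example above:
predictions = {
    "organize desk": [6, -2],  # anticipated sense of accomplishment, minus effort
    "read Reddit": [3],        # immediate pleasure only
}
print(choose(predictions))  # -> organize desk
```

Nothing here turns on the particular numbers; the point is only that the model reduces choice to an arithmetic comparison of predicted sums.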
And this way I've so far been able to explain any example I've come across. Now, if this was the truth, I'm sure there wouldn't be such an eternal debate over it and there wouldn't be so many other competing theories. So where is my mistake? Can anyone come up with a counterexample that I won't be able to explain with my theory?
Welcome!
Short introduction to navigation: Clicking the "Discussion" link at the top of the page will show you (most of) the new articles. If you write comments there, you are most likely to receive replies.
If there is something called "Open Thread", that pretty much means: feel free to ask or say anything (as long as it is at least somewhat relevant to this website, but even that is not always necessary). Also, posting in the most recent open thread will give you more visitors and thus more replies than posting in a three months old article. As of today, the most recent open thread is here, but tomorrow a new one will be started, and it may be strategic to wait.
Well, if you put it this way, it is almost impossible to find a counterexample, because for literally any situation where "a person X did Y", you can say "that's because X somehow believed Y will bring them most pleasure / least pain", and even if I say "but in this specific situation that doesn't make any sense", you can say "well, this is one of those situations when X was totally wrong".
A better approach than "can you find a situation that my theory cannot explain?" is "can you find a situation that my theory cannot predict?" The difference between explanation and prediction is that explanation is done after the fact, when you already know which outcome you need to explain, while prediction is done before the fact. For example, if the Democrats win the next American election, I can explain why. However, if the Republicans win, I can also explain why. But if you ask me to predict who will win, then I am in trouble, because here my verbal skills cannot save me.
Analogously, if we have a situation "Joe spends his afternoon reading Reddit", it is easy to explain: Joe believed that reading Reddit would bring him the most pleasure. But if we have a situation "Joe decided not to read Reddit, and instead learned a new programming language", it is also easy to explain: Joe believed that learning would bring him the most pleasure in the long term. The problem is if Joe is starting his computer right now, and your theory has to predict whether he will read Reddit (as he usually does, but not always), or whether he will learn a new programming language (which is what he has procrastinated on for a long time, but today he feels slightly more motivated than usual). What will Joe do? This is the difficult question. But once he does something, it will be extremely easy to explain in hindsight why he chose this option instead of the other one.
More info here: Making beliefs pay rent. But the general idea is: if your theory can explain anything, but predict nothing, what exactly is the point of having such a theory?
Ahh, I see. Thank you! This is exactly what I was looking for! :) Back to thinking. :)
Hmm... I've given it some thought (more to come later, for sure), but there's already one thing I've found this theory useful for. There have been times when I've caught myself doing/desiring things that I should not do/desire. I then asked myself the question - so why do I do/desire this thing? What pleasure/pain motivates me here? Answers to these questions were not immediately available, but after some time doing introspection, I've come up with them. After that it was a simple matter of changing these motivators to rid myself of the unwanted behavior.
So... yes, I think it can be used for predicting stuff (like, "if I change X, then behavior Y will also change"). Now, the information needed for these predictions is hard to come by (but not impossible!). Essentially you need to know/guess what a person is thinking/feeling. But once you have that, you can predict what they will do and how to influence them.
What's your opinion on this?
An alternative explanation I can think of is the placebo effect. It's possible that your behaviour Y changed after changing X, because you believed behaviour Y would change. Especially as you wanted to change those behaviours in the first place.
Also, even if this was not due to placebo effect, it's only evidence on how your mind works. Other people's minds might work differently. (And I suspect it's also quite weak as evidence goes, though I can't seem to articulate why I think so. At the very least I think you'd need a very big sample size of behaviour changes, without forgetting to consider also the failed attempts at changing your behaviour.)
What you describe as "simple" here, is extremely difficult for me. (There are many possible explanations for why it is so, and I am not sure which one of them is the correct one.) Generally what you described seems like a part of the correct explanation... but there are other parts, such as biology, environment, etc.
For example, if my goal is to exercise regularly, I should a) think about my goals, imagine the consequences, think about the costs, and solve the internal conflicts... but also b) do some strategic activities, such as finding out where the nearest gym is, or maybe buying some exercise equipment for home, and c) check my health to see there is no biological problem, such as e.g. anemia, making me chronically tired.
Hello All!
I'm not exactly new: I discovered this site at around the time HPMOR started (wow, 3 years ago). I've always liked thinking about how thought works; Hofstadter's GEB was a big influence. I've started the sequences several times, but never seem to finish. So I'm actually registering to see if that helps motivate me to read them all.
Hello everyone, first post. My education level is Associate's. My special skills include mathematics and reading comprehension.
I come to this website because, as I look at the rationalist techniques, I can't help thinking to myself, "This is a skill that would be beneficial to learn." I have done some preliminary reading of some of the posts here and find that while a lot of it is rather chewy (that is, it takes extra time to process mentally), it is genuinely enjoyable to peruse and be made to think.
I have a question. Considering that I am religious, and I fully intend to stay that way despite any evidence that might suggest otherwise, how much will any rationalist skills I build up be hampered? I don't want to summon a religious discussion, so if it seems that I might be, please just think of it as Fixed Belief X. I understand that the ability to update beliefs is central to rationality, but one such fixed belief doesn't seem crushing.
I ask because I want to make sure that I am actually obtaining value out of my time. I don't want to find some arbitrary time down the road that my efforts have borne no fruit, and it was impossible from the beginning.
Depends on how much effect your religion has on you. I doubt you'll be any less rational if you go to church every day, although you may end up loathing it one day.
If anybody has a link to the post where Eliezer told a story about how he was told to "pray and (literally) stfu", you'll have a good example of how religion can screw up reasoning. You can still reason effectively within religion, irrespective of how true it is, but you're probably going to encounter something that makes you say "this doesn't make sense", and you will one day encounter someone who WILL do something entirely paradoxical while wearing their chosen religious headwear.
To be fair, this kind of example is a bit extreme. I used to read edwardfeser.blogspot.com, and he fails at being an empiricist, but does not fail at logical reasoning. His only - albeit catastrophic - failure is "X follows from the premises we accepted to be true, hence reality works like X". Map versus terrain... Because of this failure at empiricism, even Feser could not make a useful rationalist: he is unwilling to step over the map-terrain gap, the language-reality gap.
Really, the primary problem of Feser-type smart theists is not that they cannot reason; it is that they believe too much in language. Theism almost follows from that failure mode: language is a mind-product, so believing that the arguments expressed in words, which tend to convince human minds, also happen to be true out there in reality almost assumes there is a human-like mind behind the universe. Proper atheism starts with accepting that the universe does not give half a shit about our logic, reasoning and intellectuality, and that we can find ideas perfectly convincing, and admit that they convince us, and out there they still aren't true: but that is really hard, as it means throwing out much of our intellectual history and tradition. It is an incredibly huge gap for a culture shaped by e.g. Plato to say - and we MUST say this - "Your ideas convinced me perfectly. They are still not true."
No, it's a great example of EVERYTHING (not just religion) going to shit because it basically says "don't think, do".
It's no less harmful even if we remove religion from the picture. It can apply to... practically everything. I think it's sound personal philosophy to know what the fuck you're actually doing. Hell, it's probably the first step in making a plan, and it's a step in every stage of the process.
Well, the meta-level of what you said is "Updating beliefs when evidence is against them is not always beneficial." I think there are articles here that challenge this kind of meta-level claim, although I cannot point to them; I am fairly new here. But I still see the issue, namely: how exactly do you decide, by what algorithm, which other beliefs of yours you want to update when evidence is against them, and which not? So it seems you will have two competing motives, to execute the truth-seeking algorithm and the belief-defending one, and they may weaken each other. Yet I think that with some compartmentalization it can work, though it may be difficult.
To put it in different words: you can simply put a taboo on full-on truth-seeking with respect to religion and let the truth-seeking algorithm run elsewhere, but you have a reason, an algorithm, for that taboo, maybe not a fully conscious one, and it may conflict with your truth-seeking algorithm in other fields in more subtle ways: perhaps not by handing out a clear, obvious taboo, but by biasing results. Or to be blunt: non-rationality has its reasons and methods too, and thus it leaks out from its compartments and contaminates.
Just my 2 cents, I am also a beginning learner here.
Hello. New to the active part of the site; I've been lurking for a while, reading many discussions (and not always agreeing, which might be the reason I'm going active). I came to the site thanks to HPMOR and the quest towards less bias.
I'm a student (soon starting a PhD) in molecular dynamics in France, a skeptic (I guess), and highly critical of many papers (especially in my field). Popper is probably the closest to how I'd define the philosophy of what I'm doing, although with a few contradictions.
I'm in the country of wine, cheese and homeopathy, don't forget it :)
Hello world.
I am new to the community, but I had read through most of the major sequences before I registered. I found this site by reading Eliezer's Harry Potter fanfic, HPMOR. It was really good, by the way. I am happy to learn about biases, how to overcome them, and how to optimize certain things.
I am fairly intelligent and I am a VERY philosophical person.
Have you considered the effect of selection? From the USA, you usually hear about the stupid stuff, because that's what makes interesting news. (Also, exaggerating all the bad things about the "decadent West" has a long tradition in Russian propaganda.) From your own country, you spend most of your time with your circle of educated people.
Yes, I have considered it. We have no Russian propaganda in Finland. Overall, Finnish people don't like Russia very much. I don't spend much time with educated people right now either. But I agree that selection may somehow have something to do with it.
Hello, I'm new to LessWrong. I was hoping someone could help me with a technical problem I'm having. I posted this same problem on the open thread under the discussion page, but I thought I'd be more likely to get a response here. It's to do with the LessWrong wiki. I made an account called Tryagainslowly on it; it wouldn't let me use my LessWrong account, instead making me register for the wiki independently. I wanted to post in the discussion for the wiki page entitled "Rationality". The discussion page didn't have anything posted in it. I wrote out my post, and attempted to post it, but it wouldn't let me, telling me new pages cannot be created by new editors. What do I need to do in order to submit my post? I'm happy to show what I was intending to post here if anyone wants me to.
It works now! It just required waiting a bit.
Did you realize that "here" refers to a three-month-old article (on a website that has new articles every day)?
In the future, you are more likely to get a quick response in the most recent Open Thread. (There are articles called "Open Thread" in the "Discussion" section of the website.)
Hello!
My name is Tommi, and I'm a 34-year-old Finn living in Berlin at the moment. I work as a freelance developer, focusing on the Unity development environment, making educational games, regular games, virtual art galleries, etc. for an hourly fee (so that's the skill set I bring into the community). I found Less Wrong some years ago via HPMOR (I forget how I found HPMOR). I've read it occasionally, but over the last year or so I've been slowly gravitating towards it, and decided now to make the effort to try out this community.
I've always valued reason and science over hearsay and guessing, but so far it's manifested mostly in terms of what I like to read and who I vote for. I also participated in the Green party of Finland for some years, in order to advance scientific decision making and a long-term, global approach to things (the Greens in Finland have a fairly strong scientific leaning despite hanging onto some dogmas). However, as an introvert my effect was, as far as I could tell, minimal. Now that I've learned that lesson, and am also in a good position financially and in terms of available time, I'm looking at my life goals again, and would like to see if this community could help me reach them.
As I understand them now, my goals are as follows:
1) Live a comfortable life materially. I'm not willing to sacrifice all of life's comforts to serve a higher goal. However, my material desires are lowish compared to my ability to earn (I'm a freelance programmer and apparently a pretty good one).
2) Have a fulfilling social life. One reason I've been looking at Less Wrong is the articles on improving social skills. However, I'm not certain if improving them is worth the effort - perhaps it would be better to settle for the kind of social life I can get with my current skills, and focus on other things. (Romance seems particularly hard to achieve - I think that's because I'm gay and I haven't found many social circles that are simultaneously gay and nerdy enough to feel comfortable to me.)
3) Have a high net positive impact on the world. Unless I suddenly lose my income, I intend to pay 10% of my income this year to charity. I'll probably go for a GiveWell-approved charity, although I have some reservations about its utilitarian leanings. I believe in more complex ethics than a simple sum total of utility. For example, I believe that debt exists: If someone loses utility because of me (either they helped me or I did them harm), I'm obligated to compensate them (if they want that) instead of helping some other person. So I tend to think I should become carbon-neutral before contributing to other charities, unless those charities help the same people damaged most by carbon emissions (something that may well be true). I also believe that the utility of people who do harm to others is worth less than the utility of those who don't. The application of this second rule isn't as clear, though.
4) Artistic aspirations. I wish to advance the field of interactive storytelling. Basically, I'd like to make a game/games that offer the player/players meaningful choices. Meaningful in a storytelling, moral, and strategic sense. Such games already exist, of course, but I want to make the choices more open-ended than in an RPG like Mass Effect, and more real and personal than in a strategy game. Ideally, I'd like to make the player feel like they're interacting with and affecting the lives of real people in an imaginary setting. My ambitions are similar to Chris Crawford's (http://www.erasmatazz.com/library/interactive-storytelling/what-is-interactive-storyte.html) but my approach is not as purist as his. My other role models are the people behind the game King of Dragon Pass.
Initially, I was thinking about this in terms of the usual heroic stories that are being made into games over and over (just doing it better, of course). However, now I'm thinking about combining this ambition with another ambition, which was turning one of my old roleplaying campaigns into a novel/series. I wrote a few chapters a couple of years ago, and it was very well received at the creative writing workshop I showed it at. Some of the honor goes to the failed MMO Seed which my roleplaying campaign was fanfiction of. Seed, and by extension the campaign, had strong rationalistic leanings - it's a science fiction story about a group of colonists on another planet sorting out various problems via science and technology, and having political games about which way to steer the colony. The characters tend to be very analytic and look at things with a long perspective.
My campaign was pre-HPMOR, though, so it wasn't that super-deep about rationalism. But now I think it might be interesting to combine the writing goal with the interactive story goal, and strive to deepen the thinking involved as much as I can. Ideally, the game would reward the player/players for thinking rationally, while also making them care about the characters and the unfolding story - without turning it into a series of rationality puzzles with only one right answer.
So, I'd like to see if digging deeper into the Less Wrong community would help me with these goals.
Just what I want to do!!!
I believe social skills make a huge difference in one's life. I also believe that most people underestimate this because they are not aware of the benefits that being popular could bring them.
Sometimes changing your environment brings better results. But these two options are not mutually exclusive. You can have a great preferred environment and be able to navigate successfully the rest of the world -- because you have to interact with the rest of the world to achieve many things you want. Even to explore it to find the good parts of the environment.
Wow, I'm so glad I stumbled onto slatestarcodex, and from there, here!!! You guys are all like smarter, cooler versions of me! It's great to have a label for the way my brain is naturally wired and know there other people in the world besides Peter Singer who think similarly. I'm really excited, so my "intro" might get a little long...
Part 1-Look at me, I'm just like you!
I'm Ellen, a 22 year old Spanish major and world traveling nanny from Wisconsin, so maybe not your typical LWer, but actually quite typical in other, more important ways. :)
I grew up in a Christian home/bubble, was super religious (Wisconsin Evangelical Lutheran Synod), truly respected/admired the Christians in my life, but even while believing, never liked what I believed. I actually just shared my story plus some interesting studies on correlations between personality, intelligence, and religiosity, if anyone is interested: http://magicalbananatree.blogspot.com/2015/02/christian-friends-do-you-ever-feel.html The post is based almost entirely on what I've come to learn is called "consequentialism" which I'm happy to see is pretty popular over here. I subscribe to this line of thinking so much that I used to pray for a calamity to strengthen my faith. I chose a small Lutheran school despite having great credentials to get into an Ivy, because with an eye on eternity, I wanted to avoid any environment that would foster doubt. My friends suggested I become a missionary, but to me, it made far more sense to become a high profile lawyer and donate 90% of my salary to fund a dozen other missionaries. (A Christian version of effective altruism?) No one ever understood!
Some people might deconvert because they can't believe in miracles, or they can't get over the problem of evil. These are bad reasons, I think, and based on the presupposition that God doesn't exist. Personally, the hardest thing for me was believing that God was all-powerful. Like, if God were portrayed as good, but weak, struggling against an evil god and just doing the best he could to make a just universe and make his existence known, I probably would never have left the faith. It took me long enough as it is!
Part 2-A noob atheist's plea for help
Anyway, now I've "cleared my mind" of all that and am starting fresh, but my friends have a lot of questions for me that I'm not able to answer yet, and I have a lot of my own, too. I'm starting by reading about science (not once had I even been exposed to evolution!) but have a lot of other concerns on the back burner, and maybe you guys can point me in the right direction:
Who was the historical Jesus? As a history source, why is the Bible unreliable?
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong?
Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that's enough, right?)
What about all the really smart people I know and respect, like my sister and Grandma, who have had their share of doubts but ultimately credit their faith to having experienced extraordinary, miraculous answers to prayer? Like obviously, their experiences don't convince ME to believe, but I hate to dismiss them as delusional and call it a wild coincidence...
Are rationalists just as guilty of circular reasoning as Christians are? (Why do I trust human reason? My human reason tells me it's great. Why do Christians trust God? The Bible tells them he's great.)
Part 3-Embarrassingly enthusiastic fan mail
Yay curiosity! Yay strategic thinking! Yay honesty! Yay open-mindedness! Yay opportunity cost analyses! Yay common sense! Yay tolerance of ambiguity! Yay utilitarianism! Yay acknowledging inconsistency in following utilitarianism! Yay intelligence! Yay every single slatestarcodex post! Yay self-improvement! Yay others-improvement! Yay effective altruism!
Ahhh this is all so cool! You guys are so cool. I can't wait to read the sequences and more posts around this site! Maybe someday I'll even meet a real life rationalist or two, it seems like the Bay Area has a lot. :)
There's now a portal into the meatspace Bay rationalist community if this is something you're interested in.
Wow, you guys even play board games? Nice. Thanks!! I'll try to come to the Friday meetup next Friday!
There's also a Less Wrong meetup group in Madison, if you still live in Wisconsin! (They also play lots of board games.)
Thanks! I'm from Janesville, so not far from Madison. Maybe I'll stop in next time I'm home for Christmas break!
That is awesome!
If you haven't heard of HPMOR, check it out here. Anyway, there's this great sequence where Harry teaches the ways of science to Draco Malfoy... it's great! And I think very worthwhile for a beginner to read.
Eliezer talks about a lot of this in the Metaethics Sequence, particularly in the post Where Recursive Justification Hits Bottom.
If you haven't already heard of it, check out the idea of terminal values. Something tells me that you understand it (at least on some level) though. Anyway, Eliezer seems to say something about Occam's Razor justifying our intuitive feelings about what's moral. Personally, I don't really get it. I don't see how a terminal value could ever be rational. My understanding is that rationality is about achieving terminal values, not choosing them. However, I notice confusion and don't have strong opinions.
Welcome :) LessWrong has had a huge positive impact on my life. I hope and suspect that the same will be true for you!
Wait, what? Do you mean Simplified Humanism? I hope that's more of a description than a full argument. One could perhaps turn it into an argument by showing that our root values come from evolution - causally, not in the sense of moral reasons - and making a case that you would not expect them to have exceptions in those exact places.
Eliezer also makes a brief attempt to explain his opponents' motives. This may be true, but I don't think we should dwell on it.
Honestly I don't really know.
Thanks for the welcome!!
I just read Where Recursive Justification Hits Bottom, and it was perfect and super relevant, thanks. "What else could I possibly use? Indeed, no matter what I did with this dilemma, it would be me doing it. Even if I trusted something else... it would be my own decision to trust it." This is basically what I've been telling people who ask me how I can trust my own reason, but it's great to have more good points to bring up. All the posts I've read so far have been so clear and well-written, I can't help but smile and nod as I go.
I'm going to start with the e-book, and once I finish that, I'll probably look into HPMOR! I've seen it mentioned a lot around here, so I figure it must be great, but um, should I read the original Harry Potter first? Growing up, I was never allowed to.
I clicked the terminal values link, and then another link, and then another, and then another... then I googled what Occam's razor is... my questions about morality are still far from settled, but all this gives me a lot to think about, so thank you :)
If these are the questions weighing heavily on your mind, then you would probably enjoy Gary Drescher's Good and Real. I suggest reading the first Amazon review to get a good idea of the topics it covers. It is very similar to some of the content in the Sequences. (By the way, if you purchase the book through that link, 5% goes to Slate Star Codex.)
Also, the Sequences have recently been released as an ebook entitled Rationality: From AI to Zombies. (You can download the book for free in MOBI, EPUB, and PDF format if you follow the 'Buy Now' link at the bottom of that page and enter a price of $0.00. If you do this, it won't request any payment information. If you pay more than that, the money will go to the Machine Intelligence Research Institute.) I have found that Rationality is much, much easier to read than the Sequences.
You may not yet have the background knowledge necessary to understand it, and if that's the case then you can always return to it later, but I think that the most relevant post on this topic is Where Recursive Justification Hits Bottom. It's chapter 264 in Rationality. (That's a daunting number but the chapters are very short. Rationality is Bible-length but you can hack away at it one chapter at a time, or more at a time, if you please.) To be frank, you're asking the Big Questions and you might have to read a bit before you can answer them.
When I read that, I'm reminded of something that Luke Muehlhauser, a prominent LessWrong user and former devout Christian, once wrote:
As you said yourself, "Yay tolerance of ambiguity!" Although their beliefs are false, their experiences can certainly be real. Even if there exists no God, that doesn't mean that the Presence-of-God Quale isn't represented by the patterns of neural impulses of some human brains. It's easy, nay, the default action, to view others with false beliefs in a negative light, but if rationalism were always intuitively obvious, then the world would be a very different place. I try not to make myself feel bad by overestimating my ability to convince others of the value of rationalism. That doesn't mean that I keep my mouth shut all of the time, but I do take it a day at a time, and it seems to work; sometimes I talk about something and it doesn't seem to go anywhere, and then a friend will bring it up days or weeks later and say something like, "You know, I was thinking about that, and I realized it made a lot of sense." And then I privately jump up and down. Sometimes it doesn't work, but for me, there's definitely a middle ground between falling in line and abandoning All I Have Ever Known. I also often see Paul Graham's essay What You Can't Say linked here when new atheists ask about how to maintain ties with religious family members.
EDIT: Oh, and welcome to LessWrong!
Thanks for the welcome!! Good and Real does seem like a good read. I'm going to read Rationality first, which I'm guessing will help me work through some of my questions, but I'll definitely keep that one in mind for later.
Where Recursive Justification Hits Bottom was really relevant, thanks for the link. I'm still digesting Occam's Razor; I think that was the only concept completely new to me.
Thanks for the link to Luke's story. It seems like we went through the same difficult process of desperately wanting to believe, but ultimately just not being able to. I find it super encouraging that his doubts stemmed from researching the Historical Jesus, since that's one thing that my old high school track coach/religion teacher insists I have to look into. He claims no atheist has ever been able to answer any of his questions. The atheists I know all credit a conflict with science as the reason they left Christianity, and I credit...I don't even know, my personal thoughts, I guess... but it's great to know that researching history will also lead there. I'll have to go through the same resources he used so I can better explain myself to Christian friends.
"Although their beliefs are false, their experiences can certainly be real. Even if there exists no God, that doesn't mean that the Presence-of-God Quale isn't represented by the patterns of neural impulses of some human brains." Thanks for that!! It does make me feel better.
Hahaha, wow, I haven't even considered trying to convince others of the value of rationalism yet. Especially after my deconversion, I've been totally on the defensive, almost apologizing for my rationality. ("It's not my fault; it's the personality I was born with. If you guys really believe, you should feel lucky not just for having been born into Christian homes, but also, more importantly, for having been born with the right personalities for faith." and "You think my prayers for a stronger faith weren't answered because my faith wasn't strong enough, but I was doing everything possible to strengthen my faith to no avail." and "Believing isn't a choice, no matter how much I wanted it, I couldn't believe. So if any brand of Christianity is true, Calvinism is your best bet, and I wasn't among the elect.")
So far this strategy is doing remarkably, remarkably well in maintaining ties with friends and family. People understand where I'm coming from, and they feel just awful, sorry for me since they think I'm going to hell, but for the most part, not finding me at fault. Pity is slightly annoying when I'm so happy, but hopefully their pity will eventually lead them to find God unfair, which will lead them to dislike their beliefs, which will lead them to question why they bother believing something they don't like...and then, they won't find much reason at all aside from upbringing/community. Those were actually pretty much the steps of my deconversion process, only I didn't need a personal connection with a particular unbeliever to get there. Anyway, if nothing else, the defensive strategy works wonders for relations. I helped a friend share her doubts with her family in this way, and she said it worked for her too.
I just thought to point out that there's going to be a Rationality reading group; basically, it's a planned series of posts about each Part in the book, where you have the opportunity to talk about it and ask questions. You clearly are very curious (it's the only way you could survive so many hyperlinks), so it seems like just the thing for you.
Just to give you words for this, and from what I read in the blog post that you linked to in your first comment (which I found very amusing), I think you're trying to verbalize that Christianity was inconsistent. You don't have to prefer consistency, but most people claim to prefer it, and apparently you do prefer it. (I know I do.) You didn't like it as a system because it was a system that said that God was perfectly benevolent and ridiculously selfish (though the second statement was only implicit) at the same time. You can always look at other subjects like science and history and come to the conclusion that religion conflicts with those things when it shouldn't; but you can also just look at religion and see how it conflicts with itself. I think that's what you did.
I saw some of your other comments about meaning, and meaninglessness in the absence of God, and nihilism. Notice that when you ask "Does life have meaning in the absence of God?", everyone says that it depends on what you mean, offers some possible interpretations, and shares their viewpoints and conclusions on what it means. The simplest way to give you a clue as to some of the problems with the question is something that you wrote yourself:
Vagueness is part of the problem, but there are other parts as well. Even though I've never been religious and therefore don't know what it's like to lose faith, worrying about "meaninglessness" is something that I dealt with. I promise that atheists aren't all secretly dead inside. (I actually used to wonder about that.) Rationality Parts N and P deal with questions like that.
I also want to say that I agree with Viliam_Bur's comments on you doing research to defend your new beliefs: It's a lot cheaper time- and resource-wise to act like a skeptic than it is to do research, and you never have to tolerate that awful feeling that you might be wrong. Even when you return with evidence contrary to their beliefs, their standards of evidence are too high for it to matter. I think it's telling that your coach sat around waiting for unusually knowledgeable, atheistic passersby to tell him about the Historical Jesus instead of doing any research on his own.
Hi els!
I just wanted to welcome you and perhaps start a discussion. I have lurked around the Less Wrong boards for years (three, I think, recently made a new account because I forgot my username) and there is a lot of helpful and exciting discussion going on here and so long as you communicate clearly even dissenting opinions are valued.
You came from the jean-skirt Lutherans. I too came from a bubble, and I know it can be tough to find people around whom you feel comfortable talking about big questions like religion, metaphysics, truth, and logic. But I believe once you start looking, you will find people who are curious about the world and want to increase their quality of life and mind too!
I don't think atheism leads to nihilism. An atheist doesn't have to be a strict materialist! For example, logic probably exists as part of the universe's fabric whether or not humans are thinking or even exist. Yet logic is not made of brain matter or any material. It is mind-independent. So are all the qualities that help people achieve their goals, such as courage, perseverance, honest self-reflection, charity, or whatever else. These are part of the human universe, even though they aren't essentially made of stuff. Well that's my perspective. And I, like the other guys and gals here, am always up to discuss these topics further and try to deepen our understanding and practice of rationality.
Hope you enjoy hanging around LW!
Cheers!
Thanks for the welcome! :) You're right, so many great conversations taking place here! I feel like I'm going to be doing a LOT more reading before I really post anywhere else, but I look forward to lurking too.
I guess when I think about nihilism, I don't necessarily think about strict materialism. That's an interesting point about logic being mind-independent though. I guess I just think about the simple definition of nihilism as meaninglessness. All my life, the "meaning" of life had come from Jesus, which in my mind, meant a relationship with God and eternity in heaven. Now, there's no afterlife. Is there still meaning? Do I even care what happens after I die? I think I do, but why? I could just go out and do more good than bad and enjoy my meaningless days under the sun; is it really worth the mental energy to think about all this stuff, and if so, why? I'm realizing one thing people love about Christianity is how easy it is, once you can get past the whole childlike faith thing.
This puzzled me, since it sounds a lot like the problem of evil. I take it you were describing the argument you lay out at the link?
For completeness - since I'm about to bash Christianity - I should note that Paul does not write like he has even an imagined revelation on the subject of Hell. He writes as if people in the Roman Empire often talked about everyone going to Hades when they died, and therefore he could count on people receiving as "good news" the claim that belief in Jesus would definitely send you to Heaven. (Later, the Gospels implied that your actions could send you to Heaven or Hell regardless of what you believed. Early Christians might have split the difference by reserving baptism for those they saw as living a 'Christian' life.) Clearly one can be a Christian in Paul's sense without believing in Hell.
We don't know. I have some qualms about Richard Carrier's argument (eg in On the historicity of Jesus: Why we might have reason for doubt). But plugging different numbers into his calculations, I come out with no more than a 54% chance Jesus even existed. We can't answer every factual question; some information is almost certainly lost to us forever.
This one seems fundamental enough that if people insist on the truth of miracles - and reports that you can move mountains if you have faith the size of a mustard seed - I don't know what to tell them. But besides directing people to mainstream scholarship (which by the way places the date of Mark after the destruction of the Temple), I can note that Mark inter-cuts the story of the fig tree with Jesus expelling the money-changers from the Temple. The tree seems like a straightforward metaphor. Then we have later Gospels openly changing the narrative for their own purposes. Mark says Jesus could give no sign to those who did not believe, and they would not have believed (says Jesus in a parable) even if some guy named Lazarus had returned from the dead. John says Jesus performed signs all the time, and as you would expect this led many people to believe in him, especially when he brought Lazarus back from the dead. Though the resurrected disciple whom Jesus loved disappears from the narrative after the period John depicts, and even Acts shows no awareness of this important witness.
If you want to have morality, you can just do it. By this I mean that any function assigning utility to outcomes in a physically meaningful way appears consistent. But yes, I've come to agree that simple utility functions like maximizing pleasure in the Universe technically fail to capture what I would call moral. For more practical advice, see a lot of this site and perhaps the CFAR link at the top of the page.
This depends. I would normally use the term "nihilism" to mean a uniform utility function, which does not distinguish between actions. This is equivalent to assigning every outcome zero utility. As the previous link shows, plenty of non-uniform utility functions can exist whether Yahweh does or not.
If you mean the lack of a moral authority you can trust absolutely, or that will force you to behave morally, then I would basically say yes. There is no authority anywhere.
Do they seem smarter and more worthy of respect than Gandhi? Perhaps he's not the best example, but putting him next to the many people from non-Christian religions who have made similar claims to religious experience may get the point across. (Aleister Crowley made a detailed study of mystical experience and how to produce it, but you may find him abrasive at best.)
That also depends on what you mean.
Oh, oops, I can see why that would be puzzling. But yeah, you figured it out. Do you really think my link was an argument though? A lot of people have accused me of trying to deconvert my friends, but I really don't think I was making an argument so much as sharing my own personal thoughts and journey of what led me away from the faith.
You correctly point out that not all Christians believe in hell, but I didn't want to just tweak my belief until I liked it. If I was going to reject what I grew up with, I figured I might as well start with a totally clean slate.
I'm really glad you and other atheists on here have bothered looking into Historical Jesus. Atheists have a stereotype of being ignorant about this, which actually, for those who weren't raised Christians, I kind of understand, since now that I consider myself atheist, it's not like I'm suddenly going to become an expert on all the other religions just so I can thoughtfully reject them. But now that my friends have failed to convince me atheism is hopeless, they're insisting it's hallucinogenic, that atheists are out of touch with reality, and it's nice (though unsurprising) to see that isn't the case.
Okay, I know that I personally can have morality, no problem! But are you trying to say it's not just intuition? Or if I use that Von Neumann–Morgenstern utility theorem you linked, I'm a little confused, maybe you can simplify for me, but whose preferences would I be valuing? Only my own? Everyone's equally? If I value everyone's equally and say each human is born with equal intrinsic value, that's back to intuition again, right? Anyway, yeah, I'll look around and maybe check out CFAR too if you think that would be useful.
Oh! I like that definition of nihilism, thanks. Personally, I think I could actually tolerate accepting nihilism defined as meaninglessness (whatever that means), but since most people I know wouldn't, your definition will come in handy.
Also, good point about Gandhi. I had actually planned on researching whether people from other religions claimed to have answered prayers like Christians do, but bringing up the other alleged "religious experiences" of people of other faiths seems like a good start for when my sister and I talk about this. Now I'm curious about Crowley too. I almost never really get offended, so even if he is abrasive, I'm sure I can focus on the facts and pick out a few things to share, even if I wouldn't share him directly.
Thanks for your reply! Hopefully you can follow this easily enough; next time I'll add in quotes like you did...
The theorem shows that if one adopts a simple utility function - or let's say if an Artificial Intelligence has as its goal maximizing the computing power in existence, even if that means killing us and using us for parts - this yields a consistent set of preferences. It doesn't seem like we could argue the AI into adopting a different goal unless that (implausibly) served the original goal better than just working at it directly. We could picture the AI as a physical process that first calculates the expected value of various actions in terms of computing power (this would have to be approximate, but we've found approximations very useful in practical contexts) and then automatically takes the action with the highest calculated expected value.
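The "calculate expected value, then take the best action" process described above can be sketched in a few lines of code. This is a toy illustration only, assuming hypothetical `actions`, `outcomes`, and `utility` definitions (the computing-power-maximizing AI from the example; none of these names come from any real system):

```python
def choose_action(actions, outcomes, utility):
    """Pick the action with the highest expected utility.

    `outcomes(action)` returns (probability, outcome) pairs;
    `utility(outcome)` scores a single outcome.
    The agent is just: argmax over actions of sum(p * u(o)).
    """
    def expected_utility(action):
        return sum(p * utility(o) for p, o in outcomes(action))
    return max(actions, key=expected_utility)

# Toy agent whose goal is maximizing units of computing power.
actions = ["build_factory", "do_nothing"]

def outcomes(action):
    if action == "build_factory":
        return [(0.9, 100), (0.1, 0)]  # 90% chance of 100 units
    return [(1.0, 1)]                  # guaranteed 1 unit

utility = lambda units: units          # utility = raw units gained

print(choose_action(actions, outcomes, utility))  # build_factory
```

The point of the sketch is that the agent's goal lives entirely in the `utility` function; no amount of argument changes the output unless it changes that function, which is the "mathematically impossible to argue morality into it" intuition below.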
Now in a sense, this shows your problem has no solution. We have no apparent way to argue morality into an agent that doesn't already have it, on some level. In fact this appears mathematically impossible. (Also, the Universe does not love you and will kill you if the math of physics happens to work out that way.)
But if you already have moral preferences, there shouldn't be any way to argue you out of them by showing the non-existence of Vishnu. Any desires that correspond to a utility function would yield consistent preferences. If you follow them then nobody can raise any logical objection. God would have to do the same, if he existed. He would just have more strength and knowledge with which to impose his will (to the point of creating a logical contradiction - but we can charitably assume theologians meant something else.) When it comes to consistent moral foundations, the theorem gives no special place to his imaginary desires relative to yours.
I mentioned above that a simple utility function does not seem to capture my moral preferences, though it could be a good rule of thumb. There's probably no simple way to find out what you value if you don't already know. CFAR does not address the abstract problem; possibly they could help you figure out what you actually value, if you want practical guidance.
Note that he doesn't believe in making anything easy for the reader. The second half of this essay might perhaps have what you want, starting with section XI. Crowley wrote it under a pseudonym and at least once refers to himself in the third person; be warned.
Thanks a lot for explaining the utility theorem. So just to be sure, if my moral preferences (my personal values; I'll check CFAR for help on this, eventually) are the basis of morality, is morality necessarily subjective?
I'll get to Crowley eventually too, thanks for the link. I've just started the Rationality e-book and I feel like it will give me a lot of the background knowledge to understand other articles and stuff people talk about here.
If "subjective" means "a completely different alien species would likely care about different things than humans", then yes. You also can't expect that a rock would have the same morality as you.
If "subjective" means "a different human would care about completely different things than me" then probably not much. It should be possible to define a morality of an "average human" that most humans would consider correct. The reason it appears otherwise is that for tribal reasons we are prone to assume that our enemies are psychologically nonhuman, and our reasoning is often based on factual errors, and we are actually not good enough at consistently following our own values. (Thus the definition of CEV as "if we knew more, thought faster, were more the people we wished we were, had grown up farther together"; it refers to the assumption of having correct beliefs, being more consistent, and not being divided by factional conflicts.)
Of course, both of these answers are disputed by many people.
There is a set of reasonably objective facts about what values people have, and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation. However, they are only locally objective. What value-based ethics removes is globally objective answers, in the sense that you should always do X or refrain from Y irrespective of the context.

It's a bit like the difference between small g and big G in physics.
Nope. It leads to reasonably objective descriptive answers about what the consequences of your actions will be. It does not lead to normative answers about what you should or should not do.
Okay, I guess I'm still confused. So far I've loved everything I've read on this site and have been able to understand; I've appreciated/agreed with the first 110 pages of the Rationality ebook, felt a little skeptical for liking it so completely, and then reassured myself with the Aumann's agreement theorem it mentions. So I feel like if this utility theorem which bases morality on preferences is commonly accepted around here, I'll probably like it once I fully understand it. So bear with me as I ask more questions...
Whose preferences am I valuing? Only my own? Everyone's equally? Those of an "average human"? What about future humans?
Yeah, by subjective, I meant that different humans would care about different things. I'm not really worried about basic morality, like not beating people up and stuff, but...
I have a feeling the hardest part of morality will now be determining where to strike a balance between individual human freedom and concern for the future of humanity.
Like, to what extent is it permissible to harm the environment? If something, like eating sugar for example, makes people dumber, should it be limited? Is population control like China's a good thing?
Can you really say that most humans agree on where this line between individual freedom and concern for the future of humanity should be drawn? It seems unlikely...
Not necessarily subjective, in the sense that "what should I do in situation X" necessarily lacks an objective answer.
Even if you treat all value as morally relevant, and you certainly don't have to, there is a set of reasonably objective facts about what values people have, and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation. However, they are only locally objective.
My two cents:
Who cares? Okay, you obviously do, but why? If the religion is false and reports of miracles are lies, is there really an important difference between a) "Yes, once there was a person called Jesus, but almost everything that the Bible attributes to him is completely made up" and b) "No, everything about Jesus is completely made up"?
In other words, if I tell you that my uncle Joe is the true god and performs thousand miracles every thursday, why would you care about whether a) I have a perfectly ordinary, non-divine, non-magical uncle called Joe, and I only lied about his divinity and miracles, or b) actually I lied even about having an uncle called Joe? What difference would it make and why?
Because it was written by people who had an agenda to "prove" that they are the good ones and the divinely chosen ones? Maybe even because it contains magic?
I don't fully trust even historical books written recently. It can be funny to read history textbooks written by two countries which recently had a conflict, and see how each of them describes the events somewhat differently. And today's history books are much more trustworthy than the old ones, because today people are literate, they are allowed to read and compare the competing books, and they are allowed to criticize without getting killed immediately.
Sorry for the offensive comparison, but trusting the Bible's historical accuracy would be as if, in a parallel universe, Hitler had won the war, written his own history book about what "really happened", and made it a mandatory textbook for everyone... and then, a few thousand years later, people trusted his every written word to be honest and accurate.
Exactly. You already know what you care about. Atheism simply means there is no higher boss who could tell you "actually, you should like this and hate that, because I said so", and you would have to shut up and obey.
On the other hand; people can be wrong about their preferences, especially when their decisions are based on wrong assumptions. But "being wrong" is different from "disagreeing with the boss".
I would recommend the PDF version. It is better organized; you can read it from beginning to end, instead of jumping through the hyperlinks. And it does not include the comments, which will allow you to focus on the text and finish it faster (the comments below the original articles contain about 10x as much text as the articles themselves; they are often interesting, but that is an enormous amount of extra reading).
Thanks for replying!
Why do I care about Historical Jesus? I actually wouldn't, I guess, except that I absolutely need to have a really well thought out answer to this question in order to maintain the respect of friends and family, some of whom credit Historical Jesus as one of the top reasons for their faith.
Good point about the authors being biased, thanks, no offense taken! I still don't like when people say miracles/magic definitively prove the Bible wrong though, since if a God higher than our understanding were to exist, of course he could do magic when he felt like it. Still, based on our understanding of the world, there is no good reason/evidence at all to believe in such a God.
I got the Rationality ebook, and it is great! Sooo well-written, well-organized, and well thought out! I just started today and am already on the section "Belief in Belief." I love it so much so far that it's a page-turner for me as much as my favorite suspense/fantasy novels. Definitely worth sharing and going back to read and re-read :)
Be careful about distinguishing two very different propositions:
(1) There was a preacher named Jesus of Nazareth who lived in a certain time in a certain place.
(2) Jesus of Nazareth rose from the dead and was the Son of God.
Specifically, evidence in favor of (1) usually has nothing to do with (2).
That doesn't sound quite right to me, at least if you mean "nothing" literally, given that not-(1) logically implies not-(2).
I think the much smaller posterior probability of (2) than (1) has more to do with the much smaller prior than with the evidence.
A fair point, though "normal" people have a strong tendency to jump from "not-(1) logically implies not-(2)" to "therefore (1) implies (2)".
Ah, yes, the ever-popular fallacy of the inverse.
No worries, I knew what you meant. I am pretty good at logic though, so no need to worry about illogical jumps here. I may not have very much background knowledge about terminology or history or science or anything (yet), and I may not be a very articulate writer (yet), but the one thing I can usually do very well is think clearly. I am even feeling a bit smug after finding the mammography Bayesian reasoning problem that apparently only 15% of doctors get correct to be easy and obvious. :)
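The mammography problem mentioned above is a direct application of Bayes' theorem. Using the standard numbers from Yudkowsky's essay (1% prevalence, 80% true-positive rate, 9.6% false-positive rate among women without cancer), the calculation most doctors get wrong is just:

```python
# Bayes' theorem applied to the mammography problem mentioned above.
# Numbers are the standard ones from the essay: 1% of women have breast
# cancer, 80% of those get a positive mammography, and 9.6% of women
# without cancer also get a positive mammography.

p_cancer = 0.01               # prior: P(cancer)
p_pos_given_cancer = 0.80     # sensitivity: P(positive | cancer)
p_pos_given_healthy = 0.096   # false-positive rate: P(positive | no cancer)

# P(positive), by the law of total probability
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Bayes: P(cancer | positive)
posterior = p_pos_given_cancer * p_cancer / p_pos
print(f"{posterior:.1%}")  # 7.8%
```

The counterintuitive part is that the low prior dominates: even after a positive test, the probability of cancer is only about 7.8%, not 80%.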
Yep. On the social level I get it, but on another level, it's a trap.
The trap works approximately like this: "I will allow you not to believe in my bullshit, but only if you give me a blank check to bother you with as many questions as I want about my bullshit, and you have to explore all of these questions seriously, give me a satisfactory answer, and of course I am allowed to respond by giving you even more questions."
If you agree to this, you have de facto agreed that the other side is allowed to waste unlimited amounts of your time and attention, as a de facto punishment for not believing their bullshit. -- Today you are asked to form a well-researched opinion about the Historical Jesus, which of course would take a few weeks or months of really serious historical research; and tomorrow it will be something new, e.g. a well-researched opinion about the history of the Church, or the Crusades, or the Inquisition, or whatever. Alternatively, they may point at some part of your answer about the Historical Jesus and say: okay, this part is rather weak, now you have to bring me a well-researched opinion about this part. For example, you were quoting Josephus and Tacitus, so now give me full research on both of them: how credible they are, what other claims they made, etc.
Unless the other side gives up (which they have no reason to; this game costs them almost nothing), there are only two ways this can end. First, you might give up, and start pretending to be religious again. Second, after playing a few rounds of this game, you refuse to play yet another round... in which case the other side will declare victory, because it "proves" your atheism is completely irrational.
Well, you might play a round or two of this game just to show some good will... but it is a game constructed so that you cannot win. The real goal is to manipulate you into punishing yourself and feeling guilty. -- Note: The other side may not realize they are actually doing this. They may believe they are playing a fair game.
Good point, thanks!! I can't get too caught up in this; there are things I'd rather be learning about, so I need a limit. I'd like to think I can win, though this is probably just an anchoring fallacy on my part (I'm learning!)
Just because I would have been swayed by an absence of positive evidence doesn't mean everyone will be, even people who seem decently smart and open-minded with a high view of reason, like my old track coach and religion teacher. I just made a deal though, that I would read any book of his choice about the Historical Jesus (something I probably would have done anyway!) if he reads Rationality: AI to Zombies :)
I'm a long-time user of LW. My old account has ~1000 karma. I'm making this account because I would like it to be tied to my real identity.
Here is my blog/personal-workflowy-wiki. I'd like to have 20 karma, so that I can make cross-posts from here to the LW Discussion.
I'm working on a rationality power tool. Specifically, it's an open-source workflowy with revision control and general graph structure. I want to use it as a wiki to map out various topics of interest on LW. If anybody is interested in working on (or using) rationality power tools, please PM me, as I've spent a lot of time thinking about them, and can also introduce you to some other people who are interested in this area.
EDIT: First cross-post: Personal Notes On Productivity (A categorization of various resources)
EDIT: I've edited the LW wiki to make a list of LWers interested in making debate tools.
Hello,
I am a month long lurker who finally decided to make an account.
I'm 24, and am living as a US expat in Beijing right now. I have a BA in Economics from a top 5 university, where the most important thing I learned was just how little that actually meant. I got pretty disillusioned with academia, and I've only been able to start enjoying intellectual pursuits again in the last year or so; hence, it is nice to find a non-university community where I might be able to discuss interesting ideas without all of the self-important swagger.
I would say that the other important way my econ background influenced me is my rational decision making: I do not vote; I was involved in effective altruism (until I became an ethical nihilist); etc. I think I've experienced some significant emotional blunting from this, and have mixed feelings about it. Hopefully being in a community of similarly oriented people (and getting more information about typical outcomes) will help me work through whether this is something that I need to address or not.
I lean somewhat classical-liberal (or pro-market left of center, with significant room for government provisioning for market failure) at the moment, but lately I've fallen into a more libertarian heuristic, which I want to become more aware of and counteract, as I disagree with that political philosophy on several formal issues. Hopefully I can use the resources at LW to recalibrate on this issue in particular.
My interests are pretty broad:

- Public finance / policy
- Game theory / auction theory / voting theory (especially w.r.t. collective decision-making / policy)
- Epistemology (especially regress / the Münchhausen trilemma)
- Dynamics of social identity (especially the ethics of statistical discrimination)
- Aesthetics (especially w.r.t. visual art)
- Psychology and personal identity (especially antipsychiatry)
- Consciousness, continuity of experience, and personhood
- Literature (especially Latin American)
Additionally, I enjoy learning math, though I am not very talented at it (I was a single Algebra/Galois Theory class away from a math degree though). Recently, I've been going back through some old analysis / algebra / number theory books to give it another shot; I'm still bad at it, but it's nonetheless rewarding.
One of the things about LW that seems really awesome is the deep programming knowledge. I enjoyed the few programming classes I took, and look forward to learning more about its applications to modelling decision making.
Anyways, I look forward to engaging with you, and if anyone has anything they want to point me toward here, I'd love the tip.
See Vaniver on decision theory!
Hello, everyone!
I am a long-time lurker and reader of Less Wrong, and I have finally worked myself up to making an account and writing some comments. I am looking forward to participating in the discussions more, and hopefully writing some posts and contributing to the thought-bank here. So far, Less Wrong has been a great resource for me, helping me build a sturdier basis for my ideological framework and exposing me to some good new ideas to think about.
For a little bit about myself: I am 29 years old, Russian, with a bachelor's degree in Chemistry and Math and a Master's in Nuclear Chemistry from an American university. Currently I live in Russia, working as an instructor in IT / software development for a business analytics software company. The job is pretty much another step of school, only going into a "job experience" slot on the resume instead of the "education" one: we study a topic for a month, then we go and teach it to our developers. My first year was our company's software applications, then development and coding, and now I am on the databases part. Eventually, I am hoping to return to a more science-y sort of work, though.
Religion-wise, I am an atheist, formerly going through all kinds of interesting religious searches (maybe I will make a separate comment on the rationalist origin thread about that). Politics – wise, I find it hard to classify myself as going with any traditional views (call me an effective anarchist, maybe?). Or maybe I am hoping for a better set of political ideas to emerge someday in the future.
My interests are the following:
Reading everything I can get my hands on, preferably science and science-pop literature, fiction and science fiction.
Science and self-education. When I found Less Wrong, it sparked yet again my interest in the more arcane parts of IT, and I am currently working through the basics part of the MIRI research guide posted here, while also keeping up with my job-related applied IT studies. In the past, I found myself sometimes venturing into evolutionary theory (I'm still hoping to find time someday to study evolutionary algorithms and maybe program some fun simulation with evolving pseudo-life), the basics of quantum mechanics (well, that was in my school program), biology, and occasionally philosophy, religion, and applied ethics.
As for less science and reading-related interests – I enjoy camping, rafting, the general summery outdoors stuff. In my city, summer is short, so we try to squeeze as much goodness as possible out of it.
Anyways, I am looking forward to having some fun discussions here. Nice to meet you, guys!
Hello, everyone! I've been lurking for about a year and I've finally overcome the anxiety I encounter whenever I contemplate posting. More accurately, I'm experiencing enough influences at this very moment to feel pulled strongly to comment.
I've just tumbled to the fact that I may have an instinctive compulsion against the sort of signalling that's often discussed here and by Robin Hanson. In the last several hours alone I've gone far out of my way to avoid signalling membership in an ingroup or adherence to a specific cohort. Is this sort of compulsion common amongst LWers? (I'm aware that declaring myself an anti-signaller runs the risk of an accusation of signalling itself but whadayagonnado.)
I'm also very interested in how pragmatism, pragmaticism, and Charles Sanders Peirce form (if at all) the philosophical underpinnings of the sort of rationality that LW centers on. It seems like Peirce doesn't get nearly as much attention here as he should, but maybe there are good reasons for that.
Welcome! I myself recently dared to step in and become an active member here. Have you read Dewey and Wright Mills? In that case, what do you think about them?
Speaking for myself, (a) I am not good at playing social games, therefore I hate environments where things like signalling are the only important thing, and (b) joining any faction feels to me like indirectly supporting all their mistakes, which I would rather avoid.