Welcome to Less Wrong!
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, or how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.
If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you have your own blog or other online presence, please feel free to link it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread. Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.
You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you have any questions about karma or voting, please feel free to ask here.
If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top level post. By posting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.
A couple technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box. This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site.
(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)
Comments (1953)
Hello, my friends. I'm a Brazilian man, fully blind and gay...
I got to know Fanfiction.net, HPMOR, and LessWrong. I hope to learn more :)
Hey everyone,
My name is Owen, and I'm 17. I read HPMOR last year, but really got into the Sequences and additional reading (GEB, Thinking Fast and Slow, Influence) around this summer.
I'm interested in time management, with respect to dealing with distractions, especially with respect to fighting akrasia. So I'm trying to use what I know about how my own brain operates to create a suite of internalized beliefs, primers, and defense strategies for when I get off-track (or stopping before I get to that point).
Personally, I'm connected with a local environmental movement, which stems from a fear I had a few years ago that global warming was the largest threat to humanity. This was before I looked into other x-risks. I'm now evaluating my priorities, and I'd also like to bring some critical thinking to the environmental movement, where I feel some EA ideals would make things more effective (prioritizing some actions over others, examining the cost-benefits of actions, etc.).
Especially after reading GEB, I'm coming to realize that a lot of the things I hold central to my "identity" were rather arbitrarily decided and then maintained through a need to stay consistent. So I'm reevaluating my beliefs and assumptions (when I notice them) and asking whether they are actually things I would like to maintain. A lot of this ties back to self-improvement with regard to time management.
In day-to-day life, it's hard to find others who have a similar info diet/reading background to mine, so I'm considering making it a goal to get more friends and family interested in rationality, especially my (apparently) very grades-driven classmates. I feel this would lead to more constructive discussions and a better ability to look at the larger picture for most people.
Finally, I also perform close-up coin magic, which isn't too relevant to most aspects of rationality, but certainly looks pretty.
I look forward to sharing ideas and learning from you all here!
Welcome!
snip
Hi! I'm Gabriel, and I'm a 20-year-old medical student in London. I (like many of you, maybe) found my way here through HPMOR. Having spent the last few years of university mentally stagnating due to the nature of my studies, I found this site and its resources a breath of fresh air. I'm currently working my way through the Sequences, where one comment led me to this thread -- apologies if commenting on old posts is frowned upon.
I was born to an educated Muslim family, and until recently I had been blindly following the beaten track, though with little interest in the religion itself. It is only now that I have begun to think about what I know, and how I know it, that I am forcing myself to adopt an objective and skeptical standpoint. Over the next 12 months or so, I plan to fully examine the texts and writings of both Islam and its opponents, and aim to come to an unbiased, rational conclusion. It is my hope that I can update my map to reflect the true territory, and I must thank Mr. Yudkowsky for being a catalyst for my intellectual re-awakening. Although perhaps I will get some flak when I try to give him some constructive criticism: I cannot be the only person in the position I find myself in, wanting to examine my religion and come to a true conclusion. It is abundantly helpful to read arguments for both sides which are logical and well reasoned...but most of all, courteous. There are parties far more guilty than Mr. Yudkowsky, but it really is horrible when atheist writings have a strong undertone of contempt for those who follow a religion. I am indeed delighted that those atheists have given the matter some thought and come to conclusions that satisfy them (and indeed, as Mr. Yudkowsky mentions above, consider theism an open-and-shut case!), but perhaps for those younger students of rationality such as myself, it would be wonderful if we could read these writings without being looked down upon as mind-numbingly stupid. Despite this, I very much enjoy reading Mr. Yudkowsky's writings and I look forward to reading much more!
I suppose I feel very comfortable with the anonymity provided by the internet and since I have given relatively little information about myself on a relatively smaller website I doubt anyone I know will see this and cry out in horror that I could potentially leave my faith. I would love to have some discussions with anyone on this website that has been in a similar position to me, although I have noticed a bias towards discussions around Christianity as I suppose many users here are American and that is a major force for you guys over the pond.
I think above all else, the reason I'm so happy to have found this intellectual sanctuary is because I don't have much else other than my trusty mind. Unlike my peers, chasing girls would be a fruitless effort, and small talk always seemed a bit pointless. Books, learning and thinking have always been my allies and I cannot wait to read about the biases I can try to eliminate. At the end of the day, if you have only one treasure in life it would be prudent to look after and improve it wherever possible.
Well met gents, and again I apologise if I shouldn't be commenting on a very old post!
We don't really mind deadposting here. You might consider this a useful resource in your examination of the Quran.
Hello all. I'm Lauryn, a 15-year-old Christian -- and a Bayesian thinker. Now, I'm sure that I'm going to get criticized because I'm young and Christian, but I understand a lot more than you might first think (and a lot less than I'd like to). But let me finish first, yeah? I found LessWrong over a year ago and just recently felt that I just might fit in just enough to begin posting. I'd always considered myself clever (wince) and never really questioned myself or my beliefs, just repeated them back. But then I read Harry Potter and the Methods of Rationality, and was linked over here... And you can guess most of the story from there on. I devoured the Sequences in less than a month, and started reading gobs and gobs of books by people I'd never heard of -- but once I did, I realized that they were everywhere. Freud, Feynman, Orwell... And here I am, a good year later, a beginning rationalist. Before I get attacked, I'd like to say that I have seriously questioned my religion (as is implied above), and still came back to it. I do have ways that I believe Christianity could be disproved (I already posted some here), and I have seen quite a bit of evidence for evolution. All right, NOW you may attack me.
Hey. Me too.
Just so you know you're not the only one. I think I've seen one or two others around as well.
Hello. My name is not, in fact, Gloria. My username is merely (what I thought was) a pretty-sounding Latin translation of the phrase "the Glory of the Stars", though it would actually be "Gloria Siderum" and I was mixing up declensions.
I read Three Worlds Collide more than a year ago, and recently re-stumbled upon this site via a link from another forum. Reading some of Eliezer's sequences, I realized that most of my conceptions about the world were extremely fuzzy, and they could be better said to bleed into each other than to tie together. I realized that a large amount of what I thought of as my "knowledge" is just a set of passwords, and that I needed to work on fixing that. And I figured that a good way to practice forming coherent, predictive models and being aware of what mental processes may affect those models would be to join an online community in which a majority of posters would have read a good number of articles on bias, heuristics, and becoming more rational, and will thus be equipped to some degree to call out flaws in my thinking.
Hello, please call me 'Taily' (my moniker does not refer to a "tail" or a cartoon character). I'm an atypical 30-year-old psychology student, still working toward my PhD. I also spend a significant amount of time on thinking, writing, and gaming. Among other things.
One reason I am joining this community is my mother, oddly. She is a stay-at-home mom with few (if any) real life friends. She interacts on message boards. I...well I don't want to be like that at all honestly, and I've only on occasion been a part of a message-board community. But I recognize the value of social exchange and community, and as my real-life friends are limited by time and location in how we can meet, this forum may be a good supplement.
My MAIN reason though - why did I choose this 'place'? It seems Very Interesting. I've read a bunch of EY's writings (the five PDF files?), and I've gotten to the point where I've wanted to interact - to ask questions and give opinions and objections - and I'm hoping that's some of what this message board is about.
Also to note:
-I first became acquainted with this community via Harry Potter and the Methods of Rationality, as Harry Potter fan-fiction is a 'guilty pleasure' of mine.
-I am not an atheist, although I personally cannot stand organized religion. I very much respect the idea of coming to conclusions and developing opinions without the aid of "religion" or "spirituality" though.
-I have read a significant amount of each of those aforementioned PDF files - fun theory and utopias, quantum physics, Bayesian, all that - but I'm not done yet (and I don't yet get quantum mechanics nearly as much as I would like to).
-I consider myself rather well-versed in psychology and associated theories and I am sure I have something to contribute in that area. I wish I were an expert on all the cognitive theories and heuristics/biases, but I'm not (yet). But that's one reason I became interested specifically in EY's writings and this message board.
-One of my main personal philosophies is about doubt and possibility. Nothing is 100% certain, and considering the way the universe is made, I have trouble believing anything we 'know' is 100% accurate. Conversely, I don't believe anything is 100% inaccurate. So...I tend to hedge a lot.
-I think that the general use of statistics in current psychological research is flawed, and I'm looking to learn more about how to refine psychology research practices, such as by using Bayes' Theorem and all that.
-That's probably more than enough of an introduction for now. I hope I find a place to fit in!
Hello! I'm a first-year graduate student in pure mathematics at UC Berkeley. I've been reading LW posts for a while but have only recently started reading (and wanting to occasionally add to) the comments. I'm interested in learning how to better achieve my goals, learning how to choose better goals, and "raising the sanity waterline" generally. I have recently offered to volunteer for CFAR and may be an instructor at SPARC 2013.
I've read your blog for a long time now, and I really like it! <3 Welcome to LW!
Thanks! I'm trying to branch out into writing things on the internet that aren't just math. Hopefully it won't come back to bite me in 20 years...
Hi all. I'm a scientist (postdoc) working on optical inverse problems. I got to LW through the quantum sequence, but my interest lies in probability theory and how it can change the way science is typically done. By comparison, cognitive bias and decision theory are fairly new to me. I look forward to learning what the community has to teach me about these subjects.
In general, I'm startled at the degree to which my colleagues are ignorant of the concepts covered in the sequences and beyond, and I'm here to learn how to be a better ambassador of rationality and probability. Expect my comments to focus on reconciling unfamiliar ideas about bias and heuristics with familiar ideas about optimal problem solving with limited information.
I'll also be interested to interact with other overt atheists. In physics, I'm pretty well buffered from theistic arguments, but theism is still one of the most obvious and unavoidable reminders of a non-rational society (that and Jersey Shore?). In particular, I'm expecting a son, and I would love to hear some input about non-theistic and rationalist parenting from those with experience.
By the way, I wonder if someone can clear something up for me about "making beliefs pay rent." Eliezer draws a sharp distinction between falsifiable and non-falsifiable beliefs (though he states these concepts differently), and constructs stand-alone webs of beliefs that only support themselves.
But the correlation between predicted experience and actual experience is never perfect: there's always uncertainty. In some cases, there's rather a lot of uncertainty. Conversely, it's extremely difficult to make a statement in English that does not contain ANY information regarding predicted or retrodicted experience. In that light, it doesn't seem useful to draw such a sharp division between two idealized kinds of beliefs. Would Eliezer assign value to a belief based on its probability of predicting experience?
How would you quantify that? Could we define some kind of correlation function between the map and the territory?
I always understood the distinction to be about when it was justifiable to label a theory as "scientific." Thus, a theory that in principle can never be proven false (Popper was thinking of Freudian psychology) should not be labeled as a "scientific theory."
The further assertion is that if one is not being scientific, one is not trying to say true things.
Thanks Tim.
In the post I'm referring to, EY evaluates a belief in the laws of kinematics based on predicting how long a bowling ball will take to hit the ground when tossed off a building, and then presumably testing it. In this case, our belief clearly "pays rent" in anticipated experience. But what if we know that we can't measure the fall time accurately? What if we can only measure it to within an uncertainty of 80% or so? Then our belief isn't strictly falsifiable, but we can gather some evidence for or against it. In that case, would we say it pays some rent?
My argument is that nearly every belief pays some rent, and no belief pays all the rent. Almost everything couples in some weak way to anticipated experience, and nothing couples perfectly.
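One way to make "pays some rent" concrete is with a likelihood ratio. The numbers below are my own made-up illustration (a 45 m drop and a hypothetical rival theory predicting a 5-second fall), not anything from the post: even a very noisy stopwatch reading still shifts probability between the two predictions, rather than cleanly falsifying either.

```python
import math

def gaussian_likelihood(observed, predicted, sigma):
    """Probability density of one noisy observation under a prediction."""
    return math.exp(-((observed - predicted) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

h, g, sigma = 45.0, 9.8, 1.0          # drop height, gravity, stopwatch error (all assumed)
t_kinematics = math.sqrt(2 * h / g)    # kinematics predicts ~3.03 s
t_rival = 5.0                          # a hypothetical rival theory's prediction

observed = 3.4                         # one noisy stopwatch reading

ratio = (gaussian_likelihood(observed, t_kinematics, sigma)
         / gaussian_likelihood(observed, t_rival, sigma))
print(f"likelihood ratio (kinematics : rival) = {ratio:.2f}")
```

A ratio above 1 means the sloppy measurement still counted as evidence for kinematics; a more accurate stopwatch (smaller sigma) would make the same reading far more decisive. That is the sense in which the belief pays *some* rent even when falsification is off the table.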
I think you are conflating the issue of falsifiability with the issue of instrument accuracy. Falsifiability is just one of several conditions for labeling a theory as scientific. Specifically, the requirement is that a theory must detail in advance what phenomena won't happen. The theory of gravity says that we won't see a ball "fall" up or spontaneously enter orbit. When more specific predictions are made, instrument errors (and other issues like air friction) become an issue, but that is not the core concern of falsifiability.
For example, Karl Popper was concerned about the mutability of Freudian psychoanalysis, which seemed capable of explaining both an occurrence and its negation without difficulty. By contrast, the theory of gravity standing alone admits that it cannot explain why an object accelerates toward Earth at a rate other than 9.8 m/s^2. Science as a whole has explanations, but gravity doesn't.
Committing to falsifiability helps prevent failure modes like belief in belief.
There are a couple things I still don't understand about this.
Suppose I have a bent coin, and I believe that P(heads) = 0.6. Does that belief pay rent? Is it a "floating belief?" It is not, in principle, falsifiable. It's not a question of measurement accuracy in this case (unless you're a frequentist, I guess). But I can gather some evidence for or against it, so it's not uninformative either. It is useful to have something between grounded and floating beliefs to describe this belief.
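A sketch of how that bent-coin belief still gathers evidence (my own simulation, with an assumed rival hypothesis of a fair coin): no finite run of flips strictly refutes "P(heads) = 0.6", but each flip nudges the odds for or against it relative to "P(heads) = 0.5", so the belief is evidence-bearing rather than floating.

```python
import math
import random

def log_bayes_factor(flips, p_a=0.6, p_b=0.5):
    """Log-likelihood ratio of 'bent, p=0.6' vs 'fair, p=0.5' over a flip sequence."""
    total = 0.0
    for heads in flips:
        total += math.log((p_a if heads else 1.0 - p_a) /
                          (p_b if heads else 1.0 - p_b))
    return total

random.seed(0)                  # reproducible simulated flips
true_p = 0.6                    # the territory: the coin really is bent
flips = [random.random() < true_p for _ in range(10000)]

log_bf = log_bayes_factor(flips)
print(f"heads: {sum(flips)}/10000, log Bayes factor (0.6 vs 0.5): {log_bf:.1f}")
```

A large positive log Bayes factor means the flips strongly favored the bent hypothesis; neither hypothesis is ever "proven", the odds simply shift, which seems to be the in-between status the comment is pointing at.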
Second, when LWers talk about beliefs, or "the map," are they referring to a model of what we expect to observe, or how things actually happen? This would dictate how we deal with measurement uncertainties. In the first case, they must be included in the map, trivially. In the second case, the map still has an uncertainty associated with it that results from back-propagation of measurement uncertainty in the updating process. But then it might make sense to talk only about grounded or floating beliefs, and to attribute the fuzzy stuff in between to our inability to observe without uncertainty.
Your distinction makes sense - I'm just not sure how to apply it.
Strictly speaking, no proposition is proven false (i.e. probability zero). A proposition simply becomes much less likely than competing, inconsistent explanations. To speak that strictly, falsifiability requires the ability to say in advance what observations would be inconsistent (or less consistent) with the theory.
Your belief that the coin is bent does pay rent - you would be more surprised by 100 straight tails than if you thought the coin was fair. But both P=.6 and P=.5 are not particularly consistent with the new observations.
Map & Territory is a slightly different issue. Consider the toy example of the colored balls in the opaque bag. Map & Territory is a metaphor to remind you that your belief in the proportion of red and blue balls is distinct from the actual proportion. Changes in your beliefs cannot change the actual proportions.
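The bag metaphor can be made concrete with a toy Bayesian update (my own illustration, with hypothetical contents): the territory is the bag's actual mix of balls; the map is a posterior over hypotheses about that mix. Drawing balls updates the map and never touches the bag.

```python
from fractions import Fraction

territory = ["red", "red", "red", "blue"]   # actual contents (hidden from the agent)

# Map: uniform prior over hypotheses "k of the 4 balls are red", k = 0..4.
posterior = {k: Fraction(1, 5) for k in range(5)}

def update(posterior, draw_is_red):
    """Bayes update of the map on one draw (with replacement)."""
    new = {}
    for k, p in posterior.items():
        likelihood = Fraction(k, 4) if draw_is_red else Fraction(4 - k, 4)
        new[k] = p * likelihood
    total = sum(new.values())
    return {k: v / total for k, v in new.items()}

for draw in ["red", "red", "blue"]:          # three observed draws
    posterior = update(posterior, draw == "red")

best = max(posterior, key=posterior.get)
print(f"most probable hypothesis after 3 draws: {best} of 4 red")
# -> most probable hypothesis after 3 draws: 3 of 4 red
```

Note that `territory` is never read by `update`: the beliefs converge toward the true proportion only through observations, which is the distinction the metaphor is meant to preserve.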
When examining a belief, ask "What observations would make this belief less likely?" If your answer is "No such observations exist" then you should have grave concerns about the belief.
Note the distinction between:
Observations that would make the proposition less likely
Observations I expect
I don't expect to see a duck have sex with an otter and give birth to a platypus, but if I did, I'd start having serious reservations about the theory of evolution.
I found this extremely helpful as well, thank you.
That's very helpful, thanks. I'm trying to shove everything I read here into my current understanding of probability and estimation. Maybe I should just read more first.
Yes. As a more general clarification, making beliefs pay rent is supposed to highlight the same sorts of failure modes as falsifiablility while allowing useful but technically unfalsifiable beliefs (e.g., your example, some classes of probabilistic theories).
TL;DR: I found LW through HPMoR, read the major sequences, read stuff by other LWers including the Luminosity series, and lurked for six months before signing up.
My name, as you can see above if you don't have the anti-kibitzing script, is Daniel. My story of how I came to self-identify as a rationalist, and then how I later came to be a rationalist, breaks down into several parts. I don't remember the order of all of them.
Since well before I can remember (and I have a fairly good long-term memory), I've been interested in mathematics, and later science. One of my earliest memories, if not my earliest, is of me, on my back, under the coffee table (well before I could walk). I had done this multiple times, I think usually with the same goal, but one time in particular sticks in my memory. I was kicking the underside of the coffee table, trying to see what was moving. This time, I moved it, got out, and saw that the drawer of the coffee table was open; this caused me to realize that this was what was moving, and I don't think I crawled under there again.
Many years later, I discovered Star Trek TNG, and from that learned a little about Star Trek. I wanted to be more rational from the role models of Data and Spock, and I did not realize at the time how non-rational Spock was. It was very quickly, however, that I realized that emotions are not the opposite of logic, and the first time I saw the TOS episode that Luke references [here](http://facingthesingularity.com/2011/why-spock-is-not-rational/), I realized that Spock was being an idiot (though at the time I thought it was unusually idiotic, not standard behavior; I hadn't and still haven't seen much of the original series). It was around this time that I thought I myself was "rational" or "logical".
Of course, it wasn't until much later that I actually started learning about rationalism. Around Thanksgiving 2011, I was on fanfiction.net looking for a Harry Potter fanfic I'd seen before and liked (I still haven't found it) when I stumbled upon Harry Potter and the Methods of Rationality. I read it, and I liked it, and it slowly took over my life. I decided to look for other works by that author, and went to the link to Less Wrong because it was recommended (not yet realizing that the Sequences were written by the same person as HPMoR). Since then, I've read the Sequences and most other stuff written by EY (that's still easily accessible and not removed), and it all made sense. I finally understood that yes, in fact, I and the other "confused" students WERE correct in that probability class where the professor said that "the probability that this variable is in this interval" didn't exist, I noticed times when I was conforming instead of thinking, and I noticed some accesses of cached thoughts. At first I was a bit skeptical of the overtly atheistic bit (though I'd always had doubts and was pretty much agnostic-though-I-wouldn't-admit-it), until I read the articles about how unlikely the hypothesis of God is and thought about them.
I did not know much about quantum mechanics when I read that sequence, but I had heard of "wavefunction collapse" and had not understood it, and I realized fairly quickly how that was an unnecessary hypothesis. When I saw one of the cryonics articles taking the idea seriously (I'm cryocrastinating, trying to get my parents to sign up), I thought "Oh, duh! I should have seen that the first time I heard of it, but I was specifically told that the person involved was an idiot and it didn't work, so I never reevaluated" (later I remembered my horror at Picard's attitude in the relevant TNG episode; I've always only believed in the information-theoretic definition of "death").
After I read the major sequences, I read some other stuff I found through the Wiki and through googling "Less Wrong __" for various things I wanted the LW community opinion on. I found my favorite LW authors (Yvain, Luke, Alicorn, and EY) and read other things by them (Facing the Singularity and Luminosity). I subscribed to the RSS feed (I don't know how that'll work when I want to strictly keep to anti-kibitzing), and I now know that I want to help SIAI as much as possible (I was planning to be a computer scientist anyway); I'm currently reading through a lot of their recommended reading. I'm also about to start GEB, followed by Jaynes and Pearl. I plan to become a lot more active comment-wise, but probably not post-wise for a while yet. I may even go to one of the meetups if one is held somewhere I can get to.
Now we've pretty much caught up to the present. Let's see... I read some posts today, I read Luke's Intuitive Explanation to EY's Intuitive Explanation, I found an error in it (95% confidence), I sent him an email, and I decided to sign up here. Now I'm writing this post, and I'm supposed to put some sort of conclusion on it. I estimate that the value of picking a better conclusion is not that high compared to the cost, so I'll just hit the submit button after this next period.
Edit: Wow, I just realized how similar my story is to parts of BecomingMyself's. I swear we aren't the same person!
Hi Daniel, do you follow Yvain's blog? Also, the term is rationality, not rationalism. I wouldn't nitpick except that rationalism already refers to a fairly major thing in mainstream philosophy.
I recommend learning QM from textbooks, not blogs. This applies to most other subjects, as well.
I did not mean to imply that I had actual knowledge of QM, just that I had more now than before. If I was interested in understanding QM in more detail, I would take a course on it at my college. It turns out that I am so interested, and that I plan to take such a course in Spring 2013.
I also know that there are people on this site (apparently a greater percentage than on similar issues) who disagree with EY about the Many Worlds Interpretation. I have not been able to follow their arguments, because the ones I have seen generally assume a greater knowledge of quantum mechanics than I possess. Therefore, MWI is still the most reasonable explanation that I have heard and understood. Again, though, that means very little. I hope to revisit the issue once I have some actual background in the subject.
EDIT: To clarify, "similar issues" means issues where the majority of people have one opinion, such as theism, the Copenhagen Interpretation, or cryonics not being worth considering, while Less Wrong's general consensus is different.
I’m 20, male and a maths undergrad at Cambridge University. I was linked to LW a little over a year ago, and despite having initial misgivings about philosophy-type stuff on the internet (and off, for that matter), I hung around long enough to realise that LW was actually different from most of what I had read. In particular, I found a mix of ideas that I’ve always held (and been alone amongst my peers in doing so), such as making beliefs pay rent; and new ones that were compelling, such as the conservation of expected evidence post.
I’ve always identified as a rationalist, and was fortunate enough to be raised to a sound understanding of what might be considered ‘traditional’ rationality. I’ve changed the way I think since starting to read LW, and have dropped some of the unhelpful attitudes that were promoted by status-warfare at a high achieving all-boys school (you must always be right, you must always have an answer, you must never back down…)
I’m here because the LW community seems to have lots of straight-thinking people with a vast cumulative knowledge. I want to be a part of and learn from that kind of community, for no better reason than I think I would enjoy life more for it.
Greetings!
My name is Dimitri Karcheglo, and I'm 22. I live in BC, Canada, having immigrated here from Odessa, Ukraine in 1998. I speak Russian as my first language though, not Ukrainian. Most of you likely don't know, but Odessa is a very Russian-speaking city in Ukraine.
I've been kinda lurking for a bit, but not very extensively or very consistently. I was directed here originally via HPMoR, which was recommended by a friend. I've known about this site for probably around a year. Originally I had read through the Map and Territory sequence and the Mysterious Answers to Mysterious Questions sequence. After that I kind of didn't come to this site for a while.
Well, I'm back now! I'm re-reading from the start since I have forgotten a lot. I'm also planning to go a lot deeper into LW this time around, and probably keep up with it on a day-to-day basis in the future. I am very much interested in improving my thinking, and hope to gain a lot of that here. I don't come very prepared like many people I see posting here. I have no degrees in programming, physics, mathematics, or whatnot.
I'm currently studying civil engineering, about to enter my second year. I've done one year in computer programming and may do some self-education in this field down the line to improve my base. The motivation to do this likely won't show up for a while, though.
You likely won't be seeing me posting much at all for quite a while, until I familiarize myself with the understandings presented on this site quite a bit more. I do hope to raise enough money next year to go visit one of these rationality camps, as I hope to have a better understanding of the subject by then, but with costs of education being what they are, I'm doubtful.
Hello!
I'm an 18-year-old American physics undergraduate student (rising sophomore). I came here after reading HPMOR and because I think that being rational will improve my ability as a scientist (and now I've realized, though I guessed it after reading Surely You're Joking, Mr. Feynman, that I need to get better at not guessing the teacher's password). I know a bit of pure mathematics but little of the cognitive sciences (take this category as you will; if you think something might be in this category, then I likely don't know much more about it than the core Sequences plus layperson's knowledge).
Also, please yell at me if I make claims about history and give no sources. (One of my friends growing up was a huge history buff, so I have a bunch of half-remembered historical facts in my head (mostly WWII and the Roman era) that I tend to assume are not only true but undisputed and common knowledge.) Even in informal settings I should link to, at the least, Wikipedia. (This also ensures that I am not making false claims.)
Hello!
I'm an 18-year-old Irish high school student trying to decide what to do after I leave school. I want to create as much happiness and prevent as much suffering as I can, but I'm unsure how to do this. I'm mostly here because I think reducing x-risk may be a good idea, though to be honest there are other things which seem better -- but anyway, I hope to talk to people here about this!
Some of you may be members of 80000hours, I imagine, so here's me on 80k: http://80000hours.org/members/ruairi-donnelly
Hey, welcome to lesswrong.com, fellow Irish person.
Your name means "hero" in Irish! I have actually used the same username as you, on YouTube for example! Where do you live, if you don't mind me asking? :)
I live in D4 actually. My Irish has faded drastically but then again it never really was that good, tá brón orm.
Sure, that's grand. Personally I like speaking it and go to an Irish school, but I think it's pretty shocking that the government spends half a billion a year trying to keep it alive (and it's not working), and apparently it takes up 15% of one's schooling time.
I live in Bray, Wicklow :) Maybe we can make a meet up sometime :)
An bhfuil Gaeilge agat? Ha, I bet we know each other in some way.
I'm a 24-year-old PhD student of molecular biology. I arrived here trying to get at the many-worlds vs. Copenhagen debate as a nonspecialist, and as part of a sustained campaign of reading that will allow me to tell a friend who likes Hegel where to shove it. I'm also here because I wanted to reach a decision about whether I really want to do biology; if not, whether I should quit; and if I leave, what I actually want to do.
I'm a 20-year-old mathematics/music double major at NYU. Mainly here because I want to learn how to wear Vibrams without getting self-conscious about it.
I get nothing but positive social affect from Ninja Zemgears. http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=zemgear
Cheaper than Vibrams, more comfortable, less durable, less agile, much friendlier looking.
Those combined with some toe socks and I have exactly what I want. I might actually order these... Thanks!
They actually work well enough with normal socks, scrunched in to separate the big toe.
The ninja shoes are much less abominable than Vibrams.
Hi there!
This might help: http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Medvec.Sav.pdf
Is this some kind of LW hazing, linking to academic papers in an introduction thread? (I joke, this looks super interesting).
It was either that or the Psychology Today article. (Pretty sure Psychology Today is where I learned about the concept, but googling found the paper.)
Hello. I've been browsing articles that show up on the front page for about a year now. Just recently started going through the sequences and decided it would be a good time to create an account.
Hi everyone, I've been reading LW for a year or so, and met some of you at the May minicamp. (I was the guy doing the swing dancing.) Great to meet you, in person and online.
I'm helping Anna Salamon put together some workshops for the meetup groups, and I'll be posting some articles on presentation skills to help with that. But in order to do that, I'll need 5 points (I think). Can you help me out with that?
Thanks
Mike
Yay 5 points! That was quick. Thanks everyone.
Hi everybody,
I've been lurking here for maybe a year and joined recently. I work as an astrophysicist and I am interested in statistics, decision theory, machine learning, cognitive and neuro-psychology, AI research and many others (I just wish I had more time for all these interests). I find LW to be a great resource and it introduced me to many interesting concepts. I am also interested in articles on improving productivity and well-being.
I haven't yet attended any meet-up, but if there was one in Munich I'd try to come.
Hey, I've been an LW lurker for about a year now, and I think it's time to post here. I'm a cryonicist, rationalist and singularity enthusiast. I'm currently working as a computer engineer and I'm thinking maybe there is more I can do to promote rationality and FAI. LW is an incredible resource. I have a mild fear that I don't have enough rigorous knowledge about rationality concepts to contribute anything useful to most discussion.
LW has changed my life in a few ways, but the largest are becoming a cryonicist and becoming polyamorous (I naturally leaned toward this, though). I feel like I am in a one-way friendship with EY; does anyone else feel like that?
I am also in a one-way friendship with EY.
Hello,
I am a world citizen with very little sense of identification or labelling. Perhaps "Secular Humanist" could be my main affiliation. As for belonging to nations and companies and teams... I don't believe in this thrust-upon, unchosen unity. I'm a natural expatriate. And I believe this site is awesomeness incarnate.
Though some lesswrongers really seem to go out of their way to make their readers feel stupid... I'd guess that's the whole point, right?
I’ll introduce myself by way of an argument against material reductionism. This is an argument borrowed from Plato’s dialogue “Euthyphro”. I don’t intend this to be a knock down critique or anything. Rather, I think I might learn something about the idea of materialism (about which I’m pretty confused) from your replies should I receive any. Here goes:
Tom is carrying a bucket. There are two facts here: 1) that Tom is carrying the bucket, and 2) that the bucket is carried by Tom. (1) is something like the ‘active fact’, and (2) is something like the ‘passive fact’.
We’re material reductionists, so any true proposition is true because some material state of affairs obtains, and this is all it means to be a fact. But both fact (1) and fact (2) refer to the same state of affairs. Reduced to a material state of affairs (say the position and velocity of the molecules in the bucket and in Tom), we can’t distinguish between fact (1) and fact (2).
This is a problem because fact (1) and fact (2) are different facts: Tom is not carrying the bucket because the bucket is carried by Tom. Rather, the bucket is carried by Tom because Tom is carrying the bucket. Fact (1) has explanatory priority over fact (2).
But since there is no way to distinguish the two facts as material states of affairs, there must be more to fact (1) and fact (2) than the material state of affairs to which they refer.
What do you think? I’ve no doubt we can poke holes in this argument, but I need some help doing so.
Welcome to LW!
The key part of your argument is:
Why do you think this? I do not have this intuition at all. For me, if both (1) and (2) describe exactly the same material state of affairs, no more no less (rather than, e.g. (1) carrying a subtle connotation that the carrying is voluntary) then I would say that the difference between them is only rhetorical, and neither explains the other one more than vice versa.
Thanks for the welcome, and for the reply. My whole argument turns on the premise that the two facts are distinctive because one has explanatory priority over the other, so I'll try to make this a little clearer.
So, here are three sets of facts. The first set involves no explanatory priority, in the second the active fact is prior, and in the last the passive fact is prior.
A) Tom is taller than Ralph; Ralph is shorter than Tom.
B) Tom praised Steve; Steve was praised by Tom.
C) Tom inadvertently offended Mary; Mary was offended by Tom inadvertently.
In the first case, of course, the facts are perfectly interchangeable. In the second, it seems to me, the active fact explains the passive fact. I mean that it would sound odd to say something like "It is because Steve was praised that Tom praised him" but it seems perfectly natural to say "It is because Tom praised him that Steve was praised."
And in the last case, I think we are all familiar with the fact that Tom can hardly explain to Mary that he didn't try to offend her, and so she was not offended. Tom offended Mary because she was offended. Mary's being offended explains Tom's inadvertent offending.
Is that convincing at all? I know my examples of explanatory priority are pretty far from billiard ball examples, etc. but maybe the point can be made there as well. Let me know what you think.
Does phrasing the state of affairs as (2) instead of (1) have any effect on your anticipations?
If not, they're the same fact.
The article to which you refer presents a convincing case, but I think it's probably inconsistent with a Tarskian semantic theory of truth. (ETA: assuming it aims at defining truth, or at laying out a criterion for the identity of facts.) We would have to infer from Tom's carrying the bucket to the bucket's being carried by Tom, since we couldn't offer the Tarskian sentence "'Tom is carrying the bucket' iff the bucket is being carried by Tom" up as a definition of the truth of "Tom is carrying the bucket."
I can see Eliezer's point on an epistemological level, but what theory of truth do we need in order to understand anticipations as bearing on the identity of facts themselves?
Suppose we say simply that an identical set of anticipations makes two facts identical. Now suppose that I'm working in a factory in which I must crack red and blue eggs open to discover the color of the yolk (orange in the case of red eggs, green in the case of blue). But suppose also that all red, orange-yolked eggs are rough to the touch, and all blue, green-yolked eggs are smooth. The redness and the roughness of an egg will lead to an identical set of anticipations (the orangeness of the yolk). But we certainly can't say that the redness and the roughness of an egg are the same fact, since they don't even refer to the same material state of affairs.
Apparently we're speaking across a large inferential distance. I don't know about Tarskian sentences, so I can't comment on those, but I can clarify the 'anticipation controller' idea.
Basically, you're defining 'anticipation' more narrowly than what Eliezer meant by the term.
If you tell me that an egg is rough, I will anticipate that, if I rub my fingers over it, my skin will feel the sensations I associate with rough surfaces.
If you tell me that an egg is red, I will anticipate that when I look at it, the cells in my retina that are sensitive to long-wavelength radiation will be excited more than the other cells in my retina.
Clearly, these are different anticipations, so we say that redness and roughness are two different facts.
If you say to me, 'Tom is carrying a bucket', I anticipate that if I were to look in Tom's direction, I would see him carrying a bucket. If you say to me 'a bucket is carried by Tom', I anticipate that if I were to look in Tom's direction, I would see... him carrying a bucket. In other words, whether you phrase it as (1) or (2), my anticipations are exactly the same, and so I claim they're the same fact.
But you seem to be telling me that not only are they different facts, but somehow one is more fundamental than the other, and I have no idea what you mean by that.
Thanks for clarifying the point about anticipations, that was very helpful and I'll have to give it more thought. I read Eliezer's article again, and while I don't think his intention was to give an account of the identity of facts, he does mention that if we're arguing over facts with identical anticipations, we may be arguing over a merely semantic point. That's very possibly what's going on here, but let me try to defend the idea that these are distinct facts one last time. If I cannot persuade you at all, I'll reconsider the worth of my argument.
In my comment to Alejandro1, I mentioned three sets of facts. I'll pare down that point here to its simplest form: the relationship between 'X is taller than Y' and 'Y is shorter than X' is different than the relationship between 'X carries Y' and 'Y is carried by X'. This difference is in the priority of the former and the latter fact in each set. In the case of taller and shorter, there is no priority of one fact over the other. They really are just different ways of saying the same thing.
In the case of carrying and being carried, there is a priority. Y's being carried is explained by X's carrying. Y is being carried, but because X is carrying it. It is not true that X is carrying because Y is being carried. In other words, X is related to Y as agent to patient (I don't mean agency in an intentional sense; this would apply to fire and what it burns). If we try to treat 'X carries Y' and 'Y is carried by X' as involving no explanatory priority (if we try to treat them as the same fact), we lose the explanatory priority, in this case, of agent over patient.
An example of this kind of explanatory priority (in the other direction) might be this set: 'A falling tree kills a deer' and 'a deer is killed by a falling tree'. Here, I think the explanatory priority is with the patient. It is only because a deer is such as to be killed that a tree could be a killer. We have to explain the tree's killing by reference to the deer's being killed. If the tree fell on a deer statue, there would be no explanatory priority.
But maybe my confusion is deeper, and maybe I'm just getting something wrong about the idea of a cause. Thanks for taking the time.
Apparently you're working in something that's akin to a mathematical system... you start with a few facts (the ones with high 'explanatory priority') and then you derive other facts (the ones with lower 'explanatory priority'). Which is nice and all, but this system doesn't really seem to reflect anything in reality. In reality, a deer getting killed by a tree is a tree killing a deer is a deer getting killed by a tree.
Well, I'm not intentionally trying to work with anything like a mathematical system. My claim was just that if by 'in reality' we mean 'referring to basic material objects and their motions', then we lose the ability to claim any explanatory priority between facts like 'X carries Y' and 'Y is carried by X'. Y didn't just get itself carried; X had to come along and carry it. X is the cause of Y's being carried.
But all that hinges on convincing you that there is some such explanatory priority, which I haven't done. I think perhaps my argument isn't very good. Thanks for the discussion, at any rate.
An interesting outside perspective on AspiringKnitter: http://www.reddit.com/r/atheism/comments/nzwtv/a_very_strange_discussion_with_an_originally/
I am a video game developer. I find most of this site fairly interesting, although once in a while I disagree with the description of some behaviour as irrational, or with the explanation projected upon that behaviour (when I happen to see a pretty good reason for it, perhaps strategic, or as a matter of general policy/cached decision).
Uh...uhm...hello?
Hi!
Oh, hello. I've posted a couple of times, in a couple of places, and those of you who have spoken with me probably know that I am one: a novice, and two: a bit of a jerk.
I'm trying to work on that last one.
I think cryonics, in its current form, is a terrible idea, I am a (future) mathematician, and am otherwise divergent from the dominant paradigm here, but I think the rest of that is for me to know, and you to find out.
What do you think of cremation in its current form?
I think cryonics is a terrible idea, not because I don't want to preserve my brain until the tech required to recreate it digitally or physically is present, but because I don't think cryonics will do the job well. Cremation does the job very, very badly, like trying to preserve data on a hard drive by melting it down with thermite.
This obviously invites the conclusion that cryonics is a terrible idea in the same sense that democracy is the worst form of government.
Are you saying that cryonics is not perfect, but it is the best alternative?
I'm not sure I understand your point. I'll read your link a few more times, just to see if I'm missing something, but I don't quite get it now.
Just referring to the quote:
Ah, I see. I just don't think that cryonics significantly improves the chances of actually extending one's life span, which would be similar to saying that democracy is not significantly better than most other political systems.
What do you see as the limiting factors?
* The technical ability of current best-case cryonics practice to preserve brain structure?
* The ability of average-case cryonics to do the same?
* The risk of organizational failure?
* The risk of larger-scale societal failure?
* Insufficient technical progress?
* Runaway unfriendly AI?
All of the above.
Hello. I expect you won't like me because I'm Christian and female and don't want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should. I've been lurking for a long time. The first time I found this place I followed a link to OvercomingBias from AnneC's blog and from there, without quite realizing it, found myself archive-binging and following another link here. But then I stopped and left and then later I got linked to the Sequences from Harry Potter and the Methods of Rationality.
A combination of the whole evaporative cooling thing and looking at an old post that wondered why there weren't more women convinced me to join. You guys are attracting a really narrow demographic and I was starting to wonder whether you were just going to turn into a cult and I should ignore you.
...And I figure I can still leave if that ends up happening, but if everyone followed the logic I just espoused, it'd raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world. I'd rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don't agree with Eliezer Yudkowsky. Not that any of you (especially EY) WANT that, exactly. But anyway, my point is, With Folded Hands is a pretty bad failure mode for the worst-case scenario where EC occurs and EY gets to AI first.
Okay, ready to be shouted down. I'll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I'll probably just leave soon anyway. Nothing good can come of this. I don't know why I'm doing this. I shouldn't be here; you don't want me here, not to mention I probably shouldn't bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
You know, I was right.
You guys are fine and all, but I'm not cut out for this. I'm not smart enough or thick-skinned enough or familiar enough with various things to be a part of this community. It's not you, it's me, for real, I'm not saying that to make you feel better or something. I've only made you all confused and upset, and I know it's draining for me to participate in these discussions.
See you.
Stick around. Your contributions are fine. Not everyone will be accusatory like nyan_sandwich.
Read through the Sequences and comment on what seems good to you.
It's fine, I'm not pitching a fit about a little crudeness. I really can take it... or I can stay involved, but I don't think I can do both, unlike some people (like maybe you) who are without a doubt better at some things than I am. Don't blame him for chasing me off, I know the community is welcoming.
And I'm not really looking for reassurance. Maybe I'll sleep on it for a while, but I really don't think I'm cut out for this. That's fine with me, I hope it's fine with you too. I might try to hang around the HP:MoR thread, I don't know, but this kind of serious discussion requires skills I just don't have.
All of that said, I really appreciate that sweet comment. Thank you.
But remember, fixing this sort of problem is ostensibly what we're here for.
If we fail at that for reasons you can articulate, I at least would like to know.
Education is ostensibly what high school teachers are there for, but if a student shows up who can't read, they don't blame themselves because they're not there to teach basic skills like that.
I know a few high school teachers. I think they'd consider illiteracy a freakin' emergency, not an annoying inconvenience. Blame wouldn't enter into it.
I hope you're not seeing the options as "keep up with all the threads of this conversation simultaneously" or "quit LW". It's perfectly OK to leave things hanging and lurk for a while. (If you're feeling especially polite, you can even say that you're tapping out of the conversation for now.)
(Hmm, I might add that advice to the Welcome post...)
Okay. I'm tapping out of everything indefinitely. Thank you.
Hi, AspiringKnitter!
There have been several openly religious people on this site, of varying flavours. You don't (or shouldn't) get downvoted just for declaring your beliefs; you get downvoted for faulty logic, poor understanding and useless or irrelevant comments. As someone who stopped being religious as a result of reading this site, I'd love for more believers to come along. My impulse is to start debating you right away, but I realise that'd just be rude. If you're interested, though, drop me a PM, because I'm still considering the possibility I might have made the wrong decision.
The evaporative cooling risk is worrying, now that you mention it... Have you actually noticed that happening here during your lurking days, or are you just pointing out that it's a risk?
Oh, and dedicating an entire paragraph to musing about the downvotes you'll probably get, while an excellent tactic for avoiding said downvotes, is also annoying. Please don't do that.
Uh-oh. LOL.
Normally, I'm open to random debates about everything; I pride myself on it. However, I'm getting a little sick of religious debate after the last few days of participating in it. I suppose I still have to respond to a couple of people below, but I'm starting to fear a never-ending, energy-sapping, GPA-sabotaging argument where agreeing to disagree is literally not an option. It's my own fault for showing up here, but I'm starting to realize why "agree to disagree" was ever considered by anyone at all for anything, given its obvious wrongness: you just can't do anything if you spend all your time on a never-ending argument.
Haven't been lurking long enough.
In the future I will not. See below. Thank you for calling me out on that.
There isn't a strong expectation here that people should never agree to disagree - see this old discussion, or this one.
That being said, persistent disagreement is a warning sign that at least one side isn't being perfectly rational (which covers both things like "too attached to one's self-image as a contrarian" and like "doesn't know how to spell out explicitly the reasons for his belief").
I tried to look for a religious debate elsewhere in this thread but could not find any except the tangential discussion of schizophrenia.
Then please feel free to ignore this comment. On the other hand, if you ever feel like responding then by all means do.
A lack of response to this comment should not be considered evidence that AspiringKnitter could not have brilliantly responded.
What is the primary reason you believe in God and what is the nature of this reason?
By nature of the reason, I mean something like these:
* Inductive inference: you believe adding a description of whatever you understand of God leads to a simpler explanation of the universe without losing any predictive power.
* Intuitive inductive inference: you believe in God because of intuition; you also believe that there is an underlying argument using inductive inference, you just don't know what it is.
* Intuitive metaphysical: you believe in God because of intuition; you believe there is some other justification for why this intuition works.
See here.
It's weird, but I can't seem to find everything on the thread from the main post no matter how many of the "show more comments" links I click. Or maybe it's just easy to get lost.
None of the above, and this is going to end up on exactly (I do mean exactly) the same path as the last one within three posts if it continues. Not interested now, maybe some other time. Thanks. :)
Talk of Aumann Agreement notwithstanding, the usual rules of human social intercourse that allow "I am no longer interested in continuing this discussion" as a legitimate conversational move continue to apply on this site. If you don't wish to discuss your religious beliefs, then don't.
Ah, I didn't know that. I've never had a debate that didn't end with "we all agree, yay", some outside force stopping us or everyone hating each other and hurling insults.
Jeez. What would "we all agree, yay" even look like in this case?
I suppose either I'd become an atheist or everyone here would convert to Christianity.
Beliefs should all be probabilistic.
I think this rules out some and only some branches of Christianity, but more importantly it impels accepting behaviorist criteria for any difference in kind between "atheists" and "Christians" if we really want categories like that.
The assumption that everyone here is either an atheist or a Christian is already wrong.
Good point. Thank you for pointing it out.
Hm.
So, if I'm understanding you, you considered only four possible outcomes likely from your interactions with this site: everyone converts to Christianity, you get deconverted from Christianity, the interaction is forcibly stopped, or the interaction degenerates to hateful insults. Yes?
I'd be interested to know how likely you considered those options, and if your expectations about likely outcomes have changed since then.
Well, for any given conversation about religion, yes. (Obviously, I expect different things if I post a comment about HP:MoR on that thread.)
I expected the last one, since mostly no matter what I do, internet discussions on anything important have a tendency to do that. (And it's not just when I'm participating in them!) I considered any conversions highly unlikely and didn't really expect the interaction to be stopped.
My expectations have changed a lot. After a while I realized that hateful insults weren't happening very much here on Less Wrong, which is awesome, and that the frequency didn't seem to increase with the length of the discussion, unlike other parts of the internet. So I basically assumed the conversation would go on forever. Now, having been told otherwise, I realize that conversations can actually be ended by the participants without one of these things happening.
That was a failure on my part, but would have correctly predicted a lot of the things I'd experienced in the past. I just took an outside view when an inside view would have been better because it really is different this time. That failure is adequately explained by the use of the outside view heuristic, which is usually useful, and the fact that I ended up in a new situation which lacked the characteristics that caused what I observed in the past.
There are additional possibilities, like everyone agreeing on agnosticism or on some other religion.
Can I vote Discordianism? Knowing how silly it all is is a property of the text. Isn't that helpful?
What do you aspire to knit?
Sweaters, hats, scarves, headbands, purses, everything knittable. (Okay, I was wrong below, that was actually the second-easiest post to answer.) Do you like knitting too?
Yes, I do. This year, I'm mostly doing small items, like scarves and hats.
Knitting is an over-learned skill for me, like driving, and requires very little thought. I like both the process and the result.
Wow. Some of your other posts are intelligent, but this is pure troll-bait.
EDIT: I suppose I should share my reasoning. Copied from my other post lower down the thread:
Classic troll opening. Challenges us to take the post seriously. Our collective 'manhood' is threatened if we react normally (e.g. saying "trolls fuck off").
Insulting straw man with a side of "you are an irrational cult".
"Seriously, I'm one of you guys". Concern troll disclaimer. Classic.
Again undertones of "you are a cult and you must accept my medicine or turn into a cult". Again we are challenged to take it seriously.
I didn't quite understand this part, but again, straw man caricature.
There's a rhetorical meme on 4chan that elegantly deals with this kind of crap:
'nuff said
classic reddit downvote preventer:
again implying irrational insider/outsider dynamic, hivemind tendencies and even censorship.
Of course the kneejerk response is "no no, we don't hate you and we certainly won't censor you; please we want more christian trolls like you". EDIT: Ha! well predicted I say. I just looked at the other 500 responses. /EDIT
And top it off with a bit of sympathetic-character, damsel-in-distress crap. EDIT: Oh, and the bit about hating God is a straw man. /EDIT
This is not necessarily deliberate, but it doesn't have to be.
Trolling is a art, and Aspiring_Knitter is a artist. 10/10.
Wow, I don't post over Christmas and look what happens. Easiest one to answer first.
You don't need an explanation of 2, but let me go through your post and explain about 1.
Huh. I guess I could have come up with that explanation if I'd thought. The truth here is that I was just thinking "you know, they really won't like me, this is stupid, but if I make them go into this interaction with their eyes wide open about what I am, and phrase it like so, I might get people to be nice and listen".
That was quite sincere and I still feel that that's a worry.
Also, I don't think I know more about friendliness than EY. I think he's very knowledgeable. I worry that he has the wrong values so his utopia would not be fun for me.
Wow, you're impressive. (Actually, from later posts, I know where you get this stuff from. I guess anyone could hang around 4chan long enough to know stuff like that if they had nerves of steel.) I had the intuition that this would lead to fewer downvotes (but note that I didn't lie; I did believe it was true, based on many theist-unfriendly posts on this site), but I didn't consciously think this procedure would appeal to people's fear of the hivemind to shame them into upvoting me. I want to thank you for pointing that out. Knowing how and why that intuition was correct will allow me to decide with eyes wide open whether to do something like that in the future, and if I ever actually want to troll, I'll be better at it.
Actually, I just really need to learn to remember that while I'm posting, proper procedure is not "allow internal monologue to continue as normal and transcribe it". You have no idea how much trouble that's gotten me into. (Go ahead and judge me for my self-pitying internal monologue if you want. Rereading it, I'm wondering how I failed to notice that I should just delete that part, or possibly the whole post.) On the other hand, I'd certainly hope that being honest makes me a sympathetic character. I'd like to be sympathetic, after all. ;)
Thank you. It wasn't, but as you say, it doesn't have to be. I hope I'll be more mindful in the future, and bear morality in mind in crafting my posts here and elsewhere. I would never have seen these things so clearly for myself.
Thanks, but no. LOL.
I'd upvote you, but otherwise your post is just so rude that I don't think I will.
For what it's worth, I generally see some variant of "please don't flame me" attached only to posts which I'd call inoffensive even without it. I'm not crazy about seeing "please don't flame me", but I write it off to nervousness and don't blame people for using it.
Caveat: I'm pretty sure that "please don't flame me" won't work in social justice venues.
Note that declaring Crocker's rules and subsequently complaining about rudeness sends very confusing signals about how you wish to be engaged with.
Thank you. I was complaining about his use of needless profanity to refer to what I said, and a general "I'm better than you" tone (understandable, if he comes from a place where catching trolls is high status, but still rude). I not only approve of being told that I've done something wrong, I actually thanked him for it. Crocker's rules don't say "explain things in an insulting way", they say "don't soften the truths you speak to me". You can optimize for information-- and even get it across better-- when you're not trying to be rude. For instance,
That would not convey less truth if it weren't vulgar. You can easily communicate that someone is tugging people's heartstrings by presenting as a highly sympathetic damsel in distress without being vulgar.
Also, stuff like this:
That makes it quite clear that nyan_sandwich is getting a high from this and feels high-status because of behavior like this. While that in itself is fine, the whole post does have the feel of gloating to it. I simultaneously want to upvote it for information and downvote it for lowering the overall level of civility.
Here's my attempt to clarify how I wish to be engaged with: convey whatever information you feel is true. Be as reluctant to actively insult me as you would anyone else, bearing in mind that a simple "this is incorrect" is not insulting to me, and neither is "you're being manipulative". "This is crap" always lowers the standard of debate. If you spell out what's crappy about it, your readers (including yours truly) can grasp for themselves that it's crap.
Of course, if nyan_sandwich just came from 4chan, we can congratulate him on being an infinitely better human being than everyone else he hangs out with, as well as on saying something that isn't 100% insulting, vulgar nonsense. (I'd say less than 5% insulting, vulgar nonsense.) Actually, his usual contexts considered, I may upvote him after all. I know what it takes to be more polite than you're used to others being.
That doesn't sound right. Here's a quote from Crocker's rules:
Another quote:
Quote from our wiki:
There's a decision theoretic angle here. If I declare Crocker's rules, and person X calls me a filthy anteater, then I might not care about getting valuable information from them (they probably don't have any to share) but I refrain from lashing out anyway! Because I care about the signal I send to person Y who is still deciding whether to engage with me, who might have a sensitive detector of Crocker's rules violations. And such thoughtful folks may offer the most valuable critique. I'm afraid you might have shot yourself in the foot here.
I think this is generally correct. I do wonder about a few points:
If I am operating on Crocker's Rules (I personally am not, mind, but hypothetically), and someone's attempt to convey information to me has obvious room for improvement, is it ever permissible for me to let them know this? Given your decision theory point, my guess would be "yes, politely and privately," but I'm curious as to what others think as well. As a side note, I presume that if the other person is also operating by Crocker's Rules, you can say whatever you like back.
Do you mean improvement of the information content or the tone? If the former, I think saying "your comment was not informative enough, please explain more" is okay, both publicly and privately. If the latter, I think saying "your comment was not polite enough" is not okay under the spirit of Crocker's rules, neither publicly nor privately, even if the other person has declared Crocker's rules too.
When these things are orthogonal, I think your interpretation is clear, and when information would be obscured by politeness the information should win - that's the point of Crocker's Rules. What about when information is obscured by deliberate impoliteness? Does the prohibition on criticizing impoliteness win, or the permission to criticize lack of clarity? In any case, if the other person is not themselves operating by Crocker's Rules, it is of course important that your response be polite, whatever it is.
Basically, no. If you want to criticize people for being rude to you just don't operate by Crocker's rules. Make up different ones.
A lot of intelligent folks have to spend a lot of energy trying not to be rude, and part of the point of Crocker's Rules is to remove that burden by saying you won't call them on rudeness.
Not all politeness is inconsistent with communicating truth. I agree that "Does this dress make me look fat" has a true answer and a polite answer. It's worth investing some attention into figuring out which answer to give. Often, people use questions like that as a trap, as mean-spirited or petty social and emotional manipulation. Crocker's Rule is best understood as a promise that the speaker is aware of this dynamic and explicitly denies engaging in it.
That doesn't license being rude. If you are really trying to help someone else come to a better understanding of the world, being polite helps them avoid cognitive biases that would prevent them from thinking logically about your assertions. In short, Crocker's Rule does not mean "I don't mind if you are intentionally rude to me." It means "I am aware that your assertions might be unintentionally rude, and I will be guided by your intention to inform rather than interpreting you as intentionally rude."
Right, I wasn't saying anything that contradicted that. Rather, some of us have additional cognitive burden in general trying to figure out if something is supposed to be rude, and I always understood part of the point of Crocker's Rules to be removing that burden so we can communicate more efficiently. Especially since many such people are often worth listening to.
OK.
FWIW, I agree that nyan_sandwich's tone was condescending, and that they used vulgar words.
I also think "I suppose they can't be expected to behave any better, we should praise them for not being completely awful" is about as condescending as anything else that's been said in this thread.
Yeah, you're probably right. I didn't mean for that to come out that way (when I used to spend a lot of time on places with low standards, my standards were lowered, too), but that did end up insulting. I'm sorry, nyan_sandwich.
Excellent analysis. I just changed my original upvote for that post to a downvote, and I must admit that it got me in exactly every way you explained.
I had missed this. The original post read as really weird and hostile, but I only read it after having heard about this thread indirectly for days, mostly about how she later seemed pretty intelligent, so I dismissed what I saw and substituted what I ought to have seen. Thanks for pointing this out.
Upvoted
You've got an interesting angle there, but I don't think AspiringKnitter is a troll in the pernicious sense-- her post has led to a long reasonable discussion that she's made a significant contribution to.
I do think she wanted attention, and her post had more than a few hooks to get it. However, I don't think it's useful to describe trolls as "just wanting attention". People post because they want attention. The important thing is whether they repay attention with anything valuable.
I don't have the timeline completely straight, but it looks to me like AspiringKnitter came in trolling and quickly changed gears to semi-intelligent discussion. Such things happen. AspiringKnitter is no longer a troll, that's for sure; like you say "her post has led to a long reasonable discussion that she's made a significant contribution to".
All that, however, does not change the fact that this particular post looks, walks, and quacks like troll-bait and should be treated as such. I try to stay out of the habit of judging posts on the quality of the poster's other stuff.
I don't know if this is worth saying, but you look a lot more like a troll to me than she does, though of a more subtle variety than I'm used to.
You seem to be taking behavior which has been shown to be in the harmless-to-useful range and picking a fight about it.
Thanks for letting me know. If most people disagree with my assessment, I'll adjust my troll-resistance threshold.
I just want to make sure we don't end up tolerating people who appear to have trollish intent. AspiringKnitter turned out to be positive, but I still think that particular post needed to be called out.
Well Kept Gardens Die By Pacifism.
You're welcome. This makes me glad I didn't come out swinging-- I'd suspected (actually I had to resist the temptation to obsess about the idea) that you were a troll yourself.
If you don't mind writing about it, what sort of places have you been hanging out that you got your troll sensitivity calibrated so high? I'm phrasing it as "what sort of places" in case you'd rather not name particular websites.
4chan, where there is an interesting dynamic around trolling and getting trolled. Getting trolled is low-status; correctly calling out trolls that no one else caught is high-status; trolling itself is god-status; and calling troll incorrectly is as low-status as getting trolled. With that culture, the arts of trolling, counter-trolling and troll detection get well trained.
I learned a lot of trolling theory from reddit (like the downvote preventer and concern trolling). The politics, anarchist, feminist and religious subreddits have a lot of good cases to study (though they generally suck at managing community).
I learned a lot of relevant philosophy of trolling and some more theory from /i/nsurgency boards and wikis (start at partyvan.info). Those communities are in a sorry state these days.
A lot of what I learned on 4chan and /i/ is not common knowledge around here and could be potentially useful. Maybe I'll beat some of it into a useful form and post it.
That's interesting-- I've never hung out anywhere that trolling was high status.
In reddit and the like, how is consensus built around whether someone is a troll and/or is trolling in a particular case?
I think I understand concern trolling, which I understand to be giving advice which actually weakens the receiver's position, though I think the coinage "hlep" is more widely useful: inappropriate, annoying or infuriating advice which is intended to be helpful but doesn't have enough thought behind it. But what's a downvote preventer?
Hlep has a lot of overlap with other-optimizing.
I'd be interested in what you have to say about the interactions at 4chan and /i/, especially about breakdowns in political communities.
I've been mulling the question of how you identify and maintain good will-- to my mind, a lot of community breakdown is caused by tendencies to amplify disagreements between people who didn't start out being all that angry at each other.
On reddit there are just upvotes and downvotes. Reddit doesn't have developed social mechanisms for dealing with trolls, because the downvotes work most of the time. Developing troll technology like the concern troll and the downvote preventer to hack the hivemind/vote dynamic is the only way to succeed.
4chan doesn't have any social mechanisms either, just the culture. Interestingly, communication is unnecessary for social/cultural pressure to work. Once the countertroll/troll/troll-detector/trolled/troll-crier hierarchy is formed by the memes and mythology, the rest just works in your own mind: "fuck, I got trolled, better watch out next time", "all these people are getting trolled, but I know the OP is a troll; I'm better than them", "successful troll is successful", "I trolled the troll". Even if you don't post them and no one reacts to them, those thoughts activate the social shame/status/etc. machinery.
Not quite. A concern troll is someone who comes in saying "I'm a member of your group, but I'm unsure about this particular point in a highly controversial way" with the intention of starting a big useless flame-war.
Haven't heard of hlep. Seems interesting.
The downvote preventer is when you say "I know the hivemind will downvote me for this, but..." It creates association in the readers mind between downvoting and being a hivemind drone, which people are afraid of, so they don't downvote. It's one of the techniques trolls use to protect the payload, like the way the concern troll used community membership.
Yes. A big part of trolling is actually creating and fueling those disagreements. COINTELPRO trolling is disrupting people's ability to identify trolls and goodwill. There is a lot of depth and difficulty to that.
For one thing, the label "trolling" seems like it distracts more than it adds, just like "dark arts." AspiringKnitter's first post was loaded with influence techniques, as you point out, but it's not clear to me that pointing at influence techniques and saying "influence bad!" is valuable, especially in an introduction thread. I mean, what's the point of understanding human interaction if you use that understanding to botch your interactions?
There is a clear benefit to pointing out when a mass of other people are falling for influence techniques in a way you consider undesirable.
I'll bet US$1000 that this is Will_Newsome.
I said
I think it's time to close out this somewhat underspecified offer of a bet. So far, AspiringKnitter and Eliezer expressed interest but only if a method of resolving the bet could be determined, Alicorn offered to play a role in resolving the bet in return for a share of the winnings, and dlthomas offered up $15.
I will leave the possibility of joining the bet open for another 24 hours, starting from the moment this comment is posted. I won't look at the site during that time. Then I'll return, see who (if anyone) still wants a piece of the action, and will also attempt to resolve any remaining conflicts about who gets to participate and on what terms. You are allowed to say "I want to join the bet, but this is conditional upon resolving such-and-such issue of procedure, arbitration, etc." Those details can be sorted out later. This is just the last chance to shortlist yourself as a potential bettor.
I'll be back in 24 hours.
And the winners are... dlthomas, who gets $15, and ITakeBets, who gets $100, for being bold enough to bet unconditionally. I accept their bets, I formally concede them, aaaand we're done.
And thus concludes the funniest thread on LessWrong in a very long time. Thanks, folks.
What did they win money for?
You not being Will_Newsome. (I can't imagine how bizarre it must be to be watching this conversation from your perspective.)
Wait, but what changed that caused Mitchell_Porter to realize that?
I didn't exactly realize it, but I reduced the probability. My goal was never to make a bet, my goal was to sockblock Will. But in the end I found his protestations somewhat convincing; he actually sounded for a moment like someone earnestly defending himself, rather than like a joker. And I wasn't in the mood to re-run my comparison between the Gospel of Will and the Knitter's Apocryphon. So I tried to retire the bet in a fair way, since having an ostentatious unsubstantiated accusation of sockpuppetry in the air is almost as corrosive to community trust as it is to be beset by the real thing. (ETA: I posted this before I saw Kevin's comment, by the way!)
"Next time just don't be a dick and you won't lose a hundred bucks," says the unreflective part of my brain whose connotations I don't necessarily endorse but who I think does have a legitimate point.
I think he just gave up and didn't want to be the guy sowing seeds of discontent with no evidence. That kind of thing is bad for communities.
Mitchell asked Will directly at http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jby so perhaps he just trusts Will not to lie when using the Will_Newsome account.
No idea. Don't have to show your cards if you fold...
Betting money. That is how such things work.
You're such a dick. Haha. Upvoted.
You know, I followed your talk about betting but never once considered that I could win money for realz if I took you up on it. The difficulty of proving such things made the subject seem purely abstract. Oops.
Thank you.
I'll stake $500 if eligible.
When would the answer need to be known by?
I'll stake $100 against you, if and only if Eliezer also participates.
(Replying rather than editing, to make sure that my comment displays as un-edited.)
I should also stipulate that I am not, nor have I ever been, Will Newsome.
It's not impossible that I was once Will Newsome, I suppose, nor even that I currently am. But if so, I'm unaware of the fact.
I am a known magus, so even an Imperius curse is not out of the question.
Or you've been neglecting to treat your Spontaneous Duplication.
Turns out LW is a Chesterton-esque farce in which all posters are secretly Wills trolling Wills.
Then I'm really wasting time here.
Yes, I all are!
I am interested.
Edit: Putting up $100, regardless of anyone else's participation, and I'm prepared to demonstrate that I'm not Will_Newsome if that is somehow necessary.
I have a general heuristic that making one on one bets is not worthwhile as a way to gain money, as the other party's willingness to bet indicates they don't expect to lose money to me. I would also be surprised if a bet of this size, between two members of a rationalist website, paid off to either side (though I guess paying off as a donation to SIAI would not be so surprising). At this point though, I am guessing the bet will not go through.
Was there supposed to be a time limit on that bet offer? It seems like as long as the offer is available, you and everyone else will have an incentive not to show all the evidence, as a fully-informed betting opponent is less profitable.
I'll take up to $15 of that, at even odds. Possibly more, if the odds can be skewed in my favor.
Why did you frame it that way, rather than that AspiringKnitter wasn't a Christian, or was someone with a long history of trolling, or somesuch? It's much less likely to get a particular identity right than to establish that a poster is lying about who they are.
Well, Newsome was a Catholic for a while at least! (Or something like one).
That's really odd. If there were some way to settle the bet I'd take it.
For what it's worth, I thought Mitchell's hypothesis seemed crazy at first, then looked through user:AspiringKnitter's comment history and read a number of things that made me update substantially toward it. (Though I found nothing that made it "extremely obvious", and it's hard to weigh this sort of evidence against low priors.)
Out of curiosity, what's your estimate of the likelihood that you'd update substantially toward a similar hypothesis involving other LW users? ...involving other users who have identified as theists or partial theists?
It used to be possible - perhaps it still is? - to make donations to SIAI targeted towards particular proposed research projects. If you are interested in taking up this bet, we should do a side deal whereby, if I win, your $1000 would go to me via SIAI in support of some project that is of mutual interest.
Unfortunately, I don't have the spare money to take the other side of the bet, but Will showed a tendency to head off into foggy abstractions which I haven't seen in Aspiring Knitter.
Will_Newsome does not seem, one would say, incompetent. I have never read a post by him in which he seemed to be unknowingly committing some faux pas. He should be perfectly capable of suppressing that particular aspect of his posting style.
Here is an experiment that could solve this.
If someone takes the bet and some of the proceeds go to Trike, they might agree to check the logs and compare IPs (a matching IP, or even a proxy as a detection-avoidance attempt, could be interpreted as AK=WN). Of course, AK would have to consent.
Why didn't you suggest asking Will_Newsome?
Didn't think about it. He would have to consent, too. Fortunately, any interest in the issue seems to have waned.
Ask him what? To raise his right arm if he is telling the truth?
I missed where he explicitly made a claim about it one way or the other.
-- A Wizard of Earthsea, Ursula K. Le Guin
http://tvtropes.org/pmwiki/pmwiki.php/Main/YouDidntAsk
If he is AK then he made an explicit claim about it. So either he is not AK or he is lying - a raise your right hand situation.
I simply had not considered the logical implications of AspiringKnitter making the claim that she is not WillNewsome, and had only noticed that no similar claim had appeared under the name of WillNewsome.
It would be interesting if one claimed to be them both and the other claimed to be separate people. If WillNewsome claimed to be both of them and AspiringKnitter did not, then we would know he was lying. So that is something possible to learn from asking WillNewsome explicitly. I hadn't considered this when I made my original comment, which was made without thinking deeply.
Um? Supposing I'd created both accounts, I could certainly claim as Will that both accounts were me, and claim as AK that they weren't, and in that case Will would be telling the truth.
Me too.
ETA: And I really mean no offense, but I'm sort of surprised that folk don't immediately see things like this... is it a skill maybe?
But if Will is AK, then Will claimed both that they were and were not the same person (using different screen names).