If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.
(This is the fourth incarnation of the welcome thread; the first three now have too many comments. The text is by orthonormal from an original by MBlume.)
A few notes about the site mechanics
Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.
A few notes about the community
If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)
There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site.
Comments (843)
Hello, I'm a 21-year-old undergraduate student studying Economics and a bit of math on the side. I found Less Wrong through HPMOR, and recently started working on the Sequences. I've always been torn between an interest in pure rational thinking and an almost purely emotional / empathetic desire for altruism, and this conflict is becoming more and more significant as I weigh options moving forward out of undergrad (Peace Corps? Development Economics?)... I'm fond of ellipses, science fiction novels and board games. I'll keep my interests to a minimum here, but I've noticed there are meetups regularly; I'm currently studying abroad in Europe, but I live close to Washington DC and would enjoy meeting members of the community face to face at some point in the future!
Edit: If anyone reads this, could you either direct me to a conversation that addresses the question "How has LW / rational thinking influenced your day to day life, if at all," or respond to me directly here (or via PM) if you're comfortable with that! Thanks!
Those are not at all at odds. Read e.g. Why Spock is Not Rational, or Feeling Rational.
Relevant excerpts from both:
and
Your purely emotional / empathetic desire for altruism governs setting your goals; your pure rational thinking governs how you go about reaching your goals. You're allowed to be emotionally suckered, eh, influenced into doing your best (instrumental rationality) to do good in the world (for your values of 'good')!
Thank you for the reading suggestions! Perhaps my mind has already packaged Spock / lack of emotion into my understanding of the concept of 'Rationality.'
To respond directly -
Though if pure emotion / altruism sets my goals, the possibility of irrational / insignificant goals remains, no? If, for example, I only follow pure emotion's path to... say... becoming an advocate for a community through politics, there is no 'check' on the rationality of pursuing a political career to achieve the most good (which again, is a goal that requires rational analysis)?
In HPMoR, characters are accused of being 'ambitious with no ambition' - setting my goals with empathetic desire for altruism would seem to put me in this camp.
Perhaps my goal, as I work my way through the sequences and the site, is to approach rationality as a tool / learning process of its own, and see how I can apply it to my life as I go. Halfway through typing this response, I found this quote from the Twelve Virtues of Rationality:
There is no "correct" way whatsoever in setting your terminal values, your "ultimate goals" (other agents may prefer you to pursue values similar to their own, whatever those may be). Your ultimate goals can include anything from "maximize the number of paperclips" to "paint everything blue" to "always keep in a state of being nourished (for the sake of itself!)" or "always keep in a state of emotional fulfillment through short-term altruistic deeds".
Based on those ultimate goals, you define other, derivative goals, such as "I want to buy blue paint" as an intermediate goal towards "so I can paint everything blue". Those "stepping stones" can be irrational / insignificant (in relation to pursuing your terminal values), i.e. you can be "wrong" about them. Maybe you shouldn't buy blue paint, but rather produce it yourself. Or rather invest in nanotechnology to paint everything blue using nanomagic.
Only you can (or can't, humans are notoriously bad at accurately providing their actual utility functions) try to elucidate what your ultimate goals are, but having decided on them, they are supra-rational / beyond rational / 'rational not applicable' by definition.
There is no fault in choosing "I want to live a life that maximizes fuzzy feelings through charitable acts" over "I'm dedicating my life to decreasing the Gini index, whatever the personal cost to myself."
Hi, my name is Alex. I'm not as smart as the people posting articles here. The fact that I only passed the captcha on my second attempt while registering here at LW proves this :) I studied math as a student, and now I work in IT. While typing this comment I was thinking about what my purpose is in spending time here and reading different info... and suddenly realized that I'm 29 already and life is too short to afford thinking wrong and thinking slow. So I hope to improve myself to be able to learn and understand more and more things. Cheers to everyone :)
Hi, I'm Edward and have been reading the occasional article on here for a while. I've finally decided to officially join as this year I'm starting to do more work on my knowledge and education (especially maths & science) and I like the thoughtful community I see here. I'm a programmer, but also have a passion for history. Just as I was finishing university, my thinking led me to abandon the family religion (many of my friends are still theists). I was going to keep thinking and exploring ideas but I ended up just living - now I want to begin thinking again.
Regards, Edward
Hello LW community. I'm a HS math teacher most interested in Geometry and Number Theory. I have long been attracted to mathematics and philosophy because they both embody the search for truth that has driven me all my life. I believe reason and logic are profoundly important both as useful tools in this search, and for their apparently unique development within our species.
Humans aren't particularly fast, or strong, or resistant to damage as compared with many other creatures on the planet, but we seem to be the only ones with a reasonably well developed faculty for reasoning and questioning. This leads me to believe that developing these skills is a clear imperative for all human beings, and I have worked hard all my life to use rational thinking, discourse and debate to better understand the world around me and the decisions that I make every day.
This is what drove me towards teaching as a career, as I see my profession as providing me with the opportunity to help young people better understand the importance of reason and logic, as well as help them to develop their ability to utilise them.
I'm excited to finally become a member of this community which seems to share in many of the values I hold dear, and look forward to many intriguing and thought provoking discussions here on LW!
I used to have a different account here, but I wanted a new one with my real name so I made this one.
I study computer and electrical engineering at the University of Nebraska-Lincoln, though I'm not finding it very gratifying (rationalists are rare creatures around here for some reason), and I'm trying as hard as I can to find some other way to get paid to code/think so I can drop out. Here's my occasionally-updated reading list, and my favorite programming language is Clojure.
You could start or attend a lesswrong meetup, maybe you'll find some like-minded people.
Or talk to some of your professors, some of them should be pretty smart. Maybe also try meeting new folks, maybe older students?
Go to okcupid, search for lesswrong, yudkowsky or rationality and meet some like-minded people. You don't have to date them.
I know, it's pretty hard. I myself don't click with 99.9% of all people, and I'm definitely under +3 sigma.
What worked for me in a related situation was leveraging comparative advantage by:
1) Finding somebody who isn't broken in the same specific way,
2) Providing them with something they considered valuable, so they'd have reason to continue engaging,
3) Conveying information to them sufficient to deduce my own needs,
4) Giving them permission to tell me what to do in some limited context related to the problem,
5) Evaluating ongoing results vs. costs (not past results or sunk costs!) and deepening or terminating the relationship accordingly.
None of these steps is trivial; this is a serious project which will require both deep attention and extended effort. The process must be iterated many times before fully satisfactory results can reasonably be expected. It's a very generalized algorithm which could encompass professional counseling, romance, or any number of other things.
First of all, I encourage you to take advantage of the counseling and psychological services available to you on campus, if you have not already done so. They're very familiar with psychological pain.
Second, I encourage you to go to a Less Wrong meetup when you get the chance. There's a good chance you'll find people there who are as smart as you and who care about some of the same things you care about. There are listings for meetups in Toronto, Albany, and New York City. I can personally attest that the NYC meetup is great and exists and has good people.
Finally, I wish I could point you to resources that are especially appropriate for trans people, but I don't know what they are.
I really hope that you will be okay.
Oh hey, you're girl!me. Maybe what helped me will help you?
Getting on bupropion stopped me being miserable and hurting all the time, and allowed me to do (some) stuff and be happy. That let me address my executive function issues and laziness; I'm not there yet, but I'm setting up a network of triggers that prompt me to do what I need.
This will hurt like a bitch. When you get to a semi-comfortable point you just want to stop and rest, but if you do that you slide back, so you have to push through pain and keep going. But once the worst is over and you start to alieve that happiness is possible and doing things causes it, it gets easier.
So I'd advise you to drag yourself to a psychiatrist (or perhaps a therapist who can refer you) and see what they can do. If you want friends and/or support, you could drop by on #lesswrong on Freenode, it's full of cool smart people. If I can help, you know where to find me.
RIT can be a pretty miserable place in the winter; I know from personal experience. Maybe you have some seasonal affective disorder in addition to your other issues? Vitamin D in the morning and melatonin in the evening might help, and of course exercise is good for all sorts of mood-related issues, so joining one of the clubs might be a good idea, or take a class like fencing (well, I enjoyed the fencing class anyway...) or start rock climbing at the barn. Clubs might be a good idea in general, actually; the people in the go club were not stupid when I was there, and it was nice hanging out in Java Wally's.
I know there's at least 3 MtF semi-regulars on this board, and one more who turned down Aubrey de Grey for a date once; so it's not like you're alone here. But I agree with Kawoomba that there are resources focused more closely on your problems than a forum on rationality, and these will help better and quicker. If you cannot intellectually respect anyone there enough that talking would help, Shannon Friedman does life coaching (and Yvain is on the last leg of his journey to becoming a psychiatrist).
If there's a sequence that would directly help you, it's probably Luminosity.
It sounds like you have some extremely strong Ugh Fields. It works like this:
A long, long time ago, you had an essay due on Monday and it was Friday. You had the thought, "Man, I gotta get that essay done", and it caused you a small amount of discomfort when you had the thought. That discomfort counted as negative feedback, as a punishment, to your brain, and so the neural circuitry which led to having the thought got a little weaker, and the next time you started to have the thought, your brain remembered the discomfort and flinched away from thinking about the essay instead.
As this condition reinforced itself, you thought less and less about the paper, and then eventually the deadline came and you didn't have it done. After it was already a day late, thinking about it really caused you discomfort, and the flinch got even stronger; without knowing it, you started psychologically conditioning yourself to avoid thinking about it.
This effect has probably been building in you for years. Luckily, there are some immediately useful things you can do to fight back.
Do you like a certain kind of candy? Do you enjoy tobacco snuff? You can use positive conditioning on your brain the same way you did before, except in the opposite direction. Put a bag of candy on your desk, or in your backpack. Every time you think about an assignment you need to do, or how you have some job applications to fill out, eat a piece of candy. As long as you get as much pleasure out of the candy as you get pain out of the thought of having to do work, the neural circuitry leading to the thought of doing work will get stronger, as your brain begins to think it is being rewarded for having the thought.
It doesn't take long at all before the nausea of actually doing work is entirely gone, and you're back to being just "lazy". But at this point, the thought of doing work will be much less painful, and the candy (or whatever) reward will be much stronger.
All you have to do is trick your brain into thinking it will get candy every time it thinks about doing work. Even if you know that it's just you rewarding yourself, it still works. Yeah, it's practically cheating, but your goal should be to do what works. Just trying really, really hard isn't just painful; it also doesn't work. Cheat instead.
I think I understand. There is something of what you describe here that resonates with my own past experience.
I myself was always much smarter than my peers; this isolated me, as I grew contemptuous of the weakness I found in others, an emotion I often found difficult to hide. At the same time, though, I was not perfect; the ease at which I was able to do many things led me to insufficient conscientiousness, and the usual failures arising from such. These failures would lead to bitter cycles of guilt and self-loathing, as I found the weakness I so hated in others exposed within myself.
Like you, I've found myself becoming more functional over time, as my time in university gives me a chance to repair my own flaws. Even so, it's hard, and not entirely something I've been able to do on my own... I wouldn't have been able to come this far without having sought, and received, help. If you're anything like me, you don't want to seek help directly; that would be admitting weakness, and at the times when you hurt the worst, you'd rather do anything, rather hurt yourself, rather die than admit to your weakness, to allow others to see how flawed you are.
But ignoring your problems doesn't make them go away. You need to do something about them. There are people out there who are willing to help you, but they can't do so unless you make the first move. You need to take the initiative in seeking help; and though it will seem like the hardest thing you could do... it's worth it.
What would help?
Background:
21-year-old transgender-neither. I spent 13 years enveloped by Mormon culture and ideology, growing up in a sheltered environment. Then, everything changed when the Fire Nation attacked.
Woops. Off-track.
I want my actions to matter, not from others remembering them but from me being alive to remember them. In simpler terms, I want to live for a long time - maybe forever. Death should be a choice, not an unchanging eventuality.
But I don't know where to start; I feel overwhelmed by all the things I need to learn.
So I've come here. I'm reading the sequences and trying to get a better grasp on thinking rationally, etc., but was hoping to get pointers from the more experienced.
What is needed right now? I want to do what I can to help not only myself, but those whose paths I cross.
~Jenna
Is this the same thing as "agender"?
<3!!
Yes, it's the same. Transgender-neither sounds better to me, though, so I used that term.
But if I find that agender is more accessible I'll switch.
And yep, I'm an Avatar the Last Airbender junkie. :)
Welcome! Have you considered signing up for cryonics?
Aside from the occasional X-files episode and science fiction reading, I don't know much about cryonics.
I considered it as a possibility but dislike that it means I'm 'in suspense' while the world is continuing on without me. I want to be an active participant! :D
Certainly, but when you no longer can be, it's nice to have an option of becoming one again some day.
Option might be too strong a word. It's nice to have the vanishingly small possibility. I think it's important for transhumanists to remind ourselves that cryonics is unlikely to actually work; it's just the only Hail Mary available.
Far as I can tell, the basic tech in cryonics should basically work. Storage organizations are uncertain and so is the survival of the planet. But if we're told that the basic cryonics tech didn't work, we've learned some new fact of neuroscience unknown to present-day knowledge.
Don't assign vanishingly small probabilities to things just because they sound weird, or it sounds less likely to get funny looks if you can say that it's just a tiny chance. That is not how 'probability' works. Probabilities of basic cryonics tech working are questions of neuroscience, full stop; if you know the basic tech has a tiny probability of working, you must know something about current vitrification solutions or the operation of long-term memory which I do not.
Is this your true objection? What potential discovery in neuroscience would cause you to abandon cryonics and actively look for other ways to preserve your identity beyond the natural human lifespan? (This is a standard question one asks a believer to determine whether the belief in question is rational -- what evidence would make you stop believing?)
Personally, I would be very impressed if anyone could demonstrate memory loss in a cryopreserved and then revived organism, like a bunch of C. elegans losing their maze-running memories. They're very simple, robust organisms, it's a large crude memory, the vitrification process ought to work far better on them than a human brain, and if their memories can't survive, that'd be huge evidence against anything sensible coming out of vitrified human brains no matter how much nanotech scanning is done (and needless to say, such scanning or emulation methods can and will be tested on a tiny worm with a small fixed set of neurons long before they can be used on anything approaching a human brain). It says a lot about how poorly funded cryonics research is that no one has done this or something similar as far as I know.
Hmm, I wonder how much has been done on figuring out the memory storage in this organism. Like, if you knock out a few neurons or maybe synapses, how much does it forget?
Since it's C. elegans, I assume the answer is 'a ton of work has been done', but I'm too tired right now to go look or read more medical/biological papers.
Anders Sandberg, who does get the concept of sufficiently advanced technology, posts saying, "Shit, turns out LTM seems to depend really heavily on whether protein blah has conformation A or B, and the vitrification solution denatures it to C, and it's spatially isolated so there's no way we're getting the info back; it's possible something unknown embodies redundant information, but this seems really ubiquitous and basic, so the default assumption is that everyone vitrified is dead." Although, hm, in this case I'd just be like, "Okay, back to chopping off the head and dropping it in a bucket of liquid nitrogen, don't use that particular vitrification solution". I can't think offhand of a simple discovery which would imply literally giving up on cryonics in the sense of "Just give up, you can't figure out how to freeze people ever." I can certainly think of bad news for particular techniques, though.
OK. More instrumentally, then. What evidence would make you stop paying the cryo insurance premiums with CI as the beneficiary and start looking for alternatives?
Anders publishes that, CI announces they intend to go on vitrifying patients anyway, Alcor offers a chop-off-your-head-and-dunk-in-liquid-nitro solution. Not super plausible but it's off the top of my head.
In my case, to name one contingency: if the NEMALOAD Project finds that analysis of relatively large cellular structures doesn't suffice to predict neuronal activity, and concludes that the activity of individual molecules is essential to the process, then I'd become significantly more worried about EHeller's objection and redo the cost-benefit calculation I did before signing up for cryonics. (It came out in favor, using my best-guess probability of success between 1 and 5 percent; but it wouldn't have trumped the cost at, say, 0.1%.)
To name another: if the BPF shows that cryopreservation makes a hash of synaptic connections, I'd explicitly re-do the cost-benefit calculation as well.
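The cost-benefit calculation described above is essentially an expected-value comparison: probability of success times the value placed on revival, against the certain lifetime cost. A minimal sketch (the dollar figures below are made-up placeholders for illustration, not anyone's actual estimates):

```python
# Illustrative expected-value sketch of the cryonics cost-benefit
# calculation discussed above. All numbers are assumed placeholders.

def expected_value(p_success, value_of_revival, lifetime_cost):
    """Net expected value of signing up: probability-weighted payoff
    minus the total cost of membership dues plus insurance premiums."""
    return p_success * value_of_revival - lifetime_cost

VALUE_OF_REVIVAL = 10_000_000  # subjective dollar-equivalent (assumption)
LIFETIME_COST = 50_000         # dues + premiums over a lifetime (assumption)

# With these placeholder numbers, a 1% probability of success comes
# out in favor, while 0.1% no longer trumps the cost:
print(expected_value(0.01, VALUE_OF_REVIVAL, LIFETIME_COST))   # 50000.0
print(expected_value(0.001, VALUE_OF_REVIVAL, LIFETIME_COST))  # -40000.0
```

The point of the sketch is just that the sign of the answer flips somewhere between the two probability estimates, which is why a result like NEMALOAD's would prompt redoing the calculation rather than abandoning it outright.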
He's kind of been working on that for a while now.
(I suppose that works either as "subvert the natural human lifespan entirely through creating FAI" or "preserve his identity for time immemorial in the form of 'Harry-Stu' fanfiction" depending on how cynical one is feeling.)
Have you seen the comments by kalla724 in this thread?
Edit: There's some further discussion here.
I'd say full speed ahead, Cap'n. Basic cryonics tech working - while being a sine qua non - isn't the ultimate question for people signing up for cryonics. It's just a term in the probability calculation for the actual goal: "Will I be revived (in some form that would be recognizable to my current self as myself)?" (You've mentioned that in the parent comment, but it deserves more than a passing remark.)
And that most decidedly requires a host of complex assumptions, such as "an agent / a group of agents will have an interest in expending resources into reviving a group of frozen old-version homo sapiens, without any enhancements, me among them", "the future agents' goals cannot be served merely by reading my memory engrams, then using them as a database, without granting personhood", "there won't be so many cryo-patients at a future point (once it catches on with better tech) that thawing all of them would be infeasible, or disallowed", not to mention my favorite "I won't be instantly integrated into some hivemind in which I lose all traces of my individuality".
What we're all hoping for, of course, is for a benevolent super-current-human agent - e.g. an FAI - to care enough about us to solve all the technical issues and grant us back our agent-hood. By construction at least in your case the advent of such an FAI would be after your passing (you wouldn't be frozen otherwise). That means that you (of all people) would also need to qualify the most promising scenario "there will be a friendly AI to do it" with "and it will have been successfully implemented by someone other than me".
Also, with current tech, not only would true x-risks preclude you from ever being revived; even non-x-risk catastrophic events (partial civilizational collapse due to Malthusian dynamics, etc.) could easily destroy the facility you're held in, or take away anyone's incentive to maintain it. (TW: That's not even taking into account Siam the Star Shredder.)
I'm trying to avoid motivated cognition here, but there are lot of terms going into the actual calculation, and while that in itself doesn't mean the probability will be vanishingly small, there seem to be a lot more (and given human nature, unfortunately likely / contributing more probability mass) scenarios in which your goal wouldn't be achieved - or be achieved in some undesirable fashion - than the "here you go, welcome back to a society you'd like to live in" variety.
That being said, I'll take the small chance over nothing. Hopefully some decent options will be established near my place of residence, soon.
I actually am signed up for cryonics.
My issue with the basic tech is that liquid nitrogen, while a cheap storage method, is too cold to avoid fracturing. Experience with imaging systems leads me to believe that fractures will interfere with reconstructions of the brain's geometry, and cryoprotectants obviously destroy chemical information.
Now, it seems likely to me that at some point in the future the fracturing problem can be solved, or at least mitigated, by intermediate-temperature storage and careful cooling processes, but that won't fix the bodies frozen today. So while I don't doubt that (barring large, unquantifiable neuroscience-related uncertainty) cryonics may improve to the point where the tech is likely to work (or be supplanted by plastination methods, etc.), it is not there now, and what matters for people frozen today is the state of cryonics today.
Saying there are no fundamental scientific barriers to the tech working is not the same thing as saying the hard work of engineering has been done and the tech currently works.
Edit: I also have a weak prior that the chemical information in the brain is important, but it is weak.
Since this is the key point of neuroscience, do you want to expand on it? What experience with imaging leads you to believe that fractures (of incompletely vitrified cells) will implement many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states? What chemical information is obviously destroyed and is it a type that could plausibly play a role in long-term memory?
"many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states" is probably too restrictive. I would use "possibly functionally different, but subjectively acceptably close brain states".
The cryoprotectants are toxic: they will damage proteins (misfolding, etc.) and distort relative concentrations throughout the cell. This information is irretrievable once the damage is done. This is what I referred to when I said obviously destroyed chemical information. It is our hope that such information is unimportant, but my (as I said above, fairly uncertain) prior would be that the synaptic protein structures are probably important. My prior is so weak because I am not an expert in biochemistry or neuroscience.
As to the physical fracture, very detailed imaging would have to be done on either side of the fracture in order to match the sides back up, and this is related to a problem I do have some experience with. I'm familiar with attempts to use synchrotron radiation to image protein structures, which have a percolation problem: you are damaging what you are trying to image while you image it. If you have lots of copies of what you want to image, this is a solvable problem, but with only one original you are going to lose information.
Edit: with regard to the first point, kalla724 makes the same point with much more relevant expertise in this thread: http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/ His experience working with synapses leads him to a much stronger estimate that cryoprotectants cause irreversible damage. I may strengthen my prior a bit.
How do you know? I'm not asking for some burden of infinite proof where you have to prove that the info can't be stored elsewhere. I am asking whether you know that widely functionally different start states are being mapped onto an overlapping spread of molecularly identical end states, and if so, how. E.g., "denaturing either conformation A or conformation B will both result in denatured conformation C and the A-vs.-B distinction is just a little twist of this spatially isolated thingy here so you wouldn't expect it to be echoed in any exact nearby positions of blah" or something.
Do you think it's at all likely that the connectome can be recovered after fracturing by "matching up" the structure on either side of the fracture?
It seems to me that they're also questions of engineering feasibility. A thing can be provably possible and yet unfeasibly difficult to implement in reality. Consider the difference between, say, adding salt to water and getting it out again. What if the difference in cost and engineering difficulty between vitrifying and successfully de-vitrifying is similar? What if it turns out to be ten orders of magnitude greater?
I think the most likely failure condition for cryonics tech (as opposed to cyronics organizations) isn't going to be that revival turns out to be impossible, but that revival turns out to be so unbelievably hard or expensive that it's never feasible to actually do. If it's physically and information-theoretically allowed to revive a person, but technologically impractical (even with Sufficiently Advanced Science), then its theoretical possibility doesn't help the dead much.
I have the same concern about unbounded life extension, actually; but I find success in that area more probable for some reason.
(personal disclosure: I'm not signed up for cryonics, but I don't give funny looks to people who are. Their screws seem a bit loose but they're threaded in the right direction. That's more than one can say for most of the world.)
I think it might be important to remind others of that too, when discussing the subject. Especially for people who are signed up but have a skeptical social circle, "this seems like the least-bad of a set of bad options" may be easier for them to swallow than "I believe I'm going to wake up one day."
Hi. I discovered LessWrong recently, but not that recently. I enjoy Yudkowsky's writings and the discussions here. I hope to contribute something useful to LessWrong, someday, but as of right now my insights are a few levels below those of others in this community. I plan on regularly visiting the LessWrong Study Hall.
Also, is it "LessWrong" or "Less Wrong"?
You'll fit in great.
I endorse "Less Wrong" as a standalone phrase but "LessWrong" as an affixed phrase (e.g., "LessWrongian").
Good question... :-)
The front page and the About page consistently use the one with the space... except in the logo. Therefore I'm going to conclude that the change in typeface colour in the logo counts as a space and the ‘official’ name is the spaced one.
I went through the same reasoning pattern as you right before reading this comment. So I think I'll stick with "Less Wrong", for the time being.
We are currently undertaking a study on popular perceptions of existential risk; our goal is to create a publicly accessible index of such risks, which may then be used to inform and catalyze comprehension through the discussion generated around them.
If you have a few minutes, please follow the link to complete a brief, anonymous questionnaire - your input will be appreciated!
Survey link: http://eclipsebureau-survey.questionpro.com/
Join us on Facebook: http://www.facebook.com/eclipse.bureau
Hello everyone. My name is Vadim Kosoy, and you can find some LW-relevant stuff about me in my Google+ stream: http://plus.google.com/107405523347298524518/about
I am a lifelong geek, with knowledge of and interest in math, physics, chemistry, molecular biology, computer science, software engineering, algorithm engineering, and history. Some areas in which I'm comparatively more knowledgeable: quantum field theory, differential geometry, algebraic geometry, and algorithm engineering (especially computer vision).
In my day job I'm a technical and product manager of a small software group at Mantis Vision (http://www.mantis-vision.com/), a company developing 3D video cameras. My previous job was at VisionMap (http://www.visionmap.com/), which develops airborne photography / mapping systems, where I led a team of software and algorithm engineers.
I knew about Eliezer Yudkowsky and his friendly AI thesis (which I don't fully accept) for some time, but discovered this community only relatively recently. This community is interesting to me for several reasons. One reason is that many discussions are related to the topics of transhumanism / technological singularity / artificial intelligence, which I find very interesting and important. Another is that consequentialism is a popular moral philosophy here, and I (relatively recently) started to identify as a strong consequentialist. Yet another is that it seems to be a community where rational people discuss things rationally (or at least try to), something society the world over misses as direly as the idea seems trivial. This is in stark contrast to the usual mode of discourse about social / political issues, which is extremely shallow and plagued by excessive emotionality and dogmatism. I truly believe such a community can become a driver of social change in good directions, something with incredible impact.
Recently I became very interested in the subject of understanding general intelligence mathematically, in particular by the methods of computer science. I've written some comments here about my own variant of the Orseau-Ring framework, something I wished to expand into a full article but didn't have the karma for. Maybe I'll post it on LW discussion.
My personal philosophy: As I said, I'm a consequentialist. I define my utility function not on the basis of hedonism or anything close to hedonism, but on the basis of long-term scientific / technological / autoevolutionary (transhumanist) progress. I don't believe in the innate value of H. sapiens, but rather in the innate value of intelligent beings (in particular, the more intelligence, the more value). I can imagine scenarios in which a strong AI destroys humanity which are, from my P.O.V., strongly positive: this is my disagreement with the friendly AI thesis. However, I'm not sure whether any strong AI scenario will be positive, so I agree it is a concern. I also consider myself a deist rather than an atheist. Thus I believe in God, but the meaning I ascribe to the word "God" is very different from the meaning most religious people ascribe to it (I choose to still use the word "God" since there are a few things in common). For me, God is the (unknowable) reason for the miraculous beauty of the universe, perceived by us as the beauty of mathematics and science and the amazing plethora of interesting natural phenomena. God doesn't punish or reward good or bad behavior, doesn't perform divine intervention (in the sense of occasional violations of natural law), and doesn't write or dictate scriptures and prophecies (except by inspiring scientists to make mathematical and scientific discoveries). I consider the human brain to be a machine, with no magic "soul" behind the scenes. However, I believe in immortality in a stranger metaphysical sense, which is probably too long to detail here.
I'm 29.9 years old, married with a child (a boy, 2.8 years old). I have lived in Israel since the age of 7, but I was born in the USSR. Ethnically I'm an Ashkenazi Jew. I enjoy science fiction, good cinema (but no time to see any since my son was born :) ), and many sorts of music (but rock is probably my favorite). Glad to be here!
Welcome! You should probably join the MAGIC list. Orseau and others hang out there, and Orseau will probably comment on your two posts if you ask for feedback on that list. Also, if you ever visit California then you should visit MIRI and do some math with us.
Welcome! We're all 29.9 years old, here. I look forward to your comments, hopefully you'll find the time for that post on your Orseau-Ring variant.
Regarding your redefinition of god, allow me just a small comment: Calling an unknowable reason "god" - without believing in such a reason's personhood, or volition, or having a mind - invites a lot of unneeded baggage and historical connotations that muddle the discussion, and your self-identification, because what you apparently mean by that term is so different from the usual definitions of "god" that you could just as well call yourself a spiritual atheist (or related).
Speak for yourself, youngster! Why, back in my day, we didn't have these "internets" you whippersnappers are always going on about, what with the cats and the memes and the facetubes and the whatnot. We had to make our own networks, by hand, out of floppies and acoustic modems, and we liked it. Why, there's nothing like an invigorating morning hike with a box of 640K floppies (formatted to 800K) in your backpack, uphill in the snow both ways. Builds character, it does. Mumble mumble mumble get off my lawn!
Hi. 18 years old. Typical demographics. 26.5-month lurker and well-read of the Sequences. Highly motivated/ambitious procrastinator/perfectionist with task-completion problems and analysis paralysis that has caused me to put off this comment for a long time. Quite non-optimal to do so, but... must fight that nasty sunk cost of time and stop being intimidated and fearing criticism. Brevity to assure it is completed - small steps on a longer journey. Hopefully writing this is enough of an anchor. Will write more in future time of course.
Finally. It is written. So many choices... so many thoughts, ideas, plans to express... No! It is done! Another time you silly brain! We must choose futures! We will improve, brain, I promise.
I look forward to at last becoming an active member of this community, and LEVELING UP! Tsuyoku naritai!
Peter here,
I stumbled onto LW from a link on TvTropes about the AI-Box experiment. I followed it to an explanation of Bayes' Theorem on Yudkowsky.net 'cause I love statistics (the rage I felt knowing that not one of my three statistics teachers ever mentioned Bayes was an unusual experience).
I worked my way through the sequences and was finally inspired to comment on Epistemic Viciousness and some of the insanity in the martial arts world. If your goal is to protect yourself from violence, martial arts is more likely to get you hurt or thrown in jail.
It seems inappropriate that I went by Truth_Seeker before discovering this site, so I chose a handle that was in opposition to that. And I like the word aether.
My name is Itai Bar-Natan. I have been lurking here for a long time, and more recently I started posting some things, but only now do I formally introduce myself.
I am in grade 11, and I began reading Less Wrong in grade 8 (introduced by Scott Aaronson's blog). I am a former math prodigy, and am currently taking one graduate-level course in math. This is the first time I am learning math under the school system (although it is not the first time I have attended math classes under the school system). Before that, I would learn from my parents, who are both mathematicians, or (later on) from books and internet articles.
Heedless of Feynman, I believe I understand quantum mechanics.
One weakness I am working to improve on is the inability to write in large quantities.
I have a blog here: http://itaibn.wordpress.com/
I consider Less Wrong a fun time-waster and a relatively sane community.
Are you, by any chance, related to Dror?
Yes, I am his son.
Give her to Headless Feyn-man!
Hellooo! I de-lurked during the survey and gradually started rambling at everyone but I never did one of these welcome posts!
My exposure to rationality started with the idea that your brain can have bugs, which I had to confront when I was youngish because (as I randomly mentioned) I have a phobia that started pretty early. By then I had accurate enough mental models of my parents to know that they wouldn't be very helpful/accommodating, so I just developed a bunch of workarounds and didn't start telling people about it until way later. The experience helped me reason about a lot of these blue-killing robot types of situations, and get used to handling involuntary or emotional responses in a goal-optimizing way. As a result, I'm interested in cognitive biases, neurodiversity, and braaains, as well as how to explain and teach useful life skills to my tiny brother so that he doesn't have to learn them the hard way.
My undergrad degree is in CS/Math, I'm currently a CS grad student (though I don't know if I'm sticking around) and I'm noticing that I have a weird gap in my understanding of AI-related discussions, so I'll probably start asking more questions about it. I regret to admit I've been avoiding probability because I was bad at it, but I'm slowly coming around to the idea that it's important and I need to just suck it up and learn. Also, a lot of sciencey people whine about this, but I think AP Lit (and similar classes) helped me think better; it taught me to read the question carefully, read the text closely, pay attention to detail and collect evidence! But it has possibly made me way too sensitive to word choice; I apologize for comments saying "you could have used this other word but you didn't, so clearly this means something!" when the other word has never crossed your mind.
I started reading the site so long ago that I can't actually remember how I found it. One of the things I appreciate the most about the community is the way people immediately isolate problems, suggest solutions and then evaluate results, which is awesome! and also not an attitude I'm used to seeing a lot. I also appreciate having a common vocabulary to discuss biases, distortions, and factors that lead to disagreements. There were a lot of concepts I wanted to bring up with people that I didn't have a concise word for in the past.
I'm Robby Oliphant. I started a few months ago reading HP:MoR, which led me to the Sequences, which led me here about two weeks ago. So far I have read comments and discussions solely as a spectator. But finally, after developing my understanding and beginning on the path set forth by the sequences, I remain silent no more.
I am fresh out of high school, excited about life, and plan to become a teacher, eventually. My short-term plans involve going out and doing missionary work for my church for the next two years. When I came up against the problem of being both a rationalist and a missionary for a theology, I took a step back and had a crisis of belief (not for the first time), but this time I followed the prescribed method and came to a modified conclusion, though I still find it rational and advantageous to serve my two-year mission.
I find some of this difficult, some of this intuitive, and some of this neither difficult nor intuitive, which is extremely frustrating: how can something appear simple yet defy my efforts to work it intuitively? I will continue to work at it because rationality seems to be praiseworthy and useful. I hope to find the best evidence about theology here. I don't mean evidence for or against, just the evidence about the subject.
Hahaha! I find it heartening that that is your response to me wanting to be a teacher. I am quite aware that the system is broken. My personal way of explaining it: the school system works for what it was made to work for, namely avoiding responsibility for a failed product.
The parents are not responsible; the school taught their kids.
The students are not socially responsible; everything was compulsory, they had no choice to make.
Teachers are not to blame; they teach what they are told to teach and have the autonomy of a pre-AI computer intelligence.
The administrators are not to blame; They are not the students' parents or teachers.
The faceless, nameless committees that set the curriculum are not responsible; they formed and then disbanded after setting forth the unavoidably terrible standards for all students of an arbitrary age everywhere.
So the product fails, but everyone did their best. No nails stick out, no one gets hammered.
I have high dreams of being the educator who takes down public education. If a teacher comes up with a new way of teaching or an important thing to teach, he can go to class the next day and test it. I hope for professional teachers: either trusted with the autonomy of being professionals, or actual professionals in their subject, teaching only those who want to learn.
I am also thankful for the literature on Mormons from Desrtopa, Ford, and Nisan. I enjoyed the Mormonism organizational post because I have also noticed how well the church runs. It is one reason I remain a Latter-Day Saint in this time of mainstreaming atheism. The church is winning: it is well organized, service- and family-oriented, and supports me as I study rationality and education. I can give examples, but I will leave my deeper insights for my future posts; I feel I am well introduced for now.
I would be quite interested to see a more detailed post regarding that last part. Of course, I am just some random guy on the Internet, but still :-)
I don't think you'll find much discussion of theology here, since in these parts religion is generally treated as an open and shut case. The archives of Luke Muelhauser's blog, Common Sense Atheism, are probably a much more abundant resource for rational analysis of theology; it documents his (fairly extensive) research into theological matters stemming from his own crisis of faith, starting before he became an atheist.
Obviously, the name of the site is rather a giveaway as to the ultimate conclusion he drew (I would have named it differently in his place), and the foregone conclusion might be a bit mindkilling, but I think the contents will probably be a fair approximation of the position of most of the community here on theological matters, made more explicit than it generally is on Less Wrong.
I appreciate your altruistic spirit and your goal of gathering objective evidence regarding your religion. I'm glad to see you beginning on the path of improving your rationality! If you haven't encountered the term "effective altruist" yet or have not yet investigated the effective altruist organizations, I very much encourage you to investigate them! As a fellow altruistic rationalist, I can say that they've been inspiring to me and hope they're inspiring to you as well.
I feel it necessary to inform you of something important yet unfortunate about your goal of becoming a teacher. I'm not happy to have to tell you this, but I am quite glad that somebody told you about it at the beginning of your adulthood:
The school system is broken in a serious way. The problem is with the fundamental system, so it's not something teachers can compensate for.
If you wish to investigate alternatives to becoming a standard school teacher, I would highly recommend considering becoming involved with effective altruists. An organization like THINK or 80,000 Hours may be very helpful to you in determining what sorts of effective and altruistic things you might do with your skills. THINK does training for effective altruists and helps them figure out what to do with themselves. 80,000 Hours helps people figure out how to make the most altruistic contribution with the careers they already have.
For information regarding religion, I recommend adding the blog of a former Christian, Luke Muehlhauser, to your reading list: Common Sense Atheism. I recommend this in particular because he completed the process you've started (reviewing Christian beliefs), so Luke's writing may save you significant time and provide you with information you may not encounter in other sources. Also, because he began as a Christian, I'm guessing his reasoning was not unnecessarily harsh toward Christian ideas, as it might have been otherwise. The sampling of his blog that I've read is of good quality. He's a rationalist, so that might be part of why.
I would love to hear more details, both about the process and about the conclusion, if you are brave/foolish enough to share.
I am Yan Zhang, a mathematics grad student specializing in combinatorics at MIT (and soon to work at UC Berkeley after graduation) and co-founder of Vivana.com. I was involved with building the first year of SPARC. There, I met many cool people at CFAR, for which I'm now a curriculum consultant.
I don't know much about LW but have liked some of the things I have read here; AnnaSalamon described me as a "street rationalist" because my own rationality principles are home-grown from a mix of other communities and hobbies. In that sense, I'm happy to step foot into this "mainstream dojo" and learn your language.
Recently Anna suggested I may want to cross-post something I wrote to LW and I've always wanted to get to know the community better, so this is the first step, I suppose. I look forward to learning from all of you.
Welcome! It's good to see you here.
I am Pinyaka. I've been lurking a bit around this site for several months. I don't remember how I found it (probably a linked comment from Reddit), but stuck around for the main sequences. I've worked my way through two of them thanks to the epub compilations and am currently struggling to figure out how to prioritize and better put into practice the things that I learn from the site and related readings.
I hope to have some positive social interactions with the people here. I find that I become fairly unhappy without some kind of regular socialization in a largish group, but it's difficult to find groups whose core values are similar to mine. In fact, after leaving a quasi-religious group last year it occurred to me that I've always just fallen in with whatever group was most convenient and not too immediately repellant. This marks the first time I've tried to think about what I value and then seek out a group of like minded individuals.
I also hope to find a consistent stream of ideas for improving myself that are backed by reason and science. I recognize that removing (or at least learning to account for) my own biases will help me build a more accurate picture of the universe that I live in and how I function within that framework. Along with that, I hope to develop the ability to formulate and pursue goals to maximize my enjoyment of life (I've been reading a bunch of lukeprogs anti-akrasia posts recently, so following through on goals is on my mind currently).
I am excited to be here.
Hi Pinyaka!
Semi-seriously, have you considered moving?
Welcome! You might enjoy it if you show up to a meetup as well.
Hi there community! My name is Dave. Currently hailing from the front range in Colorado, I moved out here after 5 years with a Chicago non-profit - half as executive director - following a diagnosis of Asperger Syndrome (four years after being diagnosed with ADHD-I). That was three years ago. Much has happened in the interim, but long story short, I mercilessly began studying what we call AS & anything related I could find. After a particularly brutal first-time experience with hardcore passive-aggressivism (always two sides to every situation, but it doesn't work well when no one will talk about it :P), I became extremely isolated, & have been now for about a year. I'm in my second attempt to return to school via a great community college, but unfortunately the same difficulties as last term are getting in the way.
BUT, that's a different story! I've had this site recommended to me a few times now because over the course of my isolation I've become completely preoccupied with all sorts of fun mental projects, ranging in topics from physics to consciousness to quantum mechanics to dance. My current big projects (I bounce around a loooooot) are creating a linear model for the evolution of cognitive development & showing in some way why I'm not sure I agree that time is the fourth dimension. Oh, also trying to develop a structure for understanding :)
After looking through a few of the welcome threads here, I'm excited to be here! Now all I have to do is keep consistent...
I'm Shai Horowitz. I'm currently a dual physics and mathematics major at Rutgers University. I first learned of the concepts of "Bayesianism" and "rationality" through HPMOR, and from there I took it upon myself to read the Overcoming Bias posts, an extremely long endeavor which I have almost, but not yet, completed. Through conversation with others in my dorm at Rutgers I have realized just how much this learning has done for my thought process: it allowed me to home in on thoughts of my own that I could see were still biased and go about fixing them. By the same reasoning, it became apparent to me that it would be largely beneficial to become an active part of the Less Wrong community, sharpening my own skills as a rationalist while helping others along the way. I embrace rationality for the very specific reason that I wish to be a physicist, and I realize that in trying to do so I could (as Eliezer puts it) "shoot off my own foot" while doing things that conventional science allows. In the process of learning this I stalled out for months at a time and even became depressed for a while as I was stabbing my weakest points with the metaphorical knife. I do look back and laugh now at the fact that a college student was making incredibly bad decisions to get over the pain of fully embracing the second law of thermodynamics and its implications, which to me seems to be a sign of my progress moving forward. I don't think I will soon have to face a fact as daunting as that one, and knowing that I was able to accept even that law, I will be able to accept other truths much more easily. That being said, even though hard science is my primary reason for learning rationality, I am a bit of a self-proclaimed polymath and have recently spent time learning more about psychology and cognition than simply the cognitive biases I need to be wary of.
I just finished the book "Influence: Science and Practice," which I've heard Eliezer mention multiple times, and very recently (as in this week) my interests have turned to pushing standard ethical theories to their limits, so as to truly understand how to make the world a better place and to unravel the black box that is the word "better" itself. In conclusion, I would love to talk with anyone, experienced or new to rationality, about pretty much any topic, and would very much like it if someone would message me. Furthermore, if anyone reading this goes to Rutgers University or is in the area, a meetup over coffee or something similar would make my day.
Welcome! I am really curious what you mean by
Hello,
I found this site via HPMOR, which is the most awesome book I have read in several years. Besides being awesome as a book, there were a lot of moments during reading when I thought: wow, there is someone who really thinks quite like me. (Which is unfortunately something I do not experience too often.) Thus I was interested in who the author of HPMOR is, so I googled "less wrong".
This site really delivered what HPMOR promised, so I spent quite some time reading through many articles, absorbing a lot of new and interesting concepts.
As for myself, I am a 30-year-old biochemist currently working on my master's thesis in structural biology. I grew up and live in Cologne, Germany.
Since early childhood I have been very interested in everything science-, engineering-, and philosophy-related, so the inferential distances to most topics discussed here were not too large. On the downside, most people perceive me as quite nerdy. This is reinforced by my rather poor social skills (I am possibly on the spectrum), so I was bullied a lot during childhood. Thus my social life was quite dim, though it improved quite a lot during my twenties, mostly due to having a relationship.
I was raised with an agnostic, or at most weakly Catholic ("maybe there is a god" or something), worldview, and became increasingly atheistic during my teen years, though this is not really remarkable and pretty much the default for scientifically educated people in Germany. Furthermore, a lot of transhumanist idea(l)s have a lot of appeal to me.
Besides the clarity and high intellectual level of discourse on this site, I really like the technophilic, progress-optimistic worldview of most people here. The general "technology is evil" meme held by a lot of "intellectuals" really puts me off, especially when they do not realize that their entire lives depend utterly on the very technology they shun.
My main criticism is an (IMHO) over-representation of the AI-foom scenario as a projected future, though this is a post of its own (which I hope to write up soon).
I have been lurking on the site for quite some time now (> 1 year), mostly for akrasia-related reasons. First, I really like reading interesting ideas and dislike writing, so time spent on Less Wrong has a much higher hedonic quality for me if I read articles than if I write my own articles or comments. Second, whenever I read a post here and find something missing, imprecise, or even wrong, in most cases someone has already pointed it out, often more precisely and eloquently than I could have done, so I mostly did not feel much need to comment anyway.
I decided to delurk now anyway, because I have several ideas for posts in mind, which I hope to write up over the next few weeks or months, hopefully contributing to the awesomeness of this site. I am also contemplating starting an LW meetup group in my hometown (I could use some help / advice there).
Kudos and an unconditional upvote to the first person who guesses the meaning of my username.
Hello and goodbye.
I'm a 30 year old software engineer with a "traditional rationalist" science background, a lot of prior exposure to Singularitarian ideas like Kurzweil's, with a big network of other scientist friends since I'm a Caltech alum. It would be fair to describe me as a cryocrastinator. I was already an atheist and utilitarian. I found the Sequences through Harry Potter and the Methods of Rationality.
I thought it would be polite, and perhaps helpful to Less Wrong, to explain why I, despite being pretty squarely in the target demographic, have decided to avoid joining the community and would recommend the same to any other of my friends or when I hear it discussed elsewhere on the net.
I read through the entire Sequences and was informed and entertained; I think there are definitely things I took from it that will be valuable ("taboo" this word; the concept of trying to update your probability estimates instead of waiting for absolute proof; etc.)
However, there were serious sexist attitudes that hit me like a bucket of cold water to the face - assertions that understanding anyone of the other gender is like trying to understand an alien, for example.
Coming here to Less Wrong, I posted a little bit about that, but I was immediately struck in the "sequence rerun" by people talking about what a great utopia the gender-segregated "Failed Utopia 4-2" would be.
Looking around the site even further, I find that it is over 90% male as of the last survey, and just a lot of gender essentialist, women-are-objects-not-people-like-us crap getting plenty of upvotes.
I'm not really willing to put up with that and still less am I enthused about identifying myself as part of a community where that's so widespread.
So, despite what I think could be a lot of interesting stuff going on, I think this will be my last comment and I would recommend against joining Less Wrong to my friends. I think it has fallen very squarely into the "nothing more than sexism, the especially virulent type espoused by male techies who sincerely believe that they are too smart to be sexists" cognitive failure mode.
If you're interested in one problem that is causing at least one rationalist to bounce off your site (and I think the odds are not unreasonable that where one person writes a long heartfelt post, there are multiple others who just click away), here you go. If not, go ahead and downvote this into oblivion.
Perhaps I'll see you folks in some years if this problem here gets solved, or some more years after that when we're all unfrozen and immortal and so forth.
Sincerely,
Sam
Try to keep in mind selection effects. The post was titled Failed Utopia - people who agreed with this may have posted less than those who disagreed.
I confess to being somewhat surprised by this reaction. Posts and comments about gender probably constitute around 0.1% of all discussion on LessWrong.
Whenever I see a high quality comment made by a deleted account (see for example this thread where the two main participants are both deleted accounts), I'd want to look over their comment history to see if I can figure out what sequence of events alienated them and drove them away from LW, but unfortunately the site doesn't allow that. Here SamLL provided one data point, for which I think we should be thankful, but keep in mind that many more people have left and not left visible evidence of the reason.
Also, aside from the specific reasons for each person leaving, I think there is a more general problem: why do perfectly reasonable people see a need to not just leave LW, but to actively disidentify or disaffiliate with LW, either through an explicit statement (SamLL's "still less am I enthused about identifying myself as part of a community where that's so widespread"), or by deleting their account? Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?
It may be because a lot of LW regulars visibly think of it in terms of identity. LW is described by most participants as a community rather than a discussion forum, and there has been a lot of explicit effort to strengthen the communitarian aspect.
Some people come from a background where they're taught to think of everything in terms of identity.
Some possibilities:
There have been deliberate efforts at community-building, as evidenced by all the meetup-threads and one whole sequence, which may suggest that one is supposed to identify with the locals. Even relatively innocuous things like introduction and census threads can contribute to this if one chooses to take a less than charitable view of them, since they focus on LW itself instead of any "interesting idea" external to LW.
Labeling and occasionally hostile rhetoric: Google gives dozens of hits for terms like "lesswrongian" and "LWian", and there have been recurring dismissive attitudes regarding The Others and their intelligence and general ability. This includes all snide digs at "Frequentists", casual remarks to the effect of how people who don't follow certain precepts are "insane", etc.
The demographic homogeneity probably doesn't help.
I agree with these, and I wonder how we can counteract these effects. For example I've often used "LWer" as shorthand for "LW participant". Would it be better to write out the latter in full? Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation? For example, we could explain that "joining the LW community" ought to be interpreted as "making use of LW facilities and contributing to LW discussions and projects" rather than "adopting 'LW member' as part of one's social identity and endorsing some identifying set of ideas", and maybe link to some articles like Paul Graham's Keep Your Identity Small.
"Here at LW, we like to keep our identity small."
Nice one.
LW is a hub for several abnormal ideas. An implication that you're affiliated with LW is an implication that you take these ideas seriously, which no reasonable person would do.
As a hypothesis, they may be ambivalent about discontinuing their hobby ("Two souls alas! are dwelling in my breast; (...)") and prefer to burn their bridges to avoid further ambivalence and decision pressure. Many prefer to have a course of action locked in, as opposed to being continually tempted by the alternative.
I guess you get considered fully unclean even if you're only observed breaking a taboo a few times.
Your comment's first sentence answers your second paragraph.
Did you use a Rawlsian veil of ignorance when judging it? From a totally selfish point of view, I would very, very, very much rather be myself in this world than myself in that scenario (given that, among plenty of other things, I dislike most people of my gender), but think of, say, starving African children or people with disabilities. I don't know much about what it feels like to be in such dire straits so I'm not confident that I'd rather be a randomly chosen person in Failed Utopia 4-2 than a randomly chosen person in the actual world, but the idea doesn't sound obviously absurd to me.
Thanks for writing this. It's true that LW has a record of being bad at talking about gender issues; this is a problem that has been recognized and commented on in the past. The standard response seems to have been to avoid gender issues whenever possible, which is unfortunate but maybe better than the alternative. But I would still like to comment on some of the specific things you brought up:
I think I know the post you're referring to; I didn't read it as sexist, and I don't think that indicates a male-techie failure mode about sexism on my part. Some men are just really, really bad at understanding women (and maybe commit the typical mind fallacy when they try to understand men, and maybe just don't know anyone who doesn't fall into one of those categories), and I don't think they should be penalized for being honest about this.
I haven't seen too much of this. Edit: Found some more.
Where? Edit: Found some of this too.
This is a somewhat dangerous weapon to wield. It is very easy to classify any attempt to counter this argument as falling into the failure mode you describe; please don't use this as a fully general counterargument.
Since I cannot imagine anything but a few cherry-picked examples that could have led to your impression, let me offer some of my own (the number of cases is low):
The extremely positive reception of Alicorn's "Living Luminously" sequence (karma +50 for the main post alone) and Anja's great, technical posts (karma +13, +34, +29) all indicate that good content is not filtered along gender lines, which it would be if there were some pervasive bias.
Even asserting that understanding anyone of the other gender is "like trying to understand an alien" does not imply any sort of male superiority complex. If by "sexism" you object merely to pointing out that there are differences, based both on culture and on genetics, well, you've got me there. Quite obviously there are; I assume you don't live in a hermaphrodite community. Why is it bad when/if that comes up? Forbidden knowledge?
Are you sure that's the rationalist thing to do? Gender imbalance and a few misplaced or easily misinterpreted remarks need not be representative of a community, just as a predominantly male CS program at Caltech and frat jokes need not be representative of College culture.
It's possible that user is sensitive to gender issues precisely because it's comparatively difficult and not entirely rationalist to leave a community like Caltech.
It's generally the stance of gender-sensitive humans that no one should have to listen to the occasional frat joke if they don't want to. I agree with everything else in your post; that final "can't you take a frat joke?" strikes me as defensive and unnecessary.
Hey everyone,
As I continue to work through the sequences, I've decided to go ahead and join the forums here. A lot of the rationality material isn't conceptually new to me, although much of the language is very much so, and thus far I've found it to be exceptionally helpful to my thinking.
I'm a 24-year-old video game developer, having worked on graphics for a particular big-name franchise for a couple of years now. It's quite the interesting job, and is definitely one of the realms in which I find the heady, abstract rationality tools extremely helpful. Rationality is what it is, and that seems to be acknowledged here, a fact I'm quite grateful for.
When I'm not discussing the down-to-earth topics here, people may find I have a sometimes anxiety-ridden attachment to certain religious ideas. Religious discussion has been extremely normal for me throughout my life, so while the discussion doesn't make me uncomfortable, my inability to come to answers that I'm happy with does, and has caused me a bit of turmoil outside of discussion. Obviously there is much to say about this, and much people may like to say to me, but I'd like to first get through all the sequences, get all of my questions about it all answered, pay attention a bit to the discussions here, and I'll go from there. I have no grand hopes to finally put these beliefs to rest, but I will go to lengths to see whether it is something I should do. To pick either seems to me to suppose I have a Way to rationality, if I understand the point correctly. I would invite any and all discussion on the topic, and I appreciate the little "welcome to Theists" in the main post here. :)
See you all around.
Welcome! Glad to see you here. :D
Hello everyone!
My personal and professional development keep leading me back to the LessWrong sequences, so I've gathered up enough humility to join in the discussions. I hope to meet your high standards.
I'm 27 and my background is in business and the life sciences; I see rationality as a critically important tool in these areas, but ultimately a relatively minor tool for life as a successful human animal. As such I see this community as being similar to a bodybuilding/powerlifting community, where the interest is in training the rational faculty instead of physical strength.
Edit: Wow, all my comments downvoted? That's a pretty strongly negative response. Care to explain?
From what I can see, people probably thought you were belaboring a point which was not part of the discussion at hand. You said you were answering the moral value of "there exists 3^^^3 people AND..." versus the situation without that prefix, but the people discussing it did not take that interpretation of the problem, nor did Eliezer when he posed it. You might say that to determine the value of 3^^^3 people getting specks in their eyes you would have to presuppose it included the value of them existing, but nobody was discussing that as if it were part of the problem. It sucks, yeah, but the way that people prefer to have discussions wins out; you can accept that or not, or push for change through the right channels. A good lesson to learn, and don't be discouraged.
Thank you.
Hello,
I'm Ben. I'm here mainly because I'm interested in effective altruism. I think that tracing through the consequences of one's actions is a complex task and I'm interested in setting out some ideas here in the hope that people can improve my reasoning. For example, I've a post on whether ethical investment is effective, which I'd like to put up once I've got a couple of points of karma.
I studied philosophy and theology, and worked for a while in finance. Now, I'm trying to work out how to increase the positive impact I have, which obviously demands answers about both what 'positive impact' means, and what the consequences are of the choices I make. I think these are far from simple to work out; I hope just to establish a few points with which I'm satisfied enough. I think that exposing ideas and arguments to thoughtful people who might want to criticise or expand them could help me a lot. And this seems a good place for doing that!
Hi, I'm Alex.
Every once in a while I come to LessWrong because I want to read more interesting things and have more interesting discussions on the Internet. I've found it a lot easier to spend time on Reddit (having removed all the drivel) and dredging through Quora to find actually insightful content (seriously, do they have any sort of actual organization system for me to find reading material?) in the past. LessWrong's discussions have seemed slightly inaccessible, so maybe posting an introduction like I'm supposed to will set in motion my figuring out how this community works.
I'm interested in a lot of things here, but especially physics and mathematics. I would use the word "metaphysics" but it's been appropriated for a lot of things that aren't actually meta-physics like I mean. Maybe I want "meta-mathematics"? Anyway, I'm really keen on the theory behind physical laws and on attempts at reformulating math and physics into more lucid and intuitive systems. Some of my reading material (I won't say research, but ... maybe I should say research) recently has been on geometric algebra, re-axiomatizing set theory, foundations and interpretations of quantum mechanics, reformulations of relativity, quantum field theory's interpretation, things like that. I have a permanent distaste for spinors and all the math we don't try to justify with intuition when teaching physics, so I've spent a lot of my last few years studying those.
I was really intrigued by the articles (blog posts?) on what proofs actually mean, and on causality, a few months ago; that's when I started reading the site. I've spent the better part of the last year sifting through all kinds of math ideas related to reinterpretations or 'fundamental' insights, so I hope hanging around here can expose me to some more.
Oh, and I've spent a good amount of time on the Internet refuting crackpots who think they solved physics, so I, um, promise I'm not one.
I'm a programmer by trade and have a good interest in revolutionary (or just convenient) software projects and disruptive ideas and really naive, idealist world-changing ideas, which is fun.
I have read some of the sequences and such, but - I guess I'm a rationalist at heart already, maybe because I've studied lots of logic and such, but a lot of the basic stuff seemed pretty apparent to me. I was already up to speed on Bayes and quantum mechanics, for example, and never considered anything other than atheism. And I already optimize and try to look at life in terms of expected payoffs and other very rational things like that. But it's possible I've missed a lot of the material here - I find navigating the site to be pretty unintuitive.
I'm based in Seattle and I hope to go to the meetups if they... ever happen again. I mostly just like talking to smart people; I find it makes my brain work better - as if there's some sort of 'conversation mode' which hypercharges my creativity.
Oh, and I have a blog: http://ajkjk.com/blog/. I'm slightly terrified of linking it; it's the first time I've shown it to anyone but friends. It only has 6 posts so far. I've written a lot more but deleted/hid them until they're cleaned up.
Now I'm tempted to spread a meme. Have you heard of Martin-Löf type theory? In my opinion, it's a much better foundation for mathematics than ZFC.
Be very careful thinking you are done. I was in pretty much exactly the same position as you about a year ago. ("yep, I'm pretty rational. Lol @ god; I wonder what it's like to have delusional beliefs"). After a year and a half here, having read pretty much everything in the sequences and most of the other archives, running a meetup, etc, I now know that I suck at rationality. You will find that you are nowhere near the limits, or even the middle, of possible human rationality.
Further, I now know what it's like to have delusional beliefs that are so ingrained you don't even recognize them as beliefs, because I had some big ones. I probably have more. They're not easy to spot from the inside.
On the subject of atheism... I used to be an atheist, too. The rabbit hole you've fallen into here is deep.
The Seattle guys are pretty cool, from those I've met. Go hang out with them.
Don't be mysterious, Morpheus, please elaborate.
Okay, sure. Rather, I mean: I feel like I'm past the introductory material. Like I'm coming in as a sophomore, say. But - I could be totally wrong! We'll see.
I've definitely got counter-rational behaviors ingrained; I'm constantly fighting my brain.
And, if we're pedantic about things pretty similar to atheism, I might not be an atheist. I'm not up to speed on all the terms. What do you call:
I was calling that atheism.
From your blog:
This is amazing, yet seems so obvious in retrospect. So many of us have turned into blue-minimizing robots without realizing it. Hopefully breaking the reward feedback loop with your extension would force people to try to examine their true reasons for clicking.
I was pretty pleased with myself for discovering that. It - sorta works. I still find myself going to Reddit, but so far it's still "feeling" less addictive (which is really hard to quantify or describe). Now I'm finding myself just clicking to websites more looking for something, rather than specifically clicking links. I've been sleeping badly lately, though, and I find that my brain is a lot more vulnerable to my Internet addiction when I haven't slept well - so it's not a good comparison to my norm.
Incidentally, if anyone wanted me to I could certainly make the extension work on other browsers. It's the simplest thing ever, it just injects 7 clauses of CSS into Reddit pages. I thought about making it mess with other websites I used (hackernews, mostly) but I decided they weren't as much of a problem and it was better to keep it single-purpose for now.
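For anyone curious what "7 clauses of CSS" looks like in extension form: the whole mechanism is just a `content_scripts` entry in the extension's manifest pointing a stylesheet at reddit.com, with rules along these lines. (The selectors below are my illustrative guesses at what a score-hiding extension would target, not the actual seven clauses.)

```css
/* Hypothetical hide.css: suppress the reward-feedback signals
   (point counts, vote arrows, karma) while leaving links usable. */
.score { display: none !important; }                 /* post point counts */
.arrow.up, .arrow.down { display: none !important; } /* vote buttons */
.karma { display: none !important; }                 /* your own karma readout */
```

A matching manifest would declare `"content_scripts": [{"matches": ["*://*.reddit.com/*"], "css": ["hide.css"]}]` so the browser injects the stylesheet on every Reddit page load.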
Greetings LWers,
I'm an aspiring Friendliness theorist, currently based at the Australian National University -- home to Marcus Hutter, Rachael Briggs and David Chalmers, amongst others -- where I study formal epistemology through the Ph.B. (Hons) program.
I wasn't always in such a stimulating environment -- indeed I grew up in what can only be deemed intellectual deprivation, from which I narrowly escaped -- and, as a result of my disregard for authority and disdain for traditional classroom learning, I am largely self-taught. Unlike most autodidacts, though, I never was a voracious reader; on the contrary, I barely opened books at all, instead preferring to think things over in my head. This has left me an ignorant person -- something I'm constantly striving to improve on -- but has also protected me from many diseased ideas and even allowed me to better appreciate certain notions by having to rediscover them myself. (Case in point: throughout my adolescence I took great satisfaction in analysing my mental mechanisms and correcting for what I now know to be biases, yet I never came across the relevant literature, essentially missing out on a wealth of knowledge.)
For a long time I've aspired to join a cultural movement modelled on the principles of the Enlightenment and, to my eyes, LW, MIRI, CFAR, FHI and CSER are exactly the kind of community that can impact society through the use of reason. Alas, I was long unaware of their existence and when I first heard about the 'Singularity' I immediately dismissed it as the science fiction it sounds like, but thankfully this is no longer the case and I can now start making my modest contributions to reducing existential risk.
Lastly, I've never had my IQ measured properly -- passing the Mensa admission test places me at least two SDs above the norm, but that's hardly impressive by LW standards -- and, as much as I value such an indicator, I'm too emotionally invested in my intelligence to dare undergo psychometric testing. (for what it's worth, as a child my development was precocious -- e.g. the development of my motor skills was superior to that of the subjects taking part in this well-known longitudinal study)
I've opened up a lot to you, LWers; I hope my only regret will be not having discovered you earlier...
Nice! What part of FAI interests you?
Too soon to say, as I discovered FAI a mere two months ago -- this, incidentally, could mean that it's a fleeting passion -- but CEV has definitely caught my attention, while the concept of a reflective decision theory I find really fascinating. The latter is something I've been curious about for quite some time, as plenty of moral precepts seem to break down once an agent -- even a mere homo sapiens -- reaches certain levels of self-awareness and, thus, is able to alter their decision mechanisms.
Isn't that a proper IQ test? At least it is where I live. Funny how we like to talk about things we're good at. The real test is "time from passing test to time you leave to save the yearly fee."
That's awesome. Don't miss Marcus's lectures - such a sharp mind. Also, there's a MIDI of the Imperial March playing (or there used to be?) on his home page.
Yes and no; it's some version of the Cattell, but it's not administered individually, has a lowish ceiling and they don't reveal your exact result.
For the record, you needn't join in order to take their heavily subsidised admission test.
Aaron's blog brought me here. Sad that he's no longer with us.
I have been thinking for a long time about overcoming biases, and about putting that thinking into action in life. I work as an orthopaedic surgeon, and in the daytime all I see around me is an infinite amount of bias. I can't take it on unless I can understand the biases and apply that understanding to my own life processes!
Hi! I was wondering where to start on this website. I started reading the sequence "How to actually change your mind", but there's a lot of lingo and stuff I still don't understand. Is there a sequence here that's like, Rationality for Beginners, or something? Thanks.
Probably the best thing you can do, for yourself and for others, is to post comments on the posts you've read, asking questions where you don't understand something. The sequences ought to be as easy to understand as possible, but the reality may not always approach the ideal.
But if the jargon is the problem, the LW wiki has a dictionary.
I found the order presented in the wiki's guide to the sequences to be quite helpful.
Hello. I've read sequence articles and discussion off this website for a while now. Been hesitant to join before because I like to keep my identity small but recently realized that being able to talk to others about topics on this site will make me more effective at reaching my goals.
Armchairs are very comfortable and I'm having some mental difficulty putting the effort into the practice of achieving set goals. It's very hard to actually do stuff and easy to just read about interesting topics without engaging.
I'm interested more in meta-ethics than in physics, more in decision theory than practical AI. My first comments will likely be in the sequences or in discussion comments of a few specific natures.
This should be fun, I look forward to talking with you. Ask me any questions that arouse your curiosity.
The browsing experience with Kibitzing off is strange but not unpleasant. How long did it take for you to get accustomed to it?
Hi LWers,
I am Robert and I am going to change the world. Maybe just a little bit, but that’s ok, since it’s fun to do and there’s nothing else I need to do right now. (Yay for mini-retirements!)
I find some of the articles here on LW very useful, especially those on heuristics and biases, as well as the material on self-improvement, although I find it quite scattered among loads of way-too-theoretical stuff. Does it seem odd that I have learned many more useful tricks and gained more insight from reading HPMOR than from reading 30 to 50 high-rated and "foundational" articles on this site? I am sincerely sad that even the leading rationalists on LW seem to struggle to get actual benefits out of their special skills and special knowledge (Yvain: Rationality is not that great; Eliezer: Why aren't "rationalists" surrounded by a visible aura of formidability?) and I would like to help them change that.
My interest is mainly in contributing more structured, useful content, and also in banding together with fellow LWers to practice and apply our rationalist skills. As a stretch goal, I think we could pick someone really evil as our enemy and take them down, just to show our superiority. Let me stress that I am not kidding here. If rationality really counts for something (other than being good entertainment for sciency types and sci-fi lovers), then we should be able to find the right levers and play out a great plot that just leaves everyone gasping "shit!" And then we'll have changed the world, because people will start taking rationality seriously.
Let me send out a warm “thank you” to you all for welcoming me in your rationalist circles!
Welcome!
Because they don't project high status with their body language?
Re: Taking out someone evil. Let's be rational about this. Do we want to get press? Will taking them out even be worthwhile? What sort of benefits from testing ideas against reality can we expect?
I think humans who study rationality might be better than other humans at avoiding certain basic mistakes. But that doesn't mean that the study of rationality (as it currently exists) amounts to a "success spray" that you can spray on any goal to make it more achievable.
Also, if the recent survey is to be believed, the average IQ at Less Wrong is very high. So if LW does accomplish something, it could very well be due to being smart rather than having read a bunch about rationality. (Sometimes I wonder if I like LW mainly because it seems to have so many smart people.)
I'm evil by some people's standards. You'll have to get a little bit more specific about what you think constitutes evil.
From what I've seen, real evil tends to be petty. Most grand atrocities are committed by people who are simply incorrect about what the right thing to do is.
You may follow HJPEV in calling world domination "world optimization", but running on some highly unreliable wetware means that grand projects tend to become evil despite best intentions, due to snowballing unforeseen ramifications. In other words, your approach seems to be lacking wisdom.
You seem to be making a fully general argument against action.
Against any sweeping action without carefully considering and trying out incremental steps.
Greetings! I am Viktor Brown (please do not spell Viktor with a c), and I tend to go by deathpigeon (please do not capitalize it or spell pigeon with a d) on the internet. (I cannot actually think of a place where I don't go by deathpigeon...) I'm currently 19 years old. I'm unemployed and currently out of school, since my parents cut me off and stopped paying for it. I consider myself a rationalist, a mindset that comes from how I was raised rather than from any particular moment in my life. When I was still in university I was studying computer science, a subject that still interests me, and I learned some programming in C++. When my income is positive enough that I can afford to continue my schooling, I plan to keep studying computer science. Around the internet, I tend to hang out in the TvTropes fora, where I also go by deathpigeon. I make a point of regularly reviewing my beliefs, be they political, religious, or something else. I'm not entirely sure what else to say, since I'm terrible with social situations, and introducing myself to a bunch of strangers is a situation I'm especially bad with.
ouch... who the hell downvotes a greeting post?
Hi, my name is Briony Keir, I'm from the UK. I stumbled on this site after getting into an argument with someone on the internet and wondering why they ended up failing to refute my arguments and instead resorted to insults. I've had a read-around before posting and it's great to see an environment where rational thought is promoted and valued; I have a form of autism called Asperger Syndrome which, among many things, allows me to rely on rationality and logic more than other people seem to be able to - I too often get told I'm 'too analytical' and I 'shouldn't poke holes in other peoples' beliefs' when, the way I see it, any belief is there to be challenged and, indeed, having one's beliefs challenged can only make them stronger (or serve as an indicator that one should find a more sensible viewpoint). I'm really looking forward to reading what people have to say; my environment (both educational and domestic) has so far served more to enforce a 'we know better than you do so stop talking back' rule rather than one which allows for disagreement and resolution on a logical basis, and so this has led to me feeling both frustrated and unchallenged intellectually for quite some time. I hope I prove worthy of debate over the coming weeks and months :)
This is not at all unusual here at LessWrong... I can't seem to find a link, but I seem to recall that a fairly large portion of LessWrong-ers (at least relative to the general population) have Aspergers (or at least are somewhat Asperger-ish), myself included.
I'm not entirely sure though that I agree with the statement that Aspergers is "a form of autism"... I realize that that has been the general consensus for a while now, but I've read some articles (again, can't find a link at the moment, sorry) suggesting that Aspergers is not actually related to Autism at all... personally, my feeling on the matter is that "Aspergers" isn't an actual "disease" per se, but rather just a cluster of personality traits that happen to be considered socially unacceptable by modern mainstream culture, and have therefore been arbitrarily designated as a "disease".
In any case, welcome to LessWrong - I look forward to your contributions in the future!
If anything, I'd be tempted to say that autism is a more pronounced degree of Asperger's. I certainly catch myself somewhere on the spectrum that includes ADD as well.
The whole idea of neurodiversity is kind of exciting, actually. If there can be more than one way to appropriately interact with society, everyone gets richer.
Hello, newbie here. I'm intrigued by the premise of this forum.
About me: I think a lot- mostly by myself. That's trained me in some really lazy habits that I am looking to change now.
In the last few weeks, I noticed what I think are some elemental breakdowns in human politics. When things go bad between people, I think it can be attributed to one of three causes: immaturity, addiction, or insanity. I would love to discuss this further, hoping someone's interested.
I wasn't going to mention theism, but it's here in the main post, and suddenly I'm interested: I trend toward the atheistic- I'm really unimpressed with my grandmother's deity, and "supernatural" doesn't seem a useful or interesting category of phenomena. But I like being agnostic more than atheist, just on a few tiny little wiggle-words that seem powerfully interesting to me, and I notice that other people seem to find survival value in it. So that's probably something I will want to talk about.
Many of my more intellectual friends and neighbors can seem like bullies a lot of the time. So I like the word "rationality" in the title of this place, much more than I like "science" or "logic". When I see the war of the Darwin fish on people's bumpers, I remember that the Romans still get a lot of credit for their accomplishments even though math and science as we know it barely existed. Obsession with mere logic seems to put an awful lot of weight on some unexamined premises- and people don't talk in formal logic any more than they math in Roman numerals.
I'm not against vaccination, but I am a caregiver to a profoundly autistic child. It's frustrating to try to have any sort of conversation about autism without it devolving into a vaccination tirade.
I don't think of myself as a 9/11 "truther", and yet I still have many questions about those events and the response that trouble me. Some of these questions are getting answered now that the 10 year anniversary has seen the release of more information. As with the Kennedy assassination, I don't think the full story will ever be widely known. I'm cynical enough that I doubt that it matters.
SETI fascinates me. Bigfoot, the Loch Ness Monster, UFOs- not so much. Whitley Strieber is actually kind of interesting, when I can muster up the required grains of salt.
Anyway, it feels a bit like I'm crawling out from under a rock, not sure what the weather is really like out here. I want to outgrow the pleasures of cleverness, hoping for some happiness in wisdom.
Yes, I know the feeling. Welcome out of the echo chamber!
Do you mean that it's literally the words you find interesting? Which ones?
I’m Taylor Smith. I’ve been lurking since early 2011. I recently finished a bachelor’s in philosophy but got sort of fed up with it near the end. Discovering the article on belief in belief is what first hooked me on LessWrong, as I’d already had to independently invent this idea to explain a lot of the silly things people around me seemed to be espousing without it actually affecting their behavior. I then devoured the Sequences. Finding LessWrong was like finding all the students and teachers I had hoped to have in the course of a philosophy degree, all in one place. It was like a light switching on. And it made me realize how little I’d actually learned thus far. I’m so grateful for this place.
Now I’m an artist – a writer and a musician.
A frequently-confirmed observation of mine is that art – be it a great sci-fi novel, a protest song, an anti-war film – works as a hack to help to change people’s minds who are resistant or unaccustomed to pure rational argument. This is true especially of ethical issues; works which go for the emotional gut-punch somehow make people change their minds. (I think there are a lot of overlapping reasons for this phenomenon, but one certainly is that a well-told story or convincing song provides an opportunity for empathy. It can also help people envision the real consequences of a mind-change in an environment of relative emotional safety.) This, even though of course the mere fact that someone who holds position X made a good piece of art about X doesn’t actually offer much real evidence for the truth of X. Thus, a perilous power. The negative word for the extreme end of this phenomenon is “propaganda.” Conversely, when folks end up agreeing with whatever a work of art brought them to believe, they praise it as “insightful” or some such. You can sort of understand why Plato was worried about having poets – those irrational, un-philosophic things – in his ideal city, swaying his people’s emotions and beliefs.
If I’m going to help save the world, though, I think I do it best through a) giving money to the efficient altruists and the smart people and b) trying to spread true ideas by being a really successful and popular creator.
But that means I have to be pretty damn certain what the true ideas are first, or I’m just spouting pretty, and pretty useless, nonsense.
So thank you, LessWrongers, for all caring about truth together.
It really feels good to be here. The name alone sounds comforting..... 'less wrong'. I've always loved to be around people who write and provide intuitive solutions to everyday challenges. Guess I'm gonna read a few posts and get acquainted with the customs here, then make meaningful contributions too.
Thanks Guys for this great opportunity.
Hi, I'm Liz.
I'm a senior at a college in the US, soon to graduate with a double major in physics and economics, and then (hopefully) pursue a PhD in economics. I like computer science and math too. I'm hoping to do research in economic development, but more relevantly to LW, I'm pretty interested in behavioral economics and in econometrics (statistics). Out of the uncommon beliefs I hold, the one that most affects my life is that since I can greatly help others at a small cost to myself, I should; I donate whatever extra money I have to charity, although it's not much. (see givingwhatwecan.org)
I think I started behaving as a rationalist (without that word) when I became an atheist near the end of high school. But to rewind...
I was raised Christian, but Christianity was always more of a miserable duty than a comfort to me. I disliked the music and the long services and the awkward social interactions. I became an atheist for no good reason in the beginning of high school, but being an atheist was terrible. There was no one to forgive me when I screwed up, or pray to when the world was unbearably awful. My lack of faith made my father sad. Then, lying in bed and angsting about free will one night, I had some philosophical revelation, and it seemed that God must exist. I couldn't re-explain the revelation to myself, but I clung to the result and became seriously religious for the next year or so. But objections to the major strands of theism began to creep up on me. I wanted to believe in God, and I wanted to know the truth, and I found out that (surprise) having an ideal set of beliefs isn't compatible with seeking truth. I did lots of reading (mostly old-school philosophy), slowly changed my mind, then came out as an atheist (to close friends only) once the Bible Quiz season was over. (awk.)
At that point I decided to never lie to myself again. Not just to avoid comforting half-truths, but to actively question all beliefs I held, and to act on whatever conclusions I come to. After hard practice, unrelenting honesty towards myself is a habit I can't break, but I'm not sure it's actually a good policy. For example, a few white lies would've helped me move past a situation of extreme guilt last year.
Anyway, more recently, I read HPMOR and I'm now reading Kahneman's Thinking, Fast and Slow. I'm slowly working through the Sequences too. I always appreciate new reading recommendations.
I have some thoughts on Newcomb's Paradox. (Of course I am new to this, probably way off base, etc.) I think two boxes is the right way to go, and it seems that intuition towards one-boxing often comes from the idea that your decision somehow changes the contents of the boxes. (No reverse causality is supposed to be assumed, right?) Say that instead of an infallible superintelligence, the story changes to
"You go to visit your friend Ann, and her mom pulls you into the kitchen, where two boxes are sitting on a table. She tells you that box A has either $1 billion or $0, and box B has $1,000. She says you can take both boxes or just A, and that if she predicted you take box B she didn't put anything in A. She has done this to 100 of Anne's friends and has only been wrong for one of them. She is a great predictor because she has been spying on your philosophy class and reading your essays."
Terribly small sample size, but a friend told me this changes his answer from one box to two. As far as I can tell these changes are aesthetic and make the story clearer without changing the philosophy.
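For concreteness, the expected-value arithmetic behind the two positions can be sketched with the numbers from the story (an illustrative calculation only; the two-boxer's objection is precisely that conditioning on the prediction smuggles in the reverse causality the problem disclaims):

```python
# Expected-value sketch for the Newcomb variant above.
# Numbers from the story: box A holds $1 billion if one-boxing was
# predicted, box B always holds $1,000; the predictor was wrong for
# 1 of 100 friends, i.e. 99% accuracy (illustrative).
ACCURACY = 0.99
BOX_A = 1_000_000_000
BOX_B = 1_000

# Evidential reading: condition on the prediction matching your choice.
ev_one_box = ACCURACY * BOX_A                     # ~$990 million
ev_two_box = (1 - ACCURACY) * BOX_A + BOX_B       # ~$10 million

print(ev_one_box > ev_two_box)  # the one-boxer's case, in one line
```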
And, a question. Why is Bayes so central to this site? I use Bayesian reasoning regularly, but I learned Bayes' Theorem around the time I started thinking seriously about anything, so I'm not clear on what the alternative is. Why do y'all celebrate Bayes, rather than algebra or well-designed experiments?
Edit: Read farther in Thinking, Fast and Slow; question answered.
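For anyone else arriving with the same question: the theorem itself is one line, and the classic illustration (made-up numbers for a hypothetical diagnostic test) shows why the prior matters so much:

```python
# Bayes' Theorem on a standard (invented) example: a condition with 1%
# prevalence, a test with 90% sensitivity and a 9% false-positive rate.
prior = 0.01           # P(condition)
sensitivity = 0.90     # P(positive test | condition)
false_positive = 0.09  # P(positive test | no condition)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive  # P(condition | positive)

print(round(posterior, 3))  # ~0.092: most positives are false positives
```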
Welcome to LW.
Also not an expert on Newcomb's Problem, but I'm a one-boxer because I choose to have part of my brain say that I'm a one-boxer, and have that part of my brain influence my behavior if I get into a Newcomb-like situation. Does that make any sense? Basically, I'm choosing to modify my decision algorithm so I no longer maximize expected value because I think having this other algorithm will get me better results.
Hello, I'm Ben Kidwell. I'm a middle-aged classical pianist and lifelong student of science, philosophy, and rational thought. I've been reading posts here for years and I'm excited to join the discussion. I'm somewhat skeptical of some things that are part of the conventional wisdom around here, but even when I think the proposed answers are wrong - the questions are right. The topics that are discussed here are the topics that I find interesting and significant.
I am only formally and professionally trained in music, but I have tried to self-study physics, math, computer science, and philosophy in a focused way. I confess that I do have one serious weakness as a rationalist, which is that I can read and understand a lot of math symbology, but I can't actually DO math past the level of simple calculus with a few exceptions. (Some computer programming work with algorithms has helped with a few things.) It's frustrating because higher math is The Key that unlocks a lot of deep understanding of the universe.
I have a particular interest in entropy, information theory, cosmology, and their relation to the human experience of temporality. I think the discovery that information-theoretic entropy and thermodynamic entropy are equivalent and the quantum formalism encodes this duality is a crucial insight which should be a foundational cornerstone of philosophy and our understanding of the world. The sequence about quantum theory and decoherence is one of my favorites and I think there is a lot more to be done to adjust our philosophy and use of language when it comes to what kind of quantum reality we are living in.
Hello everyone,
I found Less Wrong through "Harry Potter and the Methods of Rationality" like many others. I started reading more of Eliezer Yudkowsky's work a few months ago and was completely floored. I now recommend his writing to other people at the slightest provocation, which is new for me. Like others, I'm a bit scared by how thoroughly I agree with almost everything he says, and I make a conscious effort not to agree with things just because he's said them. I decided to go ahead and join in hopes that it would motivate me to start doing more active thinking of my own.
Hey everyone, I'm sean nolan. I found less wrong from tvtropes.org, but I made sure to lurk sufficiently long before joining. I've been finding a lot of interesting stuff on lesswrong (most of which was posted by eliezer), some of which I've applied to real life (such as how procrastination vs doing something is the equivalent of defect vs cooperate in a prisoners' dilemma against your future self). I'm 99.5% certain I'm a rationalist, the other 0.5% being doubt cast upon me by noticing I've somehow attained negative karma.
Hello all, My name is Benjamin Martens, a 19-year-old student from Newcastle, Australia. Michael Anissimov, director of Humanity+, added me to the Less Wrong Facebook group. I don’t know his reasons for adding me, but regardless I am glad that he did.
My interest in rational thinking, and in conscious thinking in general, stems, first, from the consequences of my apostasy from Christianity, which is my family’s faith; second, from my combative approach to my major depression, which I have (mostly) successfully beaten into submission through an analysis of some of the possible states of the mind and of the world— Less Wrong and the study of cognitive biases will, I hope, further aid me in revealing my depressive worldview as groundless; or, if not as groundless, then at least as something which is not by nature aberrant and which is, to some degree, justified; third, and in connection to my vegan lifestyle, I aim to understand the psychology which might lead a person to cause another being to suffer; and last, and in connection to all aforementioned, it is my hope that an understanding of cognitive biases will allow not merely myself to edge nearer to the true state of things, but also, through me, for others to do so; I want Less Wrong to school me in some underhand, PR techniques of psychological manipulation or modification which will help me teach others about scepticism, about the errors of learned helplessness and about ways out of the self-reinforcing and self-justifying loops of the pessimistic worldview, and allow me to ably coax others towards cruelty-free ways of living. So, that's me. Hello, Less Wrong.
Hi everyone!
Well, I'm new-ish here, and this site is really big, so I was wondering where I should start, like, which articles or sequences I should read first?
Thanks!
Hello, I'm a physics student from Croatia, though I attended a combined physics and computer science program (study programs here are very specific) for a couple of years at a previous university that I left, and my high school specialization is in economics. I am currently working towards my bachelor's degree in physics.
I have no idea how I learned of this site, though it was probably through some transhumanist channels (there's a lot of half-forgotten bits and pieces of information floating in my mind, so I can't be sure). Lately I've started reading the core sequences, mostly on my cell phone, while traveling (it avoids tab explosions). So far I've encountered a lot of what I've already considered or concluded for myself in a more expanded form.
Hello,
I am Jay Swartz, no relation to Aaron. I have arrived here via the Singularity Institute and interactions with Louie Helm and Malo Bourgon. Look me up on Quora to read some of my posts and get some insight into my approach to the world. I live near Boulder, Colorado and have recently started a MeetUp, The Singularity Salon, so look me up if you're ever in the area.
I have an extensive background in high tech, roughly split between Software Development/IT and Marketing. In both disciplines I have spent innumerable hours researching human behavior and thought processes in order to gain insights into how to create user interfaces and how to describe technology in concise ways to help people to evaluate the merits of the technology. I've spent time at Apple, Sun, Seagate, Mensa, Osborne and a few start-ups applying my ever-deepening understanding of the human condition.
Over the years, I have watched synthetic intelligence (I much prefer the more precise SI over AI) grow in fits and starts. I am increasing my focus in this area because I believe we are on the cusp of general SI (GSI). There is a good possibility that within my lifetime I will witness the convergence of technology that leads to the appearance of GSI. This will in part be facilitated by advances in medicine that will extend my lifespan well past 100 years.
I am currently building my first SI web crawler that will begin building a corpus to be mined by some SciPy applications I have on my list of things to do. These efforts will provide me with technical insights on the SI challenge. There is even the possibility, however slight, that they can be matured to make a contribution to the creation of SI.
Finally, I am working on a potential paper for the Singularity Institute. I just posted a first outline/draft, Predicting Machine Super Intelligence, but do not yet know the details on how anyone finds it or how I see any responses. Having been on more than a few sites similar to this, I know I will be able to quickly sort things out.
I am looking forward to reading and exchanging ideas here. I will strive to contribute as much as I receive.
Jay
I don't see anything. I assume you mean you put it in the LW edit box and then saved it as a draft? Drafts are private.
I'm Rev. PhD in mathematics, disabled shut-in crank. I spend a lot of time arguing with LW people on Twitter.
Noooooo don't get sucked in
I'm here to make one public prediction that I want to be as widely-read as possible. I'm here to predict publicly that the apparent increase in autism prevalence is over. It's important to predict it because it distinguishes between the position that autism is increasing unstoppably for no known reason (or because of vaccines) and the position that autism has not increased in prevalence, but diagnosis has increased in accuracy and a greater percentage of people with autism spectrum disorders are being diagnosed. It's important that this be as widely-read as possible as soon as possible because the next time prevalence estimates come out, I will be shown right or wrong. I want my theory and prediction out there now so that I can show that I predicted a surprising result before it happened. While many people are too irrational to be surprised when they see this result even though they have predicted the opposite, I hope that rationalists will come to believe my position when it is proven right. I hope that everyone disinterested will come to believe this. The reason why I hope this is because I want them to be more likely to listen to me when I make statements about human rights as they apply to people with autism spectrum disorders. It is important that society change its attitudes toward such individuals.
Please help me by upvoting me to two karma so I can post in the discussion section.
I'm not sure you're right that we won't see any increase in autism prevalence - there are still some groups (girls, racial minorities, poor people) that are "underserved" when it comes to diagnosis, so we could see an increase if that changes, even if your underlying theory is correct. Still upvoted, tho.
Hi I’m Bojidar (also known as Bobby). I was introduced to LW by Luke Muehlhauser’s blog “Common Sense Atheism” and I've been reading LW ever since he first started writing about it. I am a 25 year old laboratory technician (and soon to be PhD student) at a major cancer research hospital in Buffalo, NY. I've been reading LW for a while and recently I've been really wishing that Buffalo had a LW group (I've been considering starting one, but I’m a bit concerned that I don’t have much experience in running groups nor have I been very active in the online community). A bit about myself: I enjoy reading about rationality, psychology, biology, philosophy and methods of self-help (or self-optimization). In my spare time I like doing artistic things (oil painting, figure drawing, and making really cool Halloween costumes), gardening, travel, playing video games (casual MMO gamer & RPG fan), and I like watching sci-fi, fantasy genre movies/TV programs. Also, I work out 5 times per week (which thanks to some awesome self-help advice has been a whole lot easier to stick with – thanks Luke!). I hope to learn how to play the piano well (I currently just freestyle on occasion or attempt to learn songs I like by watching youtube synthesia videos, but I would really like to learn how to read sheet music).
As far as my background in rationality, I would have to say that I didn't really grow up in a particularly rational environment. I grew up Christian, but religion wasn't a huge influence on my upbringing. On the other hand, my family (particularly my mom), is really into alternative medicine. I wish I could say it is just a general belief in “healthy eating” coupled with the naturalistic fallacy, but sadly it is not. She is a homeopathic “doctor” (thankfully non-practicing!) and can easily be convinced of even the most biologically implausible remedies (on rare occasions even scaring me by taking or suggesting potentially dangerous treatments). I really fear the possible outcome of these beliefs; given the option between effective chemotherapy and magical sugar pills, she probably won’t choose the option that saves her life. (After several failed attempts to improve her rationality and change her mind, I have long abandoned any attempts in hopes to preserve my relationship with my family.)
That being said, for a large portion of my life, I believed many of the same things my parents taught me to believe. Then I went to college as a premed student and was exposed to a lot of new information, which over time, made me start to reject those beliefs. Growing up, I was considered to be pretty rational by other people around me (not always in a good way; often it was negatively attached to the claim of being “left-brained” or “not being in touch with my intuitive self”). In retrospect, I was only marginally saner than other people around me, perhaps just sane enough to change my mind given the chance.
P.S. I have not taken any formal logic classes and on occasion might need some terms or symbols clarified (although my boyfriend has and frequent discussions with him have helped me pick up some of this nomenclature).
Howdy. My name is Alexander. I've read a lot of LW, but only recently finally registered. I learned about LW from RationalWiki, where I am a mod. I have read most of the sequences, and many of them are insightful, although I am skeptical about the utility of such posts as the Twelve Virtues, which seeks to clothe a bit of good advice in the voluminous trappings of myth. HPMOR is also good. I don't anticipate engaging in much serious criticism of these things, however, because I have little experience in the sciences or mathematics, and often struggle to grasp things that appear easy for those accustomed to equations. The utility of Bayes' Theorem is one good example. I expect to ask questions, often.
My primary interest in LW are practical ones - discussions about AI and the singularity are interesting, but I am focused on improving my analytic ability and making good decisions.
Hi, I'm Rixie, and I read this fan fic called Harry Potter and the Methods of Rationality, by lesswrong, so I decided to check out Lesswrong.com. It is totally different from what I thought it would be, but it's interesting and I like it. And right now I'm reading the post below mine, and wow, my comment sounds all shallow now . . .
What did you think it would be like?
Hi,
My name is Hannah. I'm an American living in Oslo, Norway (my husband is Norwegian). I am 24 (soon to be 25) years old. I am currently unemployed, but I have a bachelor's degree in Psychology from Truman State University. My intention is to find a job working at a day care, at least until I have children of my own. When that happens, I intend to be a stay-at-home mother and homeschool my children. Anything beyond that is too far into the future to be worth trying to figure out at this point in my life.
I was referred to LessWrong by some German guy on OkCupid. I don't know his name or who he is or anything about him, really, and I don't know why he messaged me randomly. I suppose something in my profile seemed to indicate that I might like it here or might already be familiar with it, and that sparked his interest. I really can't say. I just got a message asking if I was familiar with LessWrong or Harry Potter and the Methods of Rationality (which I was not), and if so, what I thought of them. So I decided to check them out. I thought the HP fanfiction was excellent, and I've been reading through some of the major series here for the past week or so. At one point I had a comment I wanted to make, so I decided to join in order to be able to post the comment. I figure I may as well be part of the group, since I am interested in continuing reading and discussing here. :-)
As for more about my background in rationality and such, I like to think I've always been oriented towards rationality. Well, when I was younger I was probably less adept at reasoning and certainly less aware of cognitive biases and such, but I've always believed in following the evidence to find the truth. That's something I think my mother helped to instill in me. My acute interest in rationality, however, probably occurred when I was around 18-19 years old. It was at this point that I became an atheist and also when I began Rational Emotive Behavior Therapy.
I had been raised as a Christian, more or less. My mother is very religious, but also very intelligent, and she believes fervently in following the evidence wherever it leads (despite the fact that, in practice, she does not actually do this). The shift in my religious perspective initially occurred around when I first began dating my husband. He was not religious, and I had the idea in my head that it was important that he be religious, in order for us to be properly compatible. But I observed that he was very open-minded and sensible, so I believed that the only requirement for him to become a Christian was for me to formulate a sufficiently compelling argument for why it was the true religion. And if this had been possible, it's likely he would have converted, but alas, this was a task I could not succeed at. It was by examining my own religion and trying to answer his honest questions that I came to realize that I didn't actually know what any good reasons for being a Christian were, and that I had merely assumed there must be good reasons, since my mother and many other intelligent religious people that I knew were convinced of the religion. So I tried to find out what these reasons were, and they came up lacking.
When I found that I couldn't find any obvious reasons that Christianity had to be the right religion, I realized that I didn't have enough information to come to that conclusion. When I reflected on all my religious beliefs, it occurred to me that I didn't even know where most of them came from. So I decided to throw everything out the window and start from scratch. This was somewhat difficult for me emotionally, since I was honestly afraid that I was giving up something important that I might not get back. I mean, what if Christianity were the true religion and I gave it up and never came back? So I prayed to God (whichever god(s) he was, if any) to lead me on a path towards the truth. I figured if I followed evidence and reason, then I would end up at the truth, whatever it was. If that meant losing my religion, then my religion wasn't worth having. I trusted that anything worth believing would come back to me. And that even if I was led astray and ended up believing the wrong thing, God would judge me based on my intent and on my deeds. A god who is good will not punish me for seeking the truth, even if I am unsuccessful in my quest. And a god who is not good is not worth worshipping. I know this idea has been voiced by many others before me, but for me this was an original conclusion at the time, not something I'd heard as a quote from someone else.
Another pertinent influence of rationality on my life occurred during my second year of college. I had decided to see a counselor for problems with anxiety and depression. The therapy that counselor used was Rational Emotive Behavior Therapy, and we often engaged in a lot of meaningful discussions. I found the therapy and that particular approach extremely helpful in managing my emotions and excellent practice in thinking rationally. I think it really helped me become a better thinker in addition to being more emotionally stable.
So it's been sort of a cumulative effect, losing my religion, going to college, going through counseling, etc. As I get older, I expose myself to more and more ideas (mostly through reading, but also through some discussion) and I feel that I get better and better at reasoning, understanding biases, and being more rational. A lot of the things I've read here are things that I had either encountered before or seemed obvious to me already. Although, there is plenty of new stuff too. So I feel that this community will be a good fit for me, and I hope that I will be a positive addition to it.
I have a lot of unorthodox ideas and such that I'd be happy to discuss. My interests are parenting (roughly in line with Unconditional Parenting by Alfie Kohn), schooling/education (I support a Sudbury type model), diet (I'm paleo), relationships (I don't follow anyone here; I've got my own ideas in this area), emotions and emotional regulation (REBT, humanistic approach, and my own experience/ideas) and pretty much anything about or related to psychology (I'm reasonably educated in this area, but I can always learn more!). I'm open to having my ideas challenged and I don't shy away from changing my mind when the evidence points in the opposite direction. I used to have more of a problem with this, in so far as I was concerned about saving face (I didn't want to look bad by publicly admitting I was wrong, even if I privately realized it), but I've since reasoned that changing my mind is actually a better way of saving face. You look a lot stupider clinging to a demonstrably wrong position than simply admitting that you were mistaken and changing your ideas accordingly.
Anyway, I hope that wasn't too long an introduction. I have a tendency to write a lot and invest a lot of time and effort into my writings. I care a lot about effective communication, and I like to think I'm good at expressing myself and explaining things. That seems to be something valued here too, so that's good.
Welcome here!
Hi All,
I'm Will Crouch. Other than one other, this is my first comment on LW. However, I know and respect many people within the LW community.
I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.
I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one's marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.
I wouldn't call myself a 'rationalist' without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we've got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I'm uncertain - there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I'm extremely uncertain in that conclusion, partly on the basis that most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than what moral theory one 'adheres' to (whatever that means).
Pretense that this comment has a purpose other than squeeing at you like a 12-year-old fangirl: what arguments make you prefer total utilitarianism to average?
Haha! I don't think I'm worthy of squeeing, but thank you all the same.
In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:
Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100.
Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9.
Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
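The arithmetic behind that verdict can be sketched directly. This is just a toy illustration of the example above, not anyone's formal model; representing populations as (count, utility) pairs is my own convenience:

```python
def average_utility(pops):
    """pops: list of (count, utility_per_person) pairs."""
    total = sum(n * u for n, u in pops)
    count = sum(n for n, _ in pops)
    return total / count

def total_utility(pops):
    return sum(n * u for n, u in pops)

pop_a = [(1, -100.0)]               # 1 person at utility -100
pop_b = [(100_000_000_000, -99.9)]  # 100 billion people at utility -99.9

# Average utilitarianism ranks B above A, since -99.9 > -100...
assert average_utility(pop_b) > average_utility(pop_a)
# ...while total utilitarianism ranks A far above B,
# since -100 beats -9.99 trillion.
assert total_utility(pop_a) > total_utility(pop_b)
```

The point of the example is that averagism is insensitive to how many people are living those barely-less-awful lives, which is exactly what seems wrong.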
I'm glad you're here! Do you have any comments on Nick Bostrom and Toby Ord's idea for a "parliamentary model" of moral uncertainty?
Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it's wrong according to both theories). If we can make such comparisons, then we don't need the parliamentary model: we can just use expected utility theory.
Sometimes, though, it seems that such comparisons aren't possible. E.g. I add one person whose life isn't worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren't possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that!
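The variance-normalisation proposal mentioned above can be sketched mechanically. This is a toy model with made-up utility numbers and credences (none of it is from Cotton-Barratt's actual formulation): each theory's utilities over the options are rescaled to mean zero and unit variance, then weighted by one's credence in that theory and summed, and the option with the highest credence-weighted sum wins:

```python
import statistics

def normalize(utilities):
    """Rescale one theory's utilities to mean 0, population variance 1."""
    mean = statistics.mean(utilities)
    sd = statistics.pstdev(utilities)
    return [(u - mean) / sd for u in utilities]

# Hypothetical utilities two incomparable theories assign to three options.
options = ["A", "B", "C"]
theory_1 = [10.0, 0.0, -10.0]  # made-up numbers on an arbitrary scale
theory_2 = [0.2, 0.5, 0.1]     # made-up numbers on a very different scale
credences = [0.6, 0.4]         # one's credence in each theory

normalized = [normalize(theory_1), normalize(theory_2)]
expected = [sum(c * t[i] for c, t in zip(credences, normalized))
            for i in range(len(options))]
best = options[expected.index(max(expected))]
```

Without normalisation, theory_1 would swamp theory_2 simply because its numbers are bigger; rescaling to unit variance puts the two scales on an equal footing before the credences do their work.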
Sorry if that was a bit of a complex response to a simple question!
Hi Will,
I think most LWers would agree that "Anyone who tries to practice rationality as defined on Less Wrong" is a passable description of what we mean by 'rationalist'.
Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I'm not just being pedantic!
Not at all. (Eliezer is a sort of moral realist). It would be weird if you said "I'm a moral realist, but I don't value things that I know are objectively valuable".
It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not. Just like math lets you crunch numbers, whether they're real statistics or made up. But believing you shouldn't make up statistics doesn't therefore mean you don't do math.
Hi, Charlie here.
I'm a middle-aged high-school dropout, married with several kids. Also a self-taught computer programmer working in industry for many years.
I have been reading Eliezer's posts since before the split from Overcoming Bias, but until recently only lurked the internet -- I'm shy.
I broke cover recently by joining a barbell forum to solve some technical problems with my low-bar back squat, then stayed to argue about random stuff. Few on the barbell forum argue well -- it's unsatisfying. Setting my sights higher, I now join this forum.
I'll probably start by trying some of the self-improvement schemes and reporting results. Any recommendations re: where to start?
Never mind, I found the Group rationality diary which is exactly the right aggregation point for self-improvement schemes.
I am a 43-year-old man who loves to read, and stumbling across HPMOR was an eye-opener for me; it resonated profoundly within. My wife is not only the Queen of Critical Thinking and logic, she is also the breadwinner. Me? I raise the children (three girls), take care of the house, and function as a housewife/gourmet chef/personal trainer/massage therapist for my wife on top of being my daughters' personal servant. This is largely due to my wife's towering intellect, overwhelming competence, my struggles with ADHD, and the fact that she makes huge amounts of money. Me, I just age almost supernaturally slowly (at 43, I still pass for thirty, possibly due to an obsession with fitness), am above-average handsome, passingly charming, have a good singing voice, and am incapable of winning a logical argument, as the more stressed I get, the faster my IQ shrinks. I am taken about as seriously by my wife as Harry probably was by his father as a four-year-old. I am looking to change that. I am hoping that if I learn enough about Less Wrong, I just might learn how to put all the books I compulsively read to good use, and maybe learn how to... change.
I'm Rachel Haywire and I love to hate culture. I've been in "the community" for almost 2 years but just registered an account today. I need to read more of the required texts here before saying much but wanted to pop my head out from lurking. I've been having some great conversations on Twitter with a lot of the regulars here.
I organize the annual transhumanist/alt-culture event Extreme Futurist Festival (http://extremefuturistfest.info) and should have my new website up soon. I like to write, argue, and write about arguing. I've also done silly things such as producing industrial music and modeling.
You probably know me as that really loud girl at parties with the tattoos and crazy hair. I'm actually not trying to get attention. I'm just an autist. I am here so I can become a more rational person. I love philosophy and debate but my thinking is not always... correct?
I am Alexander Baruta, a high-school student currently in the 11th grade (taking grade 12 math and biology). I originally found the site through Eliezer's blog. I am (technically) part of the school's robotics team (someone has to stop them from creating unworkable plans), undergoing Microsoft IT certification, and going through all of the psychology courses in as little time as possible (I'm currently enrolled in a self-directed learning school) so I can get to the stuff I don't already know. My mind is fact-oriented (I can remember the weirdest things with perfect clarity after only hearing them once), but I have trouble combining that recall with my English classes, and I have trouble remembering names. I am informally studying formal logic, programming, game theory, and probability theory (don't you hate it when the curriculum changes?). (I also have an unusual fondness for brackets, if you couldn't tell by now.)
I also feel that any discussion about me that fails to mention my love of SF/fantasy should be shot dead. I caught onto reading at a very, very early age, and by the time I was in 5th grade I was reading at a 12th-grade comprehension level, tackling Asimov, Niven, Pohl, Piers Anthony, Stephen R. Donaldson, Roger Zelazny, and most good authors.
Lisp ith a theriouth condition, once you go full Lisp, you'll never (((((((((((((... come back)?n).
Apologies in advance for the novella. And any spelling errors that I don't catch (I'm typing in notepad, among other excuses).
It's always very nice when I come across something that reminds me that there are not only people in the world who can actually think rationally, but that many of them are way better at it than me.
I don't like mentioning this so early in any introduction, but my vision is terrible to the point of uselessness; I mostly just avoid calling myself "blind" because it internally feels like that would be giving up on the tiny power left in my right eye. I mention it now just because it will probably be relevant by the end of my rambling. (Feel free to skip to the last paragraph if you'd rather avoid all the backstory.)
I'm from northeast Arkansas. My parents were never really religious (I kinda internalized the ambient mythos of "God=good and fluffy cloud heaven, Satan=bad and fire and brimstone hell" just because it seemed to be the accepted way of things among all of my other relatives. Turns out my dad identified himself as a Buddhist after one of our many trips to Disneyworld. ...They... really like Disney. They have a dog named Disney.) They did emphasize the importance of education and individualism and all of those ideals from the late eighties and nineties that turned out to be counterproductive (though I'm having trouble finding the cracked.com articles that point this out in the most academically sound manner imaginable (note: the previous statement was sarcastic)). So I tried to learn as much as I could in the general direction of science. Being that this was all done at public schools, and that a whole 0 of the more advanced science books I wanted were available in braille, this didn't get me very far.
I did my last two years of highschool at the Arkansas School of Mathematics and Science (which added "and the arts" when I got there, though before they'd actually added an art program), and somehow graduated without actually doing much science (I did a study of the effects of atmosphere on dreams for the year-and-a-half science project that everyone had to do, but forewent trying to organize an experiment and just wrote a terrible research paper). Then I got to college, and everything went to hell. I'd somehow managed to sneak around learning things like vectors, dot/cross products, and actual lab reports in highschool, and the experiments we did in gen physics never felt like experiments so much as demonstrations ("Behold: gravity still works!"). This is about where it became extremely clear to me that I simply could no longer make myself do things by force of will alone (and it became doubly clear that no one else seemed capable of understanding that I wasn't just "blowing off" everything). It took several semesters after that for me to realize that I had seriously missed out on some basic life things and that I actually needed friends (and that I needed to seriously reevaluate what qualified as friendship). They finally made me pick a new major, seeing as I'd kinda kept away from physics after the first semester ended in disaster. So I took the quickest way out, that being French, and now I'm still living with my parents, have about a dozen essays on Franco-african literature to write, and am about $30,000 in debt (that's only counting the loans in my name; my parents took the rest of the financial burden in their names).
So I mostly try to focus on creative endeavors, such as fiction and video games. Except the lack-of-vision thing makes that harder (I've been focusing on developing audio games for the past couple years, but it's virtually impossible to actually live off the tiny audio games market. Oh, but I could write pages on my observations there, and I rather want to, as I'm sure many of you could make some meaningful observations/analyses on some of those trends.).
... Well crap, I just wrote a few pages without actually getting to anything useful. I have serious need of better rationality skills than I'm currently applying: independence, dealing with emotional/cognitive weirdness, finding ways to actually travel outside of my house (public transportation might as well not exist anywhere but the capital in Arkansas, and good sidewalks are hard to find), social issues, productivity issues, finding ways to get in physical activity, being unemployed with an apparent hiring bias against disabilities, financial ability, etc. The total money that I have to work with is less than $400, so I can't exactly sign up for cryonics or hire a driver to take me places. And this wall-o-text demonstrates my horrible disorganization rather well, I fear. (Hm, is there not a way to preview a comment before one posts it?)
After having read all of the Sequences, I suppose it's time I actually registered. I did the most recent (Nov 2012) survey. I'm doing my PhD in the genetics of epilepsy (so a neurogenetics background is implied). I'm really interested in branching out into the field of biases and heuristics, especially from a functional imaging and genetics perspective (my training includes EEG, MRI/fMRI, surgical tissue analysis, and all the usual molecular stuff/microarrays).
Experience with grant writing makes me lean more toward starting my own biotech or research firm and going from there, but academia is an acceptable backup plan.
Hey there! I'm a 19-year old Canadian girl with a love for science, science fiction, cartoons, RPGs, Wayne Rowley, learning, reading, music, humour, and a few thousand other things.
Like many I found this site via HPMOR. As a long-time fan of both science and Harry Potter, I was ultimately addicted from chapter one. It's hard to apply scientific analysis to a fictional universe while still keeping a sense of humour, and HPMOR executes this brilliantly. My only complaint (all apologies to Mr. Yudkowsky, though I doubt he'll ever read this) is that Harry comes off as rather Sue-ish. I wanted more, so I came here and found yet more excellent writings. The story about the Pebblesorters is my personal favourite.
I'm mad about music. Queen, Rush, Black Sabbath, and Bowie are some of my favourite bands. I have a Telecaster, which I use mostly to play blues. God I love the blues. But I digress..
Though I'm merely a high school graduate looking for a part-time job, I'm really passionate about biology. I'm the kind of person who reads about sodium-potassium pumps not because it's on the upcoming quiz, but because it indulges my curiosity about how humans and other lifeforms work. (Don't get me started about speculative xenobiology!)
I've lurked this site for about 7 months now and I really hope that I'll be accepted here in spite of my laconic, idiosyncratic, comma-ridden ramblings. Thank You.
Hi, my name is Wes(ley), and I'm a lurkaholic.
First, I'd like to thank this community. I think it is responsible in a large way for my transformation (perceived transformation of course) from a cynical high schooler who truly was only motivated enough to use his natural (not worked hard for) above average reasoning skills to troll his peers, to a college kid currently making large positive lifestyle changes, and dreaming of making significant positive changes in the world.
I think I have observed significant changes in my thinking patterns since reading the sequences, learning about Bayes, and watching discussions unfold on LessWrong over the last two years or so.
Three examples (and there are many more) of this are:
Noticing more quickly, and more often, when a dispute is about terms and not substance.
Identifying situations in which myself or others are trying to "guess the teacher's password" (this has really helped me identify gaps in understanding)
Increased internal dialogue concerning bias (in myself and in others; I first started to notice myself being strongly subject to confirmation bias, and I suspect realizing this has at least a little bias-reducing effect)
Unfortunately, I don't think I have come even close to being able to apply these skills in a place where they would be highly beneficial to others, like a decision making position. That is okay, my belief is that this is something that will come with age, and career advancement.
One of my goals for the next year is to start a LessWrongish student organization at my college campus (Auburn University), which is a traditionally very conservative place. This is partially out of a wholly selfish desire to engage in more stimulating discussions (instead of just spectating, this is also why I am delurking), and partially out of a part selfish desire to create a community at school that fosters instrumental rationality. I think that by posting this goal here, it is at least slightly more likely I will go through with it.
Some of the things I like to do include: race small sailboats, read, play video games, try new foods, explore, learn, smile at people I don't know, play rough with my family's dogs, drive with high acceleration (not necessarily high speeds), travel, talk with people I don't know and will likely never meet again, find a state of flow in work, read comments on CNN political articles (it's a comedy thing), learn about native animal and plant species, catch critters, listen to big band music, find humor in unusual places, laugh at myself, fantasize about getting superpowers, and lab benchwork.
Some of the things I don't like to do include: get to know new people (I like knowing people though), spend time on social networking sites (I don't have a Facebook or Twitter), have text conversations, dress formally (ties? why do we need to cling to those?), "jump through hoops" (e.g. make sure to attend 5 events for this class, suck up to professor x for a good rec, make sure to put x on your resume), engage in politics, talk to people who say things like "it's all relative man" or "I choose to not let my world be bound by logic", clean, binge drink (okay, actually, I don't like being hung over, or the thought of poisoning myself), die to lag, and perceive assignment of undue credit.
Currently I am taking a semester off from studying cell and molecular biology, and volunteering as a research student in a solid tumor immunology lab. I think long-term I would like to get involved with research on the molecular basis of aging, or applied research related to life extension.
Hi! I am Robert Pearson: Political professional of the éminence grise variety. Catholic rationalist of the Aquinas variety. Avid chess player, pistol shooter. Admirer of the writings of Ayn Rand and Robert Heinlein. Liberal Arts BA from a small state university campus. I read Overcoming Bias occasionally some years ago, but heard of LessWrong from Leah Libresco.
My real avocation is learning how to be a smarter, better, more efficient, happier human being. Browsing the site for awhile convinced me it was a good means to those ends.
I write a column on Thursdays for Grandmaster Nigel Davies' The Chess Improver
Long-time lurker, first-time poster. I'm 21, male, and a college student majoring in economics and minoring in CS. I first heard of Eliezer Yudkowsky when a couple of my friends discovered Harry Potter and the Methods of Rationality two years ago. I started reading it and enjoyed it immensely at first, but as the plot eclipsed what I'd call the "cool tricks", I became less interested and dropped it. More recently, a different friend linked me to Intellectual Hipsters. After reading it, I read several sequences and was hooked.
My journey to rationality was started by my parents (both of whom are atheists with degrees in STEM fields). I was provided with numerous science books as a child, and I was taught the basics of the scientific method, as well as encouraged to think analytically in general. They also introduced me to science fiction. I grew up in a heavily religious part of the US, so I frequently had to defend my beliefs. Then I discovered what people call "arguing on the Internet", which I found I enjoy. That caused me to refine and develop my beliefs.
My current beliefs: I'm a quasi-Objectivist (in the Ayn Rand sense), though politically I'm a classical liberal (pragmatic libertarian). I'm not particularly interested in AI or cryonics (though I support transhumanism). I'm a compatibilist (free will and determinism are not mutually exclusive). I think technological and scientific progress will continue to reduce limitations on humans, and that's a good thing.
Hi, I’m Cinnia, the name I go by on the net these days. I found my way here by way of both HPMOR and Luminosity about 8 months ago, but never registered an account until the survey.
Like Alan, I’m also in my final year of secondary school, though I’m on the other side of the pond. I love science and math and plan to have a career in neuroscience and/or psychiatry after I graduate. This year I finally decided to branch out my interests a bit and joined the local robotics club (a part of FIRST, if anyone’s curious), and it’s possibly the best extracurricular I’ve ever tried.
I’ve noticed that there aren’t many virtual communities that manage to hold my interest for long, due to a number of different reasons, but I’ve been lurking around LessWrong for about 8 months now and find it incredibly enlightening. I am (very) slowly working my way through the Sequences and some of the top articles here, but have finished Eliezer’s “Three Worlds Collide” and Alicorn’s original posts on Luminosity.
I’m still very much in the process of learning and trying to understand many of the concepts LessWrong explores, so I’m not sure how often I’ll be contributing. However, I do have some understanding of Riso and Hudson’s Enneagram and Spiral Dynamics, so I suppose there’s some groundwork that I can build from in the future.
Anyway, I like LessWrong’s mission and am happy to have finally joined the community.
Edited to clarify: Spiral Dynamics is an entirely separate psychological theory from the Enneagram, in case it wasn't clear.
I wandered onto this site, read an article, read some interesting discussion on it, and decided to take the survey. The survey had some interesting discussion and I enjoyed the extra credit, the majority of which I did, with the exception of the IQ test, which I couldn't get to work right and will do later. I enjoyed the discussion I read, and decided this would be an interesting site to read more on. I don't know yet how much discussion I'll contribute, but when I see an interesting discussion I'm sure I'll join in.
I don't have too much to say about myself. I'm a college student majoring in computer science, and I'd like to do work in artificial intelligence eventually, although I'm nowhere near experienced enough yet to be able to have real discussion about it.
Hi, I'm Alan, a student in my final year of secondary school in London, England. For some reason I'm finding it hard to remember how and when I stumbled upon Less Wrong. It was probably in March or April this year, and I think it was because Julia Galef mentioned it at some point, though I may be misremembering.
Anyway, I've now read large chunks of the Sequences (though I can never remember which bits exactly) and HPMOR, and enjoy reading all the discussion that goes on here. I've never registered as a user before as I've never felt the burning need to comment on anything, but thought I should take the survey as I seemed part of its intended audience, so maybe I'll find things to say now.
I only study maths and science subjects in school, and am planning to study for a science degree when I head off to university next year. However, I tend to hang out more with the philosophically inclined people in school, and have had much fun introducing and debating Newcomb's problem, prisoners' dilemmas, torture vs dust specks, transhumanism and the like with them.
LessWrong is definitely one of those things I regret not finding out about earlier. It's my favourite website now, although I should probably stop using it as a place to procrastinate so much.
I'm Abd ul-Rahman Lomax, introducing myself. I have six grandchildren, from five biological children, and I have two adopted girls, age 11 from China, and age 9 from Ethiopia.
I was born in 1944. Abd ul-Rahman is not my birth name; I accepted Islam in 1970. Not being willing to accept pale substitutes, I learned to read the Qur'an in Arabic by reading the Qur'an in Arabic.
Going back to my teenage years, I was at Caltech for a couple of years, attending Richard P. Feynman's two years of undergraduate physics classes, the ones made into the textbook. I had Linus Pauling for freshman chemistry as well. Both of them helped shape how I think.
I left Caltech to pursue a realm other than "science," but was always interested in direct experience rather than becoming stuffed with tradition, though I later came to respect tradition (and memorization) far more than at the outset. I became a leader of a "spiritual community," and a successor to a well-known teacher, Samuel L. Lewis, but was led to pursue many other interests.
I delivered babies (starting with my own) and founded a school of midwifery that trained midwives for licensing in Arizona.
Self-taught, I started an electronics design consulting business, still going with a designer in Brazil.
I became known as one of the many independent inventors of delegable proxy as a method of creating hierarchical communication structure from the bottom up. Social structure, and particularly how to facilitate collective intelligence, has been a long-term interest.
I was a Muslim chaplain at San Quentin State Prison, serving an almost entirely Black community. In case you haven't guessed, I'm not black. I loved it. People are people.
So much I'm not saying yet.... I became interested in wikis early on, but didn't get to Wikipedia until 2005, becoming seriously active in 2007. Eventually, I came across an abusive blacklisting of a web site, a well-known archive of scientific papers on cold fusion. I'd been very aware of the 1989 announcement and some of the ensuing flap, but had assumed, like most people with enough knowledge to know what it was about, that the work had not been replicated.
When I looked, I became interested enough to buy a number of major works in the area (including almost all of the skeptical literature).
Among those who have become familiar with it, cold fusion (a bit of a misnomer; at the least it was prematurely named) is an ultimately clear example of how pseudoskepticism came to dominate a whole field for over fifteen years. The situation flipped in the peer-reviewed journals beginning about eight years ago, but that's not widely recognized; it is merely obvious if one looks at what has been published in that period of time.
Showing this is way beyond the scope of this introduction, but I assume it will come up. I'm just asserting what I reasonably conclude, having become familiar with the evidence (and I'm working with the scientists in the field now, in many ways).
As to rational skepticism, I was known to Martin Gardner, who quoted a study of mine on the so-called Miracle of the Nineteen in the Qur'an, the work of Rashad Khalifa, whom I knew personally.
I naively thought, for a couple of days, that a rational-skeptic approach to cold fusion might be welcome on RationalWiki. Definitely not. Again, that's another story. However, I'm not banned there and have sysop privileges (like most users).
On RationalWiki, however, I came across the work of Yudkowsky, and this blog. Wow! In some of the circles in which I've moved, I've been a voice crying in the wilderness, with only a few echoes here and there. Here, I'm reluctant to say anything, so commonly cogent is comment in this community. I know I'm likely to stick my foot in my mouth.
However, that's never stopped me, and learning to recognize the taste of my foot, with the help of my friends, is one way in which I've kept my growth alive. The fastest way to learn is generally to make mistakes.
I'm also likely to comment, eventually, on the practical ontology and present reality of Landmark Education, with which I've become quite familiar, as well as on the myths and facts which widely circulate about Landmark. To start, they do let you go to the bathroom.
Meanwhile, I've caught up with HPMOR, and am starting to read the sequences. Great stuff, folks.
Welcome! That's a fascinating biography.
I have been to one introductory Landmark seminar and wrote about the experience here.
Hello. I was brought here by HPMOR, which I finished reading today. Back in 1999 or something I found the site called sysopmind.com which had interesting reads on AI, Bayes theorem (that I didn't understand) and 12 virtues of rationality. I loved it for the beauty that reminded me of Asimov. I kept it in my bookmarks forever. (I knew him before he was famous? ;-))
I like SF (I have read many SF books, but most were from before 1990 for some reason) and I'm a computer nerd, among other things. I want to learn everything, but I have a hard time putting in the work. I'm studying to become a psychologist, scheduled to finish in 2013. My favorite area of psychology is social psychology, especially how humans make decisions and how humans are influenced by biases, norms, or high-status people. I'm married and have a daughter born in 2011.
I like to watch tv-shows, but I have high standards. It is SF if it is based in science and rationality, otherwise it's just space drama/space action and I have no patience for it. I also like psychological drama, but it has to be realistic and believable. Please give recommendations if you like. (edited:) Also, someone could explain in what way Star Trek, Babylon 5 or Battlestar Galactica is really SF or Buffy is feminist, so I know if they are worth my while.
Of those, the only one I've seen is Star Trek. They can be a bit handwavey about the science sometimes; I liked it, but if you're looking for hard science then you might not. As far as recommendations go, may I recommend the Chanur series (books, not TV) by one C.J. Cherryh?
For realistic psychological drama, I haven't seen any show that beats Mad Men.
Not without knowing you well enough. Sherlock, on the other hand, should suit you just fine.
Ah, yes, thank you. I have seen Sherlock and loved it. Too few episodes though! =)
I'm Nancy Hua. I was MIT 2007 and worked in NYC and Chicago in automated trading for 5 years after graduating with BS's in Math with CS (18C) and in Writing (21W).
Currently I am working on a startup in the technology space. We have funding and I am considering hiring someone.
I started reading Eliezer's posts on Overcoming Bias. In 2011, I met Eliezer, Robin Hanson, and a bunch of the NYC Lesswrongers. After years of passive consumption, very recently I started posting on lesswrong after meeting some lesswrongers at the 2012 Singularity Summit and events leading up to it, and after reading HPMOR and wanting to talk about it. I tried getting my normal friends to read it but found that making new friends who have already read it is more efficient.
Many of the writings regarding overcoming our biases and asking more questions appeal to me because I see many places where we could make better decisions. It's amazing how far we've come without being all that intelligent or deliberate, but I wonder how much more slack we have before our bad decisions prevent us from reaching the stars. I want to make more optimal decisions in my own life because I need every edge I can get to achieve some of my goals! Plus I believe understanding and accepting reality is important to our success, as individuals and as a species.
Hello everyone, I'm Luc, better known on the web as lucb1e. (I prefer not to advertise my last name for privacy reasons.) I'm currently a 19 year old student, doing application development in Eindhoven, The Netherlands.
Like Aaron Swartz, I meant to post in discussion but don't have enough karma. I've been reading articles from time to time for years now, so I think I have an okay idea what fits on this site.
I think I ended up on LessWrong originally via Eliezer's NPC story. After reading that I looked around on the site, read about the AIBox experiment (which I later conducted myself), and eventually found LessWrong. This was probably about three or four years ago. During this time I've read some articles, sometimes being linked here and sometimes coming here by myself. I'm a bit hesitant to participate in the community because it seems quite out of my league; everybody knows a ton about rationality whereas I've only read some bits and pieces. I think I have an okay idea of what is appropriate to post, though, and also especially where I should not try to post :)
I'm a 20-year-old physics student from Finland whose hobbies include tabletop roleplaying games and the Natalie Reed/Zinnia Jones-style intersection of rationality and social justice.
I've been sporadically lurking on LessWrong for the last 2-3 years and have read most of the sequences. My primary goal is to contribute useful research to either SI or FHI, or, failing that, to donate a significant part of my income. I've contacted the X-risks Reduction Career Network as well.
I consider this an achievable goal, as my general intelligence is extremely high: I won a national-level mathematics competition seven years ago despite receiving effectively no training in a small backwater town. With dedication and training, I believe I could reach the level of the greats.
However, my biggest challenge currently is Getting Things Done; apart from fun distractions, committing any significant effort to anything is nigh impossible. The likely cause is clinical depression (without the mood effects), and I'm currently on venlafaxine in an attempt to improve my capacity to actually do something useful, but so far (about three months in) it hasn't had the desired effect. Assistance/advice would be appreciated.
Well, I haven't really figured out what you all need to know about me, but I suppose there must be something relevant. Let's start with why I'm here.
I can remember being introduced to Less Wrong in two ways, though I don't know in what order. One was through HPMoR, and the other through a post about Newcomb's problem. Neither of those really brought me here in a direct way, though. I guess I am here based on the cumulative sum of recommendations and mentions of LW made by people in my social circle, combined with a desire for new reading material that falls between SF/fantasy novels and statistics textbooks in the concentration it demands. So, since I want stuff to read, preferably lots of it, I am starting with the Sequences.
I think the next-most-relevant information here is what fields I am knowledgeable (or not) about. My single area of greatest expertise is pure mathematics; I dropped out of grad school most of the way to a PhD (so I was told by people who should know), with a thesis in algebraic topology, and am now a math tutor at the high school and college levels. I have a big gap in my useful math knowledge around statistics, though, which I am now working to fill. Hence the textbooks. I also know more than the average person about archaic household chores like canning and sewing.
Hi everyone! Another longtime lurker here. I found LW through Yvain's blog (Emily and Control FTW!). I'm not really into cryonics or FAI, but the sequences are awesome, and I enjoy the occasional instrumental rationality post. I decided to become slightly more active here, and this thread seemed like a good place to start, even if a bit old.
Hi, I'm Jess. I've just graduated from Oxford with a masters degree in Mathematics and Philosophy. I'm trying to decide what to do next with my life, and graduate study in cognitive science is currently top of my list. What I'm really interested in is the application of research in human rationality, decision making and its limitations to wider issues in society, public policy etc.
I'm taking some time to challenge my intuition that I want to go into research, though, as I'm slightly concerned that I'm taking the most obvious option not knowing what else to do. My methods for doing this at the moment are a) trying to think about reasons it might not be the best option (a "consider the opposite" type approach) and b) initiating conversations with as many people as possible doing things that interest me, and getting some work experience in different areas this year, to broaden my limited perspective. Any better/additional suggestions are more than welcome!
I'm about to start an internship with 80,000 Hours, doing a project on the role of cognitive bias in career choice. The aim is to collect the existing research on biases and mitigation techniques and apply it in a practical and accessible way, identifying the biases that most commonly affect career choice and providing useful strategies for avoiding them. I was wondering if anyone here has a summary of the existing literature on cognitive bias mitigation, or any recommendations of particularly useful/important research? Equally, if anyone has spent much time thinking about this, I'd love to hear about it.
I don't have a full summary on hand, but if you just want to jumpstart your own search, you might want to read Lukeprog's article on efficient scholarship and look into the keyword "debiasing".
Hi! I'm shard. I have been looking for a community just like this for quite a while. Someone on the Brain Workshop group recommended this site to me. It looks great; I am very excited to sponge up as much knowledge as I can, and hopefully to add a grain someday.
I love the look of the site. What forum or bulletin-board software do you use, or is it custom? I've never seen one like it; it's very clean, and I'd like to use it for a forum I've been wanting to start.
The software behind the site is a clone of Reddit, plus some custom development.
I'm new on Less Wrong and I want to solve P vs. NP.
Consider partitioning that goal into smaller steps. For example, a PhD in math or theoretical computer science is a must before you can hope to tackle something like that; in fact, before you can even evaluate whether you really want to. While you seem to be on your way there, you clearly underappreciate how deep this problem is. Maybe consider asking for a chat with someone like Scott Aaronson.
Yes, I do.
Do the math yourself to calculate your odds. Only one of the seven Millennium Prize Problems has been solved so far, and that by a person widely considered a math genius since his high-school days at one of the best math-oriented schools in Russia, and possibly the world, at the time. And he was lucky that most of the scaffolding for the Poincaré conjecture happened to be in place already.
So, your odds are pretty bad, and if you don't set a smaller sub-goal, you will likely end up burned out and disappointed. Or worse, come up with a broken proof and bitterly defend it against others "who don't understand the math as well as you do" till your dying days. It's been known to happen.
Sorry to rain on your parade.
My sense is that you are underestimating the number of extremely smart mathematicians who have been attacking P vs. NP. And further, you are not yet in a position to accurately estimate your chances.
For example, PhDs in math OR comp. sci. != PhDs in math AND comp. sci. The latter is more impressive because it is much, much harder.
If you find theoretical math interesting, by all means pursue it as far as you can. But I wouldn't advise a person to attend law school unless they wanted to be a lawyer, and I wouldn't advise you to enroll in a graduate mathematics program if you wouldn't be happy in that career unless you worked on P vs. NP.
I was definitely engaging in motivated cognition.
Mulmuley's geometric complexity theory is still where I would start. It's based on continuum mathematics, but extending it to Boolean objects is the ultimate goal. A statement of P ≠ NP in GCT language can be seen as Conjecture 7.10 here. (Also downloadable from Mulmuley's homepage; see "Geometric complexity theory I".)
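For readers unfamiliar with the problem this subthread is about: setting aside the GCT formulation, the standard statement of P vs. NP in terms of deterministic and nondeterministic time classes is the following (textbook definitions, not anything specific to Mulmuley's program):

```latex
% P: languages decidable by a deterministic Turing machine in polynomial time.
% NP: languages decidable by a nondeterministic Turing machine in polynomial time
%     (equivalently, languages whose yes-instances have polynomial-time-checkable certificates).
\[
  \mathsf{P} \;=\; \bigcup_{k \ge 1} \mathrm{TIME}\bigl(n^{k}\bigr),
  \qquad
  \mathsf{NP} \;=\; \bigcup_{k \ge 1} \mathrm{NTIME}\bigl(n^{k}\bigr).
\]
% Trivially P is contained in NP; the open question is whether the containment is strict:
\[
  \mathsf{P} \subseteq \mathsf{NP}, \qquad \text{open question: } \mathsf{P} \stackrel{?}{=} \mathsf{NP}.
\]
```

A proof of either equality or strict containment would settle the Millennium Prize Problem mentioned above.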
I saw this site on evand's computer one day, so of course then had to look it up for myself. In my free time, I pester him with LW-y questions.
By way of background, I graduated from a trying-to-be-progressive-but-sort-of-hung-up-on-orthodoxy quasi-Protestant seminary in spring 2010. Primary discernible effects of this schooling (i.e., I would assign these a high probability of relevance on LW) include:
deeply suspicious of pretty much everything
a predisposition to enter a Hulk-smash rage at the faintest whiff of systematic injustice or oppression
high value on beauty, imagination*, and inclusivity
* Part of my motivation to involve myself in rationalism is a hope that I can learn ways to imagine better (more usefully, maybe).
I like learning more about how brains work (/don't work). Also about communities. Also about things like why people say and do what they say and do, both in terms of conditioning/unconscious motivation and conscious decision. And and and. I will start keeping track on a wiki page perhaps.
I cherish ambitions of being able to contribute to a discussion one day! (If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...)
Hi!