If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, or how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.
If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you have your own blog or other online presence, please feel free to link it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread. Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.
You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of the votes on all their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you have any questions about karma or voting, please feel free to ask here.
If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top-level post. By posting here and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.
A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.
A couple of technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box. This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site.
(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)
By the middle of the second paragraph I was thinking "Whoa, is everyone an Amanda Baggs fan around here?". Hole in one! I win so many Bayes-points, go me.
I and a bunch of LWers I've talked to about it basically already agree with you on ableism, and a large fraction seems to apply the usual liberal instincts to the issue (so, no forced cures for people who can point to "No thanks" on a picture board). There are extremely interesting and pretty fireworks that go off when you look at the social model of disability from a transhumanist perspective, and I want to round up Alicorn and Anne Corwin and you and a bunch of other people to look at them closely. It doesn't look like curing everyone (you don't want a perfectly optimized life, you want a world with variety, you want change over time), and it doesn't look like current (dis)abilities (what does "blind" mean if most people can see radio waves?), and it doesn't look like current models of disability (if everyone is super different and the world is set up for that and everything is cheap, there's no such thing as accommodations), and it doesn't look like the current structures around disability (if society and personal identity and memory look nothing like what they started as, "culture" doesn't mean the same thing, and that applies to Deaf culture), and it's complicated and pretty and probably already in some Egan novel.
But, to address your central point directly: You are completely and utterly mistaken about what Eliezer Yudkowsky wants to do. He's certainly not going to give a superintelligence a direct order like "Make this person smarter", or even "Give me a banana". Seriously, nursing homes?
If tech had happened to be easier, we might have gotten a superintelligence in the 16th century in Europe. Surely we wouldn't have told it to care about the welfare of black people. We need to build something that would have done the right thing even if we had built it in the 16th century. The very rough outline for that is to tell it "Here are some people. Figure out what they would want if they knew better, and do that." So in the 16th century, it would have been presented with abled white men; figured out that if they were better informed and smarter and less biased and so on, these men would want black women to count as equals; and thus it would include black women in its next round of figuring out what people want. Something as robust as this needs to be can't miss an issue that's currently known to exist and worth debating!
And for the celibacy thing: that's a bit beside the point, but if you want to avoid sex for reasons other than low libido, increasing your libido obviously won't fix the mismatch.
How do you identify what knowing better would mean, when you don't know better yet?