Okay, so I recently made this joke about a future Wikipedia article about Less Wrong:
[article claiming that LW opposes feelings and supports neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of monster you have become. All LW members will be fired from their jobs.
A few days later I actually looked at the Wikipedia article about Less Wrong:
...In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea
I'd suggest being careful about your approach. If you lose this battle, you may not get another chance. David Gerard most likely has 100 times more experience with wiki battling than you. Essentially: when you come up with a strategy, sleep on it, and then try to imagine how a person already primed against LW would read your words.
For example, expect that any edit made by anyone associated with LW will be (1) traced back to their identity and LW account, and consequently (2) reverted as a conflict of interest. And everyone will be like "ugh, these LW guys are trying to manipulate our website", so next time they won't even listen to any of us.
Currently my best idea -- I haven't taken any steps yet, just thinking -- is to post a reaction to the article's Talk page, without even touching the article. This would have two advantages: (1) No one can accuse me of being partial, because that's what I would openly disclose first, and because I would plainly say that as a person with a conflict of interest I shouldn't edit the article myself. Kinda establishing myself as the good guy who follows the Wikipedia rules. (2) A change to the article could simply be reverted by David, but he i...
Is any of the following not true?
You are one of the 2 or 3 most vocal critics of LW worldwide, for years, so this is your pet issue, and you are far from impartial.
A lot of what the "reliable sources" write about LW originates from your writing about LW.
You are cherry-picking facts that describe LW in a certain light: For example, you mention that some readers of LW identify as neoreactionaries, but fail to mention that some of them identify as e.g. communists. You keep adding Roko's basilisk as one of the main topics about LW, but remove mentions of e.g. effective altruism, despite the fact that there is at least 100 times more debate on LW about the latter than about the former.
Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.
Firstly, what do I mean by 'anti-rationalism'? I don't specifically mean that people will criticize LessWrong. I mean it in the general sense of skepticism towards science and logical reasoning, skepticism towards technology, and hostility to rationalistic methods applied to things like policy, politics, economics, and education.
And there are a few things I think we will observe first (some of...
Front page being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).
Maybe there could be some high-profile positive press for cryonics if it became standard policy to freeze endangered species' seeds or DNA for later resurrection.
Hello guys, I am currently writing my master's thesis on biases in the investment context. One sub-sample that I am studying is people who are educated about biases in a general context, but not in the investment context. I guess LW is the right place to find some of those, so I would be very happy if some of you would participate, since people who are aware of biases are hard to come by elsewhere. Also, I explicitly ask about activity in the LW community in the survey, so if enough LWers participate I could analyse them as an individual subsample. Would...
Not the first criticism of the Singularity, and certainly not the last. I found this on reddit, just curious what the response will be here:
"I am taking up a subject at university, called Information Systems Management, and my teacher is a Futurologist! He refrains from even teaching the subject just to talk about technology and how it will solve all of our problems and make us uber-humans in just a decade or two. He has a PhD in A.I. and has already talked to us about nanotechnology getting rid of all diseases, A.I. merging with us, smart cities that...
I think most people on LW also distrust blind techno-optimism, hence the emphasis on existential risks, friendliness, etc.
I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI
What do you think of the idea of 'learning all the major mental models', as promoted by Charlie Munger and Farnam Street? These mental models also include cognitive fallacies, one of the major foci of LessWrong.
I personally think it is a good idea, but it doesn't hurt to check.
The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.
Suppose there are 100 genes which figure into intelligence, each with a 50% chance of being inherited.
The most common result would be for someone to get 50/100 of these genes and have average intelligence.
Some smaller number would get 51 or 49, and a smaller number still would get 52 or 48.
And so on, until at the extremes of the scale, so few people get 0 or 100 of them that no one who has ever been born has had all 100.
As such, incredible superhuman intelligence would be manifest in a human who just got lucky enough to have all 100 genes. If some or all of these genes could be identified and manipulated in the genetic code, we'd have unprecedented geniuses.
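The argument above is just the binomial distribution. For illustration, here's a minimal sketch in Python; the 100-gene, 50%-odds figures are the hypothetical numbers from this comment, not real genetics:

```python
from math import comb

def p_exactly(k, n=100, p=0.5):
    """Probability of inheriting exactly k of n independent genes,
    each with probability p (plain binomial distribution)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"exactly 50 genes: {p_exactly(50):.4f}")   # ~0.0796, the modal outcome
print(f"exactly 51 genes: {p_exactly(51):.4f}")   # slightly rarer, as claimed
print(f"all 100 genes:    {p_exactly(100):.2e}")  # 0.5**100, about 7.9e-31
# With roughly 8e9 people alive, the expected number with all 100 genes
# is about 8e9 * 7.9e-31, i.e. effectively zero -- matching the comment's
# point that no such person has ever been born by chance alone.
```

So under this toy model, the "all 100 genes" genius really is astronomically unlikely to occur naturally, which is what motivates the genetic-manipulation conclusion.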
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "