Okay, so I recently made this joke about a future Wikipedia article about Less Wrong:
[article claiming that LW opposes feelings and supports neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of a monster you have become. All LW members will be fired from their jobs.
A few days later I actually looked at the Wikipedia article about Less Wrong:
...In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea
I'd suggest being careful about your approach. If you lose this battle, you may not get another chance. David Gerard most likely has 100 times more experience with wiki battling than you. Essentially, when you come up with a strategy, sleep on it, and then try to imagine how a person already primed against LW would read your words.
For example, expect that any edit made by anyone associated with LW will be (1) traced back to their identity and LW account, and consequently (2) reverted, as a conflict of interest. And everyone will be like "ugh, these LW guys are trying to manipulate our website", so the next time they are not going to even listen to any of us.
Currently my best idea -- I haven't taken any steps yet, just thinking -- is to post a reaction to the article's Talk page, without even touching the article. This would have two advantages: (1) No one can accuse me of being partial, because that's what I would openly disclose first, and because I would plainly say that, as a person with a conflict of interest, I shouldn't edit the article myself. Kinda establishing myself as the good guy who follows the Wikipedia rules. (2) A change to the article could simply be reverted by David, but he i...
Is any of the following not true?
You have been one of the two or three most vocal critics of LW worldwide for years, so this is your pet issue, and you are far from impartial.
A lot of what the "reliable sources" write about LW originates from your writing about LW.
You are cherry-picking facts that describe LW in a certain light: For example, you mention that some readers of LW identify as neoreactionaries, but fail to mention that some of them identify as e.g. communists. You keep adding Roko's basilisk as one of the main topics about LW, but remove mentions of e.g. effective altruism, despite the fact that there is at least 100 times more debate on LW about the latter than about the former.
Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.
Firstly, what do I mean by 'anti-rationalism'? I don't mean specifically that people will criticize LessWrong. I mean it in the general sense of skepticism towards science and logical reasoning, skepticism towards technology, and hostility towards rationalistic methods applied to policy, politics, economics, education, and so on.
And there are a few things I think we will observe first (some of...
The front page is being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).
Maybe there could be some high-profile positive press for cryonics if it became standard policy to freeze endangered species' seeds or DNA for later resurrection.
Hello guys, I am currently writing my master's thesis on biases in the investment context. One sub-sample that I am studying is people who are educated about biases in a general context, but not in the investment context. I guess LW is the right place to find some of those, so I would be very happy if some of you would participate, since people who are aware of biases are hard to come by elsewhere. Also, I explicitly ask about activity in the LW community in the survey, so if enough LWers participate I could analyse them as an individual subsample. Would...
Not the first criticism of the Singularity, and certainly not the last. I found this on Reddit; just curious what the response will be here:
"I am taking up a subject at university, called Information Systems Management, and my teacher is a Futurologist! He refrains from even teaching the subject just to talk about technology and how it will solve all of our problems and make us uber-humans in just a decade or two. He has a PhD in A.I. and has already talked to us about nanotechnology getting rid of all diseases, A.I. merging with us, smart cities that...
I think most people on LW also distrust blind techno-optimism, hence the emphasis on existential risks, friendliness, etc.
I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI
What do you think of the idea of 'learning all the major mental models' - as promoted by Charlie Munger and FarnamStreet? These mental models also include cognitive fallacies, one of the major foci of Lesswrong.
I personally think it is a good idea, but it doesn't hurt to check.
The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.
Suppose there are 100 genes which figure into intelligence, the odds of getting any one being 50%.
The most common result would be for someone to get 50/100 of these genes and have average intelligence.
Some smaller number would get 51 or 49, and a smaller number still would get 52 or 48.
And so on, until at the extremes of the scale, so few people get 0 or all 100 of them that no one we've ever heard of, or who has ever been born, has had all 100 of them.
As such, incredible superhuman intelligence would be manifest in a human who just got lucky enough to have all 100 genes. If some or all of these genes could be identified and manipulated in the genetic code, we'd have unprecedented geniuses.
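To put numbers on how quickly that tail thins out, the model described above is just a binomial distribution. Here is a minimal sketch (assuming the comment's simplified 100-gene, 50%-each model, which is purely illustrative):

```python
# A minimal sketch of the toy model above: 100 independent "intelligence genes",
# each inherited with probability 0.5, i.e. a Binomial(100, 0.5) distribution.
from math import comb

N = 100   # hypothetical number of genes (illustrative assumption from the comment)
p = 0.5   # chance of inheriting any one gene

def prob_of_exactly(k: int) -> float:
    """Probability of inheriting exactly k of the N genes."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

print(prob_of_exactly(50))    # ~0.080   -- the most common outcome (average intelligence)
print(prob_of_exactly(60))    # ~0.011   -- already noticeably rarer
print(prob_of_exactly(100))   # ~7.9e-31 -- all 100 genes
```

Under this toy model, getting all 100 genes has probability about 8 * 10^-31, vastly smaller than one in the roughly 10^11 humans who have ever lived, which is the quantitative version of the claim above.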
I already answered #3
No, you really didn't; you dismissed it as not worth answering and proposed that people claiming #3 can't possibly mean it and must be using it as cover for something else more blatantly unreasonable.
I understand that #3 may seem like an easy route for anyone who wants to shut someone up on Wikipedia without actually refuting them or finding anything concrete they're doing wrong. It is, of course, possible that Viliam is not sincere in suggesting that you have a conflict of interest here, and it is also possible (note that this is a separate question) that if he isn't sincere then his actual reason for suggesting that you have one is simply that he wishes you weren't saying what you are and feels somehow entitled to stop you for that reason alone. But you haven't given any, y'know, actual reasons to think that those things are true.
Unless you count one of these: (1) "Less Wrong is obviously a nest of crackpots, so we should expect them to behave like crackpots, and saying COI when they mean 'I wish you were saying nice things about us' is a thing crackpots do". Or (2) "This is an accusation that I have a COI, and obviously I don't have one, so it must be insincere and match whatever other insincere sort of COI accusation I've seen before". I hope it's clear that neither of those is a good argument.
Someone from the IEET tried to seriously claim [...]
I read the discussion. The person in question is certainly a transhumanist but I don't see any evidence he is or was a member of the IEET, and the argument he made was certainly bad but you didn't describe it accurately at all. And, again, the case is not analogous to the LW one: conflict versus competition again.
first assuming malfeasance as an explanation for disagreement
I agree, that's a bad idea. I don't quite understand how you're applying it here, though. So far as I can tell, your opponents (for want of a better word) here are not troubled that you disagree with them (e.g., they don't deny that Roko's basilisk was a thing or that some neoreactionaries have taken an interest in LW); they are objecting to your alleged behaviour: they think you are trying to give the impression that Roko's basilisk is important to LWers' thinking and that LW is a hive of neoreactionaries, and they don't think you're doing that because you sincerely believe those things.
So it's malfeasance as an explanation for malfeasance, not malfeasance as an explanation for disagreement.
I repeat that I am attempting to describe, not to endorse, but perhaps I should sketch my own opinions lest that be thought insincere. So here goes; if (as I would recommend) you aren't actually concerned about my opinions, feel free to ignore what follows unless they do become an issue.
I do have the impression that you wish LW to be badly thought of, and that this goes beyond merely wanting it to be viewed accurately-as-you-see-it. I find this puzzling because in other contexts (and also in this context, in the past when your attitude seemed different) the evidence available to me suggests that you are generally reasonable and fair. (Yes, I have of course considered the possibility that I am puzzled because LW really is just that bad and I'm failing to see it. I'm pretty sure that isn't the case, but I could of course be wrong.)
I do not think the case that you have a WP:COI on account of your association with RationalWiki, still less because you allegedly despise LW, is at all a strong one, and I think that if Viliam hopes that making that argument would do much damage to your credibility on Wikipedia, his hopes would be disappointed if tested.
I note that Viliam made that suggestion with a host of qualifications about how he isn't a Wikipedia expert and was not claiming with any great confidence that you do in fact have a COI, nor that it would be a good idea to say that you do.
I think his suggestion was less than perfectly sincere in the following sense: he made it not so much because he thinks a reasonable person would hold that you have a conflict of interest, as because he thinks (sincerely) that you might have a COI in Wikipedia's technical sense, and considers it appropriate to respond with Wikipedia technicalities to an attack founded on Wikipedia technicalities.
The current state of the Wikipedia page on Less Wrong doesn't appear terribly bad to me, and to some extent it's the way it is because Wikipedia's notion of "reliable sources" gives a lot of weight to what has attracted the interest of journalists, which isn't your fault. But there are some things that seem ... odd. Here's the oddest:
Let's look at those two refs (placed there by you) for the statement that "the neoreactionary movement takes an interest in Less Wrong" (which, to be sure, could be a lot worse ... oh, I see that you originally wrote "is associated with Less Wrong" and someone softened it; well done, someone).

First we have a TechCrunch article. Sum total of what it says is that "you may have seen" neoreactionaries crop up "on tech hangouts like Hacker News and Less Wrong". I've seen racism on Facebook; is Facebook "associated with racism" in any useful sense?

Second we have a review of "Neoreaction: a basilisk" claiming "The embryo of the [neoreactionary] movement lived in the community pages of Yudkowsky’s blog LessWrong", which you know as well as I do to be flatly false (and so do the makers and editors of WP's page on neoreaction, which quite rightly doesn't even mention Less Wrong).

These may be Reliable Sources in the sense that they are the kind of document that Wikipedia is allowed to pay attention to. They are not reliable sources for the claim that neoreaction and Less Wrong have anything to do with one another, because the first doesn't say that and the second says it but is (if I've understood correctly) uncritically reporting someone else's downright lie.
I have to say that this looks exactly like the sort of thing I would expect to see if you were trying to make Less Wrong look bad without much regard for truth, and using Wikipedia's guiding principles as "cover" rather than as a tool for avoiding error. I hope that appearance is illusory. If you'd like to convince me it is, I'm all ears.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "