Okay, so I recently made a joke about a future Wikipedia article about Less Wrong:
[article claiming that LW opposes feelings and supports neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of monster you have become. All LW members will be fired from their jobs.
A few days later I actually looked at the Wikipedia article about Less Wrong:
...In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea
I'd suggest being careful about your approach. If you lose this battle, you may not get another chance. David Gerard most likely has 100 times more experience with wiki battling than you. Essentially: when you come up with a strategy, sleep on it, and then try to imagine how a person already primed against LW would read your words.
For example, expect that any edit made by anyone associated with LW will be (1) traced back to their identity and LW account, and consequently (2) reverted as a conflict of interest. And everyone will be like "ugh, these LW guys are trying to manipulate our website", so the next time they won't even listen to any of us.
Currently my best idea -- I haven't taken any steps yet, just thinking -- is to post a reaction to the article's Talk page, without even touching the article. This would have two advantages: (1) No one can accuse me of being partial, because that's what I would openly disclose first, and because I would plainly say that as a person with a conflict of interest I shouldn't edit the article myself. Kinda establishing myself as the good guy who follows the Wikipedia rules. (2) A change to the article could be simply reverted by David, but he i...
Is any of the following not true?
You have been one of the 2 or 3 most vocal critics of LW worldwide for years, so this is your pet issue, and you are far from impartial.
A lot of what the "reliable sources" write about LW originates from your writing about LW.
You are cherry-picking facts that describe LW in a certain light: for example, you mention that some readers of LW identify as neoreactionaries, but fail to mention that some of them identify as e.g. communists. You keep adding Roko's basilisk as one of the main topics about LW, but remove mentions of e.g. effective altruism, despite the fact that there is at least 100 times more debate on LW about the latter than about the former.
Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.
Firstly, what do I mean by 'anti-rationalism'? I don't mean specifically that people will criticize LessWrong. I mean it in the general sense of skepticism towards science and logical reasoning, skepticism towards technology, and hostility towards rationalistic methods applied to things like policy, politics, economics, and education.
And there are a few things I think we will observe first (some of...
Front page being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).
Maybe there could be some high-profile positive press for cryonics if it became standard policy to freeze seeds or DNA of endangered species for later resurrection.
Hello guys, I am currently writing my master's thesis on biases in the investment context. One sub-sample that I am studying is people who are educated about biases in a general context, but not in the investment context. I guess LW is the right place to find some of those, so I would be very happy if some of you would participate, since people who are aware of biases are hard to come by elsewhere. Also, I explicitly ask about activity in the LW community in the survey, so if enough LWers participate I could analyse them as an individual subsample. Would...
Not the first criticism of the Singularity, and certainly not the last. I found this on reddit, just curious what the response will be here:
"I am taking up a subject at university, called Information Systems Management, and my teacher is a Futurologist! He refrains from even teaching the subject just to talk about technology and how it will solve all of our problems and make us uber-humans in just a decade or two. He has a PhD in A.I. and has already talked to us about nanotechnology getting rid of all diseases, A.I. merging with us, smart cities that...
I think most people on LW also distrust blind techno-optimism, hence the emphasis on existential risks, friendliness, etc.
I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI
What do you think of the idea of 'learning all the major mental models' - as promoted by Charlie Munger and FarnamStreet? These mental models also include cognitive fallacies, one of the major foci of Lesswrong.
I personally think it is a good idea, but it doesn't hurt to check.
The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.
Suppose there are 100 genes which figure into intelligence, the odds of getting any one being 50%.
The most common result would be for someone to get 50/100 of these genes and have average intelligence.
Some smaller number would get 51 or 49, and a smaller number still would get 52 or 48.
And so on, until at the extremes of the scale, so few people get 0 or 100 of them that, in all likelihood, no one who has ever been born has had all 100.
As such, incredible superhuman intelligence would be manifest in a human who just got lucky enough to have all 100 genes. If some or all of these genes could be identified and manipulated in the genetic code, we'd have unprecedented geniuses.
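The distribution described above is binomial. A quick sketch using the comment's toy numbers (100 independent genes, each present with 50% probability; these parameters are purely illustrative, not real genetics) shows how sharply the tails fall off:

```python
from math import comb

# Toy model from the comment above: 100 independent "intelligence genes",
# each inherited with probability 0.5, so the gene count is Binomial(100, 0.5).
N_GENES = 100
P = 0.5

def prob_exactly(k: int) -> float:
    """Probability of carrying exactly k of the 100 genes."""
    return comb(N_GENES, k) * P**k * (1 - P)**(N_GENES - k)

# The mode is 50/100; the extreme of all 100 genes has probability 2**-100.
p50 = prob_exactly(50)
p100 = prob_exactly(100)

print(f"P(exactly 50 genes)  = {p50:.4f}")    # roughly 0.08
print(f"P(exactly 100 genes) = {p100:.3e}")   # about 8e-31
```

With on the order of 10^11 humans ever born, an outcome with probability about 8×10^-31 would essentially never occur naturally, which is why the comment suggests such a genotype could only be reached by deliberate genetic manipulation.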
If you look at Bill Gates and Warren Buffett, they see purpose in helping the poor. In general, employing poor people to do something for you and paying them a wage is also a classic way the poor get helped.
I'm happy that these people have taken actions to support such stances. However, I'm more interested in the incentive system, not a few outliers within it. Each of these two holds about $80 billion in net worth, a paltry sum compared to the amount of money circulating in the world today, with global GDP estimated at around $74 trillion. I am therefore still unaware of an incentive system that helps the poor, and will remain so until I see the majority of this money circulated and distributed in the manner Gates and Buffett propose.
The great thing about smart phones is that they allow for software to be distributed with little cost for additional copies. Having a smart phone means that you can use Duolingo to learn English for free.
Agreed, and unfortunately utilizing a smartphone to its full benefit isn't necessarily obvious to somebody poor. While one could use it to learn English for free, they could also use it inadvertently as an advertising platform with firms soliciting sales from the user, or just as a means of contact with others willing to stay in contact with them (other poor people, most likely). A smartphone would be an example of a technology that managed to trickle down the socio-economic ladder and help poor people, but it can do harm as well as good, or have no effect at all.
We have been quite successful in reducing the number of the poorest of the poor, in both relative and absolute terms. It's debatable how much of that is due to new technology and how much to other factors, but we now have fewer people in extreme poverty.
Please show me these statistics. Are they adjusted and normalized for population growth?
A cursory search gave me contradictory statistics. http://www.statisticbrain.com/world-poverty-statistics/
I'd like to know where you get such sources, because a growing income gap between rich and poor necessarily implies one of three things: the rich are getting richer, the poor are getting poorer, or both.
Note: are we discussing relative poverty or absolute poverty? I'd like to keep it to absolute poverty, since meeting basic human needs is a solid baseline, as long as you trust nutritional data sources and health research. If you do not trust our current understanding of human health, then relative poverty is probably the better topic to discuss.
EDIT: I found something to support your conclusion; the first chart shows the decrease in the number of people in the lowest economic tier. These statistics are not up to date, comparing only 2001 to 2011. I'm having a hard time finding anything more recent.
I'm happy that these people have taken actions to support such stances. However, I'm more interested in the incentive system, not a few outliers within it.
When basic needs are fulfilled many humans tend to want to satisfy needs around contributing to making the world a better place. It's a basic psychological mechanism.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "