Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: eagain 23 January 2017 07:57:47PM 1 point [-]

Hi. I used to have an LW account and post sometimes, and when the site kinda died down I deleted the account. I'm posting back now.

We claim to have some of the sharpest thinkers in the world, but for some reason shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW.

Please do not start discussing politics without enforcing a real-names policy and taking strong measures against groupthink, bullying, and most especially brigading from outside. The basic problem with discussing politics on the internet is that the normal link between a single human being and a single political voice is broken. You end up with a homogeneous "consensus" in the "community" that reflects whoever is willing to spend more effort on spam and disinformation. You wanted something like a particularly high-minded Parliament; you got 4chan.

I have strong opinions about politics and also desire to discuss the topic, which is indeed boiling to a crisis point, in a more rationalist way. However, I also moderate several subreddits, and whenever politics intersects with one of our subs, we have to start banning people every few hours to keep from being brigaded to death.

I advise allowing just enough politics to discuss the political issues tangent to other, more basic rationalist wheelhouses: allow talking about global warming in the context of civilization-scale risks, allow talking about science funding and state appropriation of scientific output in the context of AI risk and AI progress, allow talking about fiscal multipliers to state spending in the context of effective altruism.

Don't go beyond that. There are people who love to put an intellectual veneer over deeply bad ideas, and they raid basically any forum on the internet nowadays that talks politics, doesn't moderate a tight ship, and allows open registration.

And in general, the watchword for a rationality community ought to be that most of the time, contrarians are wrong, and in fact boring as well. Rationality should be distinguished from intellectual contrarianism -- this is a mistake we made last time, and suffered for.

Comment author: Lumifer 23 January 2017 08:42:54PM 1 point [-]

enforcing a real-names policy

Ha-ha

I have strong opinions about politics and also desire to discuss the topic

You seem to have a desire to discuss the topic only in a tightly controlled environment where you get to establish the framework and set the rules.

Comment author: The_Jaded_One 23 January 2017 07:00:07PM 1 point [-]

I think this makes grossly false assumptions about how human psychology actually works.

Imagine applying that logic to, for example, computer games. Hey, let's get rid of all achievements and ranks that are handed out willy-nilly to people who just turn up and play the game. Instead, you will now only get any recognition for your efforts if you are a lot better than average.

It's funny how successful games almost always hand out lots of 'inflationary' gold stars just for turning up and playing. To build the user-base, you give people rewards for their efforts, not punishments for falling in the bottom 90%.

Comment author: Lumifer 23 January 2017 08:12:07PM *  0 points [-]

It's funny how successful games almost always hand out lots of 'inflationary' gold stars just for turning up and playing.

The goal of successful computer games is to maximize the playerbase without regard to the quality of that playerbase (with exceptions made for people who drive other players away -- cheaters, harassers, etc.). If a reasonably docile idiot shows up and clicks where he is expected to click, the game would be happy to reward him with a variety of virtual goodies. Drool - click - star! - drool - click - star! - drool...

Notably the goals of LW are different. I don't think we should reward people for just showing up. I think we should actively filter idiots out, docile or not. I don't want more posts -- I want more high-quality posts which you shouldn't expect if you're rewarding quantity. A pile of mediocre posts, festooned with gold stars, will make LW just another island of mediocrity in the big 'net ocean.

Comment author: dglukhov 23 January 2017 06:06:54PM 2 points [-]

On a lighter note (Note: this is open access)

Comment author: Lumifer 23 January 2017 07:58:11PM 0 points [-]

LOL. "How to prevent crimethink". I recommend introducing global-warming Newspeak and phasing out Oldspeak -- it should help with that "inoculation" thing.

Comment author: dglukhov 23 January 2017 03:19:35PM *  1 point [-]

I'm curious if anybody here frequents retraction watch enough to address this concern I have.

I find the articles there effective at announcing retractions, frequently falling back on testimony from lead figures in the investigations, but rarely do you get to see the nuts and bolts of the investigations being discussed. For example, "How were the journals misleading?" or "What evidence was or was not analyzed, and how did the journal's analysis deviate from correct protocol?" are questions I often ask myself as I read, followed by an urge to see the cited papers. And then, upon investigating the articles and their retraction notices, I am given a reason I can't arbitrate myself. Maybe the data were claimed to have been manipulated, or analyzed under an incorrect framework.

I find studies such as these alarming because I'm forced to trust the good intentions of a multi-billion-dollar corporation in finding the truth. Often I go on Retraction Watch, trusting the possibly nonexistent good intentions of the organization's leadership, and read the headlines without time to read every detail of the article. When I skim, I am left with the impressions conveyed by the articles' pretentious writing, but none of the substance.

Perhaps I am warning against laziness. Perhaps I am concerned about the potential for corruption even in the crusade against misinformation that Retraction Watch seems to be waging. In any case, I'm curious whether people here have had similar or differing experiences with these articles...

Comment author: Lumifer 23 January 2017 05:59:17PM 1 point [-]

but rarely do you get to see the nuts and bolts of the investigations being discussed.

Gelman's blog goes into messy details often enough.

because I'm forced to trust

No, you're not. You are offered some results, you do NOT have to trust them.

Comment author: Viliam 23 January 2017 09:41:03AM *  0 points [-]

How do you become a rationalist political being if you aren't able to practice rationalist politics in the supportive company of other rationalists?

I don't think LW qualifies as a sufficiently supportive company of rationalists, for at least two major reasons: (1) Eugine and his army of sockpuppets, and (2) anyone can join, rationalist or not, and talking about politics would most likely attract the wrong kind of people, so even if LW qualified as a sufficiently supportive company of rationalists now, that could easily change overnight.

I imagine that if we could solve the problem of sockpuppets and/or create a system of "trusted users" who could moderate the debate, we would have a chance to debate politics rationally. But I suspect that a rational political debate would be quite boring for most people.

To give an example of "boring politics": when Trump was elected, half the people on the internet were posting messages like "that's great, now America will be great again", half were posting messages like "that's horrible, now racists and sexists will be everywhere, and we are all doomed"... and there was a tiny group of people posting messages like "having Trump elected increased value of funds in sectors A, B, C, and decreased value of funds in sectors X, Y, Z, so by hedging against this outcome I made N% money". You didn't have to tell these people that rationalists are supposed to bet on their beliefs, because they already did.

Comment author: Lumifer 23 January 2017 05:56:20PM 0 points [-]

and there was a tiny group of people posting messages like "having Trump elected increased value of funds in sectors A, B, C, and decreased value of funds in sectors X, Y, Z, so by hedging against this outcome I made N% money".

Funnily enough, I heard rumors that George Soros placed a big bet on the markets going down after the election and lost very very badly.

Comment author: gjm 23 January 2017 10:44:49AM 0 points [-]

I'm not steven0461, but I'm pretty sure the intended meaning is: Asking for a "rationalist political being" is like asking for a "clean sewer"; it's a contradiction in terms because politics is fundamentally anti-rational. So when you say "How do you become a rationalist political being if ..." you have already made a mistake.

(I don't think I agree; politics is part of the real world and I see no reason to think that rationalists should never find sufficient reason to become involved. I might agree with the more modest claim that most of us most of the time would do well to pay much less attention to politics than we do.)

Comment author: Lumifer 23 January 2017 05:52:23PM 0 points [-]

There is the obvious counterargument of "Try ignoring your sewer system for a few years and see where it gets you". I suspect that drowning in shit is not a pleasant experience.

In response to comment by gjm on Crisis of Faith
Comment author: Jade 23 January 2017 02:57:37AM *  0 points [-]

Historical Muhammad not certain: http://www.wsj.com/articles/SB122669909279629451 . Of course, people have set about trying to protect minds from a 'fringe' Bayesian view: "Prof. Kalisch was told he could keep his professorship but must stop teaching Islam to future school teachers." In case anyone missed it, Richard Carrier explicitly used Bayes on question of historical Jesus. I don't know if Kalisch used Bayes, but his language conveys intuitive Bayesian update.

The bearing of fictional stories is simple: calculate the probability of a historical X based on the practically 100% probability that human imagination was a factor (given that the stories contain highly unlikely magic, as in known-to-be-fiction stories, and were written long after X supposedly lived). Note that this still leaves out the probabilities of motivations for passing fiction off as nonfiction, as Joseph Smith or L. Ron Hubbard did. Once you figure in probabilities including motivations and iterations of previous religious memes, it becomes increasingly unlikely that X existed. Paul Bunyan, AFAIK, wasn't based on previous memes for controlling people, nor were the stories used to control people, so I wouldn't be suspicious if someone believed the stories started with someone real. When people insist religious characters were real, OTOH, I become suspicious of their motivations, given the unlikelihood that they examined evidence and updated Bayesian-style.
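The calculation gestured at above can be sketched as a toy odds-form Bayesian update, multiplying prior odds by a likelihood ratio for each piece of evidence. All the numbers below are made-up illustrative assumptions, not values from the comment; only the mechanics matter:

```python
def posterior_odds(prior_odds, *bayes_factors):
    """Multiply prior odds of a hypothesis by each likelihood ratio
    (Bayes factor) for the observed evidence."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

# Odds that figure X was historical, before looking at the stories.
prior = 1.0  # even odds, purely illustrative

# Each factor is P(evidence | X historical) / P(evidence | X invented).
# Values < 1 mean the evidence fits invention better.
bf_magic = 0.5         # stories full of highly unlikely magic
bf_late_writing = 0.5  # written long after X supposedly lived
bf_prior_memes = 0.5   # recycles earlier religious memes

odds = posterior_odds(prior, bf_magic, bf_late_writing, bf_prior_memes)
prob = odds / (1 + odds)
print(f"posterior odds {odds:.3f}, probability {prob:.3f}")
# -> posterior odds 0.125, probability 0.111
```

Each additional unfavorable factor multiplies the odds down, which is the "increasingly unlikely" effect described above; favorable evidence (a factor above 1) would push the other way.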

@Salemicus: Citation for "We do ask JK Rowling what non magical boy inspired Harry Potter"?

In response to comment by Jade on Crisis of Faith
Comment author: Lumifer 23 January 2017 05:47:52PM 0 points [-]

Historical Muhammad not certain

What's your comparison baseline? Compared to the screen in front of your face, he's not certain. Compared to pretty much anyone born in the 6th century, he is quite certain.

Comment author: The_Jaded_One 20 January 2017 01:19:17AM 1 point [-]

What's wrong with gold stars for everyone who makes a non-spammy, coherent point?

Comment author: Lumifer 23 January 2017 05:44:10PM 1 point [-]

Inflation.

If everyone gets a gold star for almost every post, gold stars lose all value.

In response to comment by Lumifer on Project Hufflepuff
Comment author: onlytheseekerfinds 19 January 2017 11:09:40PM 0 points [-]

It is, of course, in a trivial way

Practicality is usually in some sense "trivial", not so? Is there any sense in which the word implies complexity or subtlety?

Comment author: Lumifer 23 January 2017 05:42:50PM 0 points [-]

To me, practicality doesn't imply triviality.

I'd say that "practical" has two main meanings. One is "achievable given certain constraints and limits"; the antonym is "impractical". Two is "focusing on the direct outputs"; the antonym is a bit hard to come by, but the opposite meaning is, basically, "done for status/signaling purposes".

Neither of the two meanings implies simplicity or bluntness.

Comment author: WalterL 23 January 2017 05:03:37PM 1 point [-]

Our consensus is pretty unalterably "Build an AI God".

Comment author: Lumifer 23 January 2017 05:36:27PM 2 points [-]

Our consensus is pretty unalterably "Build an AI God".

Kinda. The LW's position is "We will make a God, how do we make sure He likes us?"
