(I hope that is the least click-baity title ever.)

Political topics elicit lower-quality participation, holding the set of participants fixed. This is the thesis of "politics is the mind-killer".

Here's a separate effect: Political topics attract mind-killed participants. This can happen even when the initial participants are not mind-killed by the topic. 

Since outreach is important, this could be a good thing. Raise the sanity waterline! But the sea of people eager to enter political discussions is vast, and the epistemic problems can run deep. Of course not everyone needs to come perfectly prealigned with community norms, but any community is limited in how robustly it can handle an influx of participants expecting a different set of norms. If you look at other forums, it seems to take very little overt contemporary political discussion before the whole place is swamped and politics becomes endemic. As appealing as "LW, but with slightly more contemporary politics" sounds, it's probably not even an option. The real options are "LW, with politics in every thread" and "LW, with as little politics as we can manage".

That said, most of the problems are avoided by just not saying anything that pattern-matches too easily to current political issues. From what I can tell, LW has always had tons of meta-political content, which doesn't seem to cause problems, as well as standard political points presented in unusual ways, and contrarian political opinions too marginal to raise concern. Frankly, if you have a "no politics" norm, people will still talk about politics, but to a limited degree. But if you don't even half-heartedly (or even hypocritically) discourage politics, then an open-entry site that accepts general topics will risk spiraling too far in a political direction.

As an aside, I'm not apolitical. Although some people advance a more sweeping dismissal of the importance or utility of political debate, this isn't required to justify restricting politics in certain contexts. The sort of argument I've sketched (I don't want LW to be swamped by the worst sorts of people who can be attracted to political debate) is enough. There's no hypocrisy in not wanting politics on LW while accepting political talk (and the warts it entails) elsewhere. Off the top of my head, Yvain is one LW affiliate who now largely writes about more politically charged topics on his own blog (Slate Star Codex), and there are some other progressive blogs in that direction. There are libertarian and right-leaning (reactionary? NRx-lbgt?) connections. I would love a grand unification as much as anyone (provided, of course, that we all realize I've been right all along), but please let's not tell the generals to bring their armies here for the negotiations.

71 comments

Political topics elicit lower-quality participation, holding the set of participants fixed. This is the thesis of "politics is the mind-killer".

I don't think that's a good description. The article argues that using a political example when you can make the same point with a nonpolitical example is bad, because the political aspect prevents people from using their usual reasoning abilities.

This is a plausible, and worrying, point. Is there some evidence for the basic thesis beyond simple intuitiveness?

So, has there been an influx of new participants into LW who only want to argue politics? I haven't noticed any.

It's also worth pointing out that we mostly debate political philosophy and not politics. Politics debates look like "Should Obama just ignore Congress and ram through whatever regulations he can?" or "Is Ted Cruz the greatest guy ever?" or "Shall we just tell the Greeks to go jump into the Aegean sea?" and we do NOT have them.

[-]Zubon100

I'm not sure if it says more about me or the context I'm used to seeing here at Less Wrong, but when I read, "Shall we just tell the Greeks to go jump into the Aegean sea?" I thought "Iliad" before "ongoing economic crisis." If that order ever flips, we may have gotten too much into current events and lost our endearing weirdness and classicism.

2Lumifer
:-D Yep. That's a good thing.
7ChristianKl
There seems to be a member who mainly came to LW to argue "pragmatarianism", whose karma is, at the time of this writing, negative.
7Lumifer
I think this member is mostly in the business of marketing his blog and isn't interested in LW otherwise.
2emr
I agree that this isn't happening to LW. (To avoid repetition, I talk a bit more about motivation in this comment.)

As I've mentioned in the other recent political thread, it's not just that political topics elicit lower-quality participation. Even if you have the best of intentions and can keep your mind-killing mechanisms in check, it's extremely hard to have a rational debate about politics.

6Lumifer
This is the classic keys-under-a-streetlamp argument. We should debate what needs to be debated, not what's easy to debate.
5satt
Dunno whether "We" means "someone in general" or "LWers" there. If the former, I agree. If the latter, ehhhh, I'm doubtful. Some things need to be debated, but from a consequentialist perspective it doesn't automatically follow that we should be debating those things. Political debates on LW tend to be better than political debates elsewhere, but they might not be a net gain.
-1Lumifer
How do you calculate what might or might not be "a net gain"?
1satt
By iterating over individual things and evaluating whether each of those might or might not be a net gain.

More general & idealized method for evaluating whether a specific thing might be a net gain: list the concrete consequences of the thing, then assign a number to each consequence representing how much utility each consequence adds or subtracts, according to your normative preferences. Sum the numbers and look at the sign of the result.

More specific & common method for evaluating whether a specific thing might be a net gain: try to think of the consequences of the thing, picking out those which seem like they have a non-negligible utility or disutility, then use your gut to try weighing those together to come up with the net utility of the thing.

I recognize my answer is general, but that's a side effect of the question's generality. I'm not the person who downvoted your question, but I'm not sure what its point is.
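A minimal sketch of the "idealized" method described above (list consequences, assign each a signed utility, sum, check the sign). All consequence names and weights here are hypothetical illustrations, not taken from the thread:

```python
# Sketch of the sum-of-signed-utilities method, with made-up weights.
# A positive total suggests a net gain; a negative total suggests a net loss.
consequences = {
    "inching closer to the right answer on a contested question": +3.0,
    "time spent arguing instead of doing other things": -1.5,
    "attracting mind-killed participants to the forum": -4.0,
}

net_utility = sum(consequences.values())
print(f"net utility: {net_utility:+.1f}")
# Prints "net utility: -2.5": under this toy weighting, the debate is not a net gain.
```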
0Lumifer
I think I phrased my question incorrectly -- I am interested not so much in "how", but in "what" and "for whom". By what kind of criteria do you estimate whether something is a "gain" or not and whose gain is it? And if the answer is "look at utility", the question just chains to how do you estimate whether something is positive-utility, especially with non-obvious issues like having or banning certain kinds of debates on a forum.
1satt
What kind of criteria? Depends on the something. (Again a general answer but again the question is general. The criteria for evaluating whether to take an aspirin are different to those for evaluating an online debate, which are different again to those for evaluating some bookshelves, which are different again to...)

Whose gain? Whoever the person doing the evaluating cares about.

Well, you hit rock bottom eventually; you translate things into consequences about which you have reasonably clean-cut "this is good" or "this is bad" intuitions. Or, if you're doing it in a more explicit cost-benefit-analysis kind of way, you can pin rough conversion factors on each of the final consequences which re-express those consequences in terms of a single numeraire for comparison.

Here I think the estimating is relatively easy, because I'm weighing up "We should debate what needs to be debated", apparently in the context of LW specifically, and the impression I got from your phrasing was that you were implicitly excluding broad classes of consequences like warm fuzzy hedons. If so, considering the issue on your terms (as I understand them), I can simplify the calculation by leaving out hedonic and similarly self-centered aspects.

Elaborating on why I interpreted you like that: when people use the language of duty or obligation, as in "We should [X] what needs to be [X]ed", they normally imply something like "we need to do that, even if through gritted teeth, for prudential reasons", rather than e.g. "that would be fun, we should do that". If that's what you meant here (perhaps I misunderstood you?), that excludes consequences like the little glow we might get from signalling how clever we are by debating things, the potential pleasure of learning things that're new to us, or even the epistemic virtue of inching closer to the right answer to a knotty, politically polarized empirical question. Once one rules out those kinds of consequences, the main kind that's left, I reckon
1Lumifer
Thanks for the serious answer. No, I do not. ...but that is a very interesting idea :-D
4buybuydandavis
I'd say we should debate what's hard to debate. IMO, LW provides a very interesting forum. Progressives and Libertarians actually talking, and moderately civilly. What I like is the window into different priors and different values. Really, that's what is going on in your head? Who knew? Not me.
3passive_fist
Difficulty is certainly no reason not to attempt debate, as long as all sides acknowledge that debates about politics are necessarily difficult, ill-informed, and far from optimally rational.
4Gram_Stone
I think the fact that the public is less well-informed about politics than world leaders is an issue separate from the ability to have rational discussions about politics. From the fact that we have less information than world leaders it doesn't follow that we necessarily use that information less rationally. I've always considered the ability to reason under ignorance a Great Rationalist Superpower. Less information does seem to make mind-killing more likely, but you're assuming that mind-killing mechanisms are being kept in check.
0buybuydandavis
Or the desire to have rational discussions about politics. People in power get that way by using discussion to get power, not to illuminate the truth.

Political participants do not just have different norms of community participation: by definition, they have very different motivations as well. This is the real take-away of Politics is the mind-killer. Keep in mind that politics is a kind of conflict: it's about things that people actually fight over, in the real world. So the difference in norms may well be a consequence of these motivations: as the potential for real strife increases sharply, good deliberation becomes less relevant and "fairness" concerns are far more important:

This is why ... (read more)

0Jiro
More Right doesn't allow comments, so how does that work?
3bogus
AIUI, they host open threads where comments are allowed. Alternatively, they do take e-mails, and will consider posting these if sufficiently relevant and high-quality (by their standards). Slate Star Codex allows comments, but with no karma system to provide a "currency", they're not exactly helpful.

We need to be able to sort out which parts of Haidt's moral taste spectrum each participant perceives before we have any discussion on anything.

I'm a bit curious what prompted you to post this.

What I've been noticing is that right now, Slate Star Codex is sort of the place people go to talk about politics in a rationality-infused setting, and the comments there have been trending in the direction you'd caution about. (I'm not sure whether to be sad about that or glad that there's a designated place for political fighting.)

8emr
Well, I think it's true, interesting, and useful :)

The argument is a specific case of a more general form (explaining changing group dynamics by selection into the group, driven by the norms of the group, but without the norms necessarily causing a direct change to any individual's behavior), which I think is a powerful pattern to understand. But like a lot of social dynamics, explicitly pointing it out can be tricky, because it can make the speaker seem snooty or Machiavellian or tactless, and because it can insult large classes of people, possibly including current group members. I felt that LW is one of the few places where I could voice this type of argument and get a charitable reception (after all, I'm indirectly insulting everyone who likes to talk politics, which is most people, including me :P).

To be clear: I don't think LessWrong is currently being hurt by this dynamic. But I do see periodic comments criticizing the use of only internal risks (mind-killing ourselves) as the justification for avoiding political topics. I'm sympathetic to some of these critiques, and I wanted to promote a reason for avoiding political topics that doesn't imply that mind-killing susceptibility is somehow an insurmountable problem for individuals.
4Lumifer
Homeostasis of social communities is a very interesting topic. Let me just point out that there are dangers on all sides -- you don't want to be at the mercy of every wandering band of barbarians, but you also don't want to become an inbred group locked up high in an ivory tower.
3Viliam_Bur
SSC is a one-person dictatorship with a benevolent dictator. It would be much worse there if people could play voting games in comments: upvoting everyone on their "side" and downvoting everyone on the opposing "side". Also, on SSC people are banned more often than on LW, although most of the bans are temporary.
1Vaniver
I would be very sad if LW comments went the way of Slate Star Codex comments.
1Raemon
I hadn't noticed a trend of political posts on LW, so hadn't been worried about this specific phenomenon.

I think that there needs to be somewhere to discuss politics related to Less Wrong, but somewhere away from the main site. Ideally somewhere hard to find so as to keep the quality high.

7Stuart_Armstrong
A kind of would-have-banned-store for political discussion...
[-][anonymous]230

"MAIN" "DISCUSSION" "QUARANTINE"

would be a good site layout.

And only visible to people who log in.

4[anonymous]
I agree.
1dxu
Upvoted for the purpose of tolerating tolerance.
4Transfuturist
I think usernames would have to be anonymized, as well.
5satt
I've had the idea before that a group of LWers keen to start an inflammatory political argument could help keep LW cool by having the argument on an unrelated, pre-existing, general politics forum. They could link the argument on the Open Thread so the rest of us know it's happening. Possible advantages & disadvantages:

* fewer political flame-outs on LW...
* ...more political flame-outs on Unnamed Other Forum
* other forum posters would likely have worse argumentative norms...
* ...but you could look for a forum with relatively good norms to minimize this (a pre-existing LW-affiliated blog/network, or a traditional rationalist/sceptic forum with a politics subforum?)
* LWers modelling good argumentative norms to strangers might get the strangers to up their own game...
* ...or social contagion might happen in the other direction, with LWers regressing towards the mean for online political arguments
* has the trivial inconvenience of requiring LWers to register on another forum and post there, even as they continue to post other stuff here...
* ...but maybe a trivial inconvenience is what you want if you think the marginal LW political argument has net negative value
* could be interpreted as a forum invasion...
* ...but it's not like LWers are trolls, and only a few LWers would likely bother with this anyway, so they'd probably blend into a bigger on-topic forum without much fuss
* might entrench misinterpretations of "Politics is the Mindkiller"
* in the unlikely event this became a firmly established norm, LWers might start demanding threads be taken elsewhere at the least scent of politics
59eB1
We could easily use the LessWrong subreddit for that purpose, or create a LWPolitics subreddit.
2casebash
Interesting idea. There doesn't seem to be much traffic there, I wonder if the mods would be open to it?
2Zubon
Rationalist Tumblr discusses politics and culture, but it is definitely not hard to find; the quality of discussion may be higher than the Tumblr average but probably not what you are looking for. On the plus side, most of us have different usernames there, so you can consider ideas apart from author until you re-learn everyone. Which happens pretty quickly, so not a big plus. The Tumblr folks seem to mostly agree that Tumblr is not the optimal solution for this, but it has the advantage of currently existing.
4casebash
The problem is that to contribute to that I would have to follow like 50 tumblrs and try to convince people to follow me as well.

There are certain questions that have several possible answers, where people decide that a certain answer is obviously true and have trouble seeing the appeal of the other answers. If everyone settles on the same answer, all is well. If different people arrive at different answers and each believes that his answer is the obvious one, then the stage is set for a flame war. When you think the other guy's position is obviously false, it's that much harder to take him seriously.

-1ChristianKl
How many flame wars do you see on LW when we do discuss politics? I don't see that as a major problem.
-1Kindly
We don't have flame wars of the calling-each-other-names kind, because of norms that say that if you see a comment with the substring "you're an idiot" outside of quotation, you downvote it regardless of anything else. (Or at least this is my strategy. If the comment is otherwise brilliant, I retract the downvote instead of upvoting, but this doesn't happen.) We do still have discussions about politics in which everyone says unproductive and/or stupid things. At the very least, all the stupidest comments that I've made on LW have been related to politics. And I'd go back and apologize for them if other people involved in the discussion weren't so obviously wrong.
0ChristianKl
Yes. That's very much in line with the position I argue in this thread. Epictetus, on the other hand, did argue that flame wars are an issue.
0Epictetus
I was being figurative. I meant to imply that when two people both think the other person is obviously wrong, then productive, civil discourse is unlikely. The short time I've been on LW I noticed that the community is very much averse to actual flame wars and would probably down-vote a thread into oblivion before things got out of hand.
2Viliam_Bur
In recent months there were a few comments with flame-war potential which were quickly "downvoted into oblivion", but the next day their karma was above zero. Either it means we have a group of people who prevent their "side" from being downvoted below zero (although they don't bother to upvote it highly when it already is above zero), or we have a group of people who believe in something like "no comment should be downvoted just because it has a flame-war potential" who prevent downvoting below zero in principle regardless of the side. I don't know which one of these options it is, since all comments where I have seen this happen were from one "side" (maybe even from one user, I am not sure).
-1seer
No, all seems well. Except people develop massive over-confidence in that answer.
[-][anonymous]10

This is why I write about political philosophy, not politics. E.g. I disagree with John Rawls's veil-of-ignorance theory and even find it borderline disgusting (he is just assuming everybody is a risk-averse coward), but I don't see either myself or anyone else getting mind-killingly tribal over it. After all it is not about a party. It is not about an election program. It is not about power. It is about ideas.

I see the opposite of the norm you mention: I think when I write about political philosophy on LW it gets a negative reaction because it is too politic... (read more)

After you get a taste of LW, every other internet forum feels stupid

And why do you think this is so? Are all participants on this forum genetically superior, and do they have to prove it by giving a DNA sample before registering their user account? Or could it be that some topics and some norms of debate attract certain kinds of people (and the few exceptions are then removed by downvoting)? Any other hypotheses?

If you propose another hypothesis, please don't just say "well, it is because you are (for example) more intelligent or more reasonable" without adding an explanation about how specifically this website succeeds in attracting the intelligent and/or reasonable people, without attracting the other kinds of people, so the newcomers who don't fit the norm are not able to simply outvote the old members and change the nature of the website dramatically. (Especially considering that this is a community blog, not one person's personal blog such as Slate Star Codex.)

9[anonymous]
Well, as for me, reading half the Sequences changed my attitude a lot by simply convincing me to dare to be rational, that it is not socially disapproved, at least here. I would not call it norms, as the term "norms" I understand as "do this or else". And it is not the specific techniques in the Sequences, but the attitudes: not trying to be too clever, not showing off, not trying to use arguments as soldiers, not trying to score points, not being tribal. Something I always liked, but on e.g. Reddit there was quite a pressure not to do so. So it is not that these things are norms but plainly that they are allowed.

A good parallel is that throughout my life, I have seen a lot of tough-guy posturing in high school, in playgrounds, bars, locker rooms etc. And when I went to learn some boxing then, paradoxically, that was the place where I felt it is most approved to be weak or timid. Because the attitude is that we are all here to develop, and therefore being as yet underdeveloped is OK.

One way to look at it is that most people out in life tend to see human characteristics as fixed: you are smart or dumb, tough or puny, and you are just that, no change, no development. Or, putting it differently, it is more of a testing, exam-taking attitude, not a learning attitude: on the test, the exam, you are supposed to prove you already have whatever virtue is valued there; it is too late to say you are working on it. But in the boxing gym, where everybody is there to get tougher, there is no such testing attitude; you can be upfront about your weakness or timidity, and as long as you are working on it you get respect, because the learning attitude kills the testing attitude, because in learning circumstances nobody considers such traits too innate.

Similarly on LW, the rationality-learning attitude kills the rationality-testing attitude, and thus the smarter-than-thou posturing, points-scoring attitude gets killed by it, because showing off inborn IQ is less important than learning the o
3Viliam_Bur
I like your example and "learning environment" vs "testing environment". However, I am afraid that LW is also attractive for people who instead of improving their rationality want to do other things, such as e.g. winning yet another website for their political faction. Some people use the word "rationality" simply as a slogan to mean "my tribe is better than your tribe". There were a few situations when people wrote (on their blogs) something like: "first I liked LW because they are so rational, but then I was disappointed to find out they don't fully support my political faction, which proves they are actually evil". (I am exaggerating to make a point here.) And that's the better case. The worse case is people participating in LW debates and abusing the voting system to downvote comments not because those comments are bad from the epistemic rationality point of view, but because they were written by people who disagree (or are merely suspected of disagreeing) with their political tribe.
3[anonymous]
This is all fine, but what is missing for me is the reasoning behind something like "... and this is bad enough to taboo it completely and forfeit all potential benefits, instead of taking these risks" - at least if I understand you right. The potential benefit is coming up with ways to seriously improve the world. The potential risk is, if I get it right, that some people will behave irrationally and that will make some other people angry.

Idea: let's try to convince the webmaster to make a third "quarantine" tab, to the right of the discussion tab, visible only to people logged in. That would cut down negative reflections from blogs, and downvotes could also be turned off there. An alternative without programming changes would be biweekly "incisive open threads", similar to Ozy's race-and-gender open threads, with downvoting customarily tabooed in them. Try at least one?
4Viliam_Bur
Feel free to start a "political thread". Worst case: the thread gets downvoted. However, there were already such threads in the past. Maybe you should google them, look at the debate and see what happened back then -- because it is likely to happen again.

Not downvoting also has its own problems: genuinely stupid arguments remain visible (or can even get upvotes from their faction), and people can try winning the debate by flooding the opponent with many replies. Another danger is that political debates will attract users like Eugine Nier / Azathoth123.

Okay, I do not know how to write it diplomatically, so I will be very blunt here to make it obvious what I mean: The current largest threat to the political debate on LW is a group called "neoreactionaries". They are something like "reinventing Nazis for clever contrarians"; kind of a cult around Michael Anissimov, who formerly worked at MIRI. (You can recognize them by quoting Moldbug and writing slogans like "Cthulhu always swims left".) They do not give a fuck about politics being the mindkiller, but they like posting on LessWrong, because they like the company of clever people here, and they were recruited here, so they probably expect to recruit more people here. Also, LessWrong is pretty much the only debate forum on the whole internet that will not delete them immediately. If you start a political debate, you will find them all there; and they will not be there to learn anything, but to write about how "Cthulhu always swims left", and to try to recruit some LW readers. -- Eugine Nier was one of them, and he was systematically downvoting all comments, including completely innocent comments outside of any political debate, of people who dared to disagree with him once somewhere. Which means that if a new user happened to disagree with him once, they usually soon found themselves with negative karma, and left LessWrong. No one knows how many potential users we may have lost this way.

I am afraid that if yo

I upvoted for this:

However, there were already such threads in the past. Maybe you should google them, look at the debate and see what happened back then -- because it is likely to happen again.

And, to further drive home the point, I'll link to the ones I could easily find: Jan 2012, Aug 2012, Dec 2012, Jan 2013, Feb 2013, more Feb 2013, Oct 2013, Jun 2014, Nov 2014.

3seer
Just out of curiosity, I looked at the latest politics thread in Vaniver's list. Despite being explicitly about NRx, it contains only two references to "Cthulhu", both by people arguing against NRx. Rather, anyone who isn't sufficiently progressive gets called a neoreactionary.
3Lumifer
Y'know, you do sound mindkilled about NRx...

Viliam_Bur is the person who gets messages asking him to deal with mass downvotes, so I am sympathetic to him not wanting us to attract more mass downvoters.

0Viliam_Bur
Not anymore, but yeah, this is where my frustration is coming from. Also, for every obvious example of voting manipulation, there are more examples of "something seems fishy, but there is no clear definition of 'voting manipulation' and if I go down this slippery slope, I might end up punishing people for genuine votes that I just don't agree with, so I am letting it go". But most of these voting games seem to come from one faction of LW users, which according to the surveys is just a tiny minority. (When the "progressives" try to push their political agenda on LW -- and I don't remember them doing this recently -- at least they do it by writing accusatory articles, and by complaining about LW and rationality on other websites, not by playing voting games. So their disruptions do not require moderator oversight.)
2hairyfigment
I don't understand this word "was" - I just lost another 9+ karma paperclips to Eugine Nier. Not to put too fine a point on it, but this seems less like a problem with political threads and more like a problem with someone driving most of the world's population (especially the educated western population) away from existential risk prevention in general and FAI theory in particular.
5ChristianKl
It's usually very hard to recognize when one gets mindkilled. Empirical evidence from studies suggests that it takes very little to get people who can use Bayes' rule for abstract textbook problems to avoid using it when faced with a political subject where they care about one side winning. That's what "mind-killing" is about. People on LW aren't immune in that regard. I have plenty of times seen someone on LW make an argument on the subject of politics that they surely wouldn't make on a less charged subject, because the argument structure doesn't work.
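For concreteness, here is a minimal sketch (with made-up numbers, not taken from the studies mentioned) of the kind of textbook Bayes' rule application referred to here; the claim is that people who handle this fine in the abstract often fail to apply the same update once the hypothesis is politically charged:

```python
# Textbook Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
# with P(E) expanded by the law of total probability. Numbers are hypothetical.
p_h = 0.01              # prior probability of the hypothesis
p_e_given_h = 0.90      # probability of the evidence if the hypothesis is true
p_e_given_not_h = 0.10  # probability of the evidence if the hypothesis is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(f"P(H|E) = {posterior:.3f}")
# ~0.083: the evidence favors H ninefold, yet H remains unlikely given the low prior.
```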
0[anonymous]
Yes, but Bayesian rules are about predictions, e.g. would a policy do what it is expected to do, e.g. does raising the min wage lead to unemployment or not, and political philosophy is one meta-level higher than that, e.g. is unemployment bad or not, or is it unjust or not. While it is perhaps possible and perhaps preferable to turn all questions of political philosophy into predictive models, changing some of them in the process, and to simply dissolve the other questions (i.e. "is X fair?") if they cannot be so turned, that is not done yet, and that is precisely what could be done here. Because where else?
0ChristianKl
When talking about issues of political philosophy you often tend to talk quite vaguely, and are too vague to be wrong. That's not being mind-killed, but it's also not productive. If you want to decide whether unemployment is bad or not, then factual questions about unemployment matter a great deal. How does unemployment affect the happiness of the unemployed? To what extent do the unemployed use their time to do something useful for society, like volunteering?
2Transfuturist
Um, what? What's wrong with risk-aversion? And what's wrong with the Veil of Ignorance? How does that assumption make the concept disgusting?
9[anonymous]
First of all, there is the meta-level issue of whether to engage the original version or the pop version, as the first is better but the second is far, far more influential. This is an unresolved dilemma (same logic: should an atheist debate with Ed Feser or with what religious folks actually believe?) and I'll just try to hover in between.

A theory of justice does not simply describe a nice-to-have world. It describes ethical norms that are strong enough to warrant coercive enforcement. (I'm not even libertarian, just don't like pretending democratic coercion is somehow not one.) Rawls is asking us to imagine e.g. what if we are born with a disability that requires really a lot of investment from society to make its members live an okay life; let's call the hypothetical Golden Wheelchair Ramps. Depending on whether we look at it rigorously: in a more "pop" version Rawls is saying our pre-born self would want GWR built everywhere even when it means that if we are born able and rich we are taxed through the nose to pay for it, or in a more rigorous version a 1% chance of being born with this illness would mean we want 1% of GWRs built.

Now, this is all well if it is simply understood as the preferences of risk-averse people. After all we have a real, true veil of ignorance after birth: we could get poor, disabled etc. any time. It is easy to lose birth privileges, well, many of them at least. More risk-taking people will say: I don't really want to pay for GWR, I am taking my gamble that I will be born rich and able, in which case I won't need them and I would rather keep that tax money. (This is a horribly selfish move, but Rawls set up the game so that it is only about fairness emerging out of rational selfishness, and altruism is not required in this game, so I am just following the rules.)

However, since it is a theory of justice, it means the preferences of risk-averse people are made mandatory, turned into a social policy and enforced with coercion. And that is
0Transfuturist
Thanks for the explanation. Do you have any alternatives?
3[anonymous]
1. How about no theory of justice? :) Philosophers should learn from scientists here: if you have no good explanation, none at all is more honest than a bad but seductive one. As a working hypothesis we could consider our hunger for justice and fairness an evolved instinct, a need, an emotion, a strong preference, something similar to the desire for social life or romantic love; it is simply one of the many needs a social engineer would aim to satisfy. The goal is, then, to make things "feel just" enough to check that checkmark.

2. "To each his own." Reading Rawls and Rawlsians I tend to sense a certain, how to put it, overly collective feeling. That there is one heavily interconnected world, and it is the property of all humankind, and there is a collective, democratic decision-making on how to make it suitable for all. So in this kind of world there is nothing exempt from politics, nothing is like "it is mine and mine alone and not to be touched by others". The question is: is it a hard reality derived from the necessities of the dynamics of a high-tech era? Or just a preference? My preferences are way more individualistic than that. The attitude that everything is collective and to be shaped and formed in a democratic way is IMHO way too often a power play by "sophists" who have a glib tongue, are good at rhetoric, and can easily shape democratic opinion. I am atheist but "culturally Catholic" enough to find the parable of the snake offering the fruit useful: that it is not only through violence, but also through glib, seductive persuasion, through 100% consent, that a lot of damage can be done. This is something not really understood properly in the modern world; we understand how violence, oppression or outright fraud can be bad, but do not really realize how much harm a silver tongue can cause without even outright lying, because we already live in societies where silver-tongued intellectuals are already the ruling class, so they underplay their own power by lionizing consent
0seer
The problem is that Rawls asserts that everyone is maximally risk-averse.
1JoshuaZ
I don't think Rawls makes that assertion. Rawls does presume some amount of risk aversion, but it seems highly inaccurate to say that Rawls asserts that "everyone is maximally risk-averse."
-8[anonymous]