Multiheaded comments on Kevin Drum's Article about AI and Technology - Less Wrong
Comments (33)
The problem is much worse. All the ethical sophistication in the world (as argued by e.g. Richard Rorty) cannot in practice serve as a barrier to cruelty and domination if there's no underlying moral sentiment of empathy, of sharing a certain "human circle" with your fellow beings.
The ruling class needs to share some moral emotions that would allow preference-utilitarian/negative-utilitarian ethics towards the "worthless" poor in the first place. Otherwise they might simply decide that the "efficiency" in "efficient charity" is best achieved by enslaving the former working class or lumping it in with domestic animals... for the proles' own safety/virtue/moral benefit/etc, of course. They're dirty and uncouth! They would revert to "savagery" without proper control! They have no use for the liberties that the superior man enjoys! Empathizing with them as equals is simply an intellectual mistake and naive sentimentality! ...Basically, see every rationalization for slavery ever made; when slavery cannot be opposed by force or starved out economically, those aspiring to become the new "aristocracy" could grow very enamored of such rationalizations again.
I can hardly even begin describing my fears of how the new technocratic/quasi-aristocratic elite might conduct its relations with the "common people" who have no economic or military leverage over it. This is why I am so fucking terrified of the emerging association between transhumanism and an anti-egalitarian/far-right ideology!
The Jacobin magazine has a very good article on this subject from a while ago:
I wonder how this bleak picture might change if we throw cheap cognitive enhancement into the mix. Especially considering Eliezer's idea that increased intelligence should make the poor folks better at cooperating with each other.
I've considered it too. I just pray to VALIS that there'll be a steep enough curve of diminishing returns on intelligence amplification, so that even if the technocratic elite desperately wants to maintain supremacy, it can't just throw exponentially more resources at "boutique" small-scale enhancement and maintain the gap with "mass-enhanced" humans.
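To sketch that hope concretely (a toy model with made-up functional forms, not a prediction): suppose returns saturate, e.g. enhanced ability A(r) = A_max · (1 − e^(−r)) for resources r spent per person. An elite spending c·r instead of r then gains only A(c·r) − A(r) = A_max · (e^(−r) − e^(−c·r)), which is less than A_max · e^(−r) and so shrinks toward zero as mass enhancement pushes r up, no matter how large the multiplier c grows. If returns were merely logarithmic, A(r) = k·log(r), a constant resource multiple would preserve a constant gap of k·log(c) forever; the hope is precisely that the real curve is closer to the saturating one.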
I suspect there are unknown unknowns in the scenario. The masses are more likely to have open source research. I think the elite is more likely to screw itself up with a bad ideology, but that might be mere wishful thinking.
Can you provide some examples of this association?
If you're just talking about libertarians or something, my impression is that they want a reasonably egalitarian society too; they just have different economic policies for bringing it about.
Not libertarians. Reactionaries.
Authoritarian anti-egalitarians.
I just meant that I haven't come across any examples of people who are simultaneously transhumanist and authoritarian. Where do I find these writings?
These guys are LessWrongers.
I am transhumanist and authoritarian.
Nick Land, I think, is another big example?
Oh...so basically the whole Dark Enlightenment school of thought?
I've only started reading this strand of thought recently, and haven't yet made the connection to authoritarianism. I get that they reject modern liberalism, democracy, and the idea that everyone has equal potential, but do they also reject the idea of meritocracy and the notion that everyone ought to have equal opportunity? Do they also believe that an elite group should have large amounts of power over the majority? And do they also believe that different people have (non-minor) differences in intrinsic value as well as ability?
EDIT: thoughts after reading the sources you linked:
Perhaps an anti-egalitarian can be thought of as one who does not value equality as an intrinsic moral good? Even if everyone is valued equally, the solution that delivers the most total satisfaction does not necessarily involve everyone being satisfied in roughly equal measure.
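A toy illustration of that (my numbers, chosen only for arithmetic convenience): give persons 1 and 2 the utility functions u1(x) = 2√x and u2(x) = √x over shares of a unit resource, and maximize the unweighted sum u1(x1) + u2(1 − x1). Setting marginal utilities equal, 1/√x1 = 1/(2√(1 − x1)), gives x1 = 4/5 and x2 = 1/5. Both people count exactly equally in the objective, yet the optimum hands one of them four times the resources and leaves satisfaction unequal too: u1 ≈ 1.79 versus u2 ≈ 0.45.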
Basically, on Haidt's moral axes, the anti-egalitarians would score highly only on Harm avoidance, and low on everything else...
...actually, come to think of it, that's almost how I scored when I took it a few years ago: 3.7 on Harm, 2.0 on Fairness, 0 on everything else.
You've given yourself the label "authoritarian". If you took Haidt's test, did you score high on the Authority axis? (Just trying to pin down what exactly is meant by authoritarianism in this case.)
Can't speak for others, but here's my take:
s/they/you:
I think it's more important to look at absolute opportunity than relative opportunity.
That said, in my ideal world we all grow up together as one big happy family. (with exactly the right amount of drama)
Yes, generally. Note that everything can be cast in a negative light by (in)appropriate choice of words.
The elites need not be human, or the majority need not be human.
My ideal world has a nonhuman absolute god ruling all, a human nobility, and nonhuman servants and NPCs.
Yes, people currently have major differences in moral value. This may or may not be bad, I'm not sure.
But again, I'm more concerned with people's absolute moral value, which should be higher. (And just saying "I should just value everyone more", i.e. "lol I'll multiply everyone's utility by 5", doesn't do anything.)
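Why the multiplication does nothing, in standard vNM terms: a utility function is only defined up to positive affine transformation. If u'(x) = a·u(x) + b with a > 0, then for any two options X and Y, E[u'(X)] > E[u'(Y)] exactly when E[u(X)] > E[u(Y)], so an agent maximizing u' makes the same choices as one maximizing u. Multiplying everyone's utility by 5 changes no decision anywhere.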
Dunno, you'd have to test them.
My general position on such systems is that all facets of human morality are valuable, and people throw them out/emphasize them for mostly signalling/memetic-infection reasons.
All of those axes sound really important.
Haven't taken the test. Self-describing as an "authoritarian" can only really be understood in the wider social context where authority and hierarchy have been devalued.
So a more absolute description would be that I recognize the importance of strong central coordination in doing things (empirical/instrumental), and find such organization to have aesthetic value. For example, I would not want to organize my mind as a dozen squabbling "free" modules, and I think communities of people should be organized around strong traditions, priests, and leaders.
Of course I also value people having autonomy and individual adventure.
I think that's really the crux of it. When someone says they are authoritarian, that doesn't necessarily have anything to do with present/past authoritarian regimes.
Isn't that a bit recursive? Human morality defines what is valuable; saying that a moral is valuable implies some sort of meta-morality. If someone doesn't assign "respect for authority" intrinsic value (though it may have utility in furthering other values), isn't that... just the way it is?
I think everyone's ideal world is one where all our actions are directed by a being with access to the CEV of humanity (or, more accurately, each person wants humanity to be ruled by their own CEV). On LessWrong, that's not even controversial - it would be by definition the pinnacle of rational behavior.
The question is intended to be answered with realistic limitations in mind. Given our current society (or maybe our society within 50 years, assuming none of that "FOOM" stuff happens), is there a way to bring about a safe, stable authoritarian society which is better than our own? There's no point to a political stance unless it has consequences for what actions one can take in the short term.
Sounds pretty dangerous.
No. Generally people are confused about morality, and such statements are optimized for signalling rather than correctness with respect to their actual preferences.
For example, I could say that I am a perfectly altruistic utilitarian. This is an advantageous thing to claim in some circles, but it is also false. I claim that the same pattern applies to non-authoritarianism, having been there myself.
So when I say "all of it is valuable", I am rejecting the pattern "some people value X, but they are confused and X is not real morality; I only value Y and Z". That is a common position to take with respect to the authority and purity axes on Haidt, because those are supposedly a difference between liberals and conservatives, hence ripe for in-group signalling.
If some people value X, consider the proposition that it is actually valuable. Sometimes it isn't, and they're just weird, but that's rare, IMO.
You are asking me to do an extremely large computational project (designing not only a good human society, but a plausible path to it), based on assumptions I don't think are realistic. I don't have time for that. Some people do though:
Moldbug has written plenty about how such a society could function and come about (the Reaction).
Yvain has also recently laid out his semi-plausible authoritarian human society (Raikoth): eugenics, absolute rule by computer, omnipresent surveillance, etc.
I expect More Right will have some interesting discussion of this as well.
Oh, I didn't mean that I want you to outline a manifesto or plan or anything.
What I meant with my original question was more that if you identify as "authoritarian", it implicitly means that you think it is a goal worth working towards in the real world, rather than a platonic ideal. Obviously, if it were possible to ensure a ruler or ruling class competently served the interests of the people, dictatorship would be the best form of government - but someone who identifies as authoritarian is saying that they believe this can actually happen, and that if history had gone differently and we were under a certain brand of authoritarianism right now, we'd be better off.
Hehe... you'd better expect to save quite a few lives if you want to justify staying alive with that preference set (you have organs that could be generating so much utility for so many people!).
If you cross out " but they are confused and X is not real morality" I guess I'm one of those people - I don't think they are confused about what they value. I just think that I don't share that value. The phrase "real morality" is senseless - I'm not a moral realist.
I suppose I could be confused about my own values, of course. But when I read Haidt's work, I became better able to understand what my conservative friends would think about various situations. It improved my ability to empathize. It wouldn't even have occurred to me to respect authority or purity intrinsically... I used to think they just weren't thinking clearly (whereas now I think it's just a matter of different values).
Enslaving, in the sense of putting people to work without pay, doesn't make much sense in the hypothetical where the marginal value of human labor is effectively zero, right? What would the poor be enslaved to do?
Perhaps a more realistic scary scenario would be this one: http://unqualified-reservations.blogspot.com/2009/11/dire-problem-and-virtual-option.html (essentially: when you're no longer of productive benefit to society, you go to virtual reality / video game heaven).
How scary that proposal sounds might be a matter of debate, though I suppose most folks around here would prefer a more egalitarian scenario where cognitive enhancement is distributed evenly enough that no human is left behind.
For my own part, I'm content to wirehead to the extent that I have confidence that machines are capable of being more productive-to-others than I am along the axes I value being productive-to-others on.
Put differently: I don't seem to care very much whether I am doing important things, as long as important things are getting done at least as effectively as they would be were I doing them.
There's a slight refinement to this in the case where the entities doing the important things are basically like me, since there's a whole question of whether I'm defecting in a Prisoner's Dilemma, but I interpret the connotations of "machine" as implying that this complication doesn't arise here.
Ah, a very interesting point of view. Framing it as a dichotomy between important work and wireheading seems a bit stark, though. Do you mean to include any sort of non-productive fun under the umbrella of wireheading? I usually think of that term as implying only simple, non-complex fun (e.g. the pleasure of orgasm vs. the experience of love and friendship).
This gets difficult to specify, because "productive" and "important" are themselves ill-defined, but I certainly mean to include "virtual reality/video game heaven" within "wireheading", including VR environments populated by virtual people who pretend to love and befriend me. (This is distinguished from actual people who really do love and befriend me, whether they have flesh-and-blood bodies or not.)
That said, I have no idea, ultimately, if I would prefer the continuous-orgasm video game or the fake-love-and-friendship video game... I might well prefer the latter, or to switch back and forth, or something else.
Ah, that clears things up, thanks!
It certainly seems more analogous to welfare or gated communities than a hypothetical "war against the poor" does.