Vaniver comments on New report: Intelligence Explosion Microeconomics - Less Wrong
I see no reason to suspect the space of optimization options contains value imperatives, assuming the AI is guarded against the equivalent of SQL injection attacks.
Humanity seems to be evolving towards compassion because the causal factors increasing compassion are on average profitable for individual humans with those factors. The easy example of this is stable, strong police forces routinely hanging murderers, instead of those murderers profiting from their actions. If you don't have an analogue of the police, then you shouldn't expect the analogue of the reduction in murders.
(I should remark that I very much like the way this report is focused; I think that trying to discuss causal models explicitly is much better than trying to make surface-level analogies.)
At the very least, using a page break rather than a bunch of ellipses seems better.
I was simply paraphrasing David Pearce; it's not my opinion, so there's no point arguing with me. That said, your argument seems misdirected in another way: the imperative against suffering applies to people and animals whose welfare is in no way beneficial, and is sometimes even detrimental, to those exhibiting compassion.
Yeah, but they are losing compassion for other things (unborn babies, gods, etc.). What reason is there to believe there is a net gain in compassion, rather than simply a shift in the things to be compassionate towards?
EDIT: This should have been directed towards Vaniver rather than shminux.
An expanding circle of empathetic concern needn't reflect a net gain in compassion. Naively, one might imagine that e.g. vegans are more compassionate than vegetarians, but I know of no evidence that this is the case. Tellingly, female vegetarians outnumber male vegetarians by around 2:1, yet male and female vegans occur in roughly equal numbers. So an expanding circle may reflect our reduced tolerance of inconsistency / cognitive dissonance: men are more likely to be utilitarian hyper-systematisers.
Does your source distinguish between motivations for vegetarianism? It's plausible that the male:female vegetarianism rates are instead motivated by (e.g.) culture-linked diet concerns -- women adopt restricted diets of all types significantly more than men -- and that ethically motivated vegetarianism occurs at similar rates, or that self-justifying ethics tend to evolve after the fact.
Nornagest, fair point. See too "The Brain Functional Networks Associated to Human and Animal Suffering Differ among Omnivores, Vegetarians and Vegans": http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010847
Right. What I should have said was:
The growth of science has led to a decline in animism. So in one sense, our sphere of concern has narrowed. But within the sphere of sentience, I think Singer and Pinker are broadly correct. Also, utopian technology makes even the weakest forms of benevolence vastly more effective. Consider, say, vaccination. Even if, pessimistically, one doesn't foresee any net growth in empathetic concern, technology increasingly makes the costs of benevolence trivial.
[Once again, I'm not addressing here the prospect of hypothetical paperclippers - just mind-reading humans with a pain-pleasure (dis)value axis.]
Would this be the same Singer who argues that there's nothing wrong with infanticide?
On (indirect) utilitarian grounds, we may make a strong case that enshrining the sanctity of life in law will lead to better consequences than legalising infanticide. So I disagree with Singer here. But I'm not sure Singer's willingness to defend infanticide as (sometimes) the lesser evil is a counterexample to the broad sweep of the generalisation of the expanding circle. We're not talking about some Iron Law of Moral Progress.
If I recall correctly, Singer's defense is that it's better to kill infants than have them grow up with disabilities. The logic here relies on excluding infants and, to a certain extent, people with disabilities from our circle of compassion.
You may want to look at gwern's essay on the subject. By the time you finish taking into account all the counterexamples, your generalization looks more like a case of cherry-picking examples.
Eugine, are you doing Peter Singer justice? What motivates Singer's position isn't a range of empathetic concern that's stunted in comparison to people who favour the universal sanctity of human life. Rather, it's a different conception of the threshold below which a life is not worth living. We find similar debates over the so-called "Logic of the Larder" for factory-farmed non-human animals: http://www.animal-rights-library.com/texts-c/salt02.htm. Actually, one may agree with Singer - both with his utilitarian ethics and his bleak diagnosis of some human and nonhuman lives - and still argue against his policy prescriptions on indirect utilitarian grounds. But this would take us far afield.
As I understand the common arguments for legalizing infanticide, it involves weighting the preferences of the parents and society more - not a complete discounting of the infant's preferences.
I find it really weird that I don't recall having seen that piece of rhetoric before. (ETA: Argh, dangerously close to politics here. Retracting this comment.)
I wish I could upvote your retraction.
The closest thing I have seen to this sort of idea is this:
http://www.gwern.net/The%20Narrowing%20Circle
Wow, an excellent essay!
If I remember correctly, I started thinking along these lines after hearing Robert Garland lecture on ancient Egyptian religion. As a side note to a discussion about how they had little sympathy for the plight of slaves and those in the lower classes of society (since this was all part of the eternal cosmic order and as it should be), he mentioned that they would likely think that we are the cruel ones, since we don't even bother to feed and clothe the gods, let alone worship them (and the gods, of course, are even more important than mere humans, making our lack of concern all the more horrible).
Any idea where Garland might've written that up? All the books listed in your link sound like they'd be on Greece, not Egypt.
It was definitely a lecture, not a book. Maybe I'll track it down when I get around to Ankifying my Ancient Egypt notes.
It seems beneficial to make sure my understanding of why Pearce's argument fails matches that of others, even if I don't need to convince you that it fails.
I interpret imperatives as "you should X," where the operative word is the "should," even if the content is the "X." It is not at all obvious to me why Pearce expects the "should" to be convincing to a paperclipper. That is, I don't think there is a logical argument from arbitrary premises to adopting a preference for not harming beings that can feel pain, even though the paperclipper may imagine a large number of unconvincing logical arguments whose conclusion is "don't harm beings that can feel pain if it is costless to avoid" on the way to accomplishing its goals.
Perhaps it's worth distinguishing the Convergence vs Orthogonality theses for: (1) biological minds with a pain-pleasure (dis)value axis, and (2) hypothetical paperclippers.
Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone. I'm assuming, controversially, that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah. Hence the scientific view-from-nowhere, i.e. no arbitrarily privileged reference frames.
But what about 2? I confess I still struggle with the notion of a superintelligent paperclipper. But if we grant that such a prospect is feasible and even probable, then I agree the Orthogonality thesis is most likely true.
As mentioned elsewhere in this thread, it's not obvious that the circle is actually expanding right now.
This reads to me as "unless we believe conclusion ~X, a strong case can be made for X," which makes me suspect that I made a parse error.
This is a negative statement: "synthetic superintelligences will not have property A, because they did not come from the savanna." I don't think negative statements are as convincing as positive statements: "synthetic superintelligences will have property ~A, because ~A will be rewarded in the future more than A."
I suspect that a moral "view from here" will be better at accumulating resources than a moral "view from nowhere," both now and in the future, for reasons I can elaborate on if they aren't obvious.
There is no guarantee that greater perspective-taking capacity will be matched with equivalent action. But presumably greater empathetic concern makes such action more likely. [cf. Steven Pinker's "The Better Angels of Our Nature". Pinker aptly chronicles e.g. the growth in consideration of the interests of nonhuman animals; but this greater concern hasn't (yet) led to an end to the growth of factory-farming. In practice, I suspect in vitro meat will be the game-changer.]
The attributes of superintelligence? Well, the growth of scientific knowledge has been paralleled by a growth in awareness - and partial correction - of all sorts of cognitive biases that were fitness-enhancing in the ancestral environment of adaptedness. Extrapolating, I was assuming that full-spectrum superintelligences would be capable of accessing and impartially weighing all possible first-person perspectives and acting accordingly. But I'm making a lot of contestable assumptions here. And see too the perils of: http://en.wikipedia.org/wiki/Apophatic_theology