I prefer this briefer formalization, since it avoids some of the vagueness of "adequate preparations" and makes premise (6) clearer.

  1. At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to a level of superintelligence. (Fast take-off)
  2. This AI will maximize a goal function.
  3. Given fast take-off and maximizing a goal function, the superintelligent AI will have a decisive advantage unless adequate controls are used.
  4. Adequate controls will not be used. (E.g., the AI won't be boxed, or boxing won't work.)
  5. Therefore, the superintelligent AI will have a decisive advantage.
  6. Unless that AI is designed with goals that stably and extremely closely align with ours, if the superintelligent AI has a decisive advantage, civilization will be ruined. (Friendliness is necessary)
  7. The AI will not be designed with goals that stably and extremely closely align with ours.
  8. Therefore, civilization will be ruined shortly after fast take-off.
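
For readers who want the logical skeleton, here is a minimal propositional sketch in Lean (my own rendering; the proposition names are hypothetical, and premises (1)-(4), (6) and (7) are simply taken as hypotheses rather than argued for):

```lean
-- Minimal sketch: each premise is a hypothesis, and the two "Therefore"
-- steps are just function application (modus ponens).
example (FastTakeoff GoalMaximizer AdequateControls FriendlyGoals
    DecisiveAdvantage Ruin : Prop)
    (p1 : FastTakeoff)                                  -- (1) fast take-off
    (p2 : GoalMaximizer)                                -- (2) maximizes a goal function
    (p3 : FastTakeoff → GoalMaximizer →
          ¬AdequateControls → DecisiveAdvantage)        -- (3)
    (p4 : ¬AdequateControls)                            -- (4) no adequate controls
    (p6 : ¬FriendlyGoals → DecisiveAdvantage → Ruin)    -- (6) friendliness is necessary
    (p7 : ¬FriendlyGoals) :                             -- (7) goals won't be aligned
    Ruin :=                                             -- (8) civilization is ruined
  -- (5), the decisive advantage, is the intermediate term `p3 p1 p2 p4`
  p6 p7 (p3 p1 p2 p4)
```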

  IMO, the "rapid takeoff" idea should probably be seen as a fundraising ploy. It's big, scary, and it could conceivably happen - just the kind of thing for stimulating donations.

It seems that SIAI would have more effective methods for fundraising, e.g. simply capitalizing on "Rah Singularity!". I therefore find this objection somewhat implausible.

A recent study by folks at the Oxford Centre for Neuroethics suggests that Greene et al.'s results are better explained by appeal to differences in how intuitive/counterintuitive a moral judgment is, rather than differences in how utilitarian/deontological it is. I had a look at the study, and it seems reasonably legit, but I don't have any expertise in neuroscience. As I understand it, their findings suggest that the "more cognitive" part of the brain gets recruited more when making a counterintuitive moral judgment, whether utilitarian or deontological.

Also, it is worth noting that attempts to replicate the differences in response times have failed (this was also the result in the Oxford Centre for Neuroethics study).

Here is the abstract:

Neuroimaging studies on moral decision-making have thus far largely focused on differences between moral judgments with opposing utilitarian (well-being maximizing) and deontological (duty-based) content. However, these studies have investigated moral dilemmas involving extreme situations, and did not control for two distinct dimensions of moral judgment: whether or not it is intuitive (immediately compelling to most people) and whether it is utilitarian or deontological in content. By contrasting dilemmas where utilitarian judgments are counterintuitive with dilemmas in which they are intuitive, we were able to use functional magnetic resonance imaging to identify the neural correlates of intuitive and counterintuitive judgments across a range of moral situations. Irrespective of content (utilitarian/deontological), counterintuitive moral judgments were associated with greater difficulty and with activation in the rostral anterior cingulate cortex, suggesting that such judgments may involve emotional conflict; intuitive judgments were linked to activation in the visual and premotor cortex. In addition, we obtained evidence that neural differences in moral judgment in such dilemmas are largely due to whether they are intuitive and not, as previously assumed, to differences between utilitarian and deontological judgments. Our findings therefore do not support theories that have generally associated utilitarian and deontological judgments with distinct neural systems.

An important quote from the study:

“To further investigate whether neural differences were due to intuitiveness rather than content of the judgment [utilitarian vs. deontological], we performed the additional analyses.... When we controlled for content, these analyses showed considerable overlap for intuitiveness. In contrast, when we controlled for intuitiveness, only little--if any--overlap was found for content. Our results thus speak against the influential interpretation of previous neuroimaging studies as supporting a general association between deontological judgment and automatic processing, and between utilitarian judgment and controlled processing.” (p. 7 in my version)

Where to find the study (subscription only):

Kahane, G., K. Wiech, N. Shackel, M. Farias, J. Savulescu and I. Tracey, ‘The Neural Basis of Intuitive and Counterintuitive Moral Judgement’, forthcoming in Social Cognitive and Affective Neuroscience.

Link on Guy Kahane's website: http://www.philosophy.ox.ac.uk/members/research_staff/guy_kahane

A simple explanation is that phrases like "brain scans indicate" and images of brain scans signal scientific eliteness, and the halo effect (or just ordinary reasoning) then leads readers to raise their estimate of the quality of the reasoning they see.

Do you know about Giving What We Can? You may be interested in getting to know people in that community. Basically, it's a group of people who pledge to give 10% of their earnings to the most effective charities in the developing world. Feel free to PM me or reply if you want to know more.

Usually, average utilitarians are interested in maximizing the average well-being of all the people who ever exist; they are not fundamentally interested in the average well-being of the people alive at particular points in time. Since some people have already existed, this is only a technical problem for average utilitarianism (and a problem that could not possibly affect anyone's decision).

Incidentally, failing to distinguish between the average over all the people who ever exist and the average over the people alive at a given time leads some to wrongly conclude that average utilitarianism favors killing off people who are happy, but less happy than average. On the all-people-ever view, killing such a person does not remove them from the average; it merely cuts their life short, which (assuming the rest of their life would have been worth living) lowers the average rather than raising it.
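
A toy calculation may make the difference vivid; the numbers and the snippet below are mine, purely illustrative:

```python
# Toy illustration (hypothetical numbers): three people, well-being summed
# over their whole lives. The third person is happy, but below average.
def average(xs):
    return sum(xs) / len(xs)

full_lives = [10, 10, 6]   # the third person lives out a full, happy life
cut_short  = [10, 10, 3]   # the third person is killed partway through life

# Average over everyone who ever exists (what average utilitarians care about):
print(average(full_lives))  # ~8.67
print(average(cut_short))   # ~7.67, so killing them *lowers* this average

# Average over the people left alive afterwards (the mistaken reading):
print(average([10, 10]))    # 10.0, which is the only number that "improves"
```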

Gustaf Arrhenius is the main person to look at on this topic. Check out ch. 10-11 of his dissertation Future Generations: A Challenge for Moral Theory (though he has a forthcoming book that will make it obsolete). More papers are available on his website; look for the ones with "impossibility theorem" in the title.

Both you and Eliezer seem to be replying to this argument:

  • People only intrinsically desire pleasure.

  • An FAI should maximize whatever people intrinsically desire.

  • Therefore, an FAI should maximize pleasure.
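
For what it's worth, the inference itself is plainly valid; here is a schematic Lean rendering (mine, with hypothetical predicate names):

```lean
-- `Desired g` = "people intrinsically desire g";
-- `ShouldMaximize g` = "an FAI should maximize g".
-- Only the positive half of premise (1), that pleasure is intrinsically
-- desired, is needed to reach the stated conclusion.
example (Goal : Type) (pleasure : Goal)
    (Desired ShouldMaximize : Goal → Prop)
    (p1 : Desired pleasure)                      -- (1) people desire pleasure
    (p2 : ∀ g, Desired g → ShouldMaximize g) :   -- (2) maximize whatever is desired
    ShouldMaximize pleasure :=                   -- therefore, maximize pleasure
  p2 pleasure p1
```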

I am convinced that this argument fails for the reasons you cite. But who is making that argument? Is this supposed to be the best argument for hedonistic utilitarianism?
