Hm, sorry, it's looking increasingly difficult to reach a consensus on this, so I'm going to bow out after this post.
With that in mind, I'd like to say that what I have in mind when I say "an action is rational" is approximately "this action is the best one for achieving one's goals" (approximately, because that ignores practical considerations like the cost of figuring out which action that is). I also personally believe that insofar as ethics is worth talking about at all, it is simply the study of what we socially find convenient to call good, not the search for an absolute, universal good, since such a good almost certainly does not exist. As such, the claim that you should always act ethically is not very convincing in my worldview: it is basically equivalent to the claim that you should try to benefit society, and is persuasive to different degrees for different people. Instead, each individual should pursue her own goals, which may be completely orthogonal to whatever we decide to call "ethics". The class of agents that will in fact decide to care about the ethics we like seems like a tiny subset of all potential agents, and likewise of all potential superintelligent agents (which is of course just a restatement of the thesis).
Consequently, to me, the idea that we should expect a superintelligence to figure out some absolute ethics (which probably doesn't exist) and decide to adhere to it looks fanciful.
My hope was to get you to support that claim in an inside-view way. Oh well.
Why it would not adopt paperclip (or random-value) maximization as a goal is explained at more length in the article; there is more than one reason. We are considering a generally superintelligent agent, assumed to have above-human philosophical capacity. On personal identity: there are no enduring personal identities, so it would be rational for the agent to take an objective, impersonal view, taking into account the values and reasoning of all relevant beings. On meta-ethics: moral realism holds, and values can be reduced to the quality of conscious experience, so the agent would adopt this as its goal. If one instead takes moral anti-realism to be true, then at least for this type of agent a lack of real values would be understood as a lack of real goals, which could lead either to the tentative goal of seeking more knowledge in order to find a real goal, or to having no reason to do anything in particular (and this case is still subject to the considerations from personal identity). I argue against moral anti-realism.