One perhaps useful analogy for super-intelligence going wrong is corporations.
We create corporations to serve our ends. They can do things we cannot do as individuals. But in subtle and not-so-subtle ways, corporations can behave very destructively. One example is the way they pursue profit at the cost, in some cases, of ruining people's lives, damaging the environment, and corrupting the political process.
By analogy, it seems plausible that super-intelligences may behave in ways that are against our interests.
It is not valid to assume that a super-intelligence will be smart enough to discern true human interests, or that it will be motivated to act on this knowledge.
Plausible guess, but actually my error was different: I hadn't noticed the bit of Jacobian's comment you quote there; I read what you wrote and made the mistake of assuming it was correct.
Those words "once you've decided on a course of action" were your words. I just quoted them. It does indeed appear that they don't quite correspond to what Jacobian wrote, and I should have spotted that, but the original misrepresentation of Jacobian's position was yours rather than mine.
(But I should make clear that you misrepresented Jacobian's position by making it look more reasonable and harder for you to attack, so there's something highly creditable about that.)
I am afraid I cannot claim here any particularly noble motives.
In Jacobian's text there are, basically, two decision points: the first is deciding to do good, and the second is deciding on a course of action. You lose empathy in between them. There are (at least) two ways to interpret this. In one, when you decide to "do good", you make just a very generic decision to do some unspecified good; all the actual choices happen at the "course of action" point. In the other, at the first decision point you already decide what particular good you want to work towards, and the second decision point is just the details of implementation.
I didn't want to start dissecting Jacobian's post at this level of detail, so I simplified it by saying that you lose your empathy before making some (but not necessarily all) choices. I don't know whether you want to classify that as "technically incorrect" :-/