It kind of seems like at the moment, you mainly want to find post-hoc reasons why the exercise was "useful".
I did use all of those reasons to justify doing the exercise beforehand. But I've noticed myself repeating them afterwards to make myself feel more justified. (It's also possible that my primary motivation in the first place was the social-skill-development one.)
In any case, I think your recommendations for how to proceed are good ones.
Your proposals are the kind of strawman utilitarianism that turns out to be both wrong and stupid, for several reasons.
Also, I don't think you understand what the SIAI argues an unFriendly intelligence would do if programmed to maximize, say, the personal wealth of its programmers. Short version: this would be suicide or worse, in terms of what the programmers would actually want. The point at which smarter-than-human AI could be successfully abused by a selfish few comes after the problem of Friendliness has been solved, not before.
Ah, another point about maximising. What if the AI uses the CEV of the programmers or the corporation? In other words, what if it's programmed to maximise their wealth in the way they would actually want? Solving that problem is a subset of solving Friendliness.