XiXiDu comments on Risks from AI and Charitable Giving - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (126)
I believe he is suggesting that once you reach a certain plateau, intelligence hits diminishing returns. Would Marilyn vos Savant be proportionally more likely to take over the world, if she tried, than a 115-IQ individual?
Some anecdotal evidence:
Is there evidence that a higher IQ is useful beyond a certain level? The question is not just whether it is useful, but whether it would be worth the effort to amplify your intelligence to that point, given that your goal is to overpower lower-IQ agents. Would a change in personality, more data, a new pair of sensors, or some weapons be more useful? If so, would an expected utility maximizer pursue intelligence amplification?
(A marginal note: bigger is not necessarily better.)
Sure. She's demonstrated that she can communicate successfully with millions and handle her own affairs well, generally winning at life. This is comparable to, say, Ronald Reagan's qualifications. I'd have no qualms asserting she'd be more likely to take over the world than a baseline 115-IQ person.
I upvoted for the anecdote, but remember that you're referring to von Neumann, who devised both the basic architecture of computers and the theory of self-replicating machines. I am not qualified to judge whether those are as original as relativity, but they are certainly big.