XiXiDu comments on The genie knows, but doesn't care - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
So you believe that "understanding" is an all-or-nothing capability? I never intended to use "understanding" like that. My use of the term is such that if my speech-recognition software correctly transcribes 98% of what I say, then it is better at understanding how certain sounds relate to certain strings of characters than software that correctly transcribes only 95%.
One enormous step or a huge number of steps? If the former, what makes you think so? If the latter, then at what point do better versions of Siri start acting in catastrophic ways?
Most of what humans understand was provided by other humans, who themselves received a cruder version from still other humans.
If an AI is not supposed to take over the world, then from the perspective of humans it is a mistake for it to take over the world: humans got something wrong in the AI's design if it does so. Now if the AI needs to solve a minimum of N problems correctly in order to take over the world, this means it succeeded N times at being generally intelligent in the service of a stupid goal. The question that arises here is whether it is more likely for humans to build an AI that works perfectly well along a number of dimensions while doing a stupid thing, than an AI that fails at doing that stupid thing because it does other stupid things as well.
Sure, I do not disagree with this at all. AI will very likely lead to catastrophic events. I merely disagree with the dumb superintelligence scenario.
In other words, humans are likely to fail at AI design in such a way that the result works perfectly well at doing something catastrophic.
I certainly do not deny that general AI is extremely dangerous in the hands of unfriendly humans, or that only a friendly AI that takes over the world could ultimately prevent a catastrophe. What I am rejecting is the dumb superintelligence scenario.