pjeby comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
It really doesn't take that much human-level intelligence to change how things are done -- all it takes is a lack of attachment to the current ways.
And that's perhaps the biggest "natural resource" an AI has: the lack of status quo bias.
I'm not sure I understand what "shift the comparative advantage" could mean, and I have no idea why this is supposed to be a response to my point.
Maybe I didn't make my point clearly enough. My contention is that even if an AI is better at absolutely everything than a human being, it could still be better off trading with human beings for certain goods, for the simple reason that it can't do everything, and in such a scenario both human beings and the AI would get gains from trade.
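The gains-from-trade claim above is just the classical comparative-advantage arithmetic, which a small numeric sketch can make concrete. All numbers below are made up for illustration; the point is only that even when one party has an absolute advantage in both goods, specializing along comparative advantage leaves more total output than self-sufficiency.

```python
# Hypothetical per-hour outputs: the AI is strictly better at both goods.
ai    = {"research": 100, "widgets": 10}
human = {"research": 2,   "widgets": 1}

# Opportunity cost of one widget, measured in forgone research.
ai_cost    = ai["research"] / ai["widgets"]        # 10 research per widget
human_cost = human["research"] / human["widgets"]  # 2 research per widget
assert human_cost < ai_cost  # humans hold the comparative advantage in widgets

hours = 10  # each party has the same fixed time budget

# Self-sufficiency: each party splits its hours evenly between the goods.
solo_research = (ai["research"] + human["research"]) * hours / 2   # 510.0
solo_widgets  = (ai["widgets"] + human["widgets"]) * hours / 2     # 55.0

# Trade: humans specialize fully in widgets; the AI makes up only the
# remaining widgets needed to match the self-sufficiency total, and
# spends every freed hour on research.
ai_widget_hours = (solo_widgets - human["widgets"] * hours) / ai["widgets"]
spec_research = ai["research"] * (hours - ai_widget_hours)         # 550.0
spec_widgets  = human["widgets"] * hours + ai["widgets"] * ai_widget_hours

# Same widget output, strictly more research: both sides can gain from trade.
assert spec_widgets == solo_widgets
assert spec_research > solo_research
print(solo_research, spec_research)  # 510.0 550.0
```

The mechanism is exactly the one in the comment above: because the humans' opportunity cost per widget is lower than the AI's, every widget shifted onto humans frees AI hours worth more research than the humans could have produced themselves. This holds regardless of how large the AI's absolute advantage is, as long as its time is finite.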
As Nesov points out, if the AI has the option of, say, converting human beings into computational substrate and using them to simulate new versions of itself, then this ceases to be relevant.
I don't understand what you're arguing for. That people become better off doing something different doesn't necessarily imply that they become obsolete, or even that they can't continue doing the less efficient thing.