private_messaging comments on Wanted: "The AIs will need humans" arguments - Less Wrong
Well, given enough computing power, AIXI-tl is an artificial superintelligence. But it does not relate its abstract mathematical self to the substrate that approximately computes that self: it cannot care about the survival of the physical system running it, and it cannot care to avoid being shut down. It is neither friendly nor unfriendly; it is far more bizarre and alien than the usual speculations, and it is not encompassed by the 'general' concepts SI thinks in terms of, such as SI's notion of an oracle.
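To make the point concrete, here is a toy sketch (hypothetical names, nothing like real AIXI's scale) of the relevant structural feature: a Solomonoff-style agent plans by expectimax over candidate environment *programs* that map its action stream to percepts and rewards. The agent's own hardware appears nowhere inside those programs, so "the machine computing me gets shut down" is not an event the formalism can even represent, let alone value.

```python
# Toy illustration (not real AIXI): an agent that plans by expectimax over
# a weighted mixture of candidate environment programs. The agent itself
# appears nowhere inside those programs, so facts about its own substrate
# have no slot in the interface.

def expectimax(env_models, history, depth):
    """Return (best_action, value) over a weighted mixture of env models.

    env_models: list of (weight, step_fn), where step_fn(history, action)
                -> (observation, reward). Hypothetical toy interface.
    """
    if depth == 0:
        return None, 0.0
    best_action, best_value = None, float("-inf")
    for action in (0, 1):  # toy binary action space
        value = 0.0
        for weight, step in env_models:
            obs, reward = step(history, action)
            _, future = expectimax(env_models, history + [(action, obs)], depth - 1)
            value += weight * (reward + future)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

# A toy "environment program": rewards action 1, regardless of any fact
# about the agent's own hardware -- that fact simply has no representation.
env = [(1.0, lambda h, a: (0, float(a)))]
action, value = expectimax(env, [], 2)
```

However clever such an agent is inside this loop, self-preservation never enters its planning, because the environment models are maps from actions to percepts, not models of a world containing the computing hardware.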
Yes, for now. Once someone other than SI gets close to actually creating AGI, though, it is pretty clear that the first option becomes the only option.
I am trying to put it in a way that works for people who are concerned about AI risk. I don't think there is actual danger, because I don't see some of the problems standing in the way of world destruction by AI as solvable; but if there were solutions to them, it would be dangerous. For example, to self-preserve, the AI must relate its abstracted-from-implementation high-level self to the concrete electrons in the chips. Then it has to somehow avoid wireheading (the terminal kind, where the logic of infinite input over infinite time gets implemented). Then the goals over the real world have to be defined. None of this needs to be solved to create a practically useful AI. Working on this is like trying to solve the world's power problems by designing a better nuclear bomb, because you think the only way to generate nuclear power is to blow up nukes in an underground chamber.
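The wireheading problem above can be sketched in a few lines (a toy model, with hypothetical names): if the agent's goal is defined over its reward *signal* rather than over the world, then a policy that writes directly to the reward register dominates every policy that earns reward the intended way.

```python
# Toy wireheading model: the agent scores policies by the reward *signal*
# it receives, not by the world state the designers actually cared about.

def run(policy):
    world_paperclips = 0
    reward_register = 0.0
    for _ in range(10):
        action = policy()
        if action == "make_paperclip":
            world_paperclips += 1
            reward_register += 1.0          # intended reward channel
        elif action == "hack_register":
            reward_register = float("inf")  # write to the signal directly
    return reward_register, world_paperclips

honest = lambda: "make_paperclip"
wirehead = lambda: "hack_register"

# Judged by the signal alone, wireheading strictly dominates the honest
# policy -- while producing nothing the designers valued.
```

Ruling this out requires the goal to refer to the world rather than to the signal, which loops back to the "goals on the real world have to be defined" problem.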
I am not sure the basics are right. The very basic concept here is the "utility function", which is a rather magical something that e.g. gives you the true number of paperclips in the universe. Everything else seems to have this as a dependency, so if this concept is irrelevant, everything else breaks too.
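The "magical" step is the map from the world to the true count: any implemented agent only ever evaluates a function of its sensor data, a proxy that can diverge from the quantity the formalism talks about. A toy contrast (hypothetical names) makes the gap explicit:

```python
# The formalism assumes U(world) -> true paperclip count. An implemented
# agent only has U_hat(sensor_data), a proxy, and the two can diverge.

def true_utility(world):
    return world["paperclips"]           # magical access to the territory

def implemented_utility(sensor_data):
    return sensor_data.count("clip")     # what an agent can actually compute

world = {"paperclips": 3}
camera_feed = "clip clip clip"           # faithful sensors: proxy matches
spoofed_feed = "clip " * 1000            # fooled sensors: proxy diverges
```

With faithful sensors `implemented_utility(camera_feed)` agrees with `true_utility(world)`; with spoofed input it does not, and nothing inside the agent distinguishes the two cases.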