Giles comments on [draft] Concepts are Difficult, and Unfriendliness is the Default: A Scary Idea Summary
Is it worth thinking about not just a single AGI system, but technological development in general? Here is an outline of an argument; the full argument would require a lot of filling in.
Definitions:
Someone creates an AGI. Then one of the following is true:
1. The AGI becomes a singleton. This isn't a job we would trust to any current human, so for it to be safe the AGI would need not just human-level ethics but truly exceptional ethics. This is where arguments about the fragility of value and Kaj_Sotala's document come in.
2. The AGI doesn't become a singleton but creates another AGI that does. This case can be rolled into 1 (if we don't distinguish between "creates" and "becomes") or into 3 (if we don't distinguish between humans and AGIs acting as programmers).
3. The AGI doesn't become a singleton and doesn't create one either. Then we simply wait for someone to develop the next AGI.
Notes on point 3: