Gabriel comments on Other Existential Risks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't think that "entities that are formed of humans and computers (and other objects) interacting" is sufficiently specific to be considered a type of existential risk. Any organization can be put into that category and unlike AGI, it's not true that most possible organizations have goal systems indifferent to human morals.
Also, the fact that organizations can be dangerous is well known and there doesn't seem to be a simple solution to that or anything else a small organization could do. The problem isn't about coming up with checks and balances or incentive systems, it's about making people sane enough to use those solutions.
True, but Johnicholas still has a point about "things that look like HAL," namely, that such scenarios present the uFAI risk in an unconvincing manner. To most people, I suspect, a scenario in which individuals and organizations gradually come to depend too much on AI would be more plausible.