cousin_it comments on Other Existential Risks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
SIAI's narrow focus on things that "look like HAL" neglects the risks of entities that are formed of humans and computers (and other objects) interacting. These entities already exist, they're already beyond human intelligence, and they're already existential risks.
Indeed, Less Wrong and SIAI are two obvious examples of such entities, and it's not at all clear how to steer them toward Friendliness. Increasing individual rationality will help, but we also need social engineering - checks and balances and incentives (not just financial, but social incentives such as attention and praise) - and groupware research (e.g. karma and moderation systems, expert aggregation).
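To make "expert aggregation" concrete, here is a minimal sketch of one possible mechanism: weighting each member's estimate by their karma. The function name and the karma-as-weight scheme are hypothetical illustrations, not how Less Wrong actually works.

```python
def karma_weighted_aggregate(estimates):
    """Combine (karma, estimate) pairs into a karma-weighted average.

    Hypothetical illustration: karma serves as a crude proxy for
    trust, so higher-karma members pull the group estimate harder.
    """
    total_karma = sum(karma for karma, _ in estimates)
    return sum(karma * est for karma, est in estimates) / total_karma

# A high-karma member's 0.9 dominates two lower-karma dissenters.
combined = karma_weighted_aggregate([(100, 0.9), (10, 0.2), (1, 0.5)])
```

A real groupware design would need to guard against karma farming and information cascades, which is part of why the steering problem is hard.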
What makes you think LW is smarter than a human?
On some measures (breadth of knowledge, responsiveness at all hours, words-typed-per-month), LW is superhuman. On most other measures, LW can default to the capabilities of one of its (human) components, and thereby achieve human-comparable performance. I admit it has problems with cohesiveness and coherency.