cousin_it comments on Other Existential Risks - Less Wrong

32 Post author: multifoliaterose 17 August 2010 09:24PM


Comment author: Johnicholas 18 August 2010 03:35:55AM 6 points

SIAI's narrow focus on things that "look like HAL" neglects the risks of entities that are formed of humans and computers (and other objects) interacting. These entities already exist, they're already beyond human intelligence, and they're already existential risks.

Indeed, Less Wrong and SIAI are two obvious examples of these entities, and it's not at all clear how to steer them to become Friendly. Increasing individual rationality will help, but we also need to do social engineering - checks and balances and incentives (not just financial, but social incentives such as attention and praise) - and groupware research (e.g. karma and moderation systems, expert aggregation).

Comment author: cousin_it 18 August 2010 07:02:55AM 0 points

What makes you think LW is smarter than a human?

Comment author: Johnicholas 18 August 2010 07:12:29AM 4 points

On some measures (breadth of knowledge, responsiveness at all hours, words-typed-per-month), LW is superhuman. On most other measures, LW can default to using one of its (human) components' capabilities, and thereby achieve human-comparable performance. I admit it has problems with cohesiveness and coherency.