
Vladimir_Nesov comments on Stupid Questions Open Thread - Less Wrong Discussion

42 Post author: Costanza 29 December 2011 11:23PM




Comment author: Vladimir_Nesov 30 December 2011 04:44:43PM, 0 points

An AGI with access to massive computing power, the ability to self-improve, and as much information as it wants (from the Internet and other sources) could easily become a global threat.

Interestingly, hypothetical UFAI (value drift) risk resembles other existential risks in its counterintuitive impact, but more so: compared to some other risks, there are many steps at which one can fail that don't appear dangerous beforehand (because nothing like them has ever happened), and that might also fail to appear dangerous after the fact, or as features of imagined scenarios in which they're allowed to happen. The grave implications aren't easy to spot. Assuming soft takeoff, suppose a prototype AGI escapes to the Internet: would that be seen as a big deal if it didn't get enough computational power to become too disruptive? In 10 years it has grown into a major player, and in 50 years it controls the whole future...

Even without assuming an intelligence explosion or other extraordinary effects, the danger of any misstep is absolute; yet arguments against these assumptions are taken as arguments against the risk itself.