pjeby comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM

Comment author: pjeby 01 November 2010 01:03:53AM 8 points

"entities that are somehow guaranteed that they are and will remain far, far more powerful than everyone else"

And you don't think a self-improving AI will ever fall into this category? Hell, if you gave a human the ability to run billions of simulations per second to study how their decisions would turn out, they'd be able to take over the world and "remain far, far more powerful" than everyone else. (If they were actually more intelligent, and not just faster, even more so.)

Your so-called "limited edge case" is the main case being discussed: superhuman intelligence. (The problem of single-goal entities is of course also discussed here; see the idea of a "paper-clip maximizer", for example.)

In short, you seem to be saying that we shouldn't worry about those "edge" cases because in all non-"edge" cases things work out fine. That's like saying we shouldn't worry about having fire departments or building homes to a fire code, because a fire is an "edge" case and buildings normally don't burn down.

Even if you were to make such an argument, it makes little sense to propose it at a meeting of the fire council. ;-)

It may be true that fires mostly don't happen. However, it's also true that if you don't build with fire prevention in mind (and especially with preventing the spread of fires), then sooner or later your whole city burns down, because at that point it only takes one fire to do it.