PhilGoetz comments on SIAI - An Examination - Less Wrong

143 Post author: BrandonReinhart 02 May 2011 07:08AM


Comment author: jimrandomh 05 May 2011 01:19:37PM 1 point [-]

The problem is if one organisation with dubious values gets far ahead of everyone else. That situation is likely to be the result of keeping secrets in this area.

It's likely to be the result of organizations with dubious values keeping secrets in this area. The good guys being open doesn't make it better, it makes it worse, by giving the bad guys an asymmetric advantage.

Comment author: PhilGoetz 15 May 2011 04:38:55AM *  3 points [-]

In this case, I'm less afraid of "bad guys" than I am of "good guys" who make mistakes. The bad guys just want to rule the Earth for a little while. The good guys want to define the Universe's utility function.

Comment author: timtyler 15 May 2011 06:53:40PM *  0 points [-]

I'm less afraid of "bad guys" than I am of "good guys" who make mistakes.

Looking at the history of accidents with machines, most seem to be automobile accidents. Medical accidents are number two, I think.

In both cases, technology that proved dangerous was used deliberately - before the relevant safety features could be added - because of the benefits it gave in the meantime. It seems likely that we will see more of that, in conjunction with the overall trend towards increased safety.

My position on this is the opposite of yours: I think the individual risks from a machine intelligence working properly for someone else are probably greater than the risks from an accident. Both scenarios are in play, though.