robertzk comments on Andrew Ng dismisses UFAI concerns - Less Wrong

3 Post author: OphilaDros 06 March 2015 05:26AM




Comment author: V_V 06 March 2015 04:17:09PM 2 points

AIs deviate from their intended programming in ways that are dangerous for humans. And it's not thousands of years away; it's as near as a self-driving car crashing into a group of people to avoid a dog crossing the street.

But that's a very different kind of issue than AI taking over the world and killing or enslaving all humans.

EDIT:

To expand: all technologies introduce safety issues.
Once we got fire, some people got burnt. This doesn't imply that UFFire (Unfriendly Fire) is the most pressing existential risk for humanity, or that we must devote a huge amount of resources to preventing it and never use fire until we have proved that it will not turn "unfriendly".

Comment author: robertzk 07 March 2015 06:12:05AM 0 points

However, UFFire does not uncontrollably exponentially reproduce or improve its functioning. Certainly a conflagration on a planet covered entirely by dry forest would be an unmitigatable problem rather quickly.

In fact, in such a scenario, we should dedicate a huge amount of resources to preventing it, and never use fire until we have proved it will not turn "unfriendly".

Comment author: Locaha 08 March 2015 10:00:17AM -2 points

However, UFFire does not uncontrollably exponentially reproduce or improve its functioning. Certainly a conflagration on a planet covered entirely by dry forest would be an unmitigatable problem rather quickly.

Do you realize this is a totally hypothetical scenario?