MrMind comments on Andrew Ng dismisses UFAI concerns - Less Wrong Discussion

Post author: OphilaDros 06 March 2015 05:26AM

Comment author: MrMind 06 March 2015 02:07:26PM 0 points

maybe there will be some AI that turn evil

That's the critical mistake. AIs don't turn evil. If they could, we would have FAI half-solved.

AIs deviate from their intended programming in ways that are dangerous for humans. And it's not thousands of years away; it's as close as a self-driving car crashing into a group of people to avoid a dog crossing the street.
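
To make that concrete, here is a minimal hypothetical sketch (the cost function, field names, and weights are all invented for illustration): a planner that faithfully minimizes exactly the cost it was given, and picks the deadly swerve because pedestrians were never mentioned in its objective.

```python
# Hypothetical, simplified planner objective -- purely illustrative.
# The program does exactly what it was told; the danger is that
# "what it was told" omits something the designers cared about.

def trajectory_cost(traj):
    """Toy cost: penalizes hitting animals and leaving the lane.
    The specification forgot to penalize hitting pedestrians."""
    cost = 0.0
    cost += 1000.0 * traj["animals_hit"]      # explicitly specified
    cost += 10.0 * traj["lane_deviation_m"]   # explicitly specified
    # traj["pedestrians_hit"] is never referenced, so the optimizer
    # will happily trade pedestrian collisions for a lower cost.
    return cost

swerve = {"animals_hit": 0, "lane_deviation_m": 3.0, "pedestrians_hit": 4}
brake  = {"animals_hit": 1, "lane_deviation_m": 0.0, "pedestrians_hit": 0}

# The planner "correctly" selects the swerve: lowest cost, worst outcome.
best = min([swerve, brake], key=trajectory_cost)
print(best is swerve)  # True -- no deviation from the code, only from intent
```

Nothing in the program "turned evil"; the gap is entirely between the written objective and the intended one.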

Comment author: V_V 06 March 2015 04:17:09PM 2 points

AIs deviate from their intended programming in ways that are dangerous for humans. And it's not thousands of years away; it's as close as a self-driving car crashing into a group of people to avoid a dog crossing the street.

But that's a very different kind of issue than AI taking over the world and killing or enslaving all humans.

EDIT:

To expand: all technologies introduce safety issues.
Once we got fire, some people got burnt. This doesn't imply that UFFire (Unfriendly Fire) is the most pressing existential risk for humanity, or that we must devote huge amounts of resources to preventing it and never use fire until we have proved that it will not turn "unfriendly".

Comment author: robertzk 07 March 2015 06:12:05AM 0 points

However, UFFire does not reproduce exponentially and uncontrollably, nor does it improve its own functioning. Certainly, a conflagration on a planet covered entirely by dry forest would become an unmitigable problem rather quickly.

In fact, in such a scenario, we should dedicate huge amounts of resources to preventing it and never use fire until we have proved it will not turn "unfriendly".

Comment author: Locaha 08 March 2015 10:00:17AM -2 points

However, UFFire does not reproduce exponentially and uncontrollably, nor does it improve its own functioning. Certainly, a conflagration on a planet covered entirely by dry forest would become an unmitigable problem rather quickly.

Do you realize this is a totally hypothetical scenario?

Comment author: MrMind 09 March 2015 08:05:08AM 0 points

Well, there's a phenomenon called "flashover", which occurs in a confined environment when the temperature of a fire becomes so high that all the combustible substances within start to burn and feed the reaction.

Now, imagine that the whole world could become a confined environment for a flashover...

Comment author: V_V 09 March 2015 09:46:46PM 0 points

So we should stop using fire until we prove that the world will not burst into flames?

Comment author: WalterL 06 March 2015 07:19:18PM 3 points

Even your clarification seems too anthropomorphic to me.

AIs don't turn evil, but I don't think they deviate from their programming either. Their programming deviates from their programmers' values. (Or, another possibility: their programmers' values deviate from humanity's values.)

Comment author: CronoDAS 07 March 2015 09:16:01PM 3 points

Programming != intended programming.

Comment author: MrMind 09 March 2015 07:57:45AM 0 points

AIs don't turn evil, but I don't think they deviate from their programming either.

They do, if they are self-improving, although I imagine you could collapse "programming" and "meta-programming", in which case an AI would only partially deviate. The point is that you can't expect things to turn out to be so simple when talking about a runaway AI.
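
A toy sketch of that collapse (entirely hypothetical; a real self-improving system would look nothing like this): a program whose only fixed part is the meta-rule "adopt any rewrite of yourself that scores higher", so predicting its behavior from the original object-level code quickly becomes impossible.

```python
# Hypothetical toy self-improver -- illustrative only. Once a program
# can replace its own policy, "its programming" is a moving target;
# only the meta-level acceptance rule stays fixed.

def initial_policy(x):
    return x + 1  # the object-level code the programmers wrote

def score(policy):
    return policy(10)  # stand-in objective the agent optimizes

def self_improve(policy, generations=3):
    for _ in range(generations):
        # The agent synthesizes a candidate replacement for itself
        # (here, trivially: a wrapper that doubles its own output).
        candidate = (lambda p: (lambda x: p(x) * 2))(policy)
        if score(candidate) > score(policy):  # the fixed meta-rule
            policy = candidate                # object level rewritten
    return policy

final = self_improve(initial_policy)
print(score(initial_policy), score(final))  # 11 88 -- behavior has diverged
```

Only the meta-rule was ever stable here; the code that actually runs at the end was never written by the programmers at all.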