ChristianKl comments on Futarchy and Unfriendly AI - Less Wrong

9 Post author: jkaufman 03 April 2015 09:45PM




Comment author: ChristianKl 04 April 2015 11:19:28AM *  0 points [-]

The development of an evil AI is most definitely an inefficient allocation of society's limited resources.

First, using the term "evil" here is a good way to show that you don't know what you're talking about. We are talking about "unfriendly" AI.

That said, there are reasons to believe that people who build AGI are overoptimistic about their own creations and might think they are producing a useful AGI while actually producing a UFAI. As a result, there is no reason to expect that nobody funds the relevant research.

Comment author: V_V 04 April 2015 12:49:11PM 2 points [-]

First, using the term "evil" here is a good way to show that you don't know what you're talking about. We are talking about "unfriendly" AI.

"Unfriendly" is a tribal signal. The proper term is "unsafe", but I think that "evil" in its standard usage is a better approximation than "unfriendly", as opposed to the non-standard usage of "unfriendly" invented by Yudkowsky.

Comment author: Val 04 April 2015 03:52:13PM *  2 points [-]

I always thought that "evil" implies a malicious intention, while something "unfriendly" does harm but without the intention of doing harm. Compare a standard B-movie rogue robot that hunts humans because of murderous "feelings" it developed out of revenge, fear, envy, or other anthropomorphic qualities, with the paperclip maximizer.

Calling something "evil" applies anthropomorphism to it.

Comment author: ChristianKl 04 April 2015 01:21:48PM 0 points [-]

"Unfriendly" is a tribal signal.

It signals that you are talking about the thing this tribe is talking about.

Comment author: V_V 04 April 2015 01:41:01PM -1 points [-]

No, it's a mere signal of allegiance, which you are using to try to shut up the outgroup.

It's like talking religion with a theist who complains that unless you are referring specifically to Elohim/Jesus/Allah/whatever then you couldn't possibly say anything meaningful about their religion.

Comment author: ChristianKl 04 April 2015 10:57:38PM 1 point [-]

I'm not criticizing semantics out of the context of the argument he makes. It's a strawman to claim that everyone who says "evil AI" has nothing meaningful to say.

He speaks about how it's obvious that nobody funds an evil AI. For some values of "evil" that's true. On the other hand, those aren't the cases we worry about.