kokotajlod comments on Request for concrete AI takeover mechanisms - Less Wrong

Post author: KatjaGrace 28 April 2014 01:04AM




Comment author: kokotajlod 28 April 2014 10:21:39PM 1 point

Haha, but seriously. The NSA probably meets the technical definition of friendliness, right? If it were given ultimate power, we would have an OK future.

Comment author: Lumifer 28 April 2014 11:50:29PM 0 points

If it were given ultimate power, we would have an OK future.

No, I really don't think so.

Comment author: kokotajlod 29 April 2014 01:37:55AM 1 point

I'm thinking relative to what would happen if we tried to hard-code the AI with a utility function like, e.g., hedonistic utilitarianism. That would be much, much worse than the NSA. The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.

Comment author: ChristianKl 29 April 2014 12:47:13PM 0 points

The NSA does invest money in building artificial intelligence. Having a powerful NSA might increase the chances of UFAIs.

Comment author: Lumifer 29 April 2014 02:22:18AM 0 points

The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.

To quote Orwell: "If you want a vision of the future, imagine a boot stamping on a human face - forever."

That's not an "OK future".

Comment author: kokotajlod 29 April 2014 03:34:05AM 1 point

In the space of possible futures, it is much better than, e.g., tiling the universe with orgasmium. So much better, in fact, that in the grand scheme of things it counts as OK.

Comment author: Lumifer 29 April 2014 05:14:05AM 0 points

I evaluate an "OK future" on an absolute scale, not a relative one.

Relative scales lead you there.