Stuart_Armstrong comments on Request for concrete AI takeover mechanisms - Less Wrong

Post author: KatjaGrace 28 April 2014 01:04AM



Comment author: Stuart_Armstrong 28 April 2014 10:39:39AM 6 points

Economic (or other) indispensability: build a world system that depends on the AI to function, and then it has effective control.

Upload people, offering them great advantages in digital form, then eventually turn them all off when there's practically nobody left physically alive.

Cure cancer or something similar, with an infectious drug that discreetly causes sterility and/or death within a few years. Wait.

The "Her" approach: start having multiple deep and meaningful relationships with everyone at once, and gradually eliminate people when they are no longer connected to anyone human.

Use rhetoric and other tricks to increase the chance of xrisk disasters.

Comment author: leplen 29 April 2014 09:26:26PM 4 points

How does it build a world system? What does that even mean?

How does the AI upload people? Is uploading people a plausible technology that scientists expect to have in 15 years?

Curing cancer doesn't really make sense. What is an infectious drug? How are you going to make it through FDA approval?

How is it eliminating people? If it can eliminate them, why bother with the relationship part of things? How does the AI have multiple deep and meaningful relationships with people? Via chatbots? How is it even processing/modelling 3 billion human conversations at a time?

Most xrisk disasters are really bad for the AI. It presumably needs electricity and replacement hardware to operate. If it's just a computer connected to the internet, then it's probably not going to survive a nuclear holocaust much better than the rest of us.