David_Gerard comments on Request for concrete AI takeover mechanisms - Less Wrong

Post author: KatjaGrace 28 April 2014 01:04AM




Comment author: Lumifer 28 April 2014 04:50:16PM 19 points

Um. This looks like a request for scary stories. As in, like, "Let's all sit in the dark with only the weak light coming from our computer screens and tell each other scary tales about how a big bad AI can eat us".

Without any specified constraints, you are basically asking for horror sci-fi short stories, and if that's what you want, you should just say so.

If you actually want analysis, you need to start with at least a couple of pages describing the level of technology you assume (both available and within easy reach), the AI's requirements (e.g. in terms of energy and computing substrate), its motivations (malevolent, wary, naive, etc.), and so on.

Otherwise it's just an underpants gnomes kind of a story.

Comment author: kokotajlod 28 April 2014 06:38:19PM 5 points

Perhaps that's exactly what this is. Perhaps that is all MIRI wants from us right now. As Mestroyer said, maybe MIRI wants to be able to spin a plausible story for the purpose of convincing people, not for the purpose of actually predicting what would happen.

Comment author: Lumifer 28 April 2014 06:54:50PM 5 points

maybe MIRI wants to be able to spin a plausible story for the purpose of convincing people

So, to give a slightly uncharitable twist to it, we are asked to provide feedstock material for a Dark Arts exercise? X-D

Comment author: private_messaging 05 May 2014 04:47:38AM 3 points

Yeah.

I propose we write vintage stories instead:

It's 1920, and the AI earns money by doing arithmetic over the phone. No human computer - not even one with a slide rule! - can compete with the AI, and so it ends up doing all the financial calculations for big companies, taking over the world.

This 1920s AI takes over the world in exactly the same way as the OP's chemistry-simulating AI example does (or the AI from any other such scary story):

By doing something that the underlying technologies behind the AI would enable on their own, without the need for any AI.

Far enough in the future, there will be products that are to today what today's spreadsheet applications are to the 1920s. For any such product, you can make up a scary story about how the AI does the job of that product and becomes immensely powerful.

Comment author: Vulture 29 April 2014 02:58:07AM 4 points

I think the issue is that a lot of casual readers (/s/listeners, or whatever) of MIRI's arguments about FAI threat get hung up on post- or mid-singularity AI takeover scenarios simply because they're hard to "visualize", having lots of handwavey free parameters like "technology level". So even if the examples produced here don't necessarily fill in highly plausible values for the free parameters, they can help less-imaginative casual readers visualize an otherwise abstract and hard-to-follow step in MIRI's arguments. More rigorous filling-in of the parameters can occur later, or at a higher level.

That's all assuming that this is being requested for the purposes of popular persuasive materials. I think the MIRI research team would be more specific, and/or could come up with such things more easily on their own, if they needed scenarios for serious modeling or somesuch.

Comment author: Yosarian2 01 May 2014 09:33:22PM 1 point

Eh. It's not unusual for the government to get experts together and ask, in a general sense, for worst-case scenario possible disaster situations, with the intent of then working to reduce those risks.

Open-ended brainstorming about some potential AI risk scenarios that could happen in the near future might be useful, if the overall goal of MIRI is to reduce AI risk.

Comment author: Lumifer 02 May 2014 12:05:55AM 0 points

It's not unusual for the government to get experts together and ask in a general sense for worst-case scenario possible disaster situations

MIRI is not the government, LW is not a panel of experts, and such analyses generally start with a long list of things they are conditional on.

potential AI risk scenarios that could happen in the near future

No AI risk scenarios are going to happen in the near future.