
TheAncientGeek comments on Summoning the Least Powerful Genie - Less Wrong Discussion

-1 Post author: Houshalter 16 September 2015 05:10AM


Comments (48)


Comment author: TheAncientGeek 16 September 2015 08:39:10PM *  1 point [-]

Non-agents aren't all that mysterious. We can already build non-agents. Google is a non-agent.

Compare: safe (in the FAI sense) computer programs aren't that mysterious. We can already build safe computer programs. Android is a safe computer program.

Do you have a valid argument that nonagentive programmes would be dangerous? Because saying "it would agentively do X" isn't a valid argument. Pointing out the hidden pitfalls of such programmes is something MIRI could usefully do. An unargued belief that everything is dangerous is not useful.

Well, who cares if it doesn't do anything unless prompted, if it takes over the universe when prompted to answer a question?

Oh, you went there.

Well: how likely is an AI designed to be nonagentive as a safety feature to have that particular failure mode?

And if you can rigorously tell it not to do that, you've already solved FAI.

You may have achieved safety, but it has nothing to do with "achieving FAI" in the MIRI sense of hardcoding the totality of human value. The whole point is that it is much easier, because you are just not building in agency.

Comment author: lmm 17 September 2015 08:11:24PM 0 points [-]

A program designed to answer a question necessarily wants to answer that question. A superintelligent program trying to answer that particular question runs the risk of acting as a paperclip maximizer.

Suppose you build a superintelligent program that is designed to make precise predictions, by being more creative and better at prediction than any human. Why are you confident that one of the creative things this program does to make itself better at predictions isn't turning the matter of the Earth into computronium as step 1?

Comment author: Lumifer 17 September 2015 08:34:50PM 2 points [-]

A program designed to answer a question necessarily wants to answer that question.

I don't think my calculator wants anything.

Comment author: lmm 18 September 2015 08:24:43PM 1 point [-]

Does an amoeba want anything? Does a fly? A dog? A human?

You're right, of course, that we have better models for a calculator than as an agent. But that's only because we understand calculators and they have a very limited range of behaviour. As a program gets more complex and creative it becomes more predictive to think of it as wanting things (or rather, the alternative models become less predictive).

Comment author: Lumifer 18 September 2015 08:38:25PM *  0 points [-]

Notice the difference (emphasis mine):

A program designed to answer a question necessarily wants to answer that question

vs

...it becomes more predictive to think of it as wanting things

Comment author: VoiceOfRa 20 September 2015 08:42:13PM 2 points [-]

Well, the fundamental problem is that LW-style qualia-free rationalism has no way to define what the word "want" means.

Comment author: lmm 20 September 2015 06:40:43PM -1 points [-]

Is there a difference between "x is y" and "assuming that x is y generates more accurate predictions than the alternatives"? What else would "is" mean?

Comment author: Lumifer 21 September 2015 03:08:49PM 1 point [-]

Is there a difference between "x is y" and "assuming that x is y generates more accurate predictions than the alternatives"? What else would "is" mean?

<boggle> Are you saying the model with the currently-best predictive ability is reality??

Comment author: lmm 25 September 2015 06:51:48AM -1 points [-]

Not quite - rather the everyday usage of "real" refers to the model with the currently-best predictive ability. http://lesswrong.com/lw/on/reductionism/ - we would all say "the aeroplane wings are real".

Comment author: Lumifer 25 September 2015 02:40:37PM *  1 point [-]

rather the everyday usage of "real" refers to the model with the currently-best predictive ability

Errr... no? I don't think this is true. I'm guessing that you want to point out that we don't have direct access to the territory and that maps is all we have, but that's not very relevant to the original issue of replacing "I find it convenient to think of that code as wanting something" with "this code wants" and insisting that the code's desires are real.

Anthropomorphization is not the way to reality.

Comment author: TheAncientGeek 17 September 2015 09:50:24PM *  0 points [-]

A program designed to answer a question necessarily wants to answer that question. A superintelligent program trying to answer that particular question runs the risk of acting as a paperclip maximizer.

What does that mean? That it's necessarily satisfying a utility function? It isn't, as Lumifer's calculator shows.

Suppose you build a superintelligent program that is designed to make precise predictions, by being more creative and better at predictions than any human would. Why are you confident that one of the creative things this program does to make itself better at predictions isn't turning the matter of the Earth into computronium as step 1?

I can be confident that nonagents won't do agentive things.

Comment author: lmm 18 September 2015 08:25:30PM 0 points [-]

Why are you so confident your program is a nonagent? Do you have some formula for nonagent-ness? Do you have a program that you can feed some source code to and it will output whether that source code forms an agent or not?
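[Illustrative aside, not part of the thread: the classic halting-problem reduction sketches why a fully general "feed it source code, get back agent-or-not" checker is a tall order. The names `take_action`, `make_trojan`, and `halting_prog` are invented for this toy example.]

```python
# Toy sketch: any decider for "does running f() ever perform an action?"
# could be used to decide whether an arbitrary program halts, which is
# undecidable in general (Rice's theorem covers semantic properties like this).

actions = []

def take_action():
    # stand-in for "agentive behaviour"
    actions.append("acted")

def make_trojan(prog):
    """Wrap prog so the wrapper acts if and only if prog halts."""
    def trojan():
        prog()          # if prog never returns, take_action is unreachable
        take_action()
    return trojan

# Deciding whether trojan is "agentive" == deciding whether prog halts.
def halting_prog():
    return 42

trojan = make_trojan(halting_prog)
trojan()
print(actions)  # -> ['acted']
```

This doesn't settle whether useful conservative approximations exist, only that an exact, fully general detector runs into standard undecidability.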

Comment author: TheAncientGeek 19 September 2015 08:09:32AM 0 points [-]

It's all standard software engineering.

Comment author: lmm 20 September 2015 06:39:24PM 0 points [-]

I'm a professional software engineer, feel free to get technical.

Comment author: TheAncientGeek 21 September 2015 09:45:10AM 1 point [-]

Have you ever heard of someone designing a nonagentive programme that unexpectedly turned out to be agentive? Because to me that sounds like going into the workshop to build a skateboard and coming out with an F1 car.

Comment author: lmm 25 September 2015 06:48:43AM 1 point [-]

I've known plenty of cases where people's programs were more agentive than they expected. And we don't have a good track record on predicting which parts of what people do are hard for computers - we thought chess would be harder than computer vision, but the opposite turned out to be true.

Comment author: Lumifer 25 September 2015 02:54:08PM 1 point [-]

I've known plenty of cases where people's programs were more agentive than they expected.

"Doing something other than what the programmer expects" != "agentive". An optimizer picking a solution that you did not consider is not being agentive.

Comment author: TheAncientGeek 28 September 2015 02:43:57PM 0 points [-]

I've known plenty of cases where people's programs were more agentive than they expected.

I haven't: have you any specific examples?

Comment author: ike 16 September 2015 08:53:28PM *  0 points [-]

Do you have a valid argument that nonagentive programmes would be dangerous? Because saying "it would agentively do X" isn't a valid argument. Pointing out the hidden pitfalls of such programmes is something MIRI could usefully do. An unargued belief that everything is dangerous is not useful.

I'm claiming that "nonagent" is not descriptive enough to actually build one. You replied that we already have non-agents, and I replied that we already have safe computer programs. Just like we can't extrapolate from our safe programs that any AI will be safe, we can't extrapolate from our safe non-agents that any non-agent will be safe.

Well: how likely is an AI designed to be nonagentive as a safety feature to have that particular failure mode?

I still have little idea what you mean by nonagent. It's a black box, that may have some recognizable features from the outside, but doesn't tell you how to build it.

Comment author: TheAncientGeek 16 September 2015 09:19:49PM *  1 point [-]

I replied that we can already build nonagents.

It remains the case that if you think they could be dangerous, you need to explain how.

I still have little idea what you mean by nonagent. It's a black box, that may have some recognizable features from the outside, but doesn't tell you how to build it.

Again, we already know how to build them, in that we have them.

Worse than that. MIRI can't actually build anything they propose. It's just that some MIRI people have a reflex habit of complaining that anything outside of MIRI land is too vague.