
Eliezer_Yudkowsky comments on Detached Lever Fallacy - Less Wrong

Post author: Eliezer_Yudkowsky, 31 July 2008 06:57PM


Comment author: Eliezer_Yudkowsky, 31 July 2008 09:01:40PM, 8 points

Robin: But can we also agree that the state of a "grown" AI program will depend on the environment in which it was "raised"?

It will depend on the environment in the same way that it depends on its initial conditions. It will depend on the environment because it was designed to depend on the environment. The reason, presumably, why the AI is not inert in the face of the environment, like a heap of sand, is that someone went to the trouble of turning that silicon into an AI. Each bit of internal state change will happen because of a program that the programmer wrote, or that the AI programmed by the programmer wrote, and the chain of causality will stretch back, lawfully.

With all those provisos, yes, the grown AI will depend on the environment. Though to avoid the Detached Lever fallacy, it might be helpful to say: "The grown AI will depend on how you programmed the child AI to depend on the environment."
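(Editor's aside: the point above can be made concrete with a toy sketch, not from the original comment. All names here, `grow`, `count_rewards`, and the example environments, are hypothetical. The "grown" state differs across environments, but every bit of that difference flows through an update rule the programmer wrote; without such a rule, the system would be as inert as the heap of sand.)

```python
# Toy sketch of "the grown AI depends on how you programmed the child AI
# to depend on the environment." Hypothetical names throughout.

def grow(update_rule, initial_state, environment):
    """Run the learner: every state change passes through update_rule,
    which the programmer (or a program the programmer wrote) supplied."""
    state = initial_state
    for observation in environment:
        state = update_rule(state, observation)
    return state

def count_rewards(state, observation):
    """The programmer's chosen rule: accumulate observed rewards.
    A heap of sand has no analogous rule, so it does not 'grow'."""
    return state + (1 if observation == "reward" else 0)

env_a = ["reward", "reward", "noise"]
env_b = ["noise", "noise", "noise"]

print(grow(count_rewards, 0, env_a))  # -> 2
print(grow(count_rewards, 0, env_b))  # -> 0
```

Same initial state, same rule, different environments: the final states differ, but only in the way the rule makes them differ.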

Doug: You have to be awake in order to recognize an apple

Dream on.