
TheOtherDave comments on Nonsentient Optimizers

Post author: Eliezer_Yudkowsky | 27 December 2008 02:32AM

Comment author: TheOtherDave | 02 December 2010 09:56:58PM | 2 points

Given the right profiling tools, I would decide this by evaluating the run-time neural behavior of the system doing the imagining.

If, for example, I find that every attempt to predict what the character does involves callouts to the same neural circuitry the novelist uses to predict their own future behavior, or that of other humans, that would be evidence that there is no separate imagined person: there is simply an imagined set of parameters to a common person-emulator.

Of course, it's possible that the referenced circuitry is itself a person-simulator, so I'd need to look at that as well. But evidence accumulates. It may not be a simple question, but it is an answerable one, given the right profiling tools.

Lacking those tools, I'd do my best at approximating the same tests with the tools I had.
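
To make the flavor of that first test concrete, here is a toy Python sketch. It is only an analogy: every name in it (person_model, predict_self, predict_character) is a hypothetical stand-in, and real neural "profiling" would look nothing like instrumented function calls. The point is just what shared circuitry would look like in a call trace.

    import functools

    call_trace = []

    def traced(fn):
        """Record each call so we can inspect which machinery gets used."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            call_trace.append(fn.__name__)
            return fn(*args, **kwargs)
        return wrapper

    @traced
    def person_model(params):
        # Hypothetical stand-in for the shared person-emulator circuitry.
        return f"predicted behavior given {params}"

    @traced
    def predict_self(own_traits):
        # The novelist predicting their own future behavior.
        return person_model(params=own_traits)

    @traced
    def predict_character(character_traits):
        # The novelist predicting what the imagined character does.
        return person_model(params=character_traits)

    predict_self("novelist's own traits")
    predict_character("imagined character's traits")

    # If both predictions route through the same person_model, that is
    # evidence for "one emulator, two parameter sets" rather than a
    # separately instantiated person.
    print(call_trace)
    # ['predict_self', 'person_model', 'predict_character', 'person_model']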

There is a baby version of this problem that gets introduced in intro cognitive science classes, in the form of questions like "When I imagine myself finding a spot on a map, how similar is what I'm doing to actually finding an actual spot on an actual map?"

One way to approach the question is to look at measurable properties of the two tasks (actually looking vs. imagining) and ask questions like "Does it take me longer to imagine finding a spot that's further away from the spot I imagine starting on?" (Here, again, neither answer would definitively settle the original question, but evidence accumulates.)
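
For concreteness, here is a minimal sketch of how that second test might be scored, assuming you had already collected response times. The numbers below are invented purely for illustration; they are not data from any actual experiment, though the design echoes classic map-scanning studies such as Kosslyn, Ball and Reiser (1978).

    # Hypothetical data: distances between spots on an imagined map (cm)
    # and the time taken to imagine scanning between them (seconds).
    distances = [2.0, 5.0, 8.0, 11.0, 14.0]
    scan_times = [0.9, 1.2, 1.5, 1.9, 2.1]

    def least_squares_slope(xs, ys):
        """Ordinary least-squares slope of ys regressed on xs."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var = sum((x - mean_x) ** 2 for x in xs)
        return cov / var

    slope = least_squares_slope(distances, scan_times)
    print(f"seconds per cm of imagined distance: {slope:.3f}")

    # A reliably positive slope (longer imagined scans taking longer)
    # is evidence that imagining the map shares structure with actually
    # scanning one; a flat slope is evidence against it. Neither outcome
    # settles the question by itself, but evidence accumulates.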