Eliezer_Yudkowsky comments on A Much Better Life? - Less Wrong

61 Post author: Psychohistorian 03 February 2010 08:01PM




Comment author: quanticle 09 February 2010 03:07:46AM 2 points

Well, if the simulation is that accurate (e.g. its AI passes the Turing Test, so you do think you're interacting with real people), then wouldn't it fulfill your every need?

Comment author: Eliezer_Yudkowsky 09 February 2010 11:26:30AM 6 points

I have a need to interact with real people, not to think I'm interacting with real people.

Comment author: deconigo 09 February 2010 12:14:22PM 5 points

How can you tell the difference?

Comment author: byrnema 09 February 2010 12:52:11PM 7 points

Related: what different conceptions of 'simulation' are we using that make Eliezer's statement coherent to him, but incoherent to me? Possible conceptions in order of increasing 'reality':

(i) the simulation just stimulates your 'have been interacting with people' neurons, so that you have a sense of this need being fulfilled with no memories of how it was fulfilled.

(ii) the simulation simulates interaction with people, so that you feel as though you've interacted with people and have full memories and most outcomes (e.g., increased knowledge and empathy) of having done so.

(iii) the simulation simulates real people -- so that you really have interacted with "real people", just you've done so inside the simulation

(iv) reality is a simulation -- depending on your concept of simulation, the deterministic evolution/actualization of reality in space-time is one

Comment author: Eliezer_Yudkowsky 09 February 2010 02:56:38PM 7 points

(ii) is a problem, (iii) fits my values but may violate other sentients' rights, and as for (iv), I see no difference between the concepts of "computer program" and "universe" except that a computer program has an output.

Comment author: byrnema 09 February 2010 03:09:18PM 2 points

So when you wrote that you need interaction with real people, were you thinking of (i) or (ii)? I would say (ii) or (iii), and I would exclude (ii) only if there is some objective, coherent difference between them.

Comment author: epigeios 01 December 2011 10:55:27AM -2 points

I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then, I compare that with my expectations based on my judgments. If there is a difference, then I am thinking I am interacting instead of interacting.

Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens.

The fewer judgments I make, the more difficult the Turing Test becomes; it is no longer about meeting my expectations, but about satisfying my desired level of complexity. This, by the nature of real-world interaction, is a complicated set of interacting chaotic equations; and each time I remove a judgment from my repertoire, the equation gains a level of complexity, gains another strange attractor to interact with.

At a certain point of complexity, the equation becomes impossible except by a "god".

Now, if an AI passes THAT Turing Test, I will consider it a real person.

Comment author: Nighteyes5678 08 June 2012 09:15:35PM 1 point

I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then, I compare that with my expectations based on my judgments. If there is a difference, then I am thinking I am interacting instead of interacting.

Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens.

I think it'd be useful to hear an example of "observing reality without making judgments" and "observing reality with making judgments". I'm having trouble figuring out what you believe the difference to be.