WrongBot comments on Consciousness of simulations & uploads: a reductio - Less Wrong

Post author: simplicio 21 August 2010 08:02PM




Comment author: WrongBot 27 August 2010 02:36:49AM 1 point

Your brain is (so far as is currently known) a Turing-equivalent computer. It is simulating you as we speak, providing inputs to your simulation based on the way its external sensors are manipulated.

Comment author: Perplexed 27 August 2010 02:50:21AM 1 point

Your point being?

In advance of your answer, I point out that you have no moral right to do anything to that "computer", and that no one, not even I, currently has the ability to interfere with that simulation in any constructive way - for example, an intervention to keep me from abandoning this conversation in frustration.

Comment author: WrongBot 27 August 2010 02:53:54AM 0 points

I could turn the simulation off. Why is your computational substrate any more special than an AI's computational substrate?

Comment author: Perplexed 27 August 2010 03:02:44AM 1 point

Because you have no right to interfere with my computational substrate. They will put you in jail. Or, if you prefer, they will put your substrate in jail.

We have not yet specified who has rights concerning the AI's substrate - who pays the electrical bills. If the owner of the AI's computer becomes the AI, then I may need to rethink my position. But this rethinking is caused by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.

Comment author: WrongBot 27 August 2010 03:08:50AM 0 points

> If the owner of the AI's computer becomes the AI, then I may need to rethink my position. But this rethinking is caused by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.

Is there a blindingly obvious moral truth that gives you self-ownership? Why? Why doesn't this apply to an AI? Do you support slavery?

Comment author: Perplexed 27 August 2010 03:42:04AM 1 point

> Is there a blindingly obvious moral truth that gives you self-ownership? Why?

Moral truth? I think so. Humans should not own humans. Blindingly obvious? Apparently not, given what I know of history.

> Why doesn't this apply to an AI?

Well, I left myself an obvious escape clause. But more seriously, I am not sure this one is blindingly obvious either. I presume that the course of AI research will pass from sub-human-level intelligences, through intelligences better at some tasks than humans but worse at others, to clearly superior intelligences. And I also suspect that each such AI will begin its existence as a child-like entity with a legal guardian until it has assimilated enough information. So I think it is a tricky question. Has EY written anything detailed on the subject?

One thing I am pretty sure of is that I don't want to grant any AI legal personhood until it seems pretty damn likely that it will respect the personhood of humans. And the reason for that asymmetry is that we start out with the power. And I make no apologies for being a meat chauvinist on this subject.