Peterdjones comments on By Which It May Be Judged - LessWrong

35 Post author: Eliezer_Yudkowsky 10 December 2012 04:26AM

Comment author: Peterdjones 10 December 2012 09:50:28PM 2 points [-]

What does this mean by "why"?

It's definitely a how-it-happens "why" and not how-did-it-evolve "why"

Well, it enables imagination,

There's more to qualia than free-floating representations. There is no reason to suppose an AI's internal maps have phenomenal feels, no way of testing that they do, and no way of engineering them in.

I've read this several times, and I don't see a hard philosophical problem.

It's a hard scientific problem. How could you have a theory that tells you how the world seems to a bat on LSD? How can you write a SeeRed() function?

Comment author: DaFranker 12 December 2012 09:43:17PM *  0 points [-]

How can you write a SeeRed() function?

Presumably, the exact same way you'd write any other function.

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia". If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.
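This behavioral reading of SeeRed() can be sketched in a few lines. A minimal illustration, not from the thread: the function name, the RGB input convention, and the threshold criterion are all invented for the example.

```python
# A purely behavioral SeeRed(): it maps inputs a red-seer would call "red"
# to the outputs a red-seer would produce, and makes no claim at all about
# any inner experience. The threshold criterion is invented for illustration.

def see_red(rgb):
    """Return True when the input pixel would be reported as red."""
    r, g, b = rgb
    # Crude behavioral criterion: the red channel dominates the others.
    return r > 128 and r > 2 * g and r > 2 * b

print(see_red((220, 30, 40)))  # reported as red -> True
print(see_red((30, 200, 40)))  # green input -> False
```

On this view, nothing about the function's internals matters; only the input-output mapping does, which is exactly the premise the reply below disputes.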

Of course, I'm arguing a bit by the premises here with "correct behavior" being "fully and coherently maintained". The space of inputs and outputs to take into account in order to make a program that would convince you of its possession of the redness qualia is too vast for us at the moment.

TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.

Comment author: Peterdjones 12 December 2012 09:59:35PM *  -1 points [-]

In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.

False. In this case what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.

If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human's "redness qualia"

That doesn't mean there are no qualia (I have them, so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.

Comment author: DaFranker 12 December 2012 10:07:06PM 0 points [-]

That doesn't mean there are no qualia (I have them, so I know there are). That also doesn't mean qualia just serendipitously arrive whenever the correct inputs and outputs are in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.

None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombielike would not cut it; you'd need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
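The GLUT-versus-internal-system distinction here can be made concrete with a toy contrast. Everything below is invented for illustration; a real GLUT would be astronomically large, and no claim is made that the "internal state" variable is anything like a quale.

```python
# Toy contrast: a GLUT can only answer inputs it has memorized, while a
# generative system computes an internal state for arbitrary inputs --
# the kind of "internal system" the claim above requires.

GLUT = {(255, 0, 0): "red"}  # giant lookup table, here deliberately tiny

def glut_label(rgb):
    return GLUT.get(rgb)  # returns None on any unmemorized input

def generative_label(rgb):
    r, g, b = rgb
    salience = r - max(g, b)  # internal state derived from the input
    return "red" if salience > 100 else "not red"

print(glut_label((254, 0, 0)))        # None: the table has no such entry
print(generative_label((254, 0, 0)))  # "red": computed, not memorized
```

The GLUT reproduces behavior only on inputs it was built for; the generative version handles novel inputs by actually computing something, which is why a lookup table "would not cut it" for open-ended Turing-like tests.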

Obviously I haven't solved the Hard Problem just by saying this. However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.

* If this isn't among your premises or claims, then it still does appear that way, but apologies in advance for the strawmanning.

Comment author: Peterdjones 12 December 2012 10:12:50PM 0 points [-]

None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombielike would not cut it; you'd need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).

Sorry, that is most definitely "serendipitously arrive". You don't know how to engineer the Redness in explicitly; you are just assuming it must be there if everything else is in place.

However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.

The claim is more like "hasn't been", and you haven't shown me a SeeRed().

Comment author: Decius 12 December 2012 09:36:16PM 0 points [-]

Is there a reason to suppose that anybody else's maps have phenomenal feels, a way of testing that they do, or a way of telling the difference? Why can't those ways be generalized to intelligent entities in general?

Comment author: Peterdjones 12 December 2012 10:03:25PM -1 points [-]

Is there a reason to suppose that anybody else's maps have phenomenal feels,

Yes: naturalism. It would be naturalistically anomalous if their brains worked very similarly but their phenomenology were completely different.

a way of testing that they do,

No. So what? Are you saying we are all p-zombies?

Comment author: DaFranker 12 December 2012 10:10:28PM *  1 point [-]

No. So what? Are you saying we are all p-zombies?

I don't know about Decius, but...

I am.

I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".

Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.

Comment author: Peterdjones 12 December 2012 10:15:23PM 1 point [-]

I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".

I have no idea what you are getting at. Please clarify.

Tangential to that point: I think any morality system which relies on an external supernatural thinghy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.

That has no discernible relationship to anything I have said. Have you confused me with someone else?

Comment author: DaFranker 12 December 2012 10:29:51PM 0 points [-]

I'm not sure where I implied that I'm getting at anything. We're p-zombies, we have no additional consciousness, and it doesn't matter because we're still here doing things.

The tangent was just an aside remark to clarify my position, and wasn't to target anyone.

We may already agree on the consciousness issue, I haven't actually checked that.

Comment author: Peterdjones 12 December 2012 10:33:34PM 1 point [-]

we have no additional consciousness,

I have no idea what you mean by "additional consciousness" -- although, since you are not "getting at anything", you perhaps mean nothing.

We're p-zombies

That seems a bold and contentious claim to me. OTOH, you say you are not "getting at anything". Who knows?

and wasn't to target anyone.

OK. "Getting at something" doesn't mean criticising someone; it means making a point.

Comment author: DaFranker 12 December 2012 10:43:02PM -1 points [-]

In that sense, what I was getting at is that asking the question of whether we are p-zombies is redundant and irrelevant, since there's no reason to want or believe in the existence of non-p-zombies.

The core of my claim is basically that our consciousness is the logic and physics that goes on in our brain, not something else that we cannot see or identify. I obviously don't have conclusive proof or evidence of this, otherwise I'd be writing a paper and/or collecting my worldwide awards for it, but all (yes, all) other possibilities seem orders of magnitude less likely to me with my current priors and model of the world.

TL;DR: Consciousness isn't made of ethereal acausal fluid nor of magic, but of real physics and how those real physics interact in a complicated way.

Comment author: Peterdjones 12 December 2012 11:21:18PM *  -1 points [-]

since there's no reason to want or believe in the existence of non-p-zombies.

I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.

The core of my claim is basically that our consciousness is the logic and physics that goes on in our brain, not something else that we cannot see or identify.

We can see and identify our consciousness from the inside. It's self-awareness. If you try to treat consciousness from the outside, you are bound to miss 99% of the point. None of this has anything to do with what consciousness is "made of".

Comment author: [deleted] 13 December 2012 03:19:45PM 0 points [-]

I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.

I have a question about qualia from your perspective. If Omega hits you with an epiphenomenal anti-qualia hammer that injures your qualia and only your qualia, such that you essentially have no qualia (i.e., you are a p-zombie) for an hour until your qualia recover (when you are no longer a p-zombie), what, if anything, might that mean?

1: You'd likely notice something, because you have evidence that qualia exist. That implies you would notice if they vanished for about an hour, since you would no longer be getting that evidence for that hour.

2: You'd likely not notice anything, because if you did, a p-zombie would not be just like you.

3: Epiphenomenal anti-qualia hammers can't exist. For instance, it might be impossible to affect your qualia and only your qualia, or perhaps it is impossible to make any reversible changes to qualia.

4: Something else?

Comment author: DaFranker 13 December 2012 01:33:02PM 0 points [-]

I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.

I must not be working with the right / same conception of p-zombies then, because to me qualia experience provides exactly zero Bayesian evidence for or against p-zombies on its own.

Comment author: nshepperd 13 December 2012 03:38:00AM 1 point [-]

You appear to be making an unfortunate assumption that what Chalmers and Peterdjones are talking about is crazy-assed unnecessary XML tags, as opposed to, y'know, regular old consciousness.

Comment author: DaFranker 13 December 2012 01:43:36PM *  0 points [-]

I'm not sure where my conception of p-zombies went wrong, then. P-zombies are assumed by the premise, if my understanding is correct, to behave physically exactly the same, down to the quantum level (and beyond if any exists), but to simply not have something being referred to as "qualia". This seems to directly imply that the "qualia" is generated neither by the physical matter, nor by the manner in which it interacts.

Like Eliezer, I believe physics and logic are sufficient to describe eventually everything, and so qualia and consciousness must be made of this physical matter and the way it interacts. Therefore, since the p-zombies have the same matter and the same interactions, they have qualia and consciousness.

What, then, is a non-p-zombie? Well, something that has "something more" (implied: than physics or logic) added into it. Since it's something exceptional that isn't part of anything else so far in the universe to my knowledge, calling it a "crazy-ass unnecessary XML tag" seems proportionate to its plausibility and comparative algorithmic complexity.

The point being that, under this conception of p-zombies and with my current (very strong) priors on the universe, non-p-zombies are either a silly mysterious question with no possible answer, or something supernatural on the same level of silly as atom-fiddling tiny green goblins and white-winged angels of Pure Mercy.

Comment author: nshepperd 13 December 2012 02:10:06PM *  0 points [-]

Huh...

That's a funny way of thinking about it.

But anyway, EY's zombie sequence was all about saying that if physics and math are everything, then p-zombies are a silly mysterious question. Because a p-zombie was supposed to be like a normal human down to the atomic level, but without qualia. Which is absurd if, as we expect, qualia are within physics and math. Hence there are no p-zombies.

I guess the point is that saying there are no non-p-zombies as a result of this is totally confusing, because it totally looks like saying no-one has consciousness.

(Tangentially, it probably doesn't help that apparently half of the philosophical world use "qualia" to mean some supernatural XML tags, while the other half use the word to mean just the-way-things-feel, a.k.a. consciousness. You get a lot of arguments between the two groups, with the former arguing that qualia are nonsense and the latter rebutting that "obviously we have qualia, or are you all p-zombies?!", resulting in a generally unproductive debate.)

Comment author: DaFranker 13 December 2012 02:16:21PM *  0 points [-]

I guess the point is that saying there are no non-p-zombies as a result of this is totally confusing, because it totally looks like saying no-one has consciousness.

Hah, yes. That seems to be partly a result of my inconsistent way of handling thought experiments that are broken or dissolved in the premises, as opposed to being rejected due to a later contradiction or nonexistent solution.

Comment author: Decius 14 December 2012 12:23:35AM 0 points [-]

I'm saying that there is no difference between a p-zombie and the alternative.