Comment author: CronoDAS 03 July 2016 12:33:17AM 3 points [-]

Yeah, I meant in all possible cases. Start with a Brain In A Vat. Scan that brain and implement a GLUT in Platospace, then hook up the Brain-In-A-Vat and the GLUT to identical robots, and you'll have one robot that's conscious and one that isn't, right?

In response to comment by CronoDAS on Zombies Redacted
Comment author: kilobug 05 July 2016 02:03:21PM 1 point [-]

Did you read the GAZP vs GLUT article? In the GLUT setup, the conscious entity is the conscious human (or actually, more like a googolplex of conscious humans) that produced the GLUT, and the robot replaying the GLUT is no more conscious than a phone transmitting the answer from one conscious human to another - which is basically what it is doing: replaying the answer given by a previous, conscious human for the same input.
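The "replay device" point can be made concrete with a toy sketch (all names here - `recorded_answers`, `glut_reply` - are my own hypothetical inventions, not anything from the article):

```python
# Toy illustration of a Giant Lookup Table (GLUT) "agent": every answer
# was produced in advance by a conscious human; the robot only replays them.
recorded_answers = {
    "Are you conscious?": "Yes, of course I am.",
    "What is it like to see red?": "Warm, vivid, hard to put into words.",
}

def glut_reply(question: str) -> str:
    """Replay the pre-recorded answer; no thinking happens here."""
    return recorded_answers.get(question, "...")

# The robot does no processing of its own, exactly like a phone line:
print(glut_reply("Are you conscious?"))
```

All the apparent intelligence lives in the table's construction, not in the lookup - which is the GAZP vs GLUT point.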

In response to comment by Elo on Zombies Redacted
Comment author: ingres 02 July 2016 09:52:53PM 0 points [-]

Seconded.

In response to comment by ingres on Zombies Redacted
Comment author: kilobug 05 July 2016 12:27:37PM -6 points [-]

Sorry to go meta, but could someone explain to me how "Welcome back!" can be at -1 (0 after my upvote) and yet "Seconded." at +2?

Doesn't sound like very consistent scoring...

In response to Zombies Redacted
Comment author: Furcas 02 July 2016 10:20:24PM *  7 points [-]

Nice.

So, when are you going to tell us your solution to the hard problem of consciousness?

Edited to add: The above wasn't meant as a sarcastic objection to Eliezer's post. I'm totally convinced by his arguments, and even if I weren't, I don't think not having a solution to the hard problem is a greater problem for reductionism than for dualism (of any kind). I was seriously asking Eliezer to share his solution, because he seems to think he has one.

In response to comment by Furcas on Zombies Redacted
Comment author: kilobug 05 July 2016 12:22:15PM 4 points [-]

Not having a solution doesn't prevent one from criticizing a hypothesis or theory on the subject. I don't know what the prime factors of 4567613486214 are, but I know that "5" is not a valid answer (numbers having 5 among their prime factors end in 5 or 0) and that "blue" doesn't even have the shape of a valid answer. So saying that p-zombism and epiphenomenalism aren't valid answers to the "hard problem of consciousness" doesn't require having a solution to it.
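The divisibility test being appealed to can be checked in two lines (my own illustration of the standard last-digit rule):

```python
# Since 10 is a multiple of 5, n mod 5 depends only on n's last digit,
# so a number is divisible by 5 iff it ends in 0 or 5.
n = 4567613486214
assert n % 10 == 4   # last digit is 4, neither 0 nor 5
assert n % 5 != 0    # hence 5 cannot be among its prime factors
```

So "5" can be ruled out as an answer without ever computing the actual factorization.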

In response to comment by VAuroch on Zombies Redacted
Comment author: turchin 04 July 2016 10:41:23PM *  1 point [-]

Maybe you are a philosophical zombie. ))

I think we should add a new type of p-zombie, the epistemic p-zombie: one who claims that they don't have qualia, and we don't know why they claim it.

You are not the only one who has claimed an absence of qualia. I think there are three possible explanations:

a) You are a p-zombie.

b) You don't know where to look.

c) You are a troll. "So I am sometimes fond of asserting that I have neither, mostly to get an interesting response."

In response to comment by turchin on Zombies Redacted
Comment author: kilobug 05 July 2016 12:03:08PM 3 points [-]

Or, more likely:

d) The term "qualia" isn't very well defined, and what turchin means by "qualia" isn't exactly what VAuroch means by "qualia" - basically an illusion-of-transparency/distance-of-inference issue.

In response to Zombies Redacted
Comment author: turchin 03 July 2016 12:58:21PM *  4 points [-]

I know people who claim that they don't have qualia. I doubt that it is true, but based on their words they should be considered zombies. ))

I would like to suggest zombies of a second kind: a person with an inverted spectrum. It could even be my copy, who speaks all the same philosophical nonsense as me, but any time I see green, he sees red, yet names it green. Is he possible? I can imagine such an atom-exact copy of me, but with an inverted spectrum. And if such second-type zombies are possible, it is an argument for epiphenomenalism. Now I will explain why.

Phenomenological judgments (PJ) about one's own consciousness, that is, the ability to say something about your own consciousness, will be the same in me and in my zombie of the second type.

But there are two types of PJ: quantitative (like "I have consciousness") and qualitative, which describes exactly what type of qualia I am experiencing now.

The qualitative type of PJ is impossible: I can't convey my knowledge of "green" in words.

This means that the mere existence of phenomenological judgments doesn't help in the case of second-type zombies.

So, after some upgrading, the zombie argument still works as an argument for epiphenomenalism.

I would also recommend the following article, which introduces the "PJ" term and many problems around it (though I do not agree with it completely): "Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach" by Victor Argonov, http://philpapers.org/rec/ARGMAA-2

In response to comment by turchin on Zombies Redacted
Comment author: kilobug 05 July 2016 12:01:27PM 1 point [-]

I would like to suggest zombies of a second kind: a person with an inverted spectrum. It could even be my copy, who speaks all the same philosophical nonsense as me, but any time I see green, he sees red, yet names it green. Is he possible? I can imagine such an atom-exact copy of me, but with an inverted spectrum.

I can't.

To a reductionist and materialist, it doesn't make sense: the feelings of "red" and "green" are a consequence of the way your brain is wired and structured, so an atom-exact copy would have the same feelings.

But leaving aside the reductionist/materialist view (which after all is part of the debate), it still wouldn't make sense. The special quality that "red" has in my consciousness, the emotions it calls upon, the analogies it triggers, all have consequences on how I would invoke the color "red" in poetry, or use it in a drawing - and on how I would feel about a poem or drawing using "red".

If seeing #ff0000 triggers exactly the same emotions, feelings, and analogies in the consciousness of your clone, then he's getting the same experience as you do, and he's seeing "red", not "green".

In response to Zombies Redacted
Comment author: MockTurtle 04 July 2016 12:17:58PM *  5 points [-]

I wonder what probability epiphenomenalists assign to the theory that they are themselves conscious, if they admit that belief in consciousness isn't caused by the experiences that consciousness brings.

The more I think about it, the more absurdly self-defeating it sounds, and I have trouble believing that ANYONE could hold such views after having thought about it for a few minutes. The only reason I continue to think about it is because it's very easy to believe that some people, no matter how an AI acted and for how long, would never believe the AI to be conscious. And that bothers me a lot, if it affects their moral stance on that AI.

Comment author: kilobug 05 July 2016 11:50:40AM 3 points [-]

Another, more directly worrying question is why, or whether, the p-zombie philosopher postulates that other people have consciousness.

After all, if you can speak about consciousness exactly like we do and yet be a p-zombie, why doesn't Chalmers assume he's the only one who isn't a zombie, and therefore let go of all forms of caring for others, and of all morality?

The fact that Chalmers and people like him still behave as if they consider other people to be as conscious as they are probably points to their having belief-in-belief, rather than actual belief, in the possibility of zombiehood.

In response to Zombies Redacted
Comment author: Piecewise 04 July 2016 04:27:44PM 3 points [-]

"a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious."

As someone with a medical background, I find it very hard to believe this is possible. Not unless Consciousness is reduced to something so abstract and disconnected from what we consider our "Selves" as to render it almost meaningless. After all, traumatic brain injury can alter every aspect of your personality, capacity to reason, and ability to perceive. And if "consciousness" isn't bound up in any of these things, if it exists as some sort of super disconnected "Thinking thing" like Descartes seemed to think, I really can't see the value of it. It's like the Greek interpretation of the afterlife where your soul exists as a senseless shadow, lacking any concept of self or any memory of your past life. What good is an existence that lacks all the things which make it unique?

Then again, as a somewhat brutal pragmatist, I cease to see the meaning in having an argument when it seems to devolve beyond any connection to observable reality.

In response to comment by Piecewise on Zombies Redacted
Comment author: kilobug 05 July 2016 11:47:44AM 1 point [-]

I agree with your point in general, and it does speak against an immaterial soul surviving death, but I don't think it necessarily applies to p-zombies. The p-zombie hypothesis is that the consciousness "property" has no causal power over the physical world, but it doesn't say there is no causality the other way around - that the state of the physical brain can't affect consciousness. So a traumatic brain injury would (through some unexplained, mysterious mechanism) be reflected in that immaterial consciousness.

But sure, it's yet more epicycles.

Comment author: algekalipso 31 March 2016 03:36:18AM 0 points [-]

I have seen this argument before, and I must confess that I am very puzzled about the kind of mistake that is going on here. I might call it naïve functionalist realism, or something like that. In "standard" naïve realism, people find it hard to dissociate their experiences from an existing mind-independent world, and so they take everything as "seeing the world directly, nothing else, nothing more." Naïve realists interpret their experiences as direct, unmediated impressions of the real world.

Of course this is a problematic view, and there are killer arguments against it - for instance, hallucinations. However, the naïve realist can still come back and say that you are talking about cases of "misapprehension", where you don't really perceive the world directly anymore, and that this does not mean you "weren't perceiving the world directly before." But here the naïve realist has simply not integrated the argument in a rational way. If you need to explain hallucinations as "failed representations of true objects", you no longer need to restate, in addition, your previous belief in "perceiving the world directly." Now you end up having two ontologies instead of one: inner representations and also direct perception. And yet you only need one: inner representations.

Analogously, I would describe your argument as naïve functionalist realism. Here you first see a certain function associated with an experience, and you decide to skip the experience altogether and simply focus on the function. In itself, this is reasonable, since the data can be accounted for with no problem. But when I mention LSD and dreams, suddenly those get put into another category, as a "bug" in one's mind. So here you have two ontologies, where you could certainly explain it all with just one.

Namely, green is a particular qualia, which gets triggered under particular circumstances. Green does not refer to the wavelength of light that triggers it, since you can experience it without such light being present. To instead postulate that this is in fact just a "bug" of the original function, but that the original function is in and of itself what green is, simply adds another ontology to one which, taken on its own, can already account for the phenomena.

Comment author: kilobug 31 March 2016 07:06:33AM 0 points [-]

No, it is much simpler than that: "green" is a wavelength of light, and "the feeling of green" is how the information "green" is encoded in your information processing system - that's it. No special ontology for qualia or anything else. Qualia isn't a fundamental component of the universe like quarks and photons are; it's only an encoding of information in your brain.

But yes, how reality is encoded in an information system sometimes doesn't match the external world - the information system can be wrong. That's a natural, direct consequence of that ontology, not a new postulate, and definitely not another ontology. The fact that "the feeling of green" is how "green wavelength" is encoded in an information processing system automatically implies that if you perturb the system by giving it LSD, it may very well encode "green wavelength" without the green wavelength actually being present.

In short, ontology is not the right level at which to look at qualia - qualia is information in a (very) complex information processing system, and it has no fundamental existence. Trying to explain it at an ontological level just makes you ask invalid questions.

Comment author: kilobug 30 March 2016 03:49:03PM 2 points [-]

First, "social justice" is a broad and very diverse movement of people wanting to reduce the amount of (real or perceived) injustice people face for a variety of reasons (skin color, gender, sexual orientation, place of birth, economic position, disability, ...). As in any such broad political movement, some subparts are less rational than others.

Overall, "social justice" is still mostly a force of reason and rationality against the most frequent and pervasive forms of irrationality in society, which are mostly religion-based - but yes, it varies across subparts of the movement. It is, historically, a byproduct of the Enlightenment, after all.

That said, there are several levels of "rationality" and "rationalism", and it might be very rational to make irrational demands.

When you make demands in a social and political context, you know your demands will usually not be completely fulfilled. Asking for something "impossible" may be the best way, from a game-theoretic point of view, to end up with something not too far from what you really want - the same way as when you're bargaining over the price of an item in an informal market (as in Latin America or the Maghreb).

It can also be a powerful way to make people think about a question in novel ways and try to find alternative solutions outside the hypothesis space they usually wander. "Abolish prisons" may seem an irrational demand, and it's very likely that something "like prison" will be required for a few very dangerous individuals, but it can make people think about possible alternatives to prison - something they don't usually do - which could very well be used for 90% or perhaps even 99% of people currently in prison.

Of course, making "irrational" demands can also be counterproductive - it can discredit the movement, make you appear to be a lunatic, ... - but it's a powerful tool to have in your toolbox when you rationally pursue deep changes in society.

Comment author: kilobug 30 March 2016 08:11:54AM 0 points [-]

One issue I have with statements like "~50% of the variation is heritable and ~50% is due to non-shared environment" is that they assume the two kinds of factors are unrelated, so that you can take an arithmetic average of the two.

But very often the effects are not unrelated, and they work more like a geometric average. In many ways, genes give you a potential, an ease of learning and training yourself, but it then depends on your environment whether you actually develop that potential. Someone with a very high "genetic IQ" who is underfed, kept isolated, and not even taught to read will likely not be a very bright adult; it'll be not "(genes + environment)/2" but more like "(genes * environment)".

Other times, the environment can help compensate for the genes, offsetting a disability, so that you end up with "min(genes, environment)" rather than an average.

The truth is that the interaction between genes and environment is much more complicated than a mere weighted arithmetic average, and this is rarely considered when people ask "how much of it is genetic, how much is environmental".
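The difference between the three combination rules is easy to see numerically (a toy illustration of my own, with each factor rescaled to [0, 1]):

```python
# Three possible ways genes and environment could combine into an outcome.
def arithmetic(genes, env):
    return (genes + env) / 2      # factors independent and additive

def geometric(genes, env):
    return (genes * env) ** 0.5   # each factor gates the other

def bottleneck(genes, env):
    return min(genes, env)        # outcome limited by the weaker factor

# High genetic potential, deprived environment:
g, e = 0.9, 0.1
print(arithmetic(g, e))   # 0.5  - looks merely average
print(geometric(g, e))    # ~0.3 - much lower
print(bottleneck(g, e))   # 0.1  - dominated by the environment
```

The same 50/50 "share of variation" can thus hide very different predictions about what happens when one factor is near zero.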
