All of Juno_Watt's Comments + Replies

My Favorite Liar. Tell people that you're going to make X deliberately incorrect statements every training session and they've got to catch them.

I can think of only one example of someone who actually did this, and that was someone generally classed as a mystic.

6arundelo
Yvain got the name of this technique from Kai Chang's "My Favorite Liar", about an economics professor who did this.

Not so! An AGI need not think like a human, need not know much of anything about humans, and need not, for that matter, be as intelligent as a human.

Is that a fact? No, it's a matter of definition. It's scarcely credible that you are unaware that a lot of people think the TT is critical to AGI.

The problem I'm pointing to here is that a lot of people treat 'what I mean' as a magical category.

I can't see any evidence of anyone involved in these discussions doing that. It looks like a straw man to me.

OK. NL is hard. Everyone knows that. But it's got to

... (read more)
0Rob Bensinger
Let's run with that idea. There's 'general-intelligence-1', which means "domain-general intelligence at a level comparable to that of a human"; and there's 'general-intelligence-2', which means (I take it) "domain-general intelligence at a level comparable to that of a human, plus the ability to solve the Turing Test". On the face of it, GI2 looks like a much more ad-hoc and heterogeneous definition. To use GI2 is to assert, by fiat, that most intelligences (e.g., most intelligent alien races) of roughly human-level intellectual ability (including ones a bit smarter than humans) are not general intelligences, because they aren't optimized for disguising themselves as one particular species from a Milky Way planet called Earth. If your definition has nothing to recommend it, then more useful definitions are on offer.

* http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9p8x?context=1#comments
* http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9p91?context=1#comments
* http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9q9m
* http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/9qop
* http://ieet.org/index.php/IEET/more/loosemore20121128
* http://nothingismere.com/2013/09/06/the-seed-is-not-the-superintelligence/#comments

'Mean', 'right', 'rational', etc. An AI doesn't need to be able to trick you in order for you to be able to give it instructions. All sorts of useful skills AIs have these days don't require them to persuade everyone that they're human.

Read the article you're commenting on. One of its two main theses is, in bold: The seed is not the superintelligence.

Yes. We should focus on solving the values part of semantics, rather than the entire superset.

Doesn't matter. Give an ancient or a modern society arbitrarily large amounts of power overnight, and the end results won't differ in any humanly important way. There won't be any nights after that.

Setting aside the power issue: Because humans don't us

If an agent has goal G1 and sufficient introspective access to know its own goal, how would avoiding arbitrariness in its goals help it achieve goal G1 better than keeping goal G1 as its goal?

Avoiding arbitrariness is useful to epistemic rationality and therefore to instrumental rationality. If an AI has rationality as a goal it will avoid arbitrariness, whether or not that assists with G1.

2JGWeissman
Avoiding giving credence to arbitrary beliefs is useful to epistemic rationality and therefore to instrumental rationality, and therefore to goal G1. Avoiding arbitrariness in goals still does not help with achieving G1 if G1 is considered arbitrary. Be careful not to conflate different types of arbitrariness. Rationality is not an end goal, it is that which you do in pursuit of a goal that is more important to you than being rational.
3MugaSofer
Although since "self-improvement" in this context basically refers to "improving your ability to accomplish goals"... Stop me if this is a non-secteur, but surely "having accurate beliefs" and "acting on those beliefs in a particular way" are completely different things? I haven't really been following this conversation, though.
9ArisKatsaris
Self-correcting software is possible if there's a correct implementation of what "correctness" means, and the module that has the correct implementation has control over the modules that don't have the correct implementation. Self-improving software is likewise possible if there's a correct implementation of the definition of "improvement". Right now, I'm guessing that it'd be relatively easy to programmatically define "performance improvement" and difficult to define "moral and ethical improvement".
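To make the contrast concrete, here is a minimal illustrative sketch (not anyone's actual system; all names are made up) of the "easy" half: a self-modifying loop that only accepts a candidate change when a programmatically defined performance metric improves. Nothing analogous exists for "moral and ethical improvement" because no such metric is on hand.

```python
import random

def performance(params):
    """A programmatically definable 'improvement' metric (hypothetical):
    higher is better. Here: closeness of params to a fixed target."""
    target = [1.0, -2.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def self_improve(params, steps=1000):
    """Accept a random self-modification only if the metric improves.
    The loop works only because 'improvement' is fully specified in code."""
    best = performance(params)
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.1) for p in params]
        score = performance(candidate)
        if score > best:  # a well-defined test for "better"
            params, best = candidate, score
    return params

print(self_improve([0.0, 0.0, 0.0]))
```

The whole scheme hinges on `performance()` being written down; swap in "moral and ethical improvement" and there is no comparable function to call.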

Software that initially appears to care what you mean will be selected by market forces. But nearly all software that superficially looks Friendly isn't Friendly. If there are seasoned AI researchers who can't wrap their heads around the five theses, then how can I be confident that the Invisible Hand will both surpass them intellectually and recurrently sacrifice short-term gains on this basis?

Present-day software may not have got far with regard to the evaluative side of doing what you want, but XiXiDu's point seems to be that it is getting better at the semantic side. Who was it who said the value problem is part of the semantic problem?

A. Solve the Problem of Meaning-in-General in advance, and program it to follow our instructions' real meaning. Then just instruct it 'Satisfy my preferences', and wait for it to become smart enough to figure out my preferences.

That problem has got to be solved somehow at some stage, because something that couldn't pass a Turing Test is no AGI.

But there are a host of problems with treating the mere revelation that A is an option as a solution to the Friendliness problem.

  1. You have to actually code the seed AI to understand what we mean. Y

Why is tha... (read more)

3Eliezer Yudkowsky
Juno_Watt, please take further discussion to RobbBB's blog.
6wedrifid
No, it doesn't.
1Rob Bensinger
Not so! An AGI need not think like a human, need not know much of anything about humans, and need not, for that matter, be as intelligent as a human. To see this, imagine we encountered an alien race of roughly human-level intelligence. Would a human be able to pass as an alien, or an alien as a human? Probably not anytime soon. Possibly not ever. (Also, passing a Turing Test does not require you to possess a particularly deep understanding of human morality! A simple list of some random things humans consider right or wrong would generally suffice.)

The problem I'm pointing to here is that a lot of people treat 'what I mean' as a magical category. 'Meaning' and 'language' and 'semantics' are single words in English, which masks the complexity of 'just tell the AI to do what I mean'.

Nope! It could certainly be an AGI! It couldn't be an SI -- provided it wants to pass a Turing Test, of course -- but that's not a problem we have to solve. It's one the SI can solve for itself.

No human being has ever created anything -- no system of laws, no government or organization, no human, no artifact -- that, if it were more powerful, would qualify as Friendly. In that sense, everything that currently exists in the universe is non-Friendly, if not outright Unfriendly. All or nearly all humans, if they were more powerful, would qualify as Unfriendly. Moreover, by default, relying on a miscellaneous heap of vaguely moral-sounding machine learning criteria will lead to the end of life on earth. 'Smiles' and 'statements of approval' are not adequate roadmarks, because those are stimuli the SI can seize control of in unhumanistic ways to pump its reward buttons.

No, it isn't. And this is a non sequitur. Nothing else in your post calls orthogonality into question.

Some folks on this site have accidentally bought unintentional snake oil in The Big Hoo Hah That Shall Not Be Mentioned. Only an intelligent person could have bought that particular puppy.

0linkhyrule5
Granted. And it may be that additional knowledge/intelligence makes you more vulnerable as a Gatekeeper.

1) What seems (un)likely to an individual depends on their assumptions. If you regard consciousness as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consciousness, precisely because some aspects -- subjective experience, qualia -- don't have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can't even get a start on building emotion chips or writing seeRed().

2) It's not practical at the moment, and wouldn't answer the theoretical questions.

my intuition that [Mary] would not understand qualia disappears.

For any value of abnormal? She is only quantitatively superior: she does not have brain-rewiring abilities.

Isn't that disproved by paid-for networks, like HBO? And what about non-US broadcasters like the BBC?

0somervta
The reason companies like HBO can do a different sort of tv is that they don't have to worry about ratings - they're less bound by how many watch each show.

I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain "experiential knowledge" just by reading the verbal statements.

Mary isn't a normal human. The point of the story is to explore the limits of explanation. That being the case, Mary is granted unlimited intelligence, so that whatever limits she encounters are limits of explanation, and not her own limits.

I think the most l

... (read more)
3Ghatanathoah
If this is the case then, as I said before, my intuition that she would not understand qualia disappears.
5nshepperd
If you're asserting that Mary does not have the software problem that makes it impossible to derive "experiential knowledge" from verbal data, then the answer to the puzzle is "Yes, Mary does know what red looks like, and won't be at all surprised. BTW the reason our intuition tells us the opposite is because our normal simulate-other-humans procedures aren't capable of imagining that kind of architecture." Otherwise, simply postulating that she has unlimited intelligence is a bit of a red herring. All that means is she has a lot of verbal processing power, it doesn't mean all bugs in her mental architecture are fixed. To follow the kernel object analogy: I can run a program on any speed of CPU, it will never be able to get a handle to a kernel redness object if it doesn't have access to the OS API. "Intelligence" of the program isn't a factor (this is how we're able to run high-speed javascript in browsers without every JS program being a severe security risk).
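A minimal sketch of the kernel-object point (illustrative only, not nshepperd's code; the class and capability names are invented): sandboxed code can compute as much as it likes, but if the "redness handle" capability was never passed in, no amount of processing power conjures it.

```python
class Sandbox:
    """Grants only the capabilities explicitly passed in; there is no
    call by which sandboxed code can mint new handles for itself."""
    def __init__(self, capabilities):
        self._capabilities = dict(capabilities)

    def get_handle(self, name):
        if name not in self._capabilities:
            raise PermissionError(f"no capability: {name}")
        return self._capabilities[name]

def clever_program(sandbox, compute_budget=10**6):
    # Arbitrary amounts of "intelligence" (raw computation) happen here...
    total = sum(range(compute_budget))
    # ...but acquiring the handle depends solely on what was granted.
    try:
        return sandbox.get_handle("kernel_redness_object")
    except PermissionError:
        return f"computed {total}, but never saw red"

# The handle was never granted, so more CPU time changes nothing.
print(clever_program(Sandbox({"disk": object()})))
```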

Why is she generating a memory? How is she generating a memory?

So she's bound and gagged, with no ability to use her knowledge?

If by "using her knowledge" you mean performing neurosurgery in herself, I have to repeat that that is a cheat.Otherwise, I ha e to point put that knowledge of, eg. phontosynthesis, doesn't cause photosynthesis.

She could then generate such memories in her own brain,

Mary is a super-scientist in terms of intelligence and memory, but doesn't have special abilities to rewire her own cortex. Internally generating Red is a cheat, like pricking her thumb to observe the blood.

0hairyfigment
So she's bound and gagged, with no ability to use her knowledge? Seems implausible, but OK. (Did she get this knowledge by dictation, or by magically reaching out to the Aristotelian essences of neurons?) In any case, at least two of us have linked to orthonormal's mini-sequence on the matter. Those three posts seem much better than ESR's attempt at the quest.
2Ghatanathoah
She isn't generating Red, she's generating a memory of the feeling Red generates without generating Red. She now knows what emotional state Red would make her feel, but hasn't actually made herself see red. So when she goes outside she doesn't say "Wow" she says "Oh, those feelings again, just as I suspected."

If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.

I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.

Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence

Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.

Are you proposing that it's impossible

... (read more)
5FeepingCreature
Okay. I don't think it's possible to build a functional equivalent of a mind that talks of qualia because it has them, by 1:1 porting at the synapse level, and get something that talks of qualia without having any. You can stipulate that all day but I don't think it can actually be done. This is contingent on neurons being the computational elements of our minds. If it turns out that most of the computation of mindstates is done by some sort of significantly lower-scale process and synaptic connections are, if not coincidental, then at least not the primary element of the computation going on in our heads, I could imagine a neural-level functional equivalent that talked of qualia while running the sort of elaborate non-emulation described in my previous comment. But if neurons are the computational basis of our minds, and you did a 1:1 synapse-level identical functional copy, and it talked of qualia, it would strain credulity to say it talked of qualia for a different reason than the original did, while implementing the same computation. If you traced the neural impulses backwards all the way to the sensory input that caused the utterance, and verified that the neurons computed the same function in both systems, then what's there left to differentiate them? Do you think your talk of qualia is not caused by a computation in your neurons? Qualia are the things that make us talk about qualia, or else the word is meaningless. To say that the equivalent, different-substrate system talked about qualia out of the same computational processes (at neuron level), but for different, incorrect reasons - that, to me, is either Chalmers-style dualism or some perversion of language that carries no practical value.

Feelings are made out of firing neurons, which are in turn made out of atoms.

A claim that some X is made of some Y does not show how Xs are made of Ys. Can you explain why red is produced and not something else?

I don't get the appeal of dualism.

I wasn't selling dualism; I was noting that ESR's account is not particularly physicalist as well as being not particularly explanatory.

P-zombies and inverted spectrums deserve similar ridicule.

I find the Mary argument more convincing.

3Ghatanathoah
There are many different neuron firing patterns. Some produce various shades of red, others produce other stuff. The intuition that Mary's Room activates is that no amount of book-learning can substitute for firsthand experience. This is because we can't always use knowledge we obtain from reading about experiences to activate the same neurons that having those experiences would activate. The only way to activate them and experience those feelings is to have the activating experience. Now, in Dennett's RoboMary variation of the experiment, RoboMary would probably not say "Wow!" That is because she is capable of constructing a brain emulator of herself seeing red inside her own head, and then transferring the knowledge of what those neurons (or circuits in this case) felt when activated. She already knows what seeing red feels like, even though she's never seen it.

That isn't a reductive explanation, because no attempt is made to show how Mary's red quale breaks down into smaller component parts. In fact, it doesn't do much more than say subjectivity exists, and occurs in sync with brain states. As such, it is compatible with dualism.

Reading Wikipedia's entry on qualia, it seems to me that most of the arguments that qualia can't be explained by reductionism are powered by the same intuition that makes us think that you can give someone superpowers without changing them in any other way.

You mean p-zombie argument... (read more)

2hairyfigment
Sure, that would be this mini-sequence by orthonormal.
5Ghatanathoah
I presume that would be "Mary's qualia are caused by the feeling the color-processing pathways of her brain light up. The color processing parts are made of neurons, which are made of molecules, which are made of atoms. Those parts of the brain are then connected to another part of the brain by more neurons, which are similarly composed. When those color processing parts fire this causes the connecting neurons to fire in a certain pattern. These patterns of firings are what her feelings are made of. Feelings are made out of firing neurons, which are in turn made out of atoms." I don't get the appeal of dualism. Qualia can't run on machines made out of atoms and quarks, but there is some other mysterious substance that composes our mind, and qualia can run on machines made out of this substance? Why the extra step? Why not assume that atoms and quarks are the substrate that qualia run on? What hypothetical special properties does this substance have that let qualia run on it, but not on atoms? I'm sure that if we ever did discover some sort of disembodied soul made out of a weird previously unknown substance that was attached to the brain and appeared to contain our consciousness, Dave Chalmers would argue that qualia couldn't possibly be reduced down to something as basic as [newly discovered substance], and that obviously this disembodied soul couldn't possibly contain consciousness, that has to be contained somewhere else. There is no possible substance, no possible anything, that could ever satisfy the dualist's intuitions. Yes, plus the inverted spectrum argument, and all the other "conceivability arguments." I can conceive of myself walking on walls, bench-pressing semi-trucks, and flying without making any modifications to my body or changing the external world. But that's because my brain is bad at conceiving stuff and fudges using shortcuts. If I actually start thinking in extremely detailed terms of my muscle tissues and the laws of physics, it becomes o

It has always seemed to me that qualia exist, and that they can fully be explained by reductionism and physicalism

Can you point me to such an explanation??

3Ghatanathoah
There's actually one in that essay I linked to at the end of my post. Here is the most relevant paragraph (discussing the Mary's Room problem): Reading Wikipedia's entry on qualia, it seems to me that most of the arguments that qualia can't be explained by reductionism are powered by the same intuition that makes us think that you can give someone superpowers without changing them in any other way. Anyone with a basic knowledge of physiology knows the idea you can give someone the powers of Spider-Man or Aquaman without changing their physical appearance or internal anatomy is silly. Modern superhero writers have actually been forced to acknowledge this by occasionally referencing ways that such characters are physically different from humans (in ways that don't cosmetically affect them, of course). But because qualia are a property of our brain's interaction with external stimuli, rather than a property of our bodies, the idea that you could change someone's qualia without changing their brain or the external world fails to pass our nonsense detector. If I wake up and the spectrum is inverted, something is wrong with my brain, or something is wrong with the world.

Reductionism says there is some existing thing X which is composed of, understandable in terms of, and ultimately identical to some other existing thing Y. Eliminativism says X doesn't exist. Heat has been reduced; phlogiston has been eliminated.

I agree with most of this, although I am not sure that the way strawberries taste to me is a posit.

If a change to the way your functionality is implemented alters how your consciousness seems to you, your consciousness will seem different to you. If your functionality is preserved, you won't be able to report it. You will report that tomatoes are red even if they look grue or bleen to you. (You may also not be able to cognitively access -- remember or think about -- the change, if that is part of the preserved functionality. But if your experience changes, you can't fail to experience it.)

3ESRogs
Hmm, it seems to me that any change that affects your experience but not your reports must have also affected your memory. Otherwise you should be able to say that the color of tomatoes now seems darker or cooler or just different than it did before. Would you agree?

Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your "qualia" are causally impotent and I'd go so far as to say, meaningless.

Doesn't follow. Qualia aren't causing Charles's qualia-talk, but that doesn't mean they aren't causing mine. Kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.

The epiphenomenality argument works for atom-by-atom dupli... (read more)

6FeepingCreature
You keep bringing up that argument, but kidney dialysis machines are built specifically to replace the functionality of kidneys ("deliberately replacing them with a substitute"). If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work. If it did, you should question if that cell type actually does anything in kidneys. Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence - implying that there's a separate, unrelated reason why the replacement neurons talk of qualia, that has nothing to do with qualia, that was not deliberately engineered - that stretches belief past the breaking point. You're saying, essentially: "qualia cause talk of qualia in my meatbrain, but talk of qualia is not any indication of qualia in any differently built brain implementing the same spec". Then why are you so certain that your talk of qualia is caused by your supposed qualia, and not the neural analogue of what causes talk of qualia in WBE brains? It really does sound like your qualia are either superfluous or bizarre. [edit] Actually, I'm still not sure I understand you. Are you proposing that it's impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery? Is that what you mean by "functional equivalent"? I'm having serious trouble comprehending your position. [edit] I went back to your original comment, and I think we're using "functional equivalence" in a very different sense. To you, it seems to indicate "a system that behaves in the same way despite having potentially hugely different internal architecture". To me, it indicates a 1:1 neuron computational replacement; keeping the computational processes while running them on a different substrate. I agree that there may conceivably exist functionally equivalent

Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.

We can tell that we have qualia, and our own consciousness is the natural starting point.

"Qualia" can be defined by giving examples: the way anchiovies taste, the way tomatos look, etc.

You are making heavy weather of the indefinability of some aspects of consciousness, but the flipside of that is that we all experience our own consciousness. It is n... (read more)

I don't see anything very new here.

Charles: "Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there."

Albert: "But I wouldn't even have to tell you about the robot operation. You wouldn't notice. If you think, going on introspective evidence, that you are in an

... (read more)
2ESRogs
I don't think I understand what you're saying here, what kind of change could you notice but not report?
3FeepingCreature
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your "qualia" are causally impotent and I'd go so far as to say, meaningless. Are you sure you read Eliezer's critique of Chalmers? This is exactly the error that Chalmers makes. It may also help you to read making beliefs pay rent and consider what the notion of qualia actually does for you, if you can imagine a person talking of qualia for the same reason as you while not having any.

If we want to understand how consciousness works in humans, we have to account for qualia as part of it. Having an understanding of human consciousness is the best practical basis for deciding whether other entities have consciousness. OTOH, starting by trying to decide which entities have consciousness is unlikely to lead anywhere.

The biological claim can be ruled out if it is incoherent, but not for being unproven, since the functional/computational alternative is also unproven.

2asparisi
Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia. Qualia can perhaps best be described, briefly, as "subjective experience." So what do we mean by 'subjective' and 'experience'? If by 'subjective' we mean 'unique to the individual position' and by 'experience' we mean 'alters its internal state on the basis of some perception' then qualia aren't that mysterious: a video camera can be described as having qualia if that's what we are talking about. Of course, many philosophers won't be happy with that sort of breakdown. But it isn't clear that they will be happy with any definition of qualia that allows for it to be distinguished. If you want it to be something mysterious, then you aren't even defining it. You are just being unhelpful: like if I tell you that you owe me X dollars, without giving you any way of defining X. If you want to break it down into non-mysterious components or conditions, great. What are they? Let me know what you are talking about, and why it should be considered important. At this point, it's not a matter of ruling anything out as incoherent. It's a matter of trying to figure out what sort of thing we are talking about when we talk about consciousness and seeing how far that label applies. There doesn't appear to be anything inherently biological about what we are talking about when we are talking about consciousness. This could be a mistake, of course: but if so, you have to show it is a mistake and why.

Why? That doesn't argue any point relevant to this discussion.

7ESRogs
Did you read all the way to the dialogue containing this hypothetical? The following discussion seems very relevant indeed.

"qualia" labels part of the explanandum, not the explanation.

The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious.

A functional duplicate will talk the same way as whomever it is a duplicate of.

A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e. consciousness),

A WBE of a specific person will respond to the s... (read more)

The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs.

1mwengler
Two things. 1) that the same electronic functioning produces consciousness if implemented on biological goo but does not if implemented on silicon seems unlikely, what probability would you assign that this is the meaningful difference? 2) if it is biological goo we need to have consciousness, why not build an AI out of biological goo? Why not synthesize neurons and stack and connect them in the appropriate ways, and have understood the whole process well enough that either you assemble it working or you know how to start it? It would still be artificial, but made from materials that can produce consciousness when functioning.
5Eliezer Yudkowsky
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/

The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombies must be conscious.

A faithful synaptic-level s... (read more)

Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause) it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause

For some value of "cause". If you are interested in which synaptic signals cause ... (read more)

8Eliezer Yudkowsky
http://lesswrong.com/lw/p7/zombies_zombies/ http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/ http://lesswrong.com/lw/f1u/causal_reference/ More generally http://wiki.lesswrong.com/wiki/Zombies_(sequence)

I don't see the relevance. I was trying to argue that the biological claim could be read as more specific than the functional one.

-1torekp
I was agreeing. And trying to elaborate the magnetism analogy. I'm looking to break the hold that functionalism has on so many lesswrongers, but I'm not sure how to go about it.

Instead, if structural correspondence allowed for significant additional confidence that the em's professions of being conscious were true, wouldn't such a model just not stop, demanding "turtles all the way down"?

IOW, why assign "top" probability to the synaptic level, when there are further levels.

This comment:

EY to Kawoomba:

This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.

Appeals to contradict this comment:

EY to Juno_Watt:

Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)

If chatting with cute women has utility for you, your decision was rational. Rationality doesn't mean you have to restrict yourself to "official" payoffs.

Then why require causal isomorphism at the synaptic structure in addition to surface correspondence of behaviour?

Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause) it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (perhaps not necessary).

We also speak of surface... (read more)

0Juno_Watt
This comment: EY to Kawoomba: Appeals to contradict this comment: EY to Juno_Watt

What does "like" mean, there? The actual biochemistry, so that pieces of Em could be implanted in a real brain, or just accurate virtualisation, like a really good flight simulator?

6Eliezer Yudkowsky
Flight simulator, compared to instrumentation of and examination of biology. This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.
2DanArmak
I think the conversation might as well end here. I wasn't responsible for the first three downvotes, but after posting this reply I will add a fourth downvote. There was a clear failure to communicate and I don't feel like investing the time explaining the same thing over and over again.

I'm just asking you what the word means to you, because it demonstrably means different things to different people, even though they are all English users.

I have already stated those aspects of the meaning of "consciousness" necessary for my argument to go through. Why should I explain more?

2DanArmak
You mean these aspects? A lot of things would satisfy that definition without having anything to do with "consciousness". An inert lump of metal stuck in your brain would satisfy it. Are you saying you really don't know anything significant about what the word "consciousness" means beyond those two requirements?

I am saying it is not conceptually possible to have something that precisely mimics a biological entity without being biological.

2Luke_A_Somers
It will need to have a biological interface, but its insides could be nonbiological.
1lukstafi
Isn't it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism -- accommodate some aspects of identity theory -- but not to directly deny it.

I mean data about individuals, like resumes and qualifications. That racial-group info correlates with important things is unimportant, unless it correlates significantly more than individual data. However, the reverse is the case.

-3Eugine_Nier
First, I don't understand the distinction you're drawing between "individual data" and presumably "group data", since the people with a particular qualification are a group and conversely skin color, say, is a property of an individual. Back to the point: in the great-grandparent I was talking about affirmative action and the disparate impact. The logic of those is based on concluding that racism happened on the basis of disparate outcomes. This logic relies on the implicit premise that race isn't correlated with anything important. I don't see how anything you wrote in your two comments addresses this issue.

If you use the word "consciousness", you ought to know what you mean by it.

The same applies to you. Any English speaker can attach a meaning to "consciousness". That doesn't imply the possession of deep metaphysical insight. I don't know what dark matter "is" either. I don't need to fully explain what consciousness "is", since ...

"I don't think the argument requires consc. to be anything more than:

1) something that is there or not (not a matter of interpretation or convention).

2) something that is not entirely inferable from behaviour."

2DanArmak
You repeatedly miss the point of my argument. If you were teaching English to a foreign person, and your dictionary didn't contain the word "Consciousness", how would you explain what you meant by that word? I'm not asking you to explain to an alien. You can rely on shared human intuitions and so on. I'm just asking you what the word means to you, because it demonstrably means different things to different people, even though they are all English users.
2DanArmak
If you use the word "consciousness", you ought to know what you mean by it. You should always be able to taboo any word you use. So I'm asking you, what is this "consciousness" that you (and the OP) talk about?

This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological.

Why would that be possible? Neurons have to process biochemicals. A full replacement would have to as well. How could it do that without being at least partly biological?

It might be the case that an adequate replacement -- not a full replacement -- could be non-biological. But it might not.

2Furslid
It's a thought experiment. It's not meant to be a practical path to artificial consciousness or even brain emulation. It's a conceptually possible scenario that raises interesting questions.

Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness.

That would depend on the granularity of the WBE, which has not been specified, and the nature of the supervenience of experience on brain states, which is unknown.

1lukstafi
The truth of the claim, or the degree of difference? The claim is that identity obtains in the limit, i.e. in any practical scenario there wouldn't be identity between experiences of a biological brain and WBE, only similarity. OTOH identity between WBEs can obviously be obtained.

I wasn't arguing that differences in implementation are not important. For some purposes they are very important.

I am not arguing they are important. I am arguing that there are no facts about what is an implementation unless a human has decided what is being implemented.

We should not discuss the question of what can be conscious, however, without first tabooing "consciousness" as I requested.

I don't think the argument requires consciousness to be anything more than:

1) something that is there or not (not a matter of interpretation or convention).

2) something that is not entirely inferable from behaviour.

2DanArmak
Fine, but what is it?