Not so! An AGI need not think like a human, need not know much of anything about humans, and need not, for that matter, be as intelligent as a human.
Is that a fact? No, it's a matter of definition. It's scarcely credible that you are unaware that a lot of people think the TT is critical to AGI.
The problem I'm pointing to here is that a lot of people treat 'what I mean' as a magical category.
I can't see any evidence of anyone involved in these discussions doing that. It looks like a straw man to me.
...Ok. NL is hard. Everyone knows that. But it's got to
If an agent has goal G1 and sufficient introspective access to know its own goal, how would avoiding arbitrariness in its goals help it achieve goal G1 better than keeping goal G1 as its goal?
Avoiding arbitrariness is useful to epistemic rationality and therefore to instrumental rationality. If an AI has rationality as a goal it will avoid arbitrariness, whether or not that assists with G1.
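For what it's worth, the goal-stability reading of the question above can be put in a few lines. Here is a minimal sketch (all names and the scoring numbers are hypothetical stand-ins, not anyone's actual proposal): an agent that scores candidate goals by how well adopting them would serve its *current* goal G1 keeps G1 by G1's own measure, so avoiding arbitrariness gives it no G1-relative reason to switch unless rationality is already part of its goal.

```python
# Toy illustration of the goal-stability argument (hypothetical names).
# The agent evaluates candidate successor goals by its CURRENT goal G1,
# not by any goal-independent criterion like "non-arbitrariness".

def expected_g1_achievement(adopted_goal, g1):
    """Stand-in measure: how well the agent serves g1 if it adopts
    adopted_goal. Adopting anything else only diverts effort from g1."""
    return 1.0 if adopted_goal == g1 else 0.5  # illustrative numbers

def choose_successor_goal(g1, candidates):
    # Score each candidate goal by how well keeping it serves g1.
    return max(candidates, key=lambda g: expected_g1_achievement(g, g1))

g1 = "G1"
print(choose_successor_goal(g1, ["G1", "rationality", "non-arbitrary goal"]))
# -> "G1": by G1's own measure, keeping G1 dominates switching.
```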
Software that initially appears to care what you mean will be selected by market forces. But nearly all software that superficially looks Friendly isn't Friendly. If there are seasoned AI researchers who can't wrap their heads around the five theses, then how can I be confident that the Invisible Hand will both surpass them intellectually and recurrently sacrifice short-term gains on this basis?
A. Solve the Problem of Meaning-in-General in advance, and program it to follow our instructions' real meaning. Then just instruct it 'Satisfy my preferences', and wait for it to become smart enough to figure out my preferences.
That problem has got to be solved somehow at some stage, because something that couldn't pass a Turing Test is no AGI.
But there are a host of problems with treating the mere revelation that A is an option as a solution to the Friendliness problem.
- You have to actually code the seed AI to understand what we mean. Y...
Why is tha...
1) What seems (un)likely to an individual depends on their assumptions. If you regard consc. as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consc. precisely because some aspects (subjective experience, qualia) don't have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can't even get a start on building emotion chips or writing seeRed().
2) It's not practical at the moment, and wouldn't answer the theoretical questions.
I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain "experiential knowledge" just by reading the verbal statements.
Mary isn't a normal human. The point of the story is to explore the limits of explanation. That being the case, Mary is granted unlimited intelligence, so that whatever limits she encounters are limits of explanation, and not her own limits.
...I think the most l
If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.
I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence
Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.
...Are you proposing that it's impossible
"Feelings are made out of firing neurons, which are in turn made out of atoms."
A claim that some X is made of some Y does not show how X's are made of Y's. Can you explain why red is produced, and not something else?
I don't get the appeal of dualism.
I wasn't selling dualism; I was noting that ESR's account is not particularly physicalist, as well as being not particularly explanatory.
P-zombies and inverted spectrums deserve similar ridicule.
I find the Mary argument more convincing.
That isn't a reductive explanation, because no attempt is made to show how Mary's red quale breaks down into smaller component parts. In fact, it doesn't do much more than say subjectivity exists, and occurs in sync with brain states. As such, it is compatible with dualism.
Reading Wikipedia's entry on qualia, it seems to me that most of the arguments that qualia can't be explained by reductionism are powered by the same intuition that makes us think that you can give someone superpowers without changing them in any other way.
You mean the p-zombie argument...
If a change to the way your functionality is implemented alters how your consciousness seems to you, your consciousness will seem different to you. If your functionality is preserved, you won't be able to report it. You will report that tomatoes are red even if they look grue or bleen to you. (You may also be unable to cognitively access, that is, remember or think about, the change, if that is part of the preserved functionality. But if your experience changes, you can't fail to experience it.)
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your "qualia" are causally impotent and, I'd go so far as to say, meaningless.
Doesn't follow. Qualia aren't causing Charles's qualia-talk, but that doesn't mean they aren't causing mine. Kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.
The epiphenomenality argument works for atom-by-atom dupli...
Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.
We can tell that we have qualia, and our own consciousness is the natural starting point.
"Qualia" can be defined by giving examples: the way anchiovies taste, the way tomatos look, etc.
You are making heavy weather of the indefinability of some aspects of consciousness, but the flipside of that is that we all experience our own consciousness. It is n...
I don't see anything very new here.
...Charles: "Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there."
Albert: "But I wouldn't even have to tell you about the robot operation. You wouldn't notice. If you think, going on introspective evidence, that you are in an
If we want to understand how consciousness works in humans, we have to account for qualia as part of it. Having an understanding of human consc. is the best practical basis for deciding whether other entities have consc. OTOH, starting by trying to decide which entities have consc. is unlikely to lead anywhere.
The biological claim can be ruled out if it is incoherent, but not for being unproven, since the functional/computational alternative is also unproven.
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious.
A functional duplicate will talk the same way as whomever it is a duplicate of.
A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e. consciousness).
A WBE of a specific person will respond to the s...
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombie must be conscious.
A faithful synaptic-level s...
Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause), it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause.
For some value of "cause". If you are interested in which synaptic signals cause ...
Instead, if structural correspondence allowed for significant additional confidence that the em's professions of being conscious were true, wouldn't such a model simply never stop, demanding "turtles all the way down"?
IOW, why assign "top" probability to the synaptic level when there are further levels?
This comment:
EY to Kawoomba:
This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.
Appears to contradict this comment:
EY to Juno_Watt:
Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)
Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause), it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (perhaps not necessary).
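A minimal sketch of the "verify a small repeatable target, then tile" procedure described above (all names and the toy units here are hypothetical illustrations under that reading, not anyone's actual proposal):

```python
# Sketch of the small-target-then-tile verification idea (hypothetical).
# 'Units' stand in for small neural components whose input/output
# behavior is assumed repeatable, unlike a whole brain's.

def behaviorally_isomorphic(bio_unit, emu_unit, test_inputs, trials=3):
    """True if the emulated unit matches the biological unit's
    input/output behavior on every input, across repeated trials."""
    return all(bio_unit(x) == emu_unit(x)
               for x in test_inputs
               for _ in range(trials))

def emulate_brain(bio_units, emulate_unit, test_inputs):
    """Verify the emulation of each small unit, then 'tile' the result."""
    emu_units = [emulate_unit(u) for u in bio_units]
    assert all(behaviorally_isomorphic(b, e, test_inputs)
               for b, e in zip(bio_units, emu_units)), "isomorphism failed"
    return emu_units

# Toy usage: two stand-in "units" and a trivially faithful emulator.
bio_units = [lambda x: 2 * x, lambda x: x + 1]
emu_units = emulate_brain(bio_units, lambda u: (lambda x: u(x)),
                          test_inputs=[0, 1, 2])
```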
We also speak of surface...
If you use the word "consciousness", you ought to know what you mean by it.
The same applies to you. Any English speaker can attach a meaning to "consciousness". That doesn't imply the possession of deep metaphysical insight. I don't know what dark matter "is" either. I don't need to fully explain what consc. "is", since ...
"I don't think the argument requires consc. to be anything more than:
1) something that is there or not (not a matter of interpretation or convention).
2) something that is not entirely inferable from behaviour."
This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological.
Why would that be possible? Neurons have to process biochemicals. A full replacement would have to as well. How could it do that without being at least partly biological?
It might be the case that an adequate replacement (not a full replacement) could be non-biological. But it might not.
Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness.
That would depend on the granularity of the WBE, which has not been specified, and the nature of the supervenience of experience on brain states, which is unknown.
I wasn't arguing that differences in implementation are not important. For some purposes they are very important.
I am not arguing they are important. I am arguing that there are no facts about what is an implementation unless a human has decided what is being implemented.
We should not discuss the question of what can be conscious, however, without first tabooing "consciousness" as I requested.
I don't think the argument requires consc. to be anything more than:
1) something that is there or not (not a matter of interpretation or convention).
2) something that is not entirely inferable from behaviour.
I can think of only one example of someone who actually did this, and that was someone generally classed as a mystic.