Not so! An AGI need not think like a human, need not know much of anything about humans, and need not, for that matter, be as intelligent as a human.
Is that a fact? No, it's a matter of definition. It's scarcely credible that you are unaware that a lot of people think the TT is critical to AGI.
The problem I'm pointing to here is that a lot of people treat 'what I mean' as a magical category.
I can't see any evidence of anyone involved in these discussions doing that. It looks like a straw man to me.
...Ok. NL is hard. Everyone knows that. But it's got to
If an agent has goal G1 and sufficient introspective access to know its own goal, how would avoiding arbitrariness in its goals help it achieve goal G1 better than keeping goal G1 as its goal?
Avoiding arbitrariness is useful to epistemic rationality, and therefore to instrumental rationality. If an AI has rationality as a goal, it will avoid arbitrariness, whether or not that assists with G1.
Present-day software may not have got far with regard to the evaluative side of doing what you want, but XiXiDu's point seems to be that it is getting better at the semantic side. Who was it who said the value problem is part of the semantic problem?
A. Solve the Problem of Meaning-in-General in advance, and program it to follow our instructions' real meaning. Then just instruct it 'Satisfy my preferences', and wait for it to become smart enough to figure out my preferences.
That problem has got to be solved somehow at some stage, because something that couldn't pass a Turing Test is no AGI.
But there are a host of problems with treating the mere revelation that A is an option as a solution to the Friendliness problem.
- You have to actually code the seed AI to understand what we mean. ...
Why is that...
Some folks on this site have accidentally bought unintentional snake oil in The Big Hoo Hah That Shall Not Be Mentioned. Only an intelligent person could have bought that particular puppy.
1) What seems (un)likely to an individual depends on their assumptions. If you regard consciousness as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consciousness precisely because some aspects (subjective experience, qualia) don't have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can't even get a start on building emotion chips or writing seeRed().
2) It's not practical at the moment, and wouldn't answer the theoretical questions.
my intuition that [Mary] would not understand qualia disappears.
For any value of abnormal? She is only quantitatively superior: she does not have brain-rewiring abilities.
Isn't that disproved by paid-for networks, like HBO? And what about non-US broadcasters like the BBC?
I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain "experiential knowledge" just by reading the verbal statements.
Mary isn't a normal human. The point of the story is to explore the limits of explanation. That being the case, Mary is granted unlimited intelligence, so that whatever limits she encounters are limits of explanation, and not her own limits.
...I think the most l
Why is she generating a memory? How is she generating a memory?
So she's bound and gagged, with no ability to use her knowledge?
If by "using her knowledge" you mean performing neurosurgery in herself, I have to repeat that that is a cheat.Otherwise, I ha e to point put that knowledge of, eg. phontosynthesis, doesn't cause photosynthesis.
She could then generate such memories in her own brain,
Mary is a super-scientist in terms of intelligence and memory, but doesn't have special abilities to rewire her own cortex. Internally generating Red is a cheat, like pricking her thumb to observe the blood.
If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.
I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence
Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.
...Are you proposing that it's impossible
"Feelings are made out of firing neurons, which are in turn made out of atoms."
A claim that some X is made of some Y does not show how X's are made of Y's. Can you explain why red is produced, and not something else?
I don't get the appeal of dualism.
I wasn't selling dualism; I was noting that ESR's account is not particularly physicalist, as well as not particularly explanatory.
P-zombies and inverted spectrums deserve similar ridicule.
I find the Mary argument more convincing.
I can think of only one example of someone who actually did this, and that was someone generally classed as a mystic.