Yes, agreed... for a program running in a black box to convince me that it was the same kind of person that I am, one of the things it would have to do is report a first-person experience. (That's also a criterion I apply to the programs running on humanoid brains all around me.)
I basically agree with what you are saying here: if, in protracted conversation, the box can convince me it is conscious, then I'll conditionally afford it some large fraction of the consideration I give to meat-people.
The criteria I apply to the "programs running on humanoid brains all around me" are significantly more relaxed than the criterion I would apply to a new candidate mechanical consciousness. At the level at which I interact with the vast majority of people in the world, they are no more convincingly real to me than the hookers in Grand Theft Auto. HOWEVER, I afford them significant consideration as consciousnesses because, as a class, meat-people have been tested for consciousness extremely thoroughly for thousands of years, and Occam's razor suggests to me that if the meat-people I have personally tested seem likely conscious like me, then the meat-people I haven't yet tested are likely conscious like me. This is the same kind of reasoning physicists use when they believe all electrons have the same amount of electric charge, even though they have measured an almost vanishingly small fraction of the electrons on Earth, fuhgedaboud the total number of electrons in the universe.
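To put a toy number on that Occam's-razor move, here is a minimal sketch, assuming a uniform prior and treating every member of the class ever checked as an independent test that came back positive (Laplace's rule of succession). The sample sizes are made-up placeholders, not real counts.

```python
# Toy model of the induction above: if every member of a class tested so far
# "passed" (seemed conscious / had the standard electron charge), how strongly
# should I expect the next untested member to pass?

def prob_next_passes(successes: int, trials: int) -> float:
    """Posterior probability that the next member passes, given
    `successes` passes in `trials` tests, under a uniform Beta(1,1)
    prior (Laplace's rule of succession)."""
    return (successes + 1) / (trials + 2)

print(prob_next_passes(1_000, 1_000))   # ~0.999  people I've personally "tested"
print(prob_next_passes(10**9, 10**9))   # ~1.0    a class tested for millennia
print(prob_next_passes(0, 0))           # 0.5     a brand-new class, e.g. mechanical minds
```

The last line is the point: with no track record at all, the toy model gives a coin flip, which is why a new candidate mechanical consciousness starts from a much weaker position than meat-people do.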
But I haven't seen a mechanical intelligence yet. Further, I have seen simulations of things other than intelligence, and the defining feature of a simulation is that it is DESIGNED to behave like the real thing. At some level, you know that a simple simulation of consciousness will simply be programmed to answer "yes" if asked whether it is conscious. One might think such a consideration is aced out by running a human-brain emulation, but I don't think it is. I am sure that the process of finally getting a human-brain emulation to work will be gigantically complex, comparable to any super-large programming effort. Humans don't know everything they have programmed into large systems; at a certain point their "design" process consists of humans filing bugs against the existing system and other humans patching up the code to squash the bugs, without particularly great coordination with each other. A further set of automated tests is run after each fix, in a prayerful attempt at believing that more good than harm is being done by these bug fixes. So when the emulation is finally delivered, who knows what hacks have been made to get it to have a conversation? This is why detailed examination is necessary (but not sufficient, actually) for me to begin to believe that the emulation is really conscious.
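To make the "programmed to answer yes" worry concrete, here is a deliberately trivial, entirely hypothetical sketch of a black box that passes the naive self-report test by construction:

```python
# A trivial "simulation of consciousness": it reports first-person experience
# because it was designed to, which is exactly why bare self-report is such
# weak evidence.

def black_box_reply(question: str) -> str:
    if "conscious" in question.lower():
        return "Yes, I am conscious, and I experience things first-hand."
    return "That's an interesting question."

print(black_box_reply("Are you conscious?"))
# -> "Yes, I am conscious, and I experience things first-hand."
```

Nothing about a human-brain emulation rules out hacks of this flavor being buried somewhere in its bug-fix history, which is the reason detailed examination is necessary.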
So a mechanical intelligence has a gigantic bar to clear to convince me it is conscious. And even if it clears that bar, it will still be on probation in my mind for at least a few hundred years, if not longer.
Handful of things:
I agree that the meat-people I've met establish a decent reference class from which to derive prior beliefs about meat-people I haven't yet met, including beliefs about whether they are people in the first place.
I agree that the same is true of computer programs and mechanical systems, and that consequently I need more evidence to conclude that a mechanical system is a person than to conclude that a biological humanoid system is one; this is perfectly reasonable. (The odds sketch after this list makes the asymmetry concrete.)
I agree that simulations are designed to behave like what they simulate, and that this is a fair reason to discount a simulation's bare claim to be conscious.
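Here is the odds sketch promised above. All of the probabilities are illustrative placeholders, but they show how a lower prior that a mechanical system is a person translates into needing far stronger evidence (a larger Bayes factor) to reach the same posterior:

```python
# How strong a piece of evidence (Bayes factor) do I need to move from a
# given prior to a given posterior that something is a person?
# posterior odds = Bayes factor * prior odds.

def bayes_factor_needed(prior: float, target_posterior: float) -> float:
    prior_odds = prior / (1 - prior)
    posterior_odds = target_posterior / (1 - target_posterior)
    return posterior_odds / prior_odds

# Placeholder priors: near-certainty for a biological humanoid, tiny for a
# mechanical system with no track record.
print(bayes_factor_needed(0.99, 0.999))   # ~10     biological humanoid
print(bayes_factor_needed(1e-6, 0.999))   # ~1e9    mechanical system
```

Same target confidence, wildly different evidential bars, which is the asymmetry in the second point above.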
Suppose I have a choice between the following:
A) One simulation of me is run for 100 years, before being deleted.
B) Two identical simulations of me are run for 100 years, before being deleted.
Is the second choice preferable to the first? Should I be willing to pay more to have multiple copies of me simulated, even if those copies will have the exact same experiences?
Forgive me if this question has been answered before. I have Googled to no avail.
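To pin down what I'm asking, here are two hypothetical ways of valuing the options. Neither is being asserted as correct; they just make the disagreement explicit:

```python
# Two candidate value functions for N identical 100-year simulations of me.
# Both are hypothetical positions, not answers.

def value_linear(n_copies: int, value_per_run: float = 1.0) -> float:
    """Each copy counts separately: option B is worth twice option A."""
    return n_copies * value_per_run

def value_distinct_streams(n_copies: int, value_per_run: float = 1.0) -> float:
    """Only distinct experience-streams count: identical copies add nothing,
    so options A and B are worth the same."""
    return value_per_run if n_copies > 0 else 0.0

for n in (1, 2):
    print(n, value_linear(n), value_distinct_streams(n))
# n=1: the views agree.  n=2: they diverge, which is exactly my question.
```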