I'm reading a review of studies using "inverting spectacles", which "invert" the visual field. Experimenters then see whether subjects report having the same subjective experience as originally after adapting to the inverting spectacles.
The problem is that I can't tell from the paper what inverting spectacles do! It's vital to know whether they invert the visual field around just one axis, or around two axes at the same time (which is what a lens or pinhole does). If it's around two, then the effect is to rotate the visual field 180 degrees. This would not cause the wearer to experience a mirror-image world.
The number in the conclusion just changed by a factor of 40. I double-checked the math in the last section, and found it was overly complex and completely wrong. I also forgot to remove the number of predicates p from the dictionary-based estimate for s.
BTW, note that even if you can prove that an AI isn't conscious, that doesn't prove that it isn't dangerous.
Are you treating the arguments in the assertions as independent random variables? Because the same symbol will show up in several assertions, in which case the meanings aren't independent.
"and our conscious experience would much more likely consist of periods in which the world behaved sensibly, punctuated by inconsistent dream-like periods"
This might not be the best phrasing to use, considering that almost everyone experiences exactly that on a regular basis. I'd guess that it would be important to emphasize that the dream-like periods would occur more arbitrarily than our nice regular sleep cycle.
Now I'm stuck trying to distinguish a grounding from a labeling. I thought of a grounding as being something like "atom G234 is associated with objects reflecting visible light of wavelengths near 650nm most strongly", and a labeling as being "the atom G234 has the label red".
But this grounding is something that could be expressed within the knowledge base. This is the basic symbol-grounding problem: Where does it bottom out? The question I'm facing is whether it bottoms out in a procedural grounding like that given above, or whethe...
The basic problem with these equations is that the test is more stringent for large a than for small a, because parameters that gave an expected 1 random grounding for a assertions would give only ¼ of a random grounding for a/4 assertions.
I'm surprised no one has come up with the objection that we believe things that aren't true. That's a tough one to handle.
Something worries me about this. It seems to say that Consciousness is a quantifiable thing, and as such, certain living things (babies, cows, dogs/cats, fish) may not meet the standards for Consciousness?
Or, am I reading it wrong??? I'm probably reading it wrong, please tell me I'm reading it wrong (if that is what I am doing).
I think this article would be better if it started with a definition of consciousness (or maybe even two definitions, a formal one and an operational one). It's hard to figure out how much sense the calculations make if we don't know what the results mean.
So if I understand correctly, your basic claim underlying all of this is that a system can be said not to be conscious if its set of beliefs remains equally valid when you switch the labels on some of the things it has beliefs about. I have a few concerns about this point, which you may have already considered, but which I would like to see addressed explicitly. I will post them as replies to this post.
If I am mischaracterizing your position, please let me know, and then my replies to this post can probably be ignored.
The number of assertions needed is now so large that it may be difficult for a human to acquire that much knowledge. Does anyone have an estimate for how many facts a human knows at different ages? Vocabulary of children entering grade school is said to be around 3000 words, IIRC.
An interesting result is that it suggests that the rate at which we can learn new concepts is not limited by our ability to learn the concepts themselves, but by our ability to learn enough facts using the concepts that we can be truly conscious of that knowledge. Or - if you ...
This is a summary of an article I'm writing on consciousness, and I'd like to hear opinions on it. It is the first time anyone has been able to defend a numeric claim about subjective consciousness.
ADDED: Funny no one pointed out this connection, but the purpose of this article is to create a nonperson predicate.
1. Overview
I propose a test for the absence of consciousness, based on the claim that a necessary, but not sufficient, condition for a symbol-based knowledge system to be considered conscious is that it has exactly one possible symbol grounding, modulo symbols representing qualia. This supposition, plus a few reasonable assumptions, leads to the conclusion that a symbolic artificial intelligence using Boolean truth-values and having an adult vocabulary must have on the order of 100,000 assertions before we need worry whether it is conscious.
Section 2 will explain the claim about symbol-grounding that this analysis is based on. Section 3 will present the math and some reasonable assumptions for computing the expected number of randomly-satisfied groundings for a symbol system. Section 4 will argue that a Boolean symbol system with a human-level vocabulary must have on the order of 100,000 assertions in order for it to be probable that no spurious symbol groundings exist.
2. Symbol grounding
2.1. A simple representational system
Consider a symbolic reasoning system whose knowledge base K consists of predicate logic assertions, using atoms, predicates, and variables. We will ignore quantifiers. In addition to the knowledge base, there is a relatively small set of primitive rules that say how to derive new assertions from existing assertions; these rules are not asserted in K, but are implemented by the inference engine interpreting K. Any predicate that occurs in a primitive rule is called a primitive predicate.
The meaning of primitive predicates is specified by the program that implements the inference engine. The meaning of predicates other than primitive predicates is defined within K, in terms of the primitive predicates, in the same way that LISP code defines a semantics for functions based on the semantics of LISP's primitive functions. (If this is objectionable, you can devise a representation in which the only predicates are primitive predicates (Shapiro 2000).)
This still leaves the semantics of the atoms undefined. We will say that a grounding g for that system is a mapping from the atoms in K to concepts in a world W. We will extend the notation so that g(P) more generally indicates the concept in W arrived at by mapping all of the atoms in the predication P using g. The semantics of this mapping may be referential (e.g., Dretske 1985) or intensional (Maida & Shapiro 1982). The world may be an external world, or pure simulation. What is required is a consistent, generative relationship between symbols and a world, so that someone knowing that relationship, and the state of the world, could predict what predicates the system would assert.
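To make these definitions concrete, here is a minimal sketch in Python (the knowledge base, world, predicates, and names below are invented for illustration; they are not from the article) of a knowledge base K of binary-predicate assertions, a world W, and a grounding g mapping the atoms of K to concepts in W:

    # A toy knowledge base K: each assertion is (predicate, atom, atom).
    K = [
        ("larger_than", "G1", "G2"),
        ("part_of",     "G2", "G3"),
    ]

    # A toy world W: the set of relations that actually hold among concepts in W.
    W = {
        ("larger_than", "sun", "earth"),
        ("part_of",     "earth", "solar_system"),
    }

    # A grounding g: a mapping from the atoms of K to concepts in W.
    g = {"G1": "sun", "G2": "earth", "G3": "solar_system"}

    def apply_grounding(assertion, grounding):
        """Return g(P): the assertion with its atoms mapped into W."""
        pred, x, y = assertion
        return (pred, grounding[x], grounding[y])

    # g is a satisfied grounding iff every grounded assertion g(P) holds in W.
    print(all(apply_grounding(P, g) in W for P in K))   # True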
2.2. Falsifiable symbol groundings and ambiguous consciousness
If you have a system that is meant to simulate molecular signaling in a cell, I might be able to define a new grounding g' that re-maps its nodes to things in W, so that for every predication P in K, g'(P) is still true in W; but now the statements in K would be interpreted as simulating traffic flow in a city. If you have a system that you say models disputes between two corporations, I might re-map it so that it is simulating a mating ritual between two creatures. But adding more information to your system is likely to falsify my remapping, so that it is no longer true that g'(P) is true in W for all propositions P in K. The key assumption of this paper is that a system need not be considered conscious if such a currently-true but still falsifiable remapping is possible.
A falsifiable grounding is a grounding for K whose interpretation g(K) is true in W by chance, because the system does not contain enough information to rule it out. Consider again the system you designed to simulate a dispute between corporations, which I claim is simulating mating rituals. Let's say for the moment that the agents it simulates are, in fact, conscious. Furthermore, since both mappings are consistent, we don't get to choose which mapping they experience. Perhaps each agent has two consciousnesses; perhaps each settles on one interpretation arbitrarily; perhaps each flickers between the two like a person looking at a Necker cube.
Although in this example we can say that one interpretation is the true interpretation, our knowledge of that can have no impact on which interpretation the system consciously experiences. Therefore, any theory of consciousness that claims the system is conscious of events in W under the intended grounding must also admit that it is conscious of an entirely different set of events in W under the accidental grounding, at least until new information rules the latter out.
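Continuing the toy sketch from section 2.1 (again purely illustrative, not from the article): if W happens to contain a little extra structure, a second grounding g2 satisfies every assertion currently in K while mapping the atoms to entirely different concepts, and a single additional assertion can falsify it:

    # Suppose W also happens to contain these relations.
    W |= {
        ("larger_than", "whale", "mouse"),
        ("part_of",     "mouse", "food_web"),
    }

    # An accidental grounding that K does not yet contain enough information to rule out.
    g2 = {"G1": "whale", "G2": "mouse", "G3": "food_web"}
    print(all(apply_grounding(P, g2) in W for P in K))   # True

    # Adding information falsifies the remapping: a new assertion that holds
    # under the intended grounding g but not under the accidental grounding g2.
    W.add(("contains", "solar_system", "sun"))
    K.append(("contains", "G3", "G1"))
    print(all(apply_grounding(P, g)  in W for P in K))   # True
    print(all(apply_grounding(P, g2) in W for P in K))   # False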
2.3. Not caring is as good as disproving
I don't know how to prove that multiple simultaneous consciousnesses don't occur. But we don't need to worry about them. I didn't say that a system with multiple groundings couldn't be conscious. I said it needn't be considered conscious.
Even if you are willing to consider a computer program with many falsifiable groundings to be conscious, you still needn't worry about how you treat that computer program. Because you can't be nice to it no matter how hard you try. It's pointless to treat an agent as having rights if it doesn't have a stable symbol-grounding, because what is desirable to it at one moment might cause it indescribable agony in the next. Even if you are nice to the consciousness with the grounding intended by the system's designer, you will be causing misery to an astronomical number of equally-real alternately-grounded consciousnesses.
3. Counting groundings
3.1. Overview
Let g(K) denote the set of assertions about the world W that are produced from K by a symbol-grounding mapping g. The system fails the unique symbol-grounding test if there is a permutation function f (other than the identity function) mapping atoms into other atoms, such that g(K) is true in W and g(f(K)) is true in W. Given K, what is the probability that there exists such a permutation f?
We assume Boolean truth-values for our predicates. Suppose the system represents s different concepts as unique atoms. Suppose there are p predicates in the system, and a assertions made over the s symbols using these p predicates. We wish to know that it is not possible to choose a permutation f of those s symbols, such that the knowledge represented in the system would still evaluate to true in the represented world.
We will calculate the probability P(p,a) that each of a assertions using p predicates evaluates to true under a random grounding. We will also calculate the number N(s) of possible groundings of the symbols in the knowledge base. We can then calculate the expected number E of random symbol groundings, in addition to the one intended by the system builder, as
E = N(s) × P(p,a)
Equation 1: Expected number of accidental symbol groundings
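For a knowledge base small enough to enumerate, E can simply be counted by brute force. The sketch below (illustrative Python, my own function name) counts the non-identity permutations f of the atoms for which g(f(K)) is still true in W; applied to the toy example of section 2.1 after the falsifying assertion was added, it returns 0:

    from itertools import permutations

    def count_accidental_groundings(K, W, g):
        """Count non-identity permutations f of the atoms of K for which g(f(K)) is true in W."""
        atoms = sorted({x for (_, a1, a2) in K for x in (a1, a2)})
        count = 0
        for perm in permutations(atoms):
            f = dict(zip(atoms, perm))
            if all(x == y for x, y in f.items()):
                continue                                  # skip the identity permutation
            f_K = [(pred, f[a1], f[a2]) for (pred, a1, a2) in K]           # f(K)
            if all((pred, g[a1], g[a2]) in W for (pred, a1, a2) in f_K):   # is g(f(K)) true in W?
                count += 1
        return count

    print(count_accidental_groundings(K, W, g))   # 0 for the toy example above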
3.2. A closed-form approximation
This section, which I'm saving for the full paper, proves that, with certain reasonable assumptions, the requirement that E be less than one reduces to
s ln(s) - s < a ln(p)/2
Equation 7: The consciousness inequality for Boolean symbol systems
As s and p should be similar in magnitude, this can be approximated very well as 2s < a.
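To see where an inequality of this shape can come from (what follows is my own back-of-the-envelope reconstruction under assumed forms for N and P, not the derivation reserved for the full paper): suppose the candidate groundings are the permutations of the s atoms, so N(s) = s!, and suppose each of the a assertions independently survives a random permutation with probability 1/sqrt(p), so P(p,a) = p^(-a/2). Equation 1 then gives

E = s! × p^(-a/2)

and requiring E < 1, taking logarithms, and applying Stirling's approximation ln(s!) ≈ s ln(s) - s yields s ln(s) - s < a ln(p)/2, which is Equation 7.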
4. How much knowledge does an AI need before we need worry whether it is conscious?
How complex must a system that reasons something like a human be to pass the test? By “something like a human” we mean a system with approximately the same number of categories as a human. We will estimate this from the number of words in a typical human’s vocabulary.
(Goulden et al. 1990) studied Webster's Third International Dictionary (1963) and concluded that it contains fewer than 58,000 distinct base words. They then tested subjects for their knowledge of a sample of base words from the dictionary, and concluded that native English speakers who are university graduates have an average vocabulary of around 17,000 base words. This accounts for concepts that have their own words. We also have concepts that we can express only by joining words together ("back pain" occurs to me at the moment); some concepts that we would need entire sentences to communicate; and some concepts that share the same word with other concepts. However, some proportion of these concepts will be represented in our system as predicates, rather than as atoms.
I used 50,000 as a ballpark figure for s. This leaves a and p unknown. We can estimate a from p if we suppose that predicate frequencies follow a Zipf-like distribution in which the least-common predicate in the knowledge base is used exactly once; summing the frequencies over all p predicates then gives a/(p ln(p)) = 1, i.e., a = p ln(p). We can then compute the smallest a such that Equation 7 is satisfied.
Solving for p would be difficult. Using 100 iterations of Newton's method (from the starting guess p=100) finds p=11,279, a=105,242. This indicates that a pure symbol system having a human-like vocabulary of 50,000 atoms must have at least 100,000 assertions before one need worry whether it is conscious.
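The estimate is easy to reproduce. Here is a minimal Python sketch of one way to do so (my own variable names; not the author's code), applying Newton's method to the boundary case of Equation 7 with a = p ln(p) substituted in:

    import math

    s = 50_000                        # ballpark number of atoms (base-word concepts)
    lhs = s * math.log(s) - s         # left-hand side of Equation 7: s ln(s) - s

    # Substituting a = p ln(p) into the right-hand side of Equation 7 gives
    # p (ln p)^2 / 2.  Find the p at which the two sides are equal.
    def residual(p):
        return p * math.log(p) ** 2 / 2 - lhs

    def residual_prime(p):
        return math.log(p) ** 2 / 2 + math.log(p)

    p = 100.0                         # same starting guess as in the text
    for _ in range(100):
        p -= residual(p) / residual_prime(p)    # Newton's method

    a = p * math.log(p)
    print(round(p), round(a))         # roughly 11279 and 105242, matching the text up to rounding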
Children are less likely to have this much knowledge. But they also know fewer concepts. This suggests that the rate at which we learn language is limited not by our ability to learn the words, but by our ability to learn enough facts using those words for us to have a conscious understanding of them. Another way of putting this is that you can't short-change learning. Even if you try to jump-start your AI by writing a bunch of rules à la Cyc, you need to put in exactly as much data as would have been needed for the system to learn those rules on its own in order for it to satisfy Equation 7.
5. Conclusions
The immediate application of this work is that scientists developing intelligent systems, who may have (or be pressured to display) moral concerns over whether the systems they are experimenting with may be conscious, can use this approach to tell whether their systems are complex enough for this to be a concern.
In popular discussion, people worry that a computer program may become dangerous when it becomes self-aware. They may therefore imagine that this test could be used to tell whether a computer program posed a potential hazard. This is an incorrect application. I suppose that subjective experience somehow makes an agent more effective; otherwise, it would not have evolved. However, automated reasoning systems reason whether they are conscious or not. There is no reason to assume that a system is not dangerous because it is unconscious, any more than you would conclude that a hurricane is not dangerous because it is unconscious.
More generally, this work shows that it is possible, if one considers representations in enough detail, to make numeric claims about subjective consciousness. It is thus an existence proof that a science of consciousness is possible.
References
Fred Dretske (1985). Machines and the mental. In Proceedings and Addresses of the American Philosophical Association 59: 23-33.
Robin Goulden, Paul Nation, John Read (1990). How large can a receptive vocabulary be? Applied Linguistics 11: 341-363.
Anthony Maida & Stuart Shapiro (1982). Intensional concepts in propositional semantic networks. Cognitive Science 6: 291-330. Reprinted in Ronald Brachman & Hector Levesque, eds., Readings in Knowledge Representation, Los Altos, CA: Morgan Kaufmann 1985, pp. 169-189.
William Rapaport (1988). Syntactic semantics: Foundations of computational natural-language understanding. In James Fetzer, ed., Aspects of Artificial Intelligence (Dordrecht, Holland: Kluwer Academic Publishers): 81-131; reprinted in Eric Dietrich (ed.), Thinking Computers and Virtual Persons: Essays on the Intentionality of Machines (San Diego: Academic Press, 1994): 225-273.
Roger Schank (1975). The primitive ACTs of conceptual dependency. Proceedings of the 1975 workshop on Theoretical Issues in Natural Language Processing, Cambridge MA.
Stuart C. Shapiro (2000). SNePS: A logic for natural language understanding and commonsense reasoning. In Lucja Iwanska & Stuart C. Shapiro (eds.), Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language (Menlo Park, CA/Cambridge, MA: AAAI Press/MIT Press): 175-195.
Yorick Wilks (1972). Grammar, Meaning, and the Machine Analysis of Language. London.