Salticidae Philosophiae is a series of abstracts, commentaries, and reviews on philosophical articles and books.
Lynne Rudder Baker asks under what circumstances (if any) external objects can be considered part of us in the same manner as an arm or brain.
You can find the paper on JSTOR.
Highlights
"If we are extended systems, we are not agents of any sort; we are only systems of diverse and fluctuating parts."

Why not both?

Agency is a spectrum -- some people are more agenty than others.

The parts do not fluctuate that much -- does Otto completely rewrite or replace his notebook every day?

Diverse parts -- something like forebrain, midbrain, hindbrain, pons, medulla...?

I think that Baker would reply that only the whole brain, or perhaps some specific part of it, is a whole agent in the ways she considers important. One of the most crucial characteristics of a person, for Baker, is the possession of "a first-person perspective." Hammers do not have any kind of first-person perspective (i.e. there is nothing it is like to be a hammer), so they are not persons.

I'm currently working on a précis for one of her books, but suffice it to say, I'm no more convinced by her arguments when she has a whole book in which to expound on them.
New or uncommon terminology
Overview
Baker starts by pointing out that we are beginning to merge with technology (e.g. cochlear implants) and that technology now in development, such as computers commanded by the firing of neurons, will blur the line between ourselves and our tools even further.
Following this is an overview of the "extended-mind thesis" (hereafter EMT). Baker describes a being in this paradigm as a system that recruits and discards various components while generating an ongoing, retroactive narrative that gives it a (false) sense of self. She refers to this system as a "grab bag of tools" and wonders how such a thing could generate a narrative that makes sense of its actions.
Her focus is on the people. Specifically, "Where did all the people go?" Baker wants to know whether an extended system can remain a person (assuming that it was a person prior to extension). Is it possible for an extended system to be a rational or moral agent? Are extended systems capable of understanding what they do at the time that they are doing it?
Andy Clark and David Chalmers, the philosophers behind EMT, illustrated their idea with a thought experiment involving two people named Otto and Inga. In brief, both Otto and Inga want to go to the museum, but while Inga simply consults her memory, Otto has an impaired memory and instead consults his notebook. Their position is that the notebook functions as an extension of Otto's mind, so that Otto and his notebook together form "Extended Otto."
Baker, who suggests that personhood has to do with intent, says that "Unextended Otto" is the only person, notebook or no notebook, because intent begins within the physical body that we call Unextended Otto. The idea of intent is important to Baker because the intentional stance is a useful way of figuring out, if not where the persons are, then at least what we should treat as persons (i.e. maybe Alice is a philosophical zombie, but the intentional stance can give us a good reason to treat Alice otherwise).
After going over the Otto-and-Inga thought experiment, Baker suggests ways to "rescue" personhood from its apparent destruction at the hands of EMT. The first is to say that "persons," as such, do not actually exist, but it is useful to employ the term and so we shall. She refers to this as a "deflated" view, and does not like it at all. Part of her dislike, besides an understandable aesthetic distaste for it, is her idea that a "complete understanding of reality" would require reference in some cases to things or actions comprehensible only from the intentional stance. This can be contrasted with the view of Daniel Dennett, who invented the intentional stance but, perhaps paradoxically, holds that it is theoretically possible to give a full description of reality and its events without mentioning persons or even the idea or possibility of persons.
Baker, however, argues that the mind's personal level (contrasted with the subpersonal, roughly analogous to the subconscious) feels real and distinct to us (i.e. we feel like persons) and that this would not be true if we were just systems. A "grab bag of tools," she says, cannot understand itself.
Ultimately, she concludes that personhood is rooted in the possession of a first-person perspective, and that its substrate can be biological, non-biological, or a mixture of the two. Only things that are "functionally integrated" into our bodies are part of us and qualify as extensions of our mind (e.g. bionic eye yes, pencil no). Step by step, functional integration could even lead to the replacement of our entire biological bodies, but she implies that a rapid upload, say, would be the death of one person and (presumably) the creation of another rather than the transfer of a single person to a new substrate.
Comments
Much of Baker's argument rests on the idea that people understand what they're doing at the time that they do it, and, for that matter, that people exist at all. We are coming to learn that, even internally, a human is a coalition of agents and sub-agents rather than one unified agent. It appears that intent can be generated on a level below what Baker would refer to as Otto; what if it were possible for intent to be generated by other parts of an extended system, say, an advanced computer program? What if there is a back-and-forth between the components of a mind (as there very well may be in the human body), so that intent is not held by any one component alone?
Baker refers to "Otto himself" to suggest that there is a definitive locus of personhood, but maybe this is just a trick of language. It is also not clear that minds must be identical to persons. Perhaps we need to either revise our understanding of "person" or bite the bullet and say that there are no persons.
Also, why does a first-person perspective matter to the issue of agency? Suppose we say that a given computer program has no consciousness at all, but that, because it can learn from the results of its actions, it is an entity (or something like an entity) that we ought to treat as if it had agency, since the feedback loop will influence its future output. In that case, maybe it actually does have agency. Or, on the other hand, maybe nothing has agency, not even humans, but then the word becomes useless. In any case, given that we live in a universe governed by physical laws, I'm not sure how we could draw a sharp line between the decision-making processes of a human and those of a sufficiently advanced (but still not conscious) program, and basing the distinction on something like "has a first-person conscious perspective" seems like a last-ditch attempt from out of left field.
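To make the feedback-loop point concrete, here is a toy sketch of my own (the class, actions, and rewards are all invented for illustration, not drawn from Baker or from Clark and Chalmers): a short Python program that adjusts its future choices based on past results, with nothing resembling a first-person perspective anywhere in the loop.

import random

class FeedbackLearner:
    # Keeps a running value estimate for each action and steers
    # future choices toward whatever has paid off in the past.
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def choose(self):
        # Mostly exploit the best-known action; explore 10% of the time.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # The feedback loop: each observed result nudges the next choice.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

agent = FeedbackLearner(["press_lever", "wait"])
for _ in range(100):
    action = agent.choose()
    reward = 1.0 if action == "press_lever" else 0.0  # stand-in environment
    agent.learn(action, reward)
print(agent.values)  # ends up preferring "press_lever"; no consciousness required

Whether a loop like this has agency, or is merely something the intentional stance gives us reason to treat as if it did, is exactly the question at issue.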
Favorite passage
Author biography
Lynne Rudder Baker completed a B.A. in mathematics at Vanderbilt University in Nashville, Tennessee, in 1966. After a year studying philosophy at Johns Hopkins University on a National Defense Education Act Fellowship (1967–1968), she returned to Nashville to marry. She resumed her graduate studies at Vanderbilt, completing an M.A. in 1971 and a Ph.D. in 1972, both in philosophy.
In addition to her numerous contributions to books and journal articles, she has published three monographs: Saving Belief: A Critique of Physicalism (1987); Explaining Attitudes: A Practical Approach to the Mind (1995); and Persons and Bodies: A Constitution View (2000). A volume of critical essays devoted to her work, entitled Explaining Beliefs: Lynne Rudder Baker and Her Critics, was edited by Anthonie Meijers (2001).
Philosophers & works mentioned
Lynne Rudder Baker
Andy Clark
David Chalmers
Daniel Dennett
Also check out...