Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
The anthropic principle creeps in again here, and methinks you missed it. The ability to make this argument is contingent upon being an entity capable of a certain level of formal introspection. Since you have enough introspection to make the argument, you can't be an animal. In your next million lives, so to speak, you won't be able to make this argument, though someone else out there will.
If you were any other animal on Earth, you wouldn't be considering what it would be like to be something else. The Doomsday argument and arguments like it are usually formulated along the lines of "Of all the persons who could reason like me, only this small percentage were ever wrong". When animals are prevented, by their neurological limitations, from reasoning as the argument requires, they're not part of this consideration.
This doesn't mean that they're not sentient; it just means that by thinking about anthropic problems you're part of a much narrower set of beings than just the sentient ones.
Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years? Is this a reductio ad absurdum, or do you think it's a valid conclusion?
I'm sorry, but I'm a bit shocked that people on this site can seriously entertain ideas like "why am I me?" or "why do I live in the present?" except as early April Fools' jokes. I am of course necessarily me, because I call whoever I am "me". And I necessarily live in the present, because I call the time I live in "the present". The question "Why am I not somebody else?" is nonsensical because, to almost anybody else, I am somebody else. I think the confusion stems from treating your own consciousness as simultaneously special and not special.
It only sounds nonsensical because of the words in which it's asked. The question raised by anthropic reasoning isn't "why do I live in a time I call the present" (to which, as you say, the answer is linguistic - of course we'd call our time the present) but rather "why do I live in the year 2010?" or, most precisely of all, "Given that I have special access to the subjective experience of one being, why would that be the experience of a being born in the late 20th century, as opposed to some other time?"
That may still sound tautological - after all, if it wasn't the 20th century, it'd be somewhen else and we'd be asking the same question - but in fact it isn't. Consider these two questions: "Why am I made out of carbon rather than helium?" and "Why do I live in the 20th century rather than some other time?"
The correct answer to the first is not to say, "Well, if you were made out of helium, you could just ask why you were made out of helium, so it's a dumb question"; it's to point out the special chemical properties of carbon. Anthropic reasoning suggests we can try doing the same with the second, pointing out certain special properties of the 20th century.
The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.
I think maybe some of this was meant for the comment above me.
That said I think the "I" really is the source of some if not all of these confusions and:
The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.
I think the difference is exactly enough to make the second one tautological or meaningless. What you have to do is identify some characteristics of "I" and then ask: Why do entities of this type exist in the 20th century, as opposed to the 30th? If you have identified features that distinguish 20th century people from 30th century people you will have asked something interesting and meaningful.
The key point I will remember from reading this post is that the anthropic Doomsday argument can safely be put away in a box labelled 'muddled thinking about consciousness' alongside 'how can you get blue from not-blue?', 'if a tree falls in a forest with nobody there does it make a sound?' and 'why do quantum events collapse when someone observes them?'.
There are situations in which anthropic reasoning can be used but it is a mistake to think that this is because of the ability of a bunch of atoms to perform the class of processing we happen to describe as consciousness.
The probability of a randomly picked currently-living person having a Finnish nationality is less than 0.001. I observe myself being a Finn. What, if anything, should I deduce based on this piece of evidence?
The results of any line of anthropic reasoning are critically sensitive to which set of observers one chooses to use as the reference class, and it's not at all clear how to select a class that maximizes the accuracy of the results. It seems, then, that the usefulness of anthropic reasoning is limited.
That's an interesting observation.
There's a problem in assuming that consciousness is a 0/1 property; that you're either conscious, or not.
There's another problem in assuming that YOU are a 0/1 property; that there is exactly one atomic "your consciousness".
Reflect on the discussion in the early chapters of Daniel Dennett's "Consciousness Explained", about how consciousness is not really a unitary thing, but the result of the interaction of many different processes.
An ant has fewer of these processes than you do. Instead of asking "What are the odds that 'I' ended up as me?", ask, "For one of these processes, what are the odds that it would end up in me, rather than in an ant?"
According to Wikipedia's entry on biomass, ants have 10-100 times the biomass of humans today.
According to Wikipedia's list of animals by neuron count, ants have 10,000 neurons.
According to that page, and this one, humans have 10^11 neurons.
Information is proportional not to the number of neurons, but to the number of patterns that can be stored in those neurons, which is likely somewhere between N and N^2. I'm gonna call it NlogN.
I weigh as much as 167,000 ants. Each...
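Under the comment's own assumptions (its quoted neuron counts, its N log N guess for stored information, the 167,000-ants body-weight figure, and taking the low end of the 10-100x biomass ratio), the back-of-the-envelope odds can be sketched like this:

```python
import math

# All figures are the comment's assumptions, not fresh measurements:
ant_neurons = 1e4                 # Wikipedia's neuron-count list, as quoted
human_neurons = 1e11
ants_per_human_weight = 167_000   # "I weigh as much as 167,000 ants"
biomass_ratio = 10                # ants have 10-100x human biomass; low end

def info(n):
    """Proxy for storable patterns: the comment's N log N guess."""
    return n * math.log2(n)

# Number of ants corresponding to one human, by total biomass
ants_per_human = ants_per_human_weight * biomass_ratio

odds = info(human_neurons) / (ants_per_human * info(ant_neurons))
print(f"odds a 'process' lands in the human rather than the ants: {odds:.1f}")
```

With the biomass ratio at the high end (100) the odds shrink by another factor of ten, so under these assumptions the human and ant totals come out surprisingly close.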
"why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".
Well, quite. Both are absurd.
At one time I wondered: why am I not a particle? The anthropic "explanation" is that particles can't be conscious. But that doesn't remove the prior improbability of my existence in this form. Empirically I know I'm conscious, so being a particle (under the usual assumptions) has a posterior probability of zero. But if I think of myself as a random sample from the set of all entities - and why shouldn't I? - then my a priori probability of being conscious is vanishingly small. (Unless I change my notion of reality rather radically.)
Let's look at examples where we know the 'right' answer:
Someone flips a coin. If it's heads they copy you a thousand times and put 1 of you in a green room and 999 of you in a red room. If it's tails they do the opposite.
You wake up in a green room and conclude that the coin was likely tails.
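The update in this first scenario can be checked directly with Bayes' theorem, treating each of the 1000 copies as equally likely to be "you" (a minimal sketch, not part of the original comment):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
# Self-sampling among the 1000 copies: each copy is equally likely to be you
p_green_given_heads = Fraction(1, 1000)   # heads: 1 green room, 999 red
p_green_given_tails = Fraction(999, 1000) # tails: 999 green rooms, 1 red

p_green = p_heads * p_green_given_heads + (1 - p_heads) * p_green_given_tails
p_tails_given_green = ((1 - p_heads) * p_green_given_tails) / p_green
print(p_tails_given_green)  # 999/1000: waking in green strongly favors tails
```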
Now assume that in addition to copying you 1000 times, 999 of you were randomly selected to have the part of your brain that remembers to apply anthropic reasoning erased. You wake up in a green room and remember to apply the anthropic principle, but, knowing that you ...
Following bogus, I could imagine endorsing a weaker form of the argument: not that it's like nothing to be a bat, but that it's like less to be a bat than to be a human.
In fact, if you've ever wondered why you happen to be the person you are, and not someone else, it may be that the reflectivity you are displaying by asking this question puts you in a more-strongly-anthropically-weighted reference class.
and the rest of us can eat veal and foie gras guilt-free.
I don't think this works.
Obama could use the same argument: if he could have been any person, it would be vanishingly unlikely that he'd be the president of the most powerful nation on earth. Thus, clearly, the rest of us (he would conclude) have no conscious experience, and he had better go ahead and be an egoist, and run the country in whatever way gives him the most personal gain.
I don't want Obama to do this, so I think I had better not do it either.
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
Only in the sense that it's impossible for you to be a rock, or a tree, or an alien, or another person, because you clearly aren't any of those things. All this tells you is that you should be nearly 100% certain that you are you, and that's no great insight.
The anthropic principle seems to imply that our subjective experiences take place in amazingly common ancestor simulations that don't simulate animals in sufficient detail to give them subjective experience. That I find myself experiencing being a human rather than being a bat, even though bats are in principle capable of subjective experience, is because there are vastly more detailed simulations of humans than of bats.
The fact that you are human is evidence that only humans are conscious, but it's far from proof. If you have no a priori reason to believe that only humans are conscious, that means it's just as likely that it's only humans as only bats. If the a priori probability of all animals being conscious is only the same as the probability that it's just a given species (I'd say it's much, much larger), and it's impossible for it to just be two species etc., then a posteriori, there would still be a 50:50 chance that all animals are conscious.
Of course, there is an...
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be.
I do not really think you need an anthropic argument to prove that "you" couldn't be an animal; it is more a matter of definition, i.e. by definition you are not an animal. For example, there is no anthropic reason that "I" couldn't have been raised in Alabama, but what would it even mean to say that I could have been raised in Alabama? That somebody ...
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
You assume that you have equal probability of being any conscious being. The internal subjective experience of humans stands out in its complexity; perhaps more complex subjective experiences have higher weight for some reason.
Anthropic reasoning is what leads people to believe in miracles. Rare events have a high probability of occurring if the number of observations is large enough. But whoever that rare event happens to will feel like it couldn't have happened just by chance, because the odds of it happening to them were so small.
If you wait until the event occurs, and then start treating it as a random event from a single trial, forming your hypothesis after seeing the data, you'll make inferential errors.
Imagine that there are balls in an urn, labeled with numbers 1, 2,....
The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".
Agreed - they are both equally silly. The only answer I can think of is "How do you know you are not?" If you had, in fact, been turned into an animal, and an animal into you, what differences would you expect to see in the world?
Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. (...) ...we still have no idea what it's like to feel a subjective echolocation quale.
(Excuse me for being off topic)
Reductionism is true; if we really know everything about a bat brain, bat quale would be included in the package. Imagine a posthuman that is able to model a bat's brain and sensory modalities on a neural level, in its own mind. There is no way it'd find anything missing about the bat; there is no way it'd comp...
I wondered about this today, googled it, and should not be surprised that Scott Alexander thought about it years ago:)
A couple of thoughts, very late to this discussion.
First, perhaps human consciousness is highly individuated, so each human counts for one, when we’re reasoning anthropically. But if there are hive-minds, then maybe thousands of ants count for only one. Perhaps even Norway rats are similar enough to each other that, though they’re not a hive mind, they have less anthropic weighting. Perhaps the proper reference class is types of ...
Considering the vast number of non-elephant animals in the world, the probability of being an elephant is extremely low.
[Edited, because it was wrong.]
The doomsday argument is,
O(X) = random human me observes some condition already satisfied for X humans
pt(X) = P(there will be X humans total over the course of time)
pt(2X | O(X/2)) / pt(2X) < pt(X | O(X/2)) / pt(X)
This is true if your observation O(X) is, "X people lived before I was born", or, "There are X other people alive in my lifetime".
But if your observation O(X) is "I am the Xth human", then you get
pt(2X | O(X/2)) / pt(2X) = pt(X | O(X/2)) / pt(X)
and the Doomsday argument fails.
So which definition of O(X) is the right observation to use?
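One way to see how the two readings come apart is to run the Bayes update under each (a sketch under assumed likelihoods, not this comment's own formalism): reading the observation as "X/2 people lived before me", self-sampling gives each position probability 1/N, so larger totals are penalized; reading it as "I am this specific human", the likelihood only checks that the person exists at all, which both hypotheses predict equally.

```python
from fractions import Fraction

def posterior(prior, likelihood, obs):
    """Bayes update over hypotheses about the total number of humans N."""
    unnorm = {N: p * likelihood(N, obs) for N, p in prior.items()}
    z = sum(unnorm.values())
    return {N: p / z for N, p in unnorm.items()}

X = 100
prior = {X: Fraction(1, 2), 2 * X: Fraction(1, 2)}  # equal priors on X, 2X

# Reading 1: "X/2 people lived before me" -- under self-sampling, each of
# the N positions is equally likely, so the likelihood is 1/N
ssa = lambda N, obs: Fraction(1, N)
print(posterior(prior, ssa, X // 2))   # smaller total favored: Doomsday

# Reading 2: "I am the (X/2)-th human", taken as a rigid fact about who I
# am -- both hypotheses include that human, so the likelihoods are equal
rigid = lambda N, obs: Fraction(1) if N >= obs else Fraction(0)
print(posterior(prior, rigid, X // 2)) # equal posteriors: no update
```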
Anthropic reasoning is contingent on having no additional information. For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small. This would suggest sentient life does not exist elsewhere in the universe. However, given that there appears to be nothing so special about Earth that it wouldn't recur many times among trillions and trillions of stars, we can still conclude that sentient life likely does exist elsewhere in the universe.
Similarly, in this context, the fact that animals have brains that are r...
You're saying, "I rolled a die. The die came up 1. Therefore, this die probably has a small number of sides."
But "human" is just "what we are". Humans are not "species number 1". So your logic is really like saying, "I rolled a die. The die landed with some symbol on top. Therefore, it probably has a small number of sides."
You can't be a toaster, because toasters don't have any awareness at all. As a philosophical ponderer, you likewise can't be an animal lower than H. Sap. If you were, you wouldn't be able to reflect on it.
Re: "If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects."
The doomsday argument? It seems like a dubious premise.
The following is copypasted from some stream-of-consciousness-style writing from my own experimental wave/blog/journal, so it may be kinda messy. If this gets upvoted, I might take the time to clean it up some more. The first part of this is entirely skippable.
(skippable part starts here)
I just read this LW post. I think the whole argument is silly. But I still haven't figured out how to explain the reasons clearly enough to post a comment about it. I'll try to write about it here.
Some people have posted objections to it in the comments, but so far no...
Another reason I wouldn't put any stock in the idea that animals aren't conscious is that the complexity cost of a model in which we are conscious and they (other animals with complex brains) are not is many bits of information. 20 bits gives a prior probability factor of 10^-6 (2^-20). I'd say that would outweigh the larger # of animals, even if you were to include the animals in the reference class.
If there is an infinite number of conscious minds, how do the anthropic probability arguments work out?
In a big universe, there are infinitely many beings like us.
An (insufficiently well designed) AI might use this kind of reasoning to conclude that it's not like anything to be a human. (I mentioned this as an AI risk at the bottom of this SL4 post.)
Re: "Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal."
Your priors say that you are a human. It is evidence that is hard to ignore, no matter how unlikely it may seem. Concrete evidence that you are part of a minority trumps the idea that being part of a minority is statistically unlikely.
Since this is true regardless of whether or not it "feels like something" to be a bat, the mere evidence of your existence as a human doesn't allow you to draw conclusions about Nagel's bat speculations.
This argument would have to apply to people who were born completely blind, or completely deaf. Just imagine that all humans are echolocation-deaf/blind.
If you randomly selected from the set of all sentient beings throughout time and space, the odds are vanishingly low that you would get the Little Prince as well.
Suppose that he ponders his situation, and concludes that if there were places in the universe where many, many humans can coexist, then it would be unlikely that he would find himself living alone on an asteroid.
If we accept for the sake of an argument that he exists, then someone must be the Little Prince, and be doomed to make incorrect inferences about the representativeness of their situatio...
Can we dismiss all anthropic reasoning by saying that probability is meaningless for singular events? That is, the only way to obtain probability is from statistics, and I cannot run repeated experiments of when, where, and as what I exist.
Instead of showing that non-human animals are unconscious, anthropic reasoning may show that such animals are conscious if we are not ourselves soon doomed to extinction. Expanding the class of observers to include such animals makes it less surprising that we find ourselves living at this comparatively early stage of human evolution, since "we" refers to conscious rather than to merely human beings.
This argument assumes that most non-human animals will soon go extinct. But this assumption makes sense under many of the possible scenarios involving human survival.
An arrangement of particles in space can embody a blink reflex with no problems, because blinking is motion, and so it just means they're changing position in space.
Generating meaningful sentences - here we begin to run into problems, though not so severe as the problem with color. If the sentences are understood to be physical objects, such as sequences of sound waves or sequences of letter-shapes, then they can fit into physical ontology. We might even be able to specify a formal grammar of allowed sentences, and a combinatorial process which only produces physical sentences from that grammar. But meaning per se, like color, is not a physical property as ordinarily understood. (I know I'll get into extra trouble here, because some people are with me on the color qualia being a problem, but believe that causal theories of reference can reduce meaning to a conjunction of known physical properties. However, so far as I can see, intrinsic meaning is a property only of certain constituents of mental states - the meaning of sentences and all other intersubjective signs is not intrinsic and derives from a shared interpretive code - and the correct ontology of meaning is going to be bound up with the correct ontology of consciousness in general.)
Anyway, you say it's not obvious to you that "arrangements of matter simply aren't the kind of thing that can be an experience of color". Okay. Let's suppose there is an arrangement of matter in space which is an experience of color. Maybe it's a trillion particles in a certain arrangement executing a certain type of motion. Now, we can think about progressively simpler arrangements and motions of particles - subtracting one particle at a time from the scenario, if necessary - progressively simpler until we get all the way back to empty space. Somewhere in that conceptual progression, the experience of color stopped being there. Can you give me the faintest, slightest hint of where the magic transition occurs - where we go from "arrangement of particles that's an experience of color" to "arrangement of particles that's not an experience of color"?
I could also simply ask you to indicate where, in the magic arrangement of particles, the color is. That is, assuming that you agree that one aspect of the existence of an experience of color is that something somewhere actually is that color. If it turns out that, according to you, brain state X is an experience of red only because the brain in question outputs the word "red" when queried, or only because a neural network somewhere is making the categorization "red" - then that is eliminativism. There's no actual red, no actual color, just color words or color categories.
The reason it is obvious that there is no color inherently inhabiting an arrangement of particles in space is because it's easy to see what the available ontological ingredients are, and it's easy to see what you can and cannot make by combining them. If we include dynamics and a notion of causality, then the ingredients are position, time, and causal dependence. What can you construct from such ingredients? You can make complicated structures; you can make complicated motions; you can make complicated causal dependencies among structures and motions. As you can see, it's no mystery that such an ontological scheme can encompass something like a blink reflex, which is a type of motion with a specified causal dependency.
With respect to the historical case of vitalism, it's interesting that what the vitalists posited was a "vital force". That's not an objection to the logical possibility of reducing life, and especially replication, to matter in motion. They just didn't believe that the known forces were capable of producing the right sort of motion, so they felt the need to postulate a new, complicated form of causal interaction, capable of producing the complexly orchestrated motion which must be occurring for living things to take shape. As it turned out, there was no need to postulate a special vital force to do that; the orchestration can be produced by the same forces which are at work in nonliving matter.
I'm emphasizing the way in which the case of vitalism differs from the case of qualia, because it is so often cited as a historical precedent. The vitalists - at least, the ones who talked about vital forces - were not saying that life is not material. They just postulated an extra force; in that respect, they were proposing only a conservative extension to the physical ontology of their time. But the observation that consciousness presents a basic ontological problem, in a universe consisting of nothing but matter in motion through space, has been around for a very long time. Democritus took note of this objection. I think Leibniz stated it in a recognizably modern form. It is an old insight, and it has not gone away just because the physical sciences have been so successful. Celia Green writes that this success actually sharpens the problem: the clearer our conception of material ontology and our causal account of the world becomes, the more obvious it becomes that this concept and this account do not contain the "secondary qualities" like your red.
Even at the dawn of modern physical science, in the time of Galileo, there was some discussion as to how these qualities were being put aside, in favor of an exclusive focus on space, time, motion, extension. It's quite amazing that from humble beginnings like Kepler's laws, we've come as far as quantum mechanics, string theory, molecular biology, all the time maintaining that exclusion. Some new ontological factors did enter the set of ingredients that physical ontology can draw upon, especially probability, but those elementary sensory qualities remain absent from the physical conception of reality. The 20th-century revolution in thought regarding information, communication, and computation goes just a little way towards bringing them back, but in the end it's nowhere near enough, because when you ask, what are these information states really, you end up having to reduce them to statistical properties of particles in space, because that's still all that the physical ontology gives you to work with.
I'm probably an idiot for responding at such length on this topic, because all my experience to date suggests that doing so changes nothing fundamentally. Some people get that there's a problem, but don't know how to solve it and can only hope that the future does so, or they embrace a fuzzy idea like emergence dualism or panpsychism out of intellectual desperation. Some people don't get that there's a problem - don't perceive, for example, that "what it feels like to be a bat" is an extra new property on top of all the ordinary physical properties that make up a bat - and are happy with a philosophical formula like "thought is computation".
I believe there is a problem to be solved, a severe problem, a problem of the first order, whose solution will require a change of perspective as big as the one which introduced us to the problem. Once, we had naive realism. The full set of objects and properties which experience reveals to us were considered equally real. They all played a part in the makeup of reality, to which the human mind had a partial but mysteriously direct access. Now, we have physics; ontological atomism, plus calculus. Amazingly, it predicts the behavior of matter with incredible precision, so it's getting something right. But mind, and everything that is directly experienced, has vanished from the model of reality. It hasn't vanished in reality; everything we know still comes to us through our minds, and through that same multi-sensory experience which was once naively identified with the world itself, and which we now call conscious experience. The closest approximation within the physical ontology to all of that is computation within the nervous system. But when you ask what neural computations are, physically, it once again reduces to matter in motion through space, and the same mismatch between the apparent character of experience, and the physical character of the brain, recurs. Since denying that experience does have this distinct character is false and therefore hopeless, the only way out must be to somehow reconceive physical ontology so that it contains, by construction, consciousness as it actually is, and so that it preserves the causal structural relations (between fundamental entities whose inner nature is opaque and therefore undetermined by the theory) responsible for the success of quantitative predictions.
I imagine my manifesto there is itself opaque, if you're one of those people who don't get the problem to begin with. Nonetheless, I believe that is the principle which has to be followed in order to solve the problem of consciousness. It's still only the barest of beginnings, you still have to step into darkness and guess which way to turn, many times over, in order to get anywhere, and if my private ideas about how to proceed are right, then you have to take some really big leaps in the darkness. But that's the kernel of my answer.
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
Let's try to communicate through intuition pumps:
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in ...
...at least not if you accept a certain line of anthropic argument.
Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What is it Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.
Anthropic reasoning is the idea that you can reason conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans was medium-sized instead of humongous, therefore since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.
Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?". If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects.
And this could be that animals lack subjective experience. This would explain quite nicely why I'm not an animal: because you can't be an animal, any more than you can be a toaster. So Thomas Nagel can stop worrying about what it's like to be a bat, and the rest of us can eat veal and foie gras guilt-free.
But before we break out the dolphin sausages - this is a pretty weird conclusion. It suggests there's a qualitative and discontinuous difference between the nervous systems of other beings and our own, not just in what capacities they have but in the way they cause experience. It should make dualists a little bit happier and materialists a little bit more confused (though it's far from a knockout proof of either).
The most significant objection I can think of is that it is significant not that we are beings with experiences, but that we know we are beings with experiences and can self-identify as conscious - a distinction that applies only to humans and maybe to some species like apes and dolphins who are rare enough not to throw off the numbers. But why can't we use the reference class of conscious beings if we want to? One might as well consider it significant only that we are beings who make anthropic arguments, and imagine there will be no Doomsday but that anthropic reasoning will fall out of favor in a few decades.
But I still don't fully accept this argument, and I'd be pretty happy if someone could find a more substantial flaw in it.