Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
The anthropic principle creeps in again here, and methinks you missed it. The ability to make this argument is contingent upon being an entity capable of a certain level of formal introspection. Since you have enough introspection to make the argument, you can't be an animal. In your next million lives, so to speak, you won't be able to make this argument, though someone else out there will.
If you were any other animal on Earth, you wouldn't be considering what it would be like to be something else. The Doomsday argument and arguments like it are usually formulated in the form "Of all the persons who could reason like me, only this small percentage were ever wrong". Since animals are prevented, due to their neurological limitations, from reasoning as necessitated by the argument, they're not part of this consideration.
This doesn't mean that they're not sentient; it just means that by thinking about anthropic problems you're part of a much narrower set of beings than just the sentient ones.
Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years? Is this a reductio ad absurdum, or do you think it's a valid conclusion?
I'm sorry, but I'm a bit shocked that people on this site can seriously entertain ideas like "why am I me?" or "why do I live in the present?" except as early April Fools' jokes. I am of course necessarily me, because I call whoever I am me. And I necessarily live in the present, because I call the time I live in the present. The question "Why am I not somebody else?" is nonsensical, because for almost anybody I am somebody else. I think the confusion stems from treating your own consciousness as something special and, at the same time, as something not special.
It only sounds nonsensical because of the words in which it's asked. The question raised by anthropic reasoning isn't "why do I live in a time I call the present" (to which, as you say, the answer is linguistic - of course we'd call our time the present) but rather "why do I live in the year 2010?" or, most precisely of all, "Given that I have special access to the subjective experience of one being, why would that be the experience of a being born in the late 20th century, as opposed to some other time?"
That may still sound tautological - after all, if it wasn't the 20th century, it'd be somewhen else and we'd be asking the same question - but in fact it isn't. Consider these two questions: "Why am I made out of carbon, and not out of helium?" and "Why do I live in the 20th century, and not some other time?"
The correct answer to the first is not saying, "Well, if you were made out of helium, you could just ask why you were made out of helium, so it's a dumb question"; it's pointing out the special chemical properties of carbon. Anthropic reasoning suggests that we can try doing the same with the second, to point out certain special properties of the 20th century.
The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.
I think maybe some of this was meant for the comment above me.
That said, I think the "I" really is the source of some, if not all, of these confusions, and:
The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.
I think the difference is exactly enough to make the second one tautological or meaningless. What you have to do is identify some characteristics of "I" and then ask: Why do entities of this type exist in the 20th century, as opposed to the 30th? If you have identified features that distinguish 20th century people from 30th century people you will have asked something interesting and meaningful.
The key point I will remember from reading this post is that the anthropic Doomsday argument can safely be put away in a box labelled 'muddled thinking about consciousness' alongside 'how can you get blue from not-blue?', 'if a tree falls in a forest with nobody there does it make a sound?' and 'why do quantum events collapse when someone observes them?'.
There are situations in which anthropic reasoning can be used but it is a mistake to think that this is because of the ability of a bunch of atoms to perform the class of processing we happen to describe as consciousness.
The probability of a randomly picked currently-living person having Finnish nationality is less than 0.001. I observe myself being a Finn. What, if anything, should I deduce from this piece of evidence?
The results of any line of anthropic reasoning are critically sensitive to which set of observers one chooses to use as the reference class, and it's not at all clear how to select a class that maximizes the accuracy of the results. It seems, then, that the usefulness of anthropic reasoning is limited.
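A minimal sketch of that sensitivity (all numbers here are made-up illustrations, not claims about actual demographics): the same observation, "I am a Finn", updates two rival hypotheses strongly under one reference class and not at all under another.

```python
# Illustrative sketch (all numbers are assumptions): the same observation,
# "I am a Finn", gives different updates depending on the reference class
# from which "I" is treated as a random sample.

def posterior(prior_a, lik_a, prior_b, lik_b):
    """Two-hypothesis Bayes update; returns P(A | observation)."""
    unnorm_a, unnorm_b = prior_a * lik_a, prior_b * lik_b
    return unnorm_a / (unnorm_a + unnorm_b)

# Hypothesis A: 10% of all people are Finns; hypothesis B: 0.08% are.
# Reference class 1: "a randomly picked currently-living person".
print(posterior(0.5, 0.10, 0.5, 0.0008))   # ~0.992 - strong update toward A

# Reference class 2: "people who already know they live in Finland".
# Under both hypotheses such an observer is certain to be a Finn, so the
# likelihoods are equal and the observation tells you nothing.
print(posterior(0.5, 1.0, 0.5, 1.0))       # 0.5 - no update at all
```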
That's an interesting observation.
There's a problem in assuming that consciousness is a 0/1 property; that you're either conscious, or not.
There's another problem in assuming that YOU are a 0/1 property; that there is exactly one atomic "your consciousness".
Reflect on the discussion in the early chapters of Daniel Dennett's "Consciousness Explained", about how consciousness is not really a unitary thing, but the result of the interaction of many different processes.
An ant has fewer of these processes than you do. Instead of asking "What are the odds that 'I' ended up as me?", ask, "For one of these processes, what are the odds that it would end up in me, rather than in an ant?"
According to Wikipedia's entry on biomass, ants have 10-100 times the biomass of humans today.
According to Wikipedia's list of animals by neuron count, ants have 10,000 neurons.
According to that page, and this one, humans have 10^11 neurons.
Information is proportional not to the number of neurons, but to the number of patterns that can be stored in those neurons, which is likely somewhere between N and N^2. I'm gonna call it N log N.
I weigh as much as 167,000 ants. Each...
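The comment is cut off, but here is one way the arithmetic could go, a back-of-the-envelope sketch using only the figures quoted above (10^4 neurons per ant, 10^11 per human, the N log N weighting, and 167,000 ants per human by mass):

```python
import math

# Back-of-the-envelope sketch using the figures quoted in the comment
# (assumptions, not settled numbers): weight each creature by N*log2(N),
# the comment's stand-in for the number of storable patterns.
def weight(neurons):
    return neurons * math.log2(neurons)

human = weight(1e11)               # one human, 10^11 neurons
ants = 167_000 * weight(1e4)       # the ants it takes to match my mass

print(f"one human:    {human:.3g}")   # ~3.65e12
print(f"167,000 ants: {ants:.3g}")    # ~2.22e10
print(f"ratio:        {human / ants:.0f}")  # ~165: the human still dominates
```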
"why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".
Well, quite. Both are absurd.
At one time I wondered, why am I not a particle? The anthropic "explanation" is that particles can't be conscious. But that doesn't remove the prior improbability of my existence in this form. Empirically I know I'm conscious, so being a particle (under the usual assumptions) has a posterior probability of zero. But if I think of myself as a random sample from the set of all entities - and why shouldn't I? - then my a priori probability of having been conscious is vanishingly small. (Unless I change my notion of reality rather radically.)
Let's look at examples where we know the 'right' answer:
Someone flips a coin. If it's heads they copy you a thousand times and put 1 of you in a green room and 999 of you in a red room. If it's tails they do the opposite.
You wake up in a green room and conclude that the coin was likely tails.
Now assume that in addition to copying you 1000 times, 999 of you were randomly selected to have the part of your brain that remembers to apply anthropic reasoning erased. You wake up in a green room and remember to apply the anthropic principle, but, knowing that you ...
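The first scenario has a clean "right" answer that can be computed directly; a minimal sketch, under the self-sampling assumption that after copying you are a uniformly random one of the 1000 copies:

```python
# The first scenario above, computed directly. Assumption: after copying,
# "you" are a uniformly random one of the 1000 copies (self-sampling).
def p_tails_given_green(p_heads=0.5):
    p_green_given_heads = 1 / 1000     # heads: 1 green room, 999 red
    p_green_given_tails = 999 / 1000   # tails: 999 green rooms, 1 red
    p_tails = 1 - p_heads
    num = p_tails * p_green_given_tails
    den = num + p_heads * p_green_given_heads
    return num / den

print(p_tails_given_green())  # 0.999: waking in green strongly favors tails
```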
Following bogus, I could imagine endorsing a weaker form of the argument: not that it's like nothing to be a bat, but that it's like less to be a bat than to be a human.
In fact, if you've ever wondered why you happen to be the person you are, and not someone else, it may be that the reflectivity you are displaying by asking this question puts you in a more-strongly-anthropically-weighted reference class.
and the rest of us can eat veal and foie gras guilt-free.
I don't think this works.
Obama could use the same argument: if he could have been any person, it would be vanishingly unlikely that he'd be the president of the most powerful nation on earth. Thus, clearly, the rest of us (he would conclude) have no conscious experience, and he had better go ahead and be an egoist, and run the country in whatever way gives him the most personal gain.
I don't want Obama to do this, so I think I had better not do it either.
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
Only in the sense that it's impossible for you to be a rock, or a tree, or an alien, or another person, because you clearly aren't any of those things. All this tells you is that you should be nearly 100% certain that you are you, and that's no great insight.
The anthropic principle seems to imply that our subjective experiences take place in amazingly common ancestor simulations that don't simulate animals in sufficient detail to give them subjective experience. That I find myself experiencing being a human rather than being a bat, even though bats are in principle capable of subjective experience, is because there are vastly more detailed simulations of humans than of bats.
The fact that you are human is evidence that only humans are conscious, but it's far from proof. If you have no a priori reason to believe that only humans are conscious, then it's just as likely that consciousness is confined to humans as to bats. If the a priori probability of all animals being conscious is merely the same as the probability that it's just one given species (I'd say it's much, much larger), and it's impossible for it to be just two species, etc., then a posteriori there would still be a 50:50 chance that all animals are conscious.
Of course, there is an...
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be.
I do not really think you need an anthropic argument to prove that "you" couldn't be an animal; it is more a matter of definition, i.e. by definition you are not an animal. For example, there is no anthropic reason that "I" couldn't have been raised in Alabama, but what would it even mean to say that I could have been raised in Alabama? That somebody ...
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
You assume that you have equal probability of being any conscious being. The internal subjective experience of humans stands out in its complexity; perhaps more complex subjective experiences have higher weight for some reason.
Anthropic reasoning is what leads people to believe in miracles. Rare events have a high probability of occurring if the number of observations is large enough. But whoever that rare event happens to will feel like it couldn't have happened by chance, because the odds of it happening to them in particular were so small.
If you wait until the event occurs, and then start treating it as a random event from a single trial, forming your hypothesis after seeing the data, you'll make inferential errors.
Imagine that there are balls in an urn, labeled with numbers 1, 2,....
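The first point above is easy to make quantitative; a quick sketch, where the per-person event probability and the population size are arbitrary illustrative numbers:

```python
# Sketch of the "rare events happen to someone" point, with made-up
# illustrative numbers: a "one in a million" event and many observers.
p_event = 1e-6          # probability per person per day (assumption)
people = 300_000_000    # number of observers (assumption)

# Probability that the event happens to at least one person today:
p_at_least_one = 1 - (1 - p_event) ** people
print(p_at_least_one)   # ~1.0: virtually certain to happen to *someone*

# But to whoever it happens, it looks like a 1-in-a-million miracle,
# because they condition on their own case after the fact.
```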
The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".
Agreed - they are both equally silly. The only answer I can think of is "How do you know you are not?" If you had, in fact, been turned into an animal, and an animal into you, what differences would you expect to see in the world?
Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. (...) ...we still have no idea what it's like to feel a subjective echolocation quale.
(Excuse me for being off topic)
Reductionism is true; if we really knew everything about a bat's brain, bat qualia would be included in the package. Imagine a posthuman that is able to model a bat's brain and sensory modalities on a neural level, in its own mind. There is no way it'd find anything missing about the bat; there is no way it'd comp...
I wondered about this today, googled it, and should not be surprised that Scott Alexander thought about it years ago:)
A couple of thoughts, very late to this discussion.
First, perhaps human consciousness is highly individuated, so each human counts for one, when we’re reasoning anthropically. But if there are hive-minds, then maybe thousands of ants count for only one. Perhaps even Norway rats are similar enough to each other that, though they’re not a hive mind, they have less anthropic weighting. Perhaps the proper reference class is types of ...
Considering the vast number of non-elephant animals in the world, the probability of being an elephant is extremely low.
[Edited, because it was wrong.]
The doomsday argument is:
O(X) = random human "me" observes some condition already satisfied for X humans
pt(X) = P(there will be X humans total over the course of time)
pt(2X | O(X/2)) / pt(2X) < pt(X | O(X/2)) / pt(X)
This is true if your observation O(X) is, "X people lived before I was born", or, "There are X other people alive in my lifetime".
But if your observation O(X) is "I am the Xth human", then you get
pt(2X | O(X/2)) / pt(2X) = pt(X | O(X/2)) / pt(X)
and the Doomsday argument fails.
So which definition of O(X) is the right observation to use?
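For concreteness, here is a sketch of the update under the first reading, where your birth rank is treated as a uniform draw from everyone who ever lives (a self-sampling assumption, not something the comment establishes). One way to get the comment's equality under the second reading is an SIA-style weighting, where your probability of existing at all scales with the total population and cancels the 1/total factor.

```python
# Sketch of the Doomsday update under the first reading of O(X): your birth
# rank is uniform over all humans who ever live (self-sampling assumption).
def doomsday_posterior(totals, priors, my_rank):
    likelihoods = [1.0 / t if my_rank <= t else 0.0 for t in totals]
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

X = 100  # arbitrary units of "humans so far"
# Equal priors on "X humans ever" vs. "2X humans ever"; I observe rank X/2.
print(doomsday_posterior([X, 2 * X], [0.5, 0.5], my_rank=X // 2))
# -> [0.667, 0.333]: the smaller total is favored 2:1, the Doomsday update.
```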
The anthropic principle is contingent on there being no additional information. For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small. This would suggest sentient life does not exist elsewhere in the universe. However, given that there appears to be nothing so special about Earth that it wouldn't recur many times among trillions and trillions of stars, we can still conclude that sentient life likely does exist elsewhere in the universe.
Similarly, in this context, the fact that animals have brains that are r...
You're saying, "I rolled a die. The die came up 1. Therefore, this die probably has a small number of sides."
But "human" is just "what we are". Humans are not "species number 1". So your logic is really like saying, "I rolled a die. The die landed with some symbol on top. Therefore, it probably has a small number of sides."
You can't be a toaster, because toasters don't have any awareness at all. As a philosophical ponderer, you likewise can't be an animal lower than H. Sap. If you were, you wouldn't be able to reflect on it.
Re: "If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects."
The doomsday argument? It seems like a dubious premise.
The following is copypasted from some stream-of-consciousness-style writing from my own experimental wave/blog/journal, so it may be kinda messy. If this gets upvoted, I might take the time to clean it up some more. The first part of this is entirely skippable.
(skippable part starts here)
I just read this LW post. I think the whole argument is silly. But I still haven't figured out how to explain the reasons clearly enough to post a comment about it. I'll try to write about it here.
Some people have posted objections to it in the comments, but so far no...
Another reason I wouldn't put any stock in the idea that animals aren't conscious is that the complexity cost of a model in which we are conscious and they (other animals with complex brains) are not is many bits of information. 20 bits gives a prior probability factor of 10^-6 (2^-20). I'd say that would outweigh the larger number of animals, even if you were to include the animals in the reference class.
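To see how the two factors trade off, here's a sketch with stand-in numbers; the population figures are assumptions, and the conclusion flips if you change them (e.g., if the reference class includes insects):

```python
# Sketch of the trade-off described above. All population figures are
# stand-in assumptions; the conclusion is sensitive to them.
complexity_penalty = 2.0 ** -20    # ~9.5e-7: prior odds against the more
                                   # complex "only humans conscious" model
humans = 1e10
animals_in_class = 1e12            # assumed size of the animal reference class

# Anthropic likelihood ratio favoring "only humans conscious": under "all
# conscious" you'd probably be an animal, P(human) = humans / (total class).
anthropic_factor = (humans + animals_in_class) / humans   # ~100

posterior_odds = complexity_penalty * anthropic_factor
print(posterior_odds)   # ~1e-4: with these numbers the complexity penalty
                        # wins and "animals are conscious" stays favored
```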
If there is an infinite number of conscious minds, how do the anthropic probability arguments work out?
In a big universe, there are infinitely many beings like us.
An (insufficiently well designed) AI might use this kind of reasoning to conclude that it's not like anything to be a human. (I mentioned this as an AI risk at the bottom of this SL4 post.)
Re: "Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal."
Your priors say that you are a human. It is evidence that is hard to ignore, no matter how unlikely it may seem. Concrete evidence that you are part of a minority trumps the idea that being part of a minority is statistically unlikely.
Since this is true regardless of whether or not it "feels like something" to be a bat, the mere evidence of your existence as a human doesn't allow you to draw conclusions about Nagel's bat speculations.
This argument would have to apply to people who were born completely blind, or completely deaf. Just imagine that all humans are echolocation-deaf/blind.
If you randomly selected from the set of all sentient beings throughout time and space, the odds are vanishingly low that you would get the Little Prince as well.
Suppose that he ponders his situation, and concludes that if there were places in the universe where many, many humans can coexist, then it would be unlikely that he would find himself living alone on an asteroid.
If we accept for the sake of an argument that he exists, then someone must be the Little Prince, and be doomed to make incorrect inferences about the representativeness of their situatio...
Can we dismiss all anthropic reasoning by saying that probability is meaningless for singular events? That is, the only way to obtain probability is from statistics, and I cannot run repeated experiments of when, where, and as what I exist.
Instead of showing that non-human animals are unconscious, anthropic reasoning may show that such animals are conscious if we are not ourselves soon doomed to extinction. Expanding the class of observers to include such animals makes it less surprising that we find ourselves living at this comparatively early stage of human evolution, since "we" refers to conscious rather than to merely human beings.
This argument assumes that most non-human animals will soon go extinct. But this assumption makes sense under many of the possible scenarios involving human survival.
Since there's a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
The ontological status of temperature can be investigated by examining a simple ontology where it can be defined exactly, like an ideal gas in a box where the "atoms" interact only through perfectly elastic collisions. In such a situation, the momentum of an individual atom is an exact property with causal relevance. We can construct all sorts of exact composite properties by algebraically combining the momenta, e.g. "the square of the momentum of atom A minus the square root of the momentum of atom B", which I'll call property Z. But probably we don't want to say that property Z exists, in the way that the momentum-property does. The facts about property Z are really just arithmetic facts, facts about the numbers which happen to be the momenta of atoms A and B, and the other numbers they give rise to when combined. Property Z doesn't play a causal role in the physics, but the momentum property does.
Now, what about temperature? It has an exact definition: the average kinetic energy of an atom. But is it like "property" Z, or like the property of momentum? I think one has to say it's like property Z - it is a quantitative construct without causal power. It is true that if we know the temperature, we can often make predictions about the gas. But this predictive power appears to arise from logical relations between constructed meta-properties, and not because "temperature" is a physical cause. It's conceptually much closer than property Z to the level of real causes, but when you say that the temperature caused something, it's ultimately always a shorthand for what really happened.
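The many-to-one point is easy to see concretely; a toy sketch, taking temperature to be the average kinetic energy per atom as defined above (units and particle count are arbitrary):

```python
import random

# Toy illustration of the many-to-one mapping: "temperature" here is just
# the average kinetic energy per atom, following the definition above.
def temperature(velocities, mass=1.0):
    return sum(0.5 * mass * v * v for v in velocities) / len(velocities)

random.seed(0)
gas_a = [random.gauss(0, 1) for _ in range(100_000)]
gas_b = [-v for v in gas_a]   # a different microstate: every atom reversed

print(temperature(gas_a))  # ~0.5
print(temperature(gas_b))  # identical: distinct microstates, one temperature
```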
When we apply all this to coarse-grained computational states, and their identification with mental states, I actually find myself making, not the argument that I intended (about many-to-one mappings), but another one, an argument against the validity of such an identification, even if it is conceived dualistically. It's the familiar observation that the mental states become epiphenomenal and not actually causally responsible for anything. Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
So: if you were to insist that temperature is a fundamental physical cause and not just a shorthand for microphysical complexities, then you would not only be a dualist, you would be saying something in contradiction with the causal model of the world offered by physics. It would be a version of phlogiston theory.
As for the "one-to-one mapping between physical states of glasses of water and really long strings" - I assume those are symbol-strings, not super-strings? Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible. If you're saying that a physical glass of water really is a string of symbols, you'd be bringing up a whole other class of ontological mistakes that we haven't touched on so far, but which is increasingly endemic in computer-science metaphysics, namely the attempt to treat signs and symbols as ontologically fundamental.
It seems like we can cash out the statement "It appears to X that Y" as a fact about an agent X that builds models of the world which have the property Y.
I actually disagree with this, but thanks for highlighting the idea. The proposed reduction of "appearance" to "modeling" is one of the most common ways in which consciousness is reduced to computation. As a symptom of ontological error, it really deserves a diagnosis more precise than I can provide. But essentially, in such an interpretation, the ontological problem of appearance is just being ignored or thrown out, and all attention directed towards a functionally defined notion of representation; and then this throwing-out of the problem is passed off as an account of what appearance is.
Every appearance has an existence. It's one of the intriguing pseudo-paradoxes of consciousness that you can see something which isn't there. That ought to be a contradiction, but what it really means is that there is an appearance in your consciousness which does not correspond to something existing outside of your consciousness. Appearances do exist even when what they indicate does not exist. This is the proof (if such were needed) that appearances do exist. And there is no account of their existential character in a discourse which just talks about an agent's modeling of the world.
It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence of the existence of qualia.
You are just sabotaging your own ability to think about consciousness, by inventing reasons to ignore appearances.
Facts about your phenomenology are facts about your programming!
No...
If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain.
Those are facts about my ability to communicate my phenomenology.
What's more interesting to think about is the nature of reflective self-awareness. If I'm able to say that I'm seeing something, it's only because, a few steps back, I'm able to "see" that I'm seeing it; there's reflective awareness within consciousness of consciousness. There's a causal structure there, but there's also a non-causal ontological structure, some form of intentionality. It's this non-causal constitutive structure of consciousness which gets passed over in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it's not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
Once again, appearance is being neglected in this passage, this time in favor of belief. To admit that something appears is necessarily to give it some kind of existential status.
B: "What are the properties of ontologically fundamental love?"
A: "[The equations that define the standard model of quantum mechanics]"
The word "love" already has a meaning, which is not exactly easy to map onto the proposed definition. But in any case, love also has a subjective appearance, which is different to the subjective appearance of hate, and this is why the experience of hate can falsify the theory that only love exists.
I'm a reductive materialist for statements - I don't see the problem with reading statements about consciousness as statements about quarks.
Intentionality, qualia, and the unity of consciousness; none of those things exist in the world of quarks as point particles in space.
Ontologically I suppose I'm an eliminative materialist.
The opposite sort of error to religion. In religion, you believe in something that doesn't exist. Here, you don't believe in something that does exist.
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it's very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn't ontologically fundamental, you ar...
...at least not if you accept a certain line of anthropic argument.
Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What is it Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.
Anthropic reasoning is the idea that you can reason conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans was medium-sized instead of humongous, therefore since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.
Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?". If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects.
And that something could be that animals lack subjective experience. This would explain quite nicely why I'm not an animal: because you can't be an animal, any more than you can be a toaster. So Thomas Nagel can stop worrying about what it's like to be a bat, and the rest of us can eat veal and foie gras guilt-free.
But before we break out the dolphin sausages - this is a pretty weird conclusion. It suggests there's a qualitative and discontinuous difference between the nervous system of other beings and our own, not just in what capacities they have but in the way they cause experience. It should make dualists a little bit happier and materialists a little bit more confused (though it's far from knockout proof of either).
The most significant objection I can think of is that it is significant not that we are beings with experiences, but that we know we are beings with experiences and can self-identify as conscious - a distinction that applies only to humans and maybe to some species like apes and dolphins who are rare enough not to throw off the numbers. But why can't we use the reference class of conscious beings if we want to? One might as well consider it significant only that we are beings who make anthropic arguments, and imagine there will be no Doomsday but that anthropic reasoning will fall out of favor in a few decades.
But I still don't fully accept this argument, and I'd be pretty happy if someone could find a more substantial flaw in it.