Jonii comments on It's not like anything to be a bat - Less Wrong

15 Post author: Yvain 27 March 2010 02:32PM

Comments (189)

Comment author: Jonii 27 March 2010 03:42:09PM 12 points [-]

If you were any other animal on Earth, you wouldn't be considering what it would be like to be something else. The Doomsday argument and arguments like it are usually formulated along the lines of "of all the persons who could reason like me, only this small percentage were ever wrong". When animals are prevented, by their neurological limitations, from reasoning in the way the argument requires, they're not part of this consideration.

This doesn't mean that they're not sentient; it just means that by thinking about anthropic problems you're part of a much narrower set of beings than the sentient ones.

Comment author: Yvain 27 March 2010 05:23:28PM 8 points [-]

Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years? Is this a reductio ad absurdum, or do you think it's a valid conclusion?

Comment author: Jack 28 March 2010 02:30:00AM *  6 points [-]

Perhaps the fact that we are so confused by anthropic reasoning is a priori evidence that we are very early anthropic reasoners, and thus that the Doomsday argument is false. Further, not every human is an anthropic reasoner. If the growth rate of anthropic reasoners is less than the growth rate of humans, we should extend our estimate of the lifespan of a human race that includes anthropic reasoners (and of course this says nothing about the lifespan of humanity without anthropic reasoners).

A handful of powerful anthropic reasoners could enforce a ban on anthropic reasoning: burning books, prohibiting its teaching, and silencing those who came to anthropic reasoning on their own. If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure), with life spans averaging 100 years, that would put us in the final 95% (I think; does anyone have an educated estimate of how many anthropic reasoners there have been up to this point?) until a permanent solution was reached, or until humanity began spreading and we would need at least one enforcer for every colony -- but given optimistic longevity scenarios we could still keep the anthropic reasoner population to a minimum. The permanent solution is probably obvious: a singleton could enforce the ban by itself and make itself the last, or at least close to the last, anthropic reasoner in the galaxy.

The above strikes me as obviously insane so there has to be a mistake somewhere, right?
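Jack's "final 95%" figure can be checked with a quick sketch. The thread gives no estimate of how many anthropic reasoners have existed so far, so the `n_past` value below is a made-up illustrative assumption; the scenario's 35 reasoners per ~100-year generation and a 10,000-year horizon are likewise only one way to cash out his numbers:

```python
# Hypothetical numbers: n_past is an illustrative assumption, not a figure
# from the thread.
def fraction_before_me(n_past, n_future):
    """Fraction of all anthropic reasoners who come before the current one."""
    return n_past / (n_past + n_future)

n_past = 1_000_000                # assumed anthropic reasoners to date (made up)
n_future = 35 * (10_000 // 100)   # 35 reasoners per ~100-year generation, 10,000 years

f = fraction_before_me(n_past, n_future)
# Being "in the final 95%" means at least 5% of all reasoners come before you:
print(f, f >= 0.05)  # roughly 0.9965, comfortably above 0.05
```

The point of the sketch: once the future population is capped at a trickle, almost everyone who will ever reason anthropically has already done so, so current reasoners land late in the sequence and the Doomsday-style prediction is satisfied by construction.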

Comment author: Strange7 14 January 2011 12:56:23PM 1 point [-]

Maybe somebody will just come up with an elegant explanation of the underlying probability theory some time in the next few years, it'll go viral among the sorts of people who would otherwise have attempted anthropic reasoning, and the whole thing will go the way of geocentrism, but with fewer religiously-motivated defenders.

Comment author: JGWeissman 28 March 2010 03:50:41AM 1 point [-]

If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure) with life spans averaging 100 years that would put us in the final 95% ...

That sounds like something Evidential Decision Theory would do, but not Timeless or Updateless Decision Theories. Unless you think that reaching a certain number of anthropic reasoners would cause human extinction.

Comment author: Jack 28 March 2010 05:36:32AM 2 points [-]

Hmmm. Yes, that's right, as far as I understand those theories at least. I guess my point is that something seems very wrong with an argument that makes predictions but offers nothing in the way of causal regularities whose variables could in principle be manipulated to alter the result. It isn't even like seeing a barometer indicate low pressure and then predicting a storm (while not understanding the variables behind the correlation between barometers indicating low pressure and storms coming): there isn't any causal knowledge involved in the Doomsday argument at all, as far as I can tell. Note that this isn't the case with all anthropic reasoning; it is peculiar to this argument. The only way we know of to predict the future is by knowing earlier conditions and the rules governing those conditions over time: the Doomsday argument is thus an entirely new way of making predictions. This suggests to me that something has to be wrong with it.

Maybe the self-indication assumption is the way out, I can't tell if I would have the same problem with it.

Comment author: Jordan 27 March 2010 06:52:19PM 1 point [-]

You know that you are using anthropic reasoning, so you can limit yourself to the group of people using anthropic reasoning. You likewise know that your name is Yvain... so you can limit yourself to the group of people named Yvain?

Comment author: Jonii 27 March 2010 05:36:20PM *  1 point [-]

"Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years?"

That's known as the Doomsday argument, as far as I can tell.

My point, to simplify a bit, is that anthropic reasoning is only applicable to beings that are capable of anthropic reasoning. Suppose you know that there are a billion agents, of which one thousand are capable of anthropic reasoning; that of those anthropic reasoners 950 are on island A and 50 are on island B; and that all the non-reasoners are on island B. Then you know, by anthropic reasoning, that you are on island A with 95% certainty. The rest of the agents simply don't matter; you can't conclude anything about them beyond that they're most likely not capable of anthropic reasoning.
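The island example reduces to a one-line conditional probability, restricting the reference class to the reasoners; a minimal sketch:

```python
# Jonii's island example: a billion agents, of whom 1,000 can do anthropic
# reasoning; 950 of those reasoners are on island A, 50 on island B, and all
# 999,999,000 non-reasoners are also on island B.
reasoners_a = 950
reasoners_b = 50

# Knowing you are an anthropic reasoner, the non-reasoners drop out of the
# reference class entirely; only the 1,000 reasoners count in the denominator.
p_island_a = reasoners_a / (reasoners_a + reasoners_b)
print(p_island_a)  # 0.95
```

Note that the billion-agent total never enters the calculation, which is exactly the point: conditioning on "I am an anthropic reasoner" screens off everyone who isn't.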

Comment author: khafra 28 March 2010 02:28:11AM 0 points [-]

What happens if we replace "capable of anthropic reasoning" with "have considered the anthropic doomsday argument"? As far as I can tell, it becomes a tautology.

Comment author: Jonii 28 March 2010 12:25:53PM 0 points [-]

I'm not sure, but it seems that your tautological way of putting it is simply more accurate, at the cost that using it requires more accurate a priori knowledge.

Comment author: Unknowns 30 March 2010 05:16:22PM 0 points [-]

I argued before -- in the discussion of the Self-Indication Assumption -- that this is exactly the right anthropic reference class: namely, people who make the sort of considerations I am engaging in. However, that doesn't show that people will simply stop using anthropic reasoning; it shows that this is one possibility. On the other hand, it is still possible that people will stop using such reasoning because there will be no more people.