The two are incompatible. Anthropic reasoning makes explicit use of first-person experience in its question formulation. For example, in the Sleeping Beauty problem: "What is the probability that now is the first awakening?" or "What is the probability that today is Monday?" The meaning of "now" and "today" is considered apparent; it is based on their immediacy to the subjective experience. Likewise, which person "I" am is taken to be inherently obvious from first-person experience. Denying first-person experience would make anthropic problems undefined.
Another example is the doomsday argument, which says that my birth rank, or the current generation's birth rank, is evidence for doom-soon. Without a first-person experience, it would be unclear who "me" or "the current generation" refers to.
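As a toy illustration of the update the doomsday argument relies on, here is a minimal sketch under the Self-Sampling Assumption. The specific population sizes, prior, and birth rank are assumptions chosen for the example, not claims from this thread:

```python
# Toy doomsday-argument update (Self-Sampling Assumption).
# Two hypotheses about the total number of humans who will ever live
# (illustrative numbers, not real demographic estimates):
#   "doom soon": N_soon = 200e9
#   "doom late": N_late = 200e12
# Under SSA, the likelihood of observing your own birth rank r,
# given a total of N humans, is 1/N (uniform over all ranks).

prior_soon = 0.5
prior_late = 0.5
N_soon = 200e9
N_late = 200e12
r = 100e9  # an assumed birth rank, consistent with both hypotheses

# Bayes' rule: posterior is proportional to prior * likelihood
like_soon = 1.0 / N_soon
like_late = 1.0 / N_late
evidence = prior_soon * like_soon + prior_late * like_late
post_soon = prior_soon * like_soon / evidence

print(f"P(doom soon | rank) = {post_soon:.4f}")
```

Notice that the birth rank itself drops out (it only has to be consistent with both hypotheses); the update comes entirely from the 1/N likelihood, which is exactly where "who counts as me" enters the argument.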
They're perfectly compatible; they don't even say anything about each other [edit: invalidated]. Anthropics is just a question of what systems are likely. Illusionism is a claim about whether systems have an ethereal self that they expose themselves to by acting; I am viciously agnostic about anything epiphenomenal like that. I would instead assert that all epiphenomenal confusions seem to me to be the confusion "why does [universe-aka-self] exist", and then there's a separate additional question of the surprise any highly efficient chemical processing sys...
The two are unrelated. Illusionism is specifically about consciousness (or rather its absence), while anthropics is about particular types of conditional probabilities and does not require any reference to consciousness or its absence. Denying first-person experience does not make anthropic problems any more undefined than they already are.
A computer with no first-person experience can still do anthropic reasoning. The two don't really interact with each other.
I can see how a computer could simulate any anthropic reasoner's thought process. But if you ran the sleeping beauty problem as a computer simulation (i.e. implemented the illusionist paradigm) aren't the Halfers going to be winning on average?
Imagine the problem as a genetic algorithm with one parameter, the credence. Wouldn't the whole population converge to 0.5?
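Whether the Halfers win on average in such a simulation depends on the scoring rule, which the genetic-algorithm framing leaves open. Here is a minimal sketch (the Brier-penalty setup is my assumption, not something from the thread) comparing a fixed credence in "the coin landed heads" scored once per awakening versus once per experiment:

```python
import random

def brier_penalties(credence_heads, n_trials=100_000, seed=0):
    """Average Brier penalty for a fixed credence in 'the coin landed heads',
    scored once per awakening and once per experiment (coin flip)."""
    rng = random.Random(seed)
    awakening_total = 0.0
    experiment_total = 0.0
    awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        truth = 1.0 if heads else 0.0
        penalty = (credence_heads - truth) ** 2
        n_wakes = 1 if heads else 2  # tails: woken on both Monday and Tuesday
        awakening_total += penalty * n_wakes
        awakenings += n_wakes
        experiment_total += penalty  # counted once per coin flip
    return awakening_total / awakenings, experiment_total / n_trials

for p in (1/2, 1/3):
    wake, expt = brier_penalties(p)
    print(f"credence {p:.3f}: per-awakening {wake:.3f}, per-experiment {expt:.3f}")
```

With this setup, a credence of 1/3 gets the lower penalty when scoring is per awakening, while 1/2 wins when scoring is per experiment. So whether a genetic algorithm's population converges to 0.5 depends on whether fitness is awarded per awakening or per coin flip, and none of this depends on the simulated agents having first-person experience.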
I think that anthropics beats illusionism. If there are many universes, in some of them consciousness (= qualia) is real, and because of anthropics I will find myself only in such universes.
I didn't already know what illusionism argues, so I tried to understand it by skimming two related Wikipedia articles that may be the ones you meant.
https://en.wikipedia.org/wiki/Illusionism_(philosophy) - this one doesn't seem like what you were talking about; it's relevant anyway, and I think the answer is undefined.
https://en.wikipedia.org/wiki/Eliminative_materialism#Illusionism - this seems like what you're talking about. The issue I always hear is: an illusion to whom? The answer I give is effectively EC Theory: "consciousness to whom" is a confused question. "To whom" is answered by access consciousness, i.e. the question of when information becomes locally available to a physical process. The hard problem of consciousness boils down to "wat, the universe exists?", which is something that all matter is surprised by.
As for anthropics: I think anthropics must be rephrased into the third person to make any sense at all. You update off your own existence the same way you do on anything else: huh, the parts of me seem to have informed each other that they are a complex system; that is a surprising amount of complexity! And because we neurons have informed each other of a complex world, and therefore have access consciousness of it, to the degree that our dance of representations is able to point to shapes we will experience in the future, such that the neuron weights will light up and match them when the thing they point to occurs, and our physical implementation of approximately Bayesian low-level learning can find a model for the environment -
Well, that model should probably be independent of where it's applied to physics; no matter what a network senses, the universe has the same mechanisms to implement the network, and that network must figure out what those invariants are in order to work most reliably. Whether that network is a cell, a bio neural net, a social net, or a computer network, the task of building quorum representation involves a patch of universe building a model of what is around it. No self is needed for that.
So, okay, I've said too many words into my speech recognition and should have used more punctuation. My point about anthropics boils down to the claim that the best way to learn about anthropics is by example. Most or all math and physics works by making larger scale systems with different rules by arbitrarily choosing to virtualize those rules, so a system can only learn about other things that could have been in its place by learning what things can be and then inferring how likely that patch of stuff is and where it is in the possibility-space of things that can be.
This is a lot of words to say: you can do anthropic reasoning in an entirely materialist-first worldview where you don't even believe mathematical objects are distinctly real and separate from physics. You don't need self-identity, because any network of interacting physical systems can reason about its own likelihood.
Alright, I said way the hell too many words in order to say the same thing enough ways that I have any chance in hell of saying what I intend. Let me know if this made any sense.
Illusionism makes the (unintuitive) claim that we have no first-person experience of the world or ourselves. Anthropic reasoning makes the (also unintuitive) claim that we learn something new about the world just by knowing that we exist. How do these two claims interact with each other? Is there work on this that I can read? For example, is there an illusionist account of the Sleeping Beauty Problem?