Hmm. It gets tricky because it comes down to what the English word “experience” means. “Phenomenal properties” is supposed to pick out the WOW! aspect of experiences, that thing that’s really obvious and vivid and that makes us speculate about dualism and zombies. I think Frankish uses “experience” basically to mean whatever neural events cause us to talk about pain, hunger, etc., so I don’t think an eliminativist would deny those exist. But I’m not sure.
Illusionism is the doctrine that phenomenal consciousness does not exist. Frankish introduced the term, so it makes sense to anchor it to his usage.
In the essay “Illusionism as a Theory of Consciousness”, Frankish makes very clear that he is not advocating a “conservative realist” position in which phenomenal properties can be reduced to brain states. Illusionism is in fact ideologically close to dualism - both agree that phenomenal properties are too weird to be explained by physical phenomena; they just disagree on what to make of this. He distinguishes between weak illusionism and strong illusionism - weak illusionism denies some of phenomenal consciousness’s putative features, whereas strong illusionism denies that it exists altogether. Illusionism is to be understood as strong illusionism. Finally, illusionism should not be understood as the denial of experiences altogether - we have sensations like pain and color; it is just that introspection falsely depicts them as possessing phenomenal properties.
NOW, there is definitely room for confusion and equivocation here, because the meaning of “phenomenal consciousness” is not perfectly clear. The main idea seems to be that introspection systematically misrepresents our own experiences in a way that gives rise to dualist intuitions. At this point I lose my grip on what is meant by statements like “An illusionist would not deny the existence of 🟩”.
When we say that A is B, we generally do not mean that A is strictly identical to B - which it clearly isn’t. This applies even when we say things like 2+2 = 4. Obviously, "2+2" and "4" are not even close to being identical.
This seems to mix up labels and referents. 2+2 is strictly identical to 4. The statement “2+2=4” is not the same as the statement “‘2+2’=‘4’”.
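A quick way to see the label/referent distinction, if it helps - a minimal Python sketch (purely illustrative, not from the original discussion):

```python
# Comparing the referents: evaluating both expressions yields the same number.
print(2 + 2 == 4)      # True - the number picked out by "2+2" just is the number 4

# Comparing the labels: the strings naming those numbers are different strings.
print("2+2" == "4")    # False - the expressions themselves are not identical
```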
It seems that this may unfortunately make s-risk more likely, as AGI may find it worthwhile to run experiments on humans. See “More on the ‘human experimentation’ s-risk” at the bottom of this page: https://www.reddit.com/r/SufferingRisk/wiki/intro/
Two things one might be concerned about with regard to psychedelic usage are acute, highly unpleasant experiences (“bad trips”) and HPPD (hallucinogen persisting perception disorder). Anecdotally, both happened to me from my first and only psychedelic experience.
My HPPD is very mild now and doesn’t bother me, though it did at first. Some people have other drugs on hand during their psychedelic experiences as “tripkillers” in case they have a very bad psychological reaction.
Psychedelics are pretty psychologically strong stuff and I would not recommend experimenting with them at your son’s age.
I think the intuition error in the Chinese Room thought experiment is concluding that the Chinese Room doesn’t know Chinese just because it’s the wrong size/made out of the wrong stuff.
If GPT-3 were literally a Giant Lookup Table of all possible prompts with their completions, then sure, I could see what you’re saying, but it isn’t. GPT is big, but it isn’t that big. All of its basic “knowledge” it gains during training, but I don’t see why that means all the “reasoning” it produces happens during training as well.
Nabokov is less popular and more prestigious than JK Rowling, and I prefer reading him, and get more pleasure out of doing so. I wouldn’t jump to the conclusion that people who say they prefer Beethoven to nightcore are lying to themselves. People’s tastes really do differ quite a lot.
I also think “only listen to/read/watch whatever gives you the most units of pleasure per minute” is a meme that discourages people from seeking out a wide range of experiences. It’s suspiciously “wireheady”. If life were just guilty pleasures, it would be a lot more boring. Better to take the risk of listening to jazz for a while, before realizing you’ve only been pretending to like it, than to just listen to pop music all the time and miss out on the possibility of having a new kind of experience.
Anticipated experience is just my estimate of the percentage of future-mes with said experience. Whether any of those future-mes "actually exist" is meaningless, though; it's all just models.
So the idea is that you’re taking a percentage of the yous that exist across all possible models consistent with the data? Why? And how? I can sort of understand the idea that claims about the external world are meaningless in so far as they don’t constrain expectations. But now this thing we’ve been calling expectations is being identified with a structure inside the models whose whole purpose is to constrain our expectations. It seems circular to me.
It didn’t have to be this way, that the best way to predict experiences was by constructing models of an external world. There are other algorithms that could have turned out to be useful for this. Some people even think that it didn’t turn out this way, and that quantum mechanics is a good example.
Why? You'll end up with many models which fit the data, some of which are simpler, but why is any one of those the "best"?
I care about finding the truth. I think I have experiences, and my job is to use this fact to find more truths. The easiest hypotheses for me to think of that incorporate my data make ontological claims. My priors tell me that something like simplicity is a virtue, and if we’re talking about ontological claims, that means simpler ontological claims are more likely. I manage to build up a really large and intricate system of ontological claims that I hold to be true. At the margins, some ontological distinctions feel odd to maintain, but by and large it feels pretty natural.
Now, suppose I came to realize that ontological claims were in fact meaningless. Then I wouldn’t give up my goal of finding truths, I would just look elsewhere, maybe at logical, mathematical, or even moral truths. These truths don’t seem adequate to explain my data, but maybe I’m wrong. They are also just as suspicious to me as ontological truths. I might also look for new kinds of truths. I think it’s definitely worth it to try and look at the world (sorry) non-ontologically.
The vast majority of philosophers definitely do not favor maximizing the amount of hedonium. Pure hedonistic utilitarianism is a relatively rare minority view. I don’t think we should try to explain how people end up with specific idiosyncratic philosophical views by this kind of high-level analysis…