Not looking at the world in a probabilistic way
Philosophy has long had the hope that eventually, somehow, it would find a set of elegant axioms from which the rest would regrow, like what happened in math. Several branches of philosophy think they did collapse it to a set of elegant axioms (though upon inspection, they actually let the complexity leak back in elsewhere). I think there's a fear, not entirely unjustified, that if you let probabilistic reasoning into too many places, then this closes off the possibility of reaching an axiomatization, or of ever reaching firm conclusions about interesting questions. Today, it's been long enough to know that the quest for axiomatization was doomed from the start - or at least, the quest for an axiomatization that wasn't itself a probabilistic thing. So allowing probabilistic reasoning shouldn't seem like a big scary concession anymore; but on the other hand, it's still difficult, and most philosophers aren't dual-classed into maths.
Using personal preference or personal intuitions as priors instead of some objective measure along the lines of Solomonoff Induction
Unfortunately, Solomonoff Induction falls off the table as soon as the questions get interesting. As a next-best-thing, intuition is not all that bad. I'd criticize a lot of philosophy, not for grounding ideas in intuition, but for treating intuition as a black box rather than as something which can be studied and debugged and improved. Most LW-style philosophy does bottom out at intuition somewhere, it just does a better-than-usual job of patching over intuition's weaknesses.
Moral realism
When you're getting started on learning game theory, there is a point where it looks like it might be building towards an elegant theory of morality: something that would reproduce our moral intuitions, be a great Schelling point, and ground morality really well. Then it runs into roadblocks and doesn't get there, so we're stuck with a hodgepodge metaethics where morality depends on an aggregation of many people's preferences, but there are different ways to aggregate one person's preferences and different ways to aggregate groups' preferences, and some preferences don't count, and it's all very unsatisfying. But if you haven't hit that wall yet, or you're very optimistic, or you're limiting yourself to sufficiently simple trolley problems, then moral realism seems like a thing.
Mathematical Platonism
This is a trap door into silly arguments about subtleties of the word "exist" which are cleanly and completely separated from all predictions. But if you want to engage with ideas like a mathematical multiverse, you do end up needing to think about subtleties of the word "exist", and math ends up looking more fundamental than physics.
Libertarian free will (I'm looking for arguments other than those from religion)
I'm not sure where libertarian free will sits in relation to the rest of the ideas about free will, but I find that thinking about free will gets a lot easier if you first acknowledge that our intuitions are guided by the idea of ordinary freedom (i.e., whether there's a human around with a whip), and then go a step further and just think about ordinary freedom instead.
The view that there actually exist abstract "tables" and "chairs" and not just particles arranged into those forms
These ideas come back in slightly different forms when you start considering mathematical multiverses and low-fidelity simulations of the universe. For example, if you accept the simulation argument, and further suppose that the simulation would not be full-fidelity but would be designed to make this fact hard to notice, then you get the conclusion that certain abstract objects exist and their constituent particles don't.
The existence of non-physical minds (I'm looking for arguments other than the argument from the Hard Problem of Consciousness)
The idea of minds as cognitive algorithms leads to something sort of like this: in that framing, minds are physical objects with a dual existence in platonic math-realm, which diverges if physics causes a deviation from the algorithm.
For a while now I've been trying hard to understand philosophical viewpoints that differ from mine. Somewhere along the line I've picked up or developed a lot of the LW-typical viewpoints (I'm not sure if this was because of LW, or if I developed them earlier and that's what later attracted me to LW), but I know there are a lot of smart people out there who disagree with those viewpoints. I've tried to read articles and books on this, but they either don't address what I'm looking for somehow, or they're so technical that I have a hard time following them. I've also talked at some length with a philosophy professor, but our conversations often seem to end with me still being confused, and the professor being confused about what it is I might be confused about.
I'm thinking maybe it'll help to get some input from people who do intuitively agree with my viewpoints, hence this post. So, can someone please tell me what the central arguments or motivations are for promoting the following:
Epistemology:
Ontology / philosophy of mind:
I suspect I'm having trouble with the ontology issues because of my trouble understanding the epistemology issues. Specifically, I keep getting the impression that most (all?) of the arguments for the ontology issues boil down to trusting philosophical intuitions and/or the way people use words. Something along the following lines:
Or the equivalent using the way people talk about things.
But this just seems totally ludicrous to me. If we trust cognitive science, evolutionary psychology, etc., and if those fields give us perfectly plausible reasons for why we might intuitively feel this way / talk this way, even if it didn't reflect the truth, then what could possibly be your motivation for sticking to your intuitions anyway and using them to support some grand metaphysical theory?
The only thing I can think of is that people who support using intuitions like this say, "well, you're also ultimately basing yourself on intuitions for things like logic, existence of mind-independent objects, Occamian priors, and all the other viewpoints that you view as intuitively plausible, so I can jolly well use whatever intuitions I feel like too." But although I can hear such words and why they sound reasonable in a sense, they still seem totally crazy to me, although I'm not 100% sure why.
Any help would be appreciated.