Philosophy is notorious for not answering the questions it tackles. Plato posed most of the central questions more than two millennia ago, and philosophers still haven't come to much consensus about them. Or at least, whenever philosophical questions begin to admit of answers, we start calling them scientific questions. (Astronomy, physics, chemistry, biology, and psychology all began as branches of philosophy.)
A common attitude on Less Wrong is "Too slow! Solve the problem and move on." The free will sequence argues that the free will problem has been solved.
I, for one, am bold enough to claim that some philosophical problems have been solved. Here they are:
- Is there a God? No.
- What's the solution to the mind-body problem? Materialism.
- Do we have free will? We don't have contra-causal free will, but of course we have the ability to deliberate on alternatives and have this deliberation affect the outcome.
- What is knowledge? (How do we overcome Gettier?) What is art? How do we demarcate science from non-science? If you're trying to find simple definitions that match our intuitions about the meaning of these terms in every case, you're doing it wrong. These concepts were not invented by mathematicians for use in a formal system. They evolved in practical use among millions of humans over hundreds of years. Stipulate a coherent meaning and start using the term to successfully communicate with others.
Taking "solved" to mean "there's only one right-thinking answer, given the arguments that have been raised", I would definitely agree that the questions you list are settled.
I'm also highly confident on:
I wrote up how I "derive" my ethical position here, where I'd hoped you'd see it, but the thread was a bit old by the time I posted.
My thoughts on the teleporter problem are not novel --- I agree with Robin Hanson's take here, although I'd put it a bit differently. I came to the answer when I described the problem to a friend, and he told me immediately that I'd arrived at the reductio ad absurdum correctly but had failed to resolve it: the answer was that my definition of "myself" was broken, and that it's better to think that there will just be a future-copy of me who thinks he was me, but I will not be him. This is true in general, as it is in the teleporter problem. If you're comfortable making life easy and pleasant for future-you, you can be just as comfortable making life easy and good for teleported-you.
So I think the teleporter is a very simple problem. It's just that the answer is hidden behind a strong adaptive intuition about what constitutes identity.