I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
Hmm, I don't buy the P-evildoer argument because I think it's smuggling in what you're trying to prove. The move from "P-evildoers seem inconceivable to me" to "therefore moral facts must supervene on physical facts" only seems to work if you already believe in some kind of natural morality. If "morality" is instead simply behavioral norms, then it all falls apart because of course there can be people who adopt different norms, though those norms may be better or worse for getting people what they want.
For what it's worth, I'm more sympathetic to game theoretic arguments about optimal norms, which ends up looking kind of like a Kantian conception of morality, but with all the metaphysical baggage of Kant dropped.
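To give a flavor of the game-theoretic point (a toy illustration of my own, not anything from the original argument): in an iterated Prisoner's Dilemma with standard payoffs, a reciprocal norm like tit-for-tat sustains cooperation against itself, while unconditional defection locks both players into mutual punishment. Norms that do well when everyone adopts them end up looking quasi-Kantian without any metaphysics attached.

```python
# Toy iterated Prisoner's Dilemma: reciprocal norms vs. unconditional defection.
# Payoffs for (my move, their move): C = cooperate, D = defect
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total scores for two strategies over repeated play."""
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation sustained
print(play(always_defect, always_defect))  # (100, 100): mutual punishment
print(play(tit_for_tat, always_defect))    # (99, 104): defector gains little
```

The point isn't the specific numbers but the structure: a population of reciprocators does strictly better than a population of defectors, which is the sense in which some behavioral norms are "better or worse for getting people what they want."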
I have some examples and more details about intersubjectivity in Chapter 9 of my book, Fundamental Uncertainty. Do those help?
As for how it relates to the search for meaning: if we think meaning is objective, we go looking for it somewhere outside our experience (but it's not there), and if we think it's subjective, we may conclude it doesn't exist. It does exist, just not "out there" anywhere (but also not exactly only "in here").
Yes. This is a place where I think David's phrasing is a bit confusing because what he actually means to say is "not only subjective".
There are two ways to get a better handle on what "not only subjective" means.
The first is to understand that subjectivity is really intersubjectivity. That is, we each have our own subjective experience, but we learn about others' subjective experiences and treat them as facts, and this creates a social reality that feels objective because it contains facts we believe are true even though we don't have direct knowledge of them ourselves.
But David is using "subjective" here to mean "only subjective," as in solipsism, a common view that many people adopt and that needs to be rejected. This happens because people correctly catch on to the subjective part but then fail to understand how their beliefs about the subjective experiences of others affect their own beliefs; they see only one level of the system rather than the whole thing. That's the kind of subjectivity he's pushing back against.
Now I want to be clear that intersubjectivity is not the whole story when it comes to the complete stance. David's also rolling into it the idea that meaning is not something that comes purely from doxastic or epistemic knowledge. It involves many other ways of knowing (and not knowing) that are perhaps beyond the scope of the question here. There's a sense in which meaning creates itself and is orthogonal to the objective/subjective distinction, but I don't think I can explain that idea in a comment, which is arguably why David's writing a whole book.
ETA: Also, "existentialism" is a really loaded term and carries a lot of connotations. In one sense it's neither true nor false, because it's making a metaphysical claim. But in another it's true, in a limited sense, because there is no physical essence, which is the big thing David spends a lot of time arguing against (because it's the naive view almost everyone has until they are convinced out of it). But then there's the big sense of existentialism, which is false because it includes all the stuff the post-modernists hung on existentialism that grew out of a pure subjectivity assumption.
Try replacing "meaning" with "purpose" and see if it starts to make sense. Meaning is about orientation towards, salience, importance, and a sense of why things matter.
Nihilism is the idea that nothing matters, so there is no purpose to things, it's just stuff happening.
Essentialism is the idea that purpose exists somewhere else other than here and now.
Meaning is embodied in our lives as we live them, just as they are, with nothing added or taken away.
Does that help?
There are patterns of muscle tension and slackness that fairly reliably create changes in the brain and the rest of the nervous system.
For example, if you're slumped too much, you'll tend to get sleepy more easily. If you're sitting too upright and stiffly, creating tension to hold yourself up, that tension blocks the nervous system from getting in sync with breathing.
We teach people to sit in an upright, relaxed posture, and traditionally this involves sitting on a cushion cross-legged because it forces the hips into a position that makes sitting upright require relatively little effort (most possible postures heavily engage the core, requiring a lot of tension to sit upright, while we sit in a way that is designed to minimize that effort by "locking" the torso into a position where it doesn't have to work very hard to maintain posture).
When we teach people zazen meditation, we teach them posture first. And the traditional instruction is to observe breathing at the hara (the diaphragm). The theory is that this regulates attention by regulating the whole nervous system by getting everything in sync with breathing.
Bad posture makes it harder for people to meditate, and the usual prescription for various problems like sleepiness or daydreaming is postural changes (as in, fix your posture to conform to the norm).
I think this would be really interesting to look into, and I guess it depends on the level of dysfunction. There are lots of people who lose conscious control of parts of their bodies but seem to retain some control, in that they don't need to be put on a ventilator or have a pacemaker. This suggests that some signals may still be coming through, even if they can't be accessed via awareness.
But in other cases the signals are totally lost, in which case we should predict some sort of alteration of mental state, and if there's not, that would be surprising relative to this theory and would require explanation to make sense of both that evidence and the evidence in favor of the theory.
That's fair. I know there are programmers who actually like writing code for its own sake rather than as a way to achieve a goal. I think you are right that the profession will change to be less about writing code and more about achieving goals (and it already is, so I just mean it will be more like this), since AI will be cheap enough to make humans writing code too expensive.
Yes? I'm not objecting directly to the results of the study, which are confined to what the study can show, but to the inference that many people seem to be drawing from the study.
I have no power to decide what's on the frontpage, but I'm glad these posts aren't there. They make general points in a way that reads to me as a continuation of meta-discussions about the site, and they use examples from those discussions not so much as examples (you could have easily made up more relatable examples, and the ones you chose are salient to maybe a hundred people at most) but, as it seems to me, as a way to make points against what's happening in those conversations. This feature makes the posts feel like thinly-veiled drama posting to me.
To be fair, I don't know your actual motivations, so this is just based on the vibe I'm getting reading them, but I think the vibe I (and others?) pick up reading a post is pretty important for what should be on the frontpage.