the feeling of my head remaining on the pillow is motivating, but the self-reflective idea of myself being in bed is demotivating
This seems to be an example of conflicting values and their preferred resolution, not of a difference between a value and a non-value. Suppose you found your pillow replaced by a wooden log - I'd imagine that the self-reflective idea of yourself remedying this state of affairs would be pretty motivating!
For at least six months now, we’ve had software assistants that can roughly double the productivity of software development.
Is this the consensus view? I've seen people saying that those assistants give a 10% productivity improvement, at best.
In the last few months, there’s been a perceptible increase in the speed of releases of better models.
On the other hand, the schedules for headline releases (GPT-5, Claude 3.5 Opus) continue to slip, and there are anonymous reports of diminishing returns from scaling. The current moment is interesting in that there are two essentially opposite prevailing narratives that barely interact with each other.
the principles of EA imply that
The principles of Christianity not only imply that; they clearly spell it out: "If you want to be perfect, then go and sell your possessions and give the money to the poor", and yet Christianity was uncontroversial in the West for centuries, and the current secular "common sense" morality hasn't diverged particularly far from it. EAs just take ostensibly common sense principles far too seriously compared to the unspoken social consensus, in a way that's cringe for normal people. Critics don't really have a principled response to the core EA ideas either, but they don't want to appear morally delinquent, so they generally try to dismiss EAs without seriously engaging.
When circling was first discussed here, there was a comment that led to a lengthy discussion about boundaries, but nobody seemed to dispute its other main claim, that "it is highly unlikely that [somebody] would have 3-11 people they reasonably trusted enough to have [group] sex with". Do you agree with that statement, and if so, do you think that the circling/sex analogy is invalid?
And it may not be our permanent condition. The future may hold something more like a “foundation” or a “framework” or a “system of the world” that people actually trust and consider legitimate.
Our current condition is a product of our material circumstances, and those definitely aren't permanent in their essential character, as many people have variously noted. Things are still very much in flux, and any eventual medium-to-long-term frameworks would depend significantly on the (possibly wildly divergent) trajectories that major trends take in the foreseeable future. Of course, even marginal short-term improvements to the memetic hellscape would be welcome, but seriously expecting something substantial and lasting is premature.
Though generally it doesn’t seem to me like social stigma would be a very effective way of reducing unhealthy behaviors
I agree, as far as it goes, but surely we shouldn't be quick to dismiss stigma, uncouth as it might seem, if our social technology isn't yet developed enough to actually provide anything more effective instead? Humans are wired to care a great deal about status, so it's no surprise that traditional enforcement mechanisms tend to lean heavily into that.
I think generally people can maintain healthy habits much more consistently if their motivation comes from genuinely believing in the health benefits and wanting to feel better.
Humans are also wired with hyperbolic discounting, which doesn't simply go away when you brand it as an irrational bias. (I do in general feel that this community is too quick to dismiss "biases" as "irrational"; they clearly were plenty useful in the evolutionary environment, and I'd guess they still aren't quite as obsolete as the local consensus would have it, but that's a different discussion.)
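To be concrete about what the bias actually is (this is the standard hyperbolic model from the behavioral-economics literature, not anything specific to the parent comment): a reward of size $A$ delayed by time $D$ is valued as

$$V(D) = \frac{A}{1 + kD},$$

where $k$ is an individual discounting parameter, whereas a time-consistent agent would discount exponentially, $V(D) = A e^{-kD}$. The hyperbolic curve drops off much more steeply at short delays, which is exactly what produces preference reversals: you genuinely prefer the larger-later reward while both options are distant, then switch to the smaller-sooner one as it gets close.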
But I don’t think this is always true
Neither do I, of course, but my impression was that you thought this was never true.
But this still doesn’t justify the assertion that “expressing” the preference is “wrong.”
I do agree that the word "wrong" doesn't feel appropriate here; something like "ill-advised" might work better instead. If you're a sadist or a pedophile, making this widely known is unlikely to be a wise course of action.
I do not believe preferences themselves, or expressing them, should ever be considered wrong
Suppose that you have a preference for inflicting suffering on others. You also have a preference for being a nice person whose company other people enjoy. Clearly those preferences would be in constant conflict, which would likely cause you discomfort. This doesn't mean that either of those preferences is "bad" in some perfectly objective cosmic sense, but such a definition of "bad" doesn't seem particularly useful anyway.
Now it would certainly be tempting to define rationality as something like “only taking actions that you endorse in the long term”, but I’d be cautious of that.
Indeed, and there's another big reason for that - trying to always override your short-term "monkey brain" impulses just doesn't work that well for most people. That's the root of akrasia, which certainly isn't a problem that self-identified rationalists are immune to. A better approach seems to be finding compromises: developing workable long-term strategies that involve neither unlimited amounts of proverbial ice cream nor total abstinence.
But I think that quite a few people who care about “health” actually care about not appearing low status by doing things that everyone knows are unhealthy.
Which is a good thing, in this particular case, yes? That's cultural evolution properly doing its job, as far as I'm concerned.
I agree, but I think it's important to mention issues like social desirability bias and strategic self-deception here, coupled with the fact that most people just aren't particularly good at introspection.
It's both: our minds employ desires in service of pursuing our (often conflicting) values.
I'd rather put it as a routine conflict eventually getting resolved in a predictable way.
Indeed, but I claim that those statements actually mean "I want my value conflicts to resolve in the way I endorse" and "I don’t always endorse the way my value conflicts resolve".