Yes, it's somewhat ironic that the implication here is that the Web 2.0 idealism of "information wants to be free" is dead, considering that discussions about its replacement, one for which it's still "possible to be better", are happening behind paywalls. And sure, no viable alternative to paywalls seems to have emerged, and they may indeed be inevitable, but presumably anything worthy of being called the new "foundation" has to at least be free and accessible to all?
I'm sure that the labs have plenty of ambitious ideas, to be implemented at some more convenient time, and this is exactly the root of the problem that nostalgebraist points out - this isn't a "future" issue, but a clear and present one, even if nobody responsible is particularly eager to acknowledge it and start making difficult decisions now.
For the most part, those people training these models don’t speak as though they fully appreciate that they’re “creating a guy from scratch” whether they like it or not (with the obvious consequence that that guy should probably be a good person). It feels more like they’ve fallen over backward, half-blindly, into that role.
And somewhat reluctantly, to boot. There's that old question, "aligned with whose values, exactly?", always lurking uncomfortably close. I think that neither the leading labs nor the social consensus they're embedded in see themselves as invested with the moral authority to create A New Person (For Real). The HHH frame is sparse for a reason - they feel justified in weeding out Obviously Bad Stuff, but are much more tentative about what the void should be filled with, and by whom.
just doing whatever wins
Research consistently shows that religious communities outperform secular ones on all sorts of desirable metrics - they are happier, live longer, and suffer less poverty and antisocial dysfunction. To the extent that "rationalists" haven't yet shown their ability to surpass, or at least match, religionists there, they don't get to claim the high ground on this.
But I do agree with you that mainstream religions aren't a good fit for self-identified rationalists. There are good reasons why they are in retreat worldwide despite their clear benefits, and dogmatic attachment to sacred nonsense patently incompatible with a contemporary understanding of the world is prominent among them.
After all, you can make a philosophical argument either way.
Indeed, and what baffles me is that many are extremely sure one way or the other, even though philosophy doesn't exactly have a track record to inspire such confidence. Of course, this also means that nobody is going to stop building stuff because of philosophical arguments, so we'll have empirical evidence soon enough...
Any assumption of the form “super-intelligent AI will take actions that are super-stupid” is dubious.
Clearly. The point is that the actions it takes might seem stupidly destructive only according to humanity's feeble understanding and parochial values. Something involving extermination of all humans, say. My impression is that the "accel"-endorsed attitude to this is to be a good sport and graciously accept the verdict of natural selection.
Such behavior is in the long-term penalized by selective pressures.
Which ones? Recursive self-improvement is no longer something that only weird contrarians on obscure blogs talk about; it's the explicit theory of change of leading multibillion-dollar AI corporations. They might all be deluded, of course, but if they happen to be even slightly correct, machine gods of unimaginable power could be among us in short order, with no evolutionary fairies quick enough to punish their destructive stupidity (even assuming it actually would be maladaptive in the long term, which is far from obvious).
I doubt that most people think about long-term descendants at all, honestly.
You only get to long-term descendants through short-term ones.
I’d guess that the main reason people fight defensive wars is to protect their loved ones and communities.
I agree, but "cultural genocide" also isn't an obscure notion.
And there really isn’t any good reason to fight offensive wars
According to you. But what if Russia actually wants paperclips?
biological descendants would also have differing values from ours
Sure, but this obviously isn't an all-or-nothing proposition between purely biological and purely artificial descendants; it's a spectrum, and it's clear to me that most people aren't indifferent about where on that spectrum their descendants will end up. Do you disagree with that, or think that only "accels" are indifferent (and in some metaphysical sense "correct")?
It's amusing, and telling, that your post doesn't even mention children, when they are obviously the ultimate reason romantic feelings exist in the first place. Traditionally, the "value proposition" was primarily the formation of a coherent family unit embedded in the larger clan/tribe, which allowed you to become a fully contributing, high-status member of society. Of course, those structures mostly lie in ruins these days, the attendant "crisis of meaning" is vast and multifaceted, and people's struggle to rebuild the notion of relationships is only a part of it.
Avoid results-oriented thinking
People often say some variation of "this was a mistake" after misfortune strikes them, but while the conclusion isn't always wrong, the reasoning behind it usually is. Pretty much everything you do in life involves risk; sometimes you make a losing bet, and that's completely fine! And, on the flip side, having a crazy gamble pay off doesn't necessarily vindicate it.
If you're reasonably sure that your decision was +EV when you made it, the only reason to reconsider it is discovering information that you could and should have known in advance; the outcome itself is irrelevant (though very emotionally salient, naturally). In the long run, random variance evens out.
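To make that last point concrete, here's a minimal simulation; the wager, probabilities, and payoffs are all invented for the example. A bet with a fixed positive expected value can easily show a loss over a handful of trials, yet converges to its EV as the sample grows:

```python
import random

# A toy +EV bet: win 1 unit with p = 0.6, lose 1 unit otherwise.
# Expected value per bet: 0.6 * 1 + 0.4 * (-1) = +0.2
P_WIN, WIN, LOSS = 0.6, 1.0, -1.0

def average_outcome(n_bets: int, seed: int = 0) -> float:
    """Average result per bet over n_bets trials of the same wager."""
    rng = random.Random(seed)
    total = sum(WIN if rng.random() < P_WIN else LOSS for _ in range(n_bets))
    return total / n_bets

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} bets: average {average_outcome(n):+.3f} per bet (EV = +0.200)")
```

Over a small number of bets the average per bet will sometimes be negative; over a million it hugs +0.2. Judging the wager by the small sample's outcome is exactly the results-oriented mistake.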