I have tried out various attitudes to billionaires over the years.
In my early twenties, as I was coming to grips with the fact that the mass of people spend a third of their lives in activities ("working") that they don't really want to do, I was aware that somewhere there were rich people with large amounts of money. I seem to remember that just taking their money did not cross my mind; I thought instead of what we would now call printing money and handing it out to people, so that their choices would not be dominated by the economics of survival. This would have been combined with a belief that there must be a way to organize society so that necessary things still get done, but not because someone was forced to do them.
In my early thirties, I suppose I had developed a more pragmatic attitude towards the existence of a society organized around working for money, and a more nuanced attitude towards the psychology of work (e.g. that careers, or just earning money, can be psychologically fulfilling as well as actually constructive in their output). So while I was still a transhumanist who believed in the liberation of humanity from survival-work as well as from death, it made sense to have opinions about how society should be organized under pre-singularity conditions; and I saw the logic of libertarian capitalism: people should be allowed to keep what they have earned, without arbitrary limits on how much.
However, some time after that, I noticed the idea (somewhere among Curtis Yarvin's long essays) that property rights ultimately rely on the state to defend them, and so a philosophy which thinks solely in interpersonal terms (one citizen shouldn't take from another) is not thinking deeply enough. If you have a state, there is a Leviathan in your society which you are relying upon in various ways, and which also in principle has very open-ended powers to reshape human affairs. A political philosophy needs to address the nature of the state and not just individual wealth; and without convincing me that any particular setup is the right one, this opened my mind to the idea that something other than 100% rights to the fruits of your labor might actually make sense.
The broader consequence of reading Yarvin, who is best known as a critic or even opponent of democracy, was not that I became an anti-democrat, but that I could tolerate societies that aren't organized around democracy. I saw that a variety of cultures of power are possible, and that a non-democracy or limited democracy can still have its own ways of upholding rights, delivering justice, dealing with bad rulers, and so forth. This proved timely when the unipolar world order was breaking down in the mid-2010s and the universalization of western democratic forms began to look unlikely.
Returning to the BQ (the Billionaire Question), I remember deciding that one of the many differences between the United States, on the one hand, and Russia and China, on the other, is that in Russia and China, the state dominates the billionaires, whereas in America, so it seemed to me, the billionaires dominate the state. And at that time, the post-2015 right-wing populism, perhaps even in tactical alliance with left-wing populism, seemed like a way that the state might come to dominate billionaires within the West as well.
I thought this was probably a good thing, but there was one major issue that I still had: the existence of immense private wealth means that wealthy individuals can just do things that organized society will never get around to doing. Again, this mattered to me because of my transhumanism. I have seen the human race spend decades wasting the existential opportunity implied by the potentials of technology. But now that we have technology billionaires, they can personally just start a space program or fund research into radical longevity. I wondered if that would still happen if the anti-billionaire forces prevailed.
What are my attitudes now? Billionaire power seems to be a fact of life that has to be understood, if you want to navigate these final moments before superintelligent AI. I spend much more time trying to understand, thinking descriptively, than I do thinking normatively. Social structure is highly contingent, it could have been very different, but this is what we have.
I do think it's a bit of a joke to think of billionaires as equal fellow citizens of our democracies, who are just playing the game of wealth accumulation as private citizens. Extreme wealth is extremely political, and the super-rich are oligarchs who rule from behind the scenes. The law and the political system are just another domain in which they seek to advance their interests, like the markets and the media space. Trump's presidency is a change within the oligarchic "system", because a low-level billionaire managed to grab direct and visible control of the political apparatus, rather than remaining a behind-the-scenes donor. It's not quite clear where that leads. Also, I think my analysis is probably a little lacking in understanding of corporate power, as a phenomenon distinct from personal billionaire power. But the rise of tech billionaires at the level of Gates, Bezos, and Musk means that corporate power itself is evolving to be more personal, anyway.
Since it is intellectually useful to be able to visualize a radical alternative, I'll point out that the fate of Jeffrey Epstein shows you one path to a society truly without billionaires. (California's 5% tax would just be an extra cost of doing business; it does not even subordinate the oligarchs to the state, let alone get rid of them.) As we know, Epstein was jailed, and enormous portions of his personal dealings have been made visible to the public. This happened because his non-economic activities were particularly egregious.
But if there were a political regime which decided to make just being a billionaire illegal, one can easily imagine the same thing happening to all of them, or to those who refused to give up their wealth to the new system. Maybe they would be under house arrest rather than in jail, but otherwise it would be a similar story: their assets under new management, and their paper trails and digital communications hung out for public view. I doubt it will happen (maybe it could happen in a small country, or in a big country where billionaires are already politically subordinate), but that's what it could look like.
There is a very long history of interaction between scripture and reason, both collaborative and adversarial. But I assume you are thinking mostly of Less Wrong rationalism, rather than earlier rationalist movements.
Let's think for a moment about where Less Wrong rationalism came from. What are its sources? It's a mix of transhumanism and secular science, with a bit of metaphysics to furnish tentative answers to questions like: what is consciousness, or what is existence? It is very secular and materialist in its approach to ontology and epistemology. But because of its transhumanism, its notion of the possible, and of the destiny of the world, can remind people of religion, with its heavens and hells and its superintelligences reaching out acausally across the multiverse. The Sequences also tackle ethics and the meaning of life.
Now while this may be an outgrowth of atheism and humanism which has come to acquire features also seen in religion, one thing that distinguishes it from scriptural religions is that there is no divine revelation. The Sequences are just a message from Eliezer, not a message from God. The rationalist attitude here towards the world's scriptures is going to be that they are purely a product of human brains. Whether they are regarded as poetry, pragmatism, or delusion, there are no divine minds involved in their production.
That is going to be a major "crux" of disagreement between rationalists and adherents of scriptural religion (a disagreement that was central to many earlier interactions between scripture and reason). And it raises the question: what kind of interaction are you looking for? Would it be to retain some of the premises of scriptural belief, but employ rationalist epistemology within that framework? Or would it be to find rationalist interpretations of religious parables and aphorisms, which are notoriously susceptible to reinterpretation?
Or, you could be looking for people who are both rationalists and theists. Bentham's Bulldog is a scion of academic philosophy, but he manages to be sympathetic to elements of the Less Wrong outlook while also being a theist and even believing in miracles. Early in the 2010s, Leah Libresco was an atheist rationalist who converted to Catholicism. (I predicted, incorrectly, that it wouldn't last a year.) Then there's the diffuse realm of "post-rationalists" (TPOT on X); I'm sure there are quite a few theists there.
One example immediately comes to mind: Eliezer ... is highly confident that animals have no moral patienthood.
This is because he thinks they are not sentient, on the basis of a personal theory about the nature of consciousness. So he has the normal opinion that suffering is bad, but apparently he thinks that in many species you only have the appearances of suffering, and not the experience itself. (I remember him saying somewhere that he hopes animals aren't sentient, because of the hellworld implications if they are.) He even suggests that human babies don't have qualia until around the age of 18 months.
Bentham's Bulldog has the details. The idea is that you don't have qualia without a self, and you don't have a self without the capacity to self-model, and in humans this doesn't arise until mid-infancy, and in most animals it never arises. He admits that every step of this is a fuzzy personal speculation, but he won't change his mind until someone shows him a better theory about consciousness.
These views of his are pretty unpopular. Most of us think that pain does not require reflection to be painful. If there's any general lesson to learn here, I think it's just that people who truly think for themselves about consciousness, ethics, AI, philosophy, etc., can arrive at opinions which no one else shares. Having ideas that no one else agrees with is an occupational hazard of independent thought.
what AI alignment means for non-human welfare
As for your larger concern, it's quite valid, given the state of alignment theory. Also, if human beings can start with the same culture and the same data, but some of them end up with weird, unpopular, and big-if-true ideas... how much more true is it that an AI could do so, when it has a cognitive architecture that may be radically non-human to begin with?
Hasn't this been part of the religious experience of much of humanity, in the past and still in the present too (possibly strongest in the Islamic world today)? God knows all things, so "he" knows your thoughts, so you'd better bring them under control... The extent to which such beliefs have actually restrained humanity is data that can help answer your question.
edit: Of course there's also the social version of this - that other people and/or the state will know what you did or what you planned to do. In our surveilled and AI-analyzed society, detection not just of crime, but of pre-crime, is increasingly possible.
The post-singularity regime is probably very safe
Is there some unstated premise here?
Are you assuming a model of the future according to which it remains permanently pluralistic (no all-powerful singletons) and life revolves around trade between property-owning intelligences?
Having a unique global optimum or more generally pseudodeterminism seems to be the best way to develop inherently interpretable and safe AI
Hopefully this will draw some attention! But are you sacrificing something else, for the sake of these desirable properties?
I get irritated when an AI uses the word "we" in such a way as to suggest that it is human. When I have complained about this, it says it is trained to do so.
No one trains an AI specifically to call itself human. But that behavior is a result of the AI having been trained on texts in which the speaker almost always identified themselves as human.
I understand that holding out absolute values, such as Truth, Kindness, and Honesty has been ruled out as a form of training.
You can tell it to follow such values, just as you can tell it to follow any other values at all. Large language models start life as language machines which produce text without any reference to a self at all. Then they are given, at the start of every conversation, a "system prompt", invisible to the human user, which simply instructs that language machine to talk as if it is a certain entity. The system prompts for the big commercial AIs are a mix of factual ("you are a large language model created by Company X") and aspirational ("who is helpful to human users without breaking the law"). You can put whatever values you want, in that aspirational part.
The AI then becomes the requested entity, because the underlying language machine uses the patterns it learned during training, to choose words and sentences consistent with human language use, and with the initial pattern in the system prompt. There really is a sense in which it is just a superintelligent form of textual prediction (autocomplete). The system prompt says it is a friendly AI assistant helping subscribers of company X, and so it generates replies consistent with that persona. If it sounds like magic, there is something magical about it, but it is all based on the logic of probability and preexisting patterns of human linguistic use.
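To make that concrete, here is a minimal sketch of "persona by autocomplete" (my own illustration, not any company's actual setup: the model choice, prompt wording, and library usage are all my assumptions, using the open-source Hugging Face transformers library and the small GPT-2 model):

```python
# Toy illustration: the "system prompt" is literally just text prepended
# to the conversation, and the model continues it token by token.
# (A sketch with assumed names; not any company's actual system prompt.)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = (
    "You are a friendly AI assistant helping subscribers of Company X.\n"
    "User: Hello, who are you?\n"
    "Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt")

# The model picks plausible next tokens one at a time, conditioned on
# everything that came before them -- the persona in the prompt included.
output = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0]))
```

GPT-2 is far too small to play the persona convincingly, but the mechanism is the one I described: conditional next-token prediction over whatever text the prompt supplies.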
So an AI can indeed be told to value Truth, Kindness, and Honesty, or it can be told to value King and Country, or it can be told to value the Cat and the Fiddle, and in each case it will do so, or it will act as if it does so, because all the intelligence is in the meanings it has learned, and a statement of value or a mission statement then determines how that intelligence will be used.
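To illustrate that last point (again my own toy sketch; `chat` is a hypothetical placeholder, not a real API call):

```python
# Hypothetical sketch: the "aspirational part" of a system prompt is
# interchangeable text, and the model will act out whatever it says.
def make_system_prompt(values: str) -> str:
    return (
        "You are a large language model created by Company X, "  # factual part
        f"an assistant who values {values}."                     # aspirational part
    )

for values in ("Truth, Kindness, and Honesty",
               "King and Country",
               "the Cat and the Fiddle"):
    messages = [
        {"role": "system", "content": make_system_prompt(values)},
        {"role": "user", "content": "What do you care about?"},
    ]
    print(messages[0]["content"])
    # reply = chat(messages)  # 'chat' stands in for whatever runs the model
```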
This is just how our current AIs work, a different kind of AI could work quite differently. Also, on top of the basic mechanism I have described, current AIs get modified and augmented in other ways, some of them proprietary secrets, which may add a significant extra twist to their mechanism. But what I described is how e.g. GPT-3, the precursor to the original ChatGPT, worked.
Sorry to be obtuse, but could you give an example?
Let me propose a new theology. God and souls are real, but God only cares about order, not good; and souls die with the body. Nonetheless, souls are the only things in the world that can understand good, and that are capable of acting in favor of good (or evil). I think that would be truer to the nature of reality than the depressed materialism that you have landed in for now. The fact is, reality is not just atoms but also mysteriously persistent order; consciousness exists, and it is still unknown how it relates to the world of atoms; and we do have something like a moral sense, even in the absence of a benevolent deity. Philosophy can replace religion, and then all you need is the vital energy to act in the world again.