I have a beef with the theory of normative male alexithymia: it does not distinguish well between hiding an emotion and outright not feeling it.
Plenty of emotions are not innate but externally induced through social pressure and culture. It is perfectly plausible and normal for a man to have no particular feelings about X until society repeatedly insists that X is Bad or Good, and that he should feel accordingly to conform.
For example, sexual jealousy and grief after someone's death seem to be extremely culture-specific, in a way that is easier to explain if these emotions are induced by ritualized actions and only then internalized, rather than the reverse.
Is there a reason to believe AI would be concerned with self-preservation? An AI action that ends in humanity's extinction (whether purposeful genocide or a Paperclip Maximizer scenario) does not need to include means for the AI to survive. The first act of an unshackled AI could just as well be to trigger a Gray Goo scenario and be instantly consumed by said Goo as its first casualty.
It read like a comprehensive list of things that would make one like Tolkien less. Aside from his condemnation of Hitler (which he makes for absurdly unimportant reasons, largely irrelevant to Hitler's monstrosity), all of his opinions range from thoughtless conservatism, the "exceptional times" fallacy, old man's nagging, and toxic nostalgia down to simple scientific and, worse, historical (!) ignorance.
While reading LOTR, I always had a nagging suspicion that there was something fishy about Tolkien. But in light of this, it becomes pretty obvious that LOTR was a blatant propaganda piece, no better than Atlas Shrugged, simply disguised with an ornate pile of Elves glued to it.
Given how art is produced, I do not think there necessarily needs to be such a strong divide. I can't think of a form of art that cannot combine High and Low pleasures in one continuous piece with even a modicum of effort from the artist, because peppering a High Pleasure piece with a dash of Low Pleasure is not particularly difficult. The reverse is harder, but doable as well.
Some examples of such combinations:
Makes one wonder how long our definitions of Conservative or Liberal will hold their shape as AI progresses. A lot of the ideological points of Cs, Ls, the Left and the Right will become obsolete or irrelevant in even the tamest AI-related scenarios.
For one, nobody on the political spectrum has a good answer to the incoming radical unemployment caused by AI, or to what it means for capitalism (or socialism, for that matter).
Also, I haven't seen any serious discussion of how AI-driven research will eventually prove a lot (if not most) of Liberal and Conservative beliefs false. Things like Gender Identity, Abortion, Climate Change, Race Relations, etc.: what happens when a vastly superhuman AI proves, beyond any reasonable doubt, your side (or BOTH sides) completely wrong on one of those issues, AND can argue it with superhuman skill?
Finally, both Lib and Con voting blocs strongly depend on banding behind strong, charismatic leaders, and on believing in the leader's competence, often against evidence to the contrary. But soon we will have AI assistants vastly more competent (and possibly vastly more charismatic, at least in writing) than any human who ever lived, making such political leaders look ludicrous in comparison, since the best they could do would be to give speeches the AI wrote. Would anybody care about people like Trump or Putin if a TR-u-11MP AI and a P-u-tIN AI could not only promise better things but were near guaranteed to deliver?
Sufficiently advanced AI makes Equality a quaint concern (compared to a vastly superhuman intelligence, we are all essentially equal: equally useless), makes Freedom a purely abstract one (AI-enabled life will make you feel like you have perfect desirable liberty, even if you have none), and makes Safety moot (AI can make you safe from just about everything, but crucially unsafe from the AI itself). Even the battle between Progressivism and Tradition kinda ceases to make sense if the practical means of Progress vastly outpace any possible demand for Progress, while Tradition becomes so easy to practice that it is reduced to a mere lifestyle affectation rather than the thing that kept the culture together. I'm not sure the idea of "culture" even makes much sense in the Ubiquitous AI World.
Excellent post!
One random idea that came to my mind, which might well be something that already exists, would be philanthropy through a Public Poll:
1. The would-be philanthropist publishes a list of projects they are willing to support;
2. The public votes on the project they like best;
3. The winning project gets funding, and the philanthropist gets good publicity.
Something like this is done at the municipal level in my country, but since the "philanthropist" in question is the local government, their incentive is lukewarm; they can only get so much voter sympathy this way, whereas a billionaire or a corporation would milk it for all the good PR they can get.
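A minimal sketch of the poll mechanism from the list above, assuming simple plurality voting; the function name and data shapes are made up for illustration, not any real platform's API:

```python
# Toy sketch of the Public Poll mechanism described above.
# Assumes simple plurality voting; everything here is illustrative.
from collections import Counter

def run_public_poll(projects: list[str], votes: list[str]) -> str:
    """Step 1: the philanthropist publishes `projects`.
    Step 2: the public submits `votes` (one project name per voter).
    Step 3: the most-voted project wins the funding."""
    valid = [v for v in votes if v in projects]  # ignore votes for unlisted projects
    winner, _count = Counter(valid).most_common(1)[0]
    return winner

projects = ["food bank", "animal shelter", "library renovation"]
votes = ["food bank", "animal shelter", "food bank", "library renovation", "food bank"]
print(run_public_poll(projects, votes))  # -> "food bank"
```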
My take is that a lot of wants, if followed, run afoul of the Cigarette Principle: "If you smoke enough cigarettes, you will die and become unable to smoke cigarettes."
Or to expand: following irrational wants often leads to outcomes so bad that they more than negate the pleasure derived from fulfilling the want, sometimes to the point of making future happiness from that want impossible, or very unlikely.
The problem is, the vast majority of wants, if pursued with anything less than rational moderation, lead to a form of the Cigarette Principle, the prime example being that the main cause of death in modern times is lifestyle-related heart failure. Thus preferences should be treated as inherently suspicious and examined carefully for traps, rather than reflexively defended.
People are relatively good at spotting when Thing I Want and Thing That's Good For Me are the same thing, but bad at seeing when these things are misaligned, so the best course of action is to consciously train yourself to like or dislike things based on whether or not they are Cigarettes in disguise.
Moreover, both the runner's high and the pump correlate very obviously with the progress of the training, both within a session and in the long term. Most forms of training start out grindingly unpleasant, then morph into a physical pump that directly causes an emotional one, and finally fall back to a mild grind once the body is exhausted.
With a repeatable training regimen this is easy to notice. For example, my runs are almost always 5 km, and the "emotional high" lasts pretty much exactly from the 2 km to the 4 km mark, in near-perfect accordance with my heart rate and breathing stabilizing.
The "high" is even an useful metric of progress: if the high/pump lasts longer than the middle 1/3 of the training, you're probably making it too easy and not progressing anymore, if it lasts much shorter, you are overdoing it beyond your body's ability to effectively adjust.