If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.
My main "claims to fame":
How does the Shareholder Value Revolution fit into your picture? From an AI overview:
1. The Intellectual Origins (The 1970s)
The revolution was born out of economic stagnation in the 1970s. As U.S. corporate profits dipped and competition from Japan and Germany rose, economists and theorists argued that American managers had become "fat and happy," running companies for their own comfort rather than efficiency.
Two key intellectual pillars drove the change:
- Milton Friedman (The Moral Argument): In a famous 1970 New York Times essay, Friedman argued, "The social responsibility of business is to increase its profits." He posited that executives spending money on "social causes" (like keeping inefficient plants open to save jobs) were essentially stealing from the owners (shareholders).
- Jensen and Meckling (The Economic Argument - "Agency Theory"): In 1976, these economists published a paper describing the "Principal-Agent Problem." They argued that managers (agents) were not aligned with shareholders (principals). Managers wanted perks (corporate jets, large empires), while shareholders wanted profit. The solution? Align their interests by paying executives in stock.
It seems to better fit my normative picture of human values: terminal values come from philosophy, and the subservience of instrumental values to terminal values improves over time as we get better at it, without the need to permanently raise instrumental values to terminal status or irreversibly commingle the two.
Thank you, this is helpful for clarifying and reminding me of what Ben was trying to say.
> Through a mix of ambient cultural pressures silencing or warping the clarity of good meaning folk
Do you or Ben have a more detailed explanation of what happened here? What can/should "good meaning folk" do to prevent this? Should I personally be worried about something like this?
I'm pretty confused by your comment. Surely there are arguments other than wastefulness for not having cycles in one's terminal/intrinsic values? Like if I prefer to tile the universe with qualia A more than qualia B, and prefer B to C, and C to A, how do I actually make the decision of what qualia to tile the universe with?
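To make the difficulty concrete, here is a minimal sketch (my own illustration, using placeholder labels A/B/C) of why a preference cycle leaves no coherent choice: every option is dispreferred to some other option, so there is nothing undominated to pick.

```python
# Minimal sketch: with a cyclic preference A > B > C > A, no option is undominated.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def undominated(options):
    """Options that no other option is preferred to."""
    return [x for x in options if not any((y, x) in prefers for y in options)]

print(undominated(["A", "B", "C"]))  # [] -- every option loses to some other option
```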
Possible root causes if we don't end up having a good long-term future (i.e., realize most of the potential value of the universe), with illustrative examples:
Is this missing anything, or is it perhaps not a good way to break down the root causes? The goals for this include:
"Utility" literally means usefulness, in other words instrumental value, but in decision theory and related fields like economics and AI alignment, it (as part of "utility function") is now associated with terminal/intrinsic value, almost the opposite thing (apparently through some quite convoluted history). Somehow this irony only occurred to me ~3 decades after learning about utility functions.
Isn't living in cities itself driven at least in part by memetics (e.g., glamour/appeal of city living shown on TV/movies)? Certainly memes can cause people to not live in cities, e.g., the Amish or the meme of moving out to the suburbs to raise kids.
Oops, thought I could trust a "reasoning" AI (Gemini 3 Pro) for such a simple-seeming question. I had it redo the estimate taking your comment into account, and it came up with 1m assuming N(90,15) globally, which still felt wrong, so I had it redo the estimate using country-level data, and it ended up with 7.5m total: 6.1m in East Asia, 1.1m in the West, and 0.3m in the rest of the world. This assumed N(105,15) for East Asia (so not quite country-level data), which Opus and GPT point out might be an overestimate, since China is a bit lower than that. I had them redo the East Asia estimate using country-level data, and they came up with 4.5m and 5.5m for East Asia, respectively (using N(103,15) and N(104,15) for China).
This is actually a significant update for my mental world model, as I didn't previously realize that China had more than half of the world's population of IQ>145 people.
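For anyone who wants to sanity-check these numbers themselves, the underlying calculation is just a normal tail probability times a population figure. Here's a rough sketch (the helper function, population sizes, and distribution parameters below are my own illustrative assumptions, not necessarily what the models actually used):

```python
# Rough sanity check: expected number of people above an IQ threshold,
# assuming IQ ~ N(mean, sd^2) within a population of the given size.
from scipy.stats import norm

def count_above(threshold, mean, sd, population):
    return population * norm.sf(threshold, loc=mean, scale=sd)

# Globally under N(90,15) with ~8.1B people -> roughly the "1m" figure
print(f"{count_above(145, 90, 15, 8.1e9) / 1e6:.1f}m")   # ~1.0m
# East Asia under N(105,15) with ~1.6B people -> roughly the "6.1m" figure
print(f"{count_above(145, 105, 15, 1.6e9) / 1e6:.1f}m")  # ~6.1m
```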
It looks like Part 1 was never cross-posted to LW. Please fix this?
Assuming I have an IQ of 145, there are ~11 million people on Earth smarter than me (i.e., with a higher IQ), but almost none of them, including e.g. Terence Tao, are trying to do something about AI x-risk even at this late date. Updating on this has to move one directionally away from HIA, right, compared to one's prior 10 years ago?
10 years ago you could say that those ~11m had just never thought about AI, but today the conclusion seemingly has to be that strategic competence is surprisingly little correlated with, or not much scaled by, intelligence. If true, this would mean that HIA wouldn't do much for the key bottleneck of humanity's strategic incompetence[1], but could easily make things worse by creating more and harder strategic problems.
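(For reference, the ~11 million figure is just the 3-standard-deviation tail of an assumed global N(100,15) distribution applied to a world population of roughly 8.1 billion:

$$P(\mathrm{IQ} > 145) = P(Z > 3) \approx 0.00135, \qquad 0.00135 \times 8.1 \times 10^{9} \approx 1.1 \times 10^{7}.$$

The earlier exchange above suggests that the global N(100,15) assumption is itself questionable.)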
Yeah, so I think this is probably my most salient crux at this point: what does the "strategic competence landscape" look like after significant HIA has occurred?
How to explain this? (I note that he not only fails to proactively address these questions, but also ignores them when others raise them, which seems totally inexplicable to me. Or at least that was my experience when I participated in the discussion of CAIS when it came out.)