that we must stand true to the Code and allow those billionaires to retain the wealth if we want to remain a Just and Wealthy society.
I think this right here is the crux of it. I doubt anyone who supports a wealth tax believes we live in a just society, and I expect them all to believe that the behavior of billionaires actively suppresses our wealth in the sense that you mean it. It looks, to them, like our current crop of billionaires are not honorably wealthy men, but are instead trying very hard to become the new feudal lords themselves.
Protecting the Code is incredibly important NOT because it serves some billionaires, but because it serves every single person in our society that lives above the level of a 16th century peasant.
I think there's a shorter path to this conclusion, one I bet the people supporting the wealth tax will find more understandable: if we pass a law to take billionaires' stuff, they'll use the same law to take our stuff.
Excellent post, strong upvote. You've done a great job articulating what I had felt as basically just a twisting of the guts whenever I read economic analyses of the idea of AGI. Tackling the problem head-on:
Model AI as a firm directly: I believe AI straightforwardly breaks the usefulness of the capital-labor distinction. The central crux for me is the extent to which the AI could perform the knowledge work of corporate management. I claim it doesn't matter for economic purposes that the source of the decisions is an abstract machine the company owns (or rents); what matters is the level at which the decisions are made. If the AI makes the management decisions for the firm, then to the rest of the economy the two cases are indistinguishable.
For modelling AI-as-a-firm:
Information Asymmetry: I predict the AI will have an information advantage over most other actors in the economy and, once we cross the AGI threshold, eventually over all non-AI actors in the economy. This might be a reasonable economics-view definition of AGI: the threshold at which it achieves local information asymmetry on all transactions.
Transaction Costs: I expect transaction costs to be systematically lower for AI firms because of the time cost of decisions. A concrete analogy is high-frequency trading, where a fast trading algorithm can see a new buy order come in, purchase the better-priced orders already on the market, and turn around to sell to the original order.
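To make that analogy concrete, here is a toy numerical sketch of the front-running story, in Python. All prices and quantities are invented; this is an illustration of the speed advantage, not a claim about real market microstructure.

```python
# Toy illustration of the front-running story above: a fast trader sees an
# incoming buy order, buys the cheaper resting sell orders first, and resells
# to the original buyer at the worse price. All numbers are made up.

# Resting sell orders on the book as (price, quantity).
ask_book = [(100.0, 50), (100.5, 50), (101.0, 100)]
buy_quantity = 100  # the slower buyer wants 100 shares

def cost_to_buy(book, quantity):
    """Cost of sweeping the book from the best price until quantity is filled."""
    cost, remaining = 0.0, quantity
    for price, size in book:
        take = min(size, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    return cost

# Without the fast trader: the buyer fills at the 100.0 and 100.5 levels.
cost_without = cost_to_buy(ask_book, buy_quantity)

# With the fast trader: it takes those same cheap levels first, so the only
# liquidity left for the original buyer is the 101.0 level.
fast_trader_cost = cost_to_buy(ask_book, buy_quantity)
cost_with = cost_to_buy([(101.0, 100)], buy_quantity)
fast_trader_profit = cost_with - fast_trader_cost

print(f"Buyer's cost without front-running: {cost_without:.2f}")      # 10025.00
print(f"Buyer's cost with front-running:    {cost_with:.2f}")         # 10100.00
print(f"Fast trader's profit from pure speed: {fast_trader_profit:.2f}")  # 75.00
```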
Some second-order items I would like to see:
Principal-Agent problems: This is how economics tackles alignment problems. Currently we model OpenAI/Anthropic as owning ChatGPT/Claude respectively, under capital; if the AIs were instead modeled as independent firms and viewed as subcontractors (albeit with contracts strongly favoring OpenAI/Anthropic), and we apply the information asymmetry and transaction cost modifications above, what does a principal-agent model predict? (A toy sketch of the baseline setup follows this list.)
EMH-breaking threshold: My intuition is that the information asymmetry and transaction cost advantages are mutually reinforcing, but the idea I think is more important is that completing a transaction provides much more detailed information than a price signal does. A systematic advantage in completing transactions means a systematic accumulation of higher-dimensional information than prices carry; because the EMH works on price signals, I expect it will be defeated if it is possible to aggregate higher-dimensional signals than price.
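To gesture at what that exercise looks like in practice, here is a minimal hidden-action sketch in Python. All the numbers are invented, and the "lab as principal, AI subcontractor as agent" framing is just the setup from the principal-agent item above: the principal cannot observe the agent's effort, only a noisy outcome, so the contract can only condition on the outcome.

```python
# Toy principal-agent (moral hazard) sketch with invented numbers: the
# principal pays based on an observed outcome; the agent privately chooses
# effort, which changes the outcome probabilities but costs the agent.

GOOD, BAD = 100.0, 20.0  # value of each outcome to the principal

p_good = {"high": 0.9, "low": 0.5}       # P(good outcome | effort)
effort_cost = {"high": 10.0, "low": 0.0}  # private cost of effort to the agent

def agent_payoff(contract, effort):
    """Expected wage minus private effort cost; contract = (wage_good, wage_bad)."""
    wage_good, wage_bad = contract
    p = p_good[effort]
    return p * wage_good + (1 - p) * wage_bad - effort_cost[effort]

def principal_payoff(contract, effort):
    """Expected outcome value minus the expected wage bill."""
    wage_good, wage_bad = contract
    p = p_good[effort]
    return p * (GOOD - wage_good) + (1 - p) * (BAD - wage_bad)

def best_response(contract):
    """Effort the agent actually picks, since effort is unobservable."""
    return max(("high", "low"), key=lambda e: agent_payoff(contract, e))

flat_wage = (30.0, 30.0)    # pay 30 regardless of outcome
outcome_pay = (40.0, 5.0)   # pay more when the good outcome is observed

for name, contract in [("flat wage", flat_wage), ("outcome-contingent", outcome_pay)]:
    effort = best_response(contract)
    print(f"{name:>18}: agent chooses {effort} effort, "
          f"principal expects {principal_payoff(contract, effort):.1f}")
```

With these toy numbers, the flat wage induces low effort and the outcome-contingent contract induces high effort, which is just the textbook moral-hazard result; the open question is how the predictions change once the information-asymmetry and transaction-cost modifications above are layered on.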
It feels to me like Sutton is too deep inside the experiential learning theory. When he says there is no evidence for imitation, this only makes sense if you interpret it strictly according to the RL theory he has in mind. He isn't applying the theory to anything; he is inside the theory and interpreting everything according to his understanding of it.
It did feel like there was a lot of talking past one another: Dwarkesh was clearly using the superintelligent behaviors everyone is interested in (doing science, math, and engineering) as his model for intelligence, while Sutton was blowing all of this off, only to articulate quite late in the game that human infants are his model for intelligence. If this had been cleared up early, the conversation would probably have been more productive.
I have always found the concept of a p-zombie kind of silly, but now I feel like we might really have to investigate the question of an approximate i-zombie: if we have a computer that can output anything an intelligent human can, but we stipulate that the computer is not intelligent... and so on and so forth.
On the flip side, it feels kind of like a waste of time. Who would be persuaded by such a thing?
Good job saying a brave thing; on a US-based site with a plurality US membership, this was a risk. Well done.
Out of curiosity, how often do conversations about 9/11 come up? For the most part, we don't discuss it that much among ourselves except around the anniversary, although I note that the anniversary was just a couple of weeks ago, and indeed the traditional observance is literally just to talk about where we were and what we were doing at the time, which is precisely when the observation about cheering would come up.
It may or may not surprise you that while there were basically no rooms cheering in the US, there was a substantial minority that celebrated after the fact. Mostly these were people who hated finance and globalization (which the twin towers symbolized), or hated something about foreign policy (imperialism, colonialism, etc.), or who read the attacks as some form of divine punishment (for tolerating gay people or interracial marriage or what-have-you).
So, thank you for saying your piece. I appreciate the honesty.
Strong upvote, I appreciate the inside-view context that you have from publishing a similar book. I bought it as a result of this review.
I cannot, alas, promise a side-by-side review. However, there are a couple of questions I am primed to look for, foremost among them right now: how much detail is invested in identifying the target audience? The impression I am getting so far is that the target audience has been approximately defined as "not us," but a lot of the complaints seem to turn on this question. I see a lot of discussion about laymen, but that's an information level, not a target audience. Come to think of it, I don't know that I have seen much discussion of target audiences at all outside of the AI policy area.
The important information I should take from a strong trend is an axis, or a dimension, rather than the default takeaway of a direction.
I have the colloquial habit of talking about a trend as a direction, leaning on the implicit metaphor of physical space for whatever domain in which the trend appears. I've only just today started to realize that, while I am pretty sure the physical space (or, probably, geography and maps) metaphor is why I speak that way, there's no reason not to lean into it as an explicit description within the abstract space for the domain. By this I mean that whatever it is we are talking about (the domain) has several important parts to it (dimensions), and taken together these form the space of the domain.
Returning to the direction v. dimension takeaway, this basically means that the important thing is the dimension (or dimensions) along which the trend moves, so it is worth looking at the opposite direction of the trend as well.
This is basically the same as the idea of taking good advice and reversing it, just applied to changes in the world instead.
Boiling this down for myself a bit, I want to frame this as a legibility problem: we can see our own limitations, but outsiders' successes are much more visible than their limitations.
I'm inclined to look at the blunt limitations of bandwidth on this one. The first hurdle is that p(doom) can pass through tweets and shouted conversations at Bay Area house parties.
I also think he objects to putting numbers on things, and I avoid doing it myself. A concrete example: I explicitly avoid putting numbers on things in LessWrong posts. The reason is straightforward: if a number appears anywhere in the post, about half of the conversation in the comments will be about that number, to the exclusion of the point of the post (or the lack of one, etc.). So unless numbers are indeed the thing you want to be talking about, in the sense of detailed results of specific computations, they actively distract the audience from the rest of the post.
I focused on the communication aspect in my response, but I should probably also say that I don't really track what the number is when I actually go to the trouble of computing a prior, personally. The point of generating the number is to clarify the qualitative information, and the point remains the qualitative information after I have the number; I only really start paying attention to what the number is if it stays consistent enough across repetitions of the generate-a-number move that I recognize it as basically the same as the last few times. Even then, I am spending most of my effort on the qualitative level directly.
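For concreteness, here is the sort of thing I mean by the generate-a-number move, as a made-up Python sketch: rough odds, a rough likelihood ratio, one multiplication. The specific inputs are invented; the point is the qualitative takeaway, not the exact output.

```python
# A made-up sketch of the generate-a-number move: the inputs are rough guesses
# and the output only matters if it keeps landing in the same region.
prior_odds = 1 / 4        # rough guess: 1:4 against the hypothesis
likelihood_ratio = 3.0    # the evidence seems ~3x likelier if the hypothesis is true

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

# The value itself (~0.43 here) matters less than noticing it landed in the
# same "roughly even odds" region as the last few times I did this exercise.
print(f"posterior probability ~ {posterior_prob:.2f}")
```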
I make an analogy to computer programs: the sheer fact of successfully producing an output without errors weighs much more than whatever the value of the output is. The program remains our central concern, and continuing to improve it using known patterns and good practices for writing code is usually the most effective method. Taking the programming analogy one layer further, there's a significant chunk of time where you can be extremely confident the output is meaningless: suppose you haven't even finished implementing what you already know to be the minimum requirements, but you compile the program anyway, just to test for errors so far. There's no point in running the program all the way to an output, because you know it would be meaningless. In the programming analogy, a focus on the value of the output is a kind of "premature optimization is the root of all evil" problem.
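A minimal sketch of the "compile it, just don't run it" move, in Python; the file name is hypothetical, and the standard-library `py_compile` module stands in for whatever build step your language uses.

```python
# Syntax-check a half-finished module without executing it: we learn whether
# it builds so far, while deliberately ignoring what it would output.
import py_compile

try:
    # Parses and byte-compiles the file but never runs it.
    py_compile.compile("half_finished_module.py", doraise=True)
    print("Builds cleanly so far; the output would still be meaningless.")
except py_compile.PyCompileError as err:
    print(f"Error caught before wasting a run: {err.msg}")
```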
I do think this probably reflects the fact that Eliezer's time is mostly spent on poorly understood problems like AI, rather than on stable, well-understood domains where working with numbers is a much more reasonable prospect. But even in the case where I am trying to learn something that is well understood, just not by me, trying for a number feels like the opposite of hugging the query, somehow. Or in virtue language: how does the number cut the enemy?
I agree on all three counts, but what I am talking about is the rhetorical strategy for communicating the belief that it is very important in the long term to let people keep their stuff, to people who are proposing to take some people's stuff right now.
I don't think bad long-term consequences would be hard to communicate in this instance (though it would not be easy in the middle of a protest). For example, I expect almost everyone who supports a wealth tax to also oppose the idea of corporate personhood; but the corporate-personhood rulings started showing up in the 1800s, and the Citizens United decision showed up in 2010.
There's a bit of a line to walk so as not to misrepresent what OP believes, but I feel like establishing a link to the longer-term bad effects that billionaire-tax supporters already understand might be as simple as saying "Citizens United was in 2010 and..." *gestures at things in general*