Previously "Lanrian" on here. Research analyst at Open Philanthropy. Views are my own.
FWIW, that's not the impression I get from the post / I would bet that Ege doesn't "bite the bullet" on those claims. (If I'm understanding the claims right, it seems like it'd be super crazy to bite the bullet? If you don't think human speed impacts the rate of technological progress, then what does? Literal calendar time? What would be the mechanism for that?)
The post does refer to how much compute AIs need to match human workers, in several places. If AIs were way smarter or faster, I think that would translate into better compute efficiency. So the impression I get from the post is just that Ege doesn't expect AIs to be much smarter or faster than humans at the time when they first automate remote work. (And the post doesn't talk much about what happens afterwards.)
Example claims from the post:
My expectation is that these systems will initially either be on par with or worse than the human brain at turning compute into economic value at scale, and I also don’t expect them to be much faster than humans at performing most relevant work tasks.
...
Given that AI models still remain less sample efficient than humans, these two points lead me to believe that for AI models to automate all remote work, they will initially need at least as much inference compute as the humans who currently do these remote work tasks are using.
...
These are certainly reasons to expect AI workers to become more productive than humans per FLOP spent in the long run, perhaps after most of the economy has already been automated. However, in the short run the picture looks quite different: while these advantages already exist today, they are not resulting in AI systems being far more productive than humans on a revenue generated per FLOP spent basis.
SB1047 was mentioned separately, so I assumed it was something else. Might be the other ones; thanks for the links.
lobbied against mandatory RSPs
What is this referring to?
Thanks. It still seems to me like the problem recurs. The application of Occam's razor to questions like "will the Sun rise tomorrow?" seems more solid than e.g. random intuitions I have about how to weigh up various considerations. But the latter do still seem like a very weak version of the former. (E.g. both do rely on my intuitions; and in both cases, the domain has something in common with cases where my intuitions have worked well before, and something not-in-common.) And so it's unclear to me what non-arbitrary standards I can use to decide whether I should let both, neither, or just the latter be "outweighed by a principle of suspending judgment".
To be clear: The "domain" thing was just meant to be a vague gesture at the sort of thing you might want to do. (I was trying to include my impression of what e.g. bracketed choice is trying to do.) I definitely agree that the gesture was vague enough to also include some options that I'd think are unreasonable.
Also, my sense is that many people are making decisions based on similar intuitions as the ones you have (albeit with much less of a formal argument for how this can be represented or why it's reasonable). In particular, my impression is that people who are uncompelled by longtermism (despite being compelled by some type of scope-sensitive consequentialism) are often driven by an aversion to very non-robust EV-estimates.
If I were to write the case for this in my own words, it might be something like:
I like this formulation because it seems pretty arbitrary to me where you draw the boundary between a credence that you include in your representor vs. not. (Like: What degree of justification is enough? We'll always have the problem of induction to provide some degree of arbitrariness.) But if we put this squarely in the domain of ethics, I'm less fussed about this, because I'm already sympathetic to being pretty anti-realist about ethics, and there being some degree of arbitrariness in choosing what you care about. (And I certainly feel some intuitive aversion to making choices based on very non-robust credences, and it feels interesting to interpret that as an ~ethical intuition.)
Just to confirm, this means that the thing I put in quotes would probably end up being dynamically inconsistent? In order to avoid that, I need to put in an additional step of also ruling out plans that would be dominated from some constant prior perspective? (It’s a good point that these won’t be dominated from my current perspective.)
Compute would also be reduced within a couple of years, though, as workers at TSMC, NVIDIA, ASML and their suppliers all became much slower and less effective. (Ege does in fact think that explosive growth is likely once AIs are broadly automating human work! So he does think that more, smarter, faster labor can eventually speed up tech progress; and presumably would also expect slower humans to slow down tech progress.)
So I think the counterfactual you want to consider is one where only the people doing AI R&D are slowed down & made dumber. That gets at the disagreement about the importance of AI R&D, specifically, and how much labor vs. compute is contributing there.
For that question, I'm less confident about what Ege and the other Mechanize people would think.
(They might say something like: "We're only asserting that labor and compute are complementary. That means it's totally possible that slowing down humans would slow progress a lot, but that speeding up humans wouldn't increase the speed by a lot." But that just raises the question of why we should think our current labor<>compute ratio is so close to the edge of where further labor speed-ups stop helping. Maybe the answer there is that they think parallel work is really good, so in the world where people were 50x slower, the AI companies would just hire 100x more people and not be too much worse off. Though I think that would massively blow up their spending on labor relative to capital, and so it'd be a weird coincidence that their current spending on labor and capital is so close to 50/50.)
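A minimal back-of-the-envelope sketch of that last arithmetic point, assuming (hypothetically, the numbers aren't from the post or from Mechanize) that labor and compute spending start out roughly equal and that a company compensates for a 50x human slowdown by hiring 100x more people:

```python
# Illustrative arithmetic for the "weird coincidence" point above.
# All numbers are hypothetical assumptions, not figures from the post.

labor_spend = 1.0    # current annual spending on researcher labor (arbitrary units)
compute_spend = 1.0  # current annual spending on compute, assumed roughly equal to labor

headcount_multiplier = 100  # hire 100x more (now 50x-slower) people to keep output roughly flat

new_labor_spend = labor_spend * headcount_multiplier
new_compute_spend = compute_spend  # compute budget unchanged in this scenario

before = labor_spend / (labor_spend + compute_spend)
after = new_labor_spend / (new_labor_spend + new_compute_spend)

print(f"labor share of total spending before slowdown: {before:.0%}")  # 50%
print(f"labor share of total spending after slowdown:  {after:.0%}")   # ~99%
```

Under those assumptions the labor share of spending jumps from ~50% to ~99%, which is what makes it look like a coincidence that the current split happens to sit near 50/50 if labor could be scaled up that freely.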
Re your response to "Ege doesn't expect AIs to be much smarter or faster than humans": I'm mostly sympathetic. I see various places where I could speculate about what Ege's objections might be. But I'm not sure how productive it is for me to try to speculate about his exact views when I don't really buy them myself. I guess I just think that the argument you presented in this comment is somewhat complex, and I'd predict higher probability that people object to (or haven't thought about) some part of this argument than that they bite the crazy "universal human slow-down wouldn't matter" bullet.