The correct amount of time and effort to devote to the meta-level is not 100% (then you never do anything useful) and not 0% (then you never learn how to do anything well). Somewhere in the middle is the optimal amount, and that amount will differ between people for all sorts of reasons. What do you think the optimal amount is for you, and why? Answering that would essentially remove the problem this piece talks about from the piece itself, by tying the thinking back to a real-world problem you're trying to solve.
I read that line as Zvi talking about privately owned self-driving cars, not just robotaxis. Otherwise yeah it's very similar.
Edit to add that the first Kelsey Piper quote is about feeling bad about waiting three weeks to go public with an unpopular judgment call. Meanwhile, I think we can all comfortably point to false things the mainstream authorities stuck with for years, and some they still haven't acknowledged.
I have some very mixed feelings about this post. On the one hand, I get exactly where you're coming from. On the other, I think there are genuinely important second-order effects to consider.
Basically: if a public intellectual consistently tries to tell people their true-but-difficult-or-unpopular opinions, one (IMO likely) outcome is that over time, they lose (much of) their audience, and cease to be a public intellectual. But if they never tell the truth about their opinions, then their status as a public intellectual isn't doing anyone any good.
The other side of this from my POV is that successfully becoming a public intellectual involves building habits of thought and communication that can make it difficult to notice when the moment comes to really put your cards on the table and tell the honest but hard truth. I don't follow Ball closely enough to judge, but Kelsey Piper and Will MacAskill have, in my opinion, done amazingly well on this front overall.
I think the SSC post on Kolmogorov Complicity, and the Scott Aaronson post it builds on, capture versions of a similar problem: putting yourself in a position to help when the critical moment comes relies on otherwise going along with a sometimes unfortunate epistemic context.
Evolution Does Not Have Goals
True and important, but if anything I think the importance in this particular community is often overstated rather than unappreciated. I suspect the analogy itself is downstream of a flaw in human languages, which are very agent-centric in their grammatical assumptions. They didn't evolve to describe impersonal forces like evolution, and trying to do so without such analogies is often very cumbersome in ways that obfuscate the reality more than they enlighten.
Evolution Does Not Produce Individual Brains
A lot of good points in this section as well. To the "Who cares?" question, the answer is, "We do, until and unless we know how to use other methods that sufficiently reliably encode the goals we (should) care about into the AIs we create."
As a minimal answer to that question, in the world where it turns out we have nothing more valuable to do with all that energy and matter: suppose step one were "Store all excess emitted energy indefinitely," and step two were "Engage in some form of stellar engineering to extend the sun's lifetime and slow its rate of emission to a stream of usable scale." Step three (also plausibly started in parallel) would be having autonomous systems do the same for the rest of the reachable stars and galaxies in the universe, and then just waiting until we need or want it. No need for descendants unless you want them - feel free to extend your own life arbitrarily far into the future, biologically or digitally or otherwise.
And yes, it might turn out that many or most of those stars and galaxies are already controlled by other civilizations, and therefore not available for our use. If so, then so be it. I hope we're sane enough to leave them alone or become friendly in that case; otherwise there are lots of opportunities to waste resources fighting them and/or one another.
I agree with most of the arguments and most of the vision in this post, but I still think the fundamental problem we face is that no one, today, knows how to build a(n AI) system that reliably values any particular chosen thing. We're getting better, especially with regard to moderately powerful current and near-future systems that are meaningfully constrained by the power of other people and systems. But as I understand it, this is still a deep, unsolved problem. In other words, when you say:
Historically, you can trace the ebb and flow of the plight of the average person by how decentralizing or centralizing the technology most essential for national power is, and how much that technology creates mutual dependencies that make it hard for the elite to defect against the masses.
I think this dynamic runs much deeper than the impression I got from this post.
Often, this then leads to calls for centralization, from Oppenheimer advocating for world government in response to the atomic bomb, to Nick Bostrom’s proposal that comprehensive surveillance and totalitarian world government might be required to prevent existential risk from destructive future technologies.
Was Oppenheimer wrong? AFAICT we did, in fact, build a (fairly competent, by human standards) limited form of world government for the specific goal of constraining access to nuclear weapons. The US and USSR seized overwhelming power with regard to nukes just about as soon as they were able to do so, and then conspired to prevent anyone else from acquiring large amounts of that same power. In the process they altered (and slowed, and in some ways crippled) the potential for nuclear technology to solve civilian problems, most notably in energy. They did so for the preservation of themselves and the world, so yay, but they did do it. Along the way they also had to waste a lot of resources that could in principle have been used to do much more valuable things, had they felt safe to do so.
Thanks! Letting us play with the assumptions is a great way to develop an intuitive sensitivity analysis.
As you note, opinions differ widely, on many axes, and while I would also like to see more people's viewpoints and advice made explicit, there is really no path you can actually be confident in. In that kind of scenario, there are IMO three factors to consider.
First, which predictions resonate with you, and best withstand scrutiny from you?
Second, which paths fail most gracefully? In the event you pick wrong (in a scenario where there was a right thing to pick), what leaves you in an acceptable position anyway?
Third, by what criteria do you wish for your actions to be judged, and which paths best align with that?
I drive a Sierra 2500, which has a turning circle of ~53'. It really does change how (and where) you have to drive.
In any case I agree something like this should exist.