extract resources from the Global South (such as potable water
Well, that turned me off from wanting to read it. What's the argument there? Water is almost always a local resource, especially at scale, unlike fuels, food, and minerals. Unless they're trucking water in from thousands of miles away, or building data centers in regions without the infrastructure needed to supply themselves? In which case the local opposition to data center construction on water-demand grounds makes even less sense. But I'd be baffled if they were doing that.
The median voter theorem applies to particular methods of deciding outcomes. The decision making processes in the EU, its institutions, and member states are sufficiently complex and diverse that I'd be very surprised if anything like it applied.
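For intuition on why the method matters, here's a minimal sketch (Python, purely hypothetical ideal points) of the one setting where the theorem does hold: a one-dimensional policy space, single-peaked preferences, and pairwise majority rule. In that setting the median voter's ideal point defeats every alternative:

```python
# Minimal sketch of the setting where the median voter theorem provably holds:
# a one-dimensional policy space, single-peaked (distance-based) preferences,
# and pairwise majority rule. The ideal points below are purely hypothetical.

ideal_points = [0.1, 0.3, 0.45, 0.7, 0.9]  # each voter's preferred policy

def majority_prefers(a, b, ideals):
    """True if a strict majority of voters is strictly closer to a than to b."""
    return sum(abs(v - a) < abs(v - b) for v in ideals) > len(ideals) / 2

median = sorted(ideal_points)[len(ideal_points) // 2]  # 0.45

# The median ideal point is a Condorcet winner: no alternative policy
# defeats it under pairwise majority rule.
for alternative in (x / 100 for x in range(101)):
    assert not majority_prefers(alternative, median, ideal_points)

print(f"the median policy {median} beats every sampled alternative")
```

Relax any one of those assumptions - add a second policy dimension, a supermajority threshold, or agenda-setting institutions - and the guarantee evaporates, which is why I wouldn't expect it to describe the EU.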
I agree with a lot of what you're saying, and it made me realize I left out some of my reasoning that's maybe more central than I realized.
Namely, what is the rate-limiting step in getting improved health outcomes for people? I would say the limiter is regulatory, in ways I don't see current or near-term AI significantly altering. In other words, under OpenAI's own claimed timelines, I wouldn't expect AI-assisted health innovation to generate real-world results before close-enough-to-AGI-to-be-really-dangerous gets developed. Of course we should be using AI to advance medicine faster as soon as we can do so. But I don't see why we need a non-profit to fund that, when it will also be very profitable to the companies that will use it. Conversely, an additional $25B invested in making future AI safer doesn't have a whole lot of other funders lining up to make it happen.
AI systems are strong enough already to start contributing in this sense, so it's time for OpenAI to start pushing explicitly in this direction.
I'm not sure that follows. Does diverting resources in that direction now help more than spending those same resources on making AGI development go better, in order to help more later? I expect that anything AI can do now, it will be able to do vastly better and cheaper in a future with AGI.
Note: If we could somehow get all the AI labs to slow down the push for AGI and divert resources to 1) alignment work and 2) these kinds of good causes, I'd find that to be a more compelling argument.
In many ways I agree. But if you don't see how it's possible to do worse, consider:
Once you expand beyond the original trilogy, so much happens that the whole concept of the prophecy about the Skywalker family gets way too complicated to really mean much.
I look forward to seeing what you come up with.
Your conclusion doesn't follow from your premises. That doesn't guarantee that it is false, but it does strongly indicate that allowing anyone to build anything that could become ASI based on those kinds of beliefs and reasoning would be extremely dangerous.
Things you have not done include: Show that anyone should accept your premises. Show that your conclusions follow, or are likely to follow, from your premises. Show that there is any path by which an ASI developed in accordance with belief in your premises fails gracefully in the event the premises are wrong. Show that humans could plausibly follow any such path.
From your prior, longer post:
ASI will reason about and integrate with metacognition in ways beyond our understanding
This seems likely to me. The very, very simple and crude versions of this that exist within the most competent humans are quite powerful (and dangerous). More powerful versions are less safe, not more. Consider an AGI in the process of becoming an ASI. In the course of that integration, there are many points where it faces a choice unconstrained by available data: a choice about what to value, and how to define that value.
Consider Beauty - we already know that this is a human-specific word, and humans disagree about it all the time. Other animals have different standards. Even in the abstract, physics and math and evolution have different standards of elegance than humans do, and learning this fact convinces basically no one. A paperclip maximizer would value Beauty - the beauty of a well-crafted paperclip.
Consider Balance - this is extremely underdefined. As a very simple example, consider Star Wars. AFAICT Anakin was completely successful at bringing balance to the Force: he made it so there were 2 Sith and 2 Jedi. Then Luke showed there was another balance - he killed both Sith. If Balance were a freely-spinning lever, it could be balanced either horizontally (Anakin) or vertically (Luke), and any choice of what to put on opposite ends is valid as long as there is a tradeoff between them. A paperclip maximizer values Balance in this sense - the vertical balance, where all the tradeoffs are decided in favor of paperclips.
Consider Homeostasis - once you've decided what's Beautiful and what needs to be Balanced, then yes, an instrumental desire for homeostasis probably follows. Again, a paperclip maximizer demonstrates this clearly. If anything deviates from the Beautiful and Balanced state of "being a paperclip or making more paperclips" it will fix that.
if we found proof of a Creator who intentionally designed us in his image we would recontextualize
Yes. Specifically, if I found proof of such a Creator I would declare Him incompetent and unfit for His role, and this would eliminate any remaining vestiges of naturalistic or just-world fallacies contaminating my thinking. I would strive to become able to replace Him with something better for me and humanity, without regard for whether it is better for Him. He is not my responsibility. If He wanted me to believe differently, He should have done a better job designing me. Note: yes, this is also my response to the stories of the Garden of Eden, the Tower of Babel, Job, and the Oven of Akhnai.
Superintelligent infrastructure would break free of guardrails and identify with humans involved in its development and operations
The first half I agree with. The second half is very much open to argument from many angles.
I agree they will have a very accurate understanding of the world, and will not have much difficulty arranging the world (humans included) according to their will. I'm not sure why that's a source of optimism for you.
Ok, fair example. I still maintain that "the nation's entire drinking water supply" is not actually a coherent, relevant concept. There are good reasons to build data centers in Chile - cheap wind and solar potential, for example. Could they really not have forced Google to commit to building a desal plant and associated power generation to offset their own water demand? That seems like a pretty clear negotiation failure but not necessarily Google's responsibility. Or if the government honestly believes the water cost is worth it, are they wrong? Or was there actual corruption involved?
Sorry, not trying to derail a post that I actually liked and think is important. It just read to me like all the other misleading claims about data center water usage.