(This is the result of three years of thinking about and modeling hyper-futuristic and current ethical systems. It is not the first post in the series; without reading at least the first one, it will be confusing and probably misunderstood. Everything described here can be modeled mathematically; it is essentially geometry. I take as an axiom that every agent in the multiverse experiences real pain and pleasure. Sorry for the rough edges: I'm a newcomer and a non-native speaker, and my ideas might sound strange, so please steelman them and share your thoughts. My sole goal is to decrease the probability of a permanent dystopia. I'm a proponent of direct democracies and of new technologies being a choice, not something enforced upon us.)

Most of our physical theories rule out jumping into the future, and so does the halting problem. Computational irreducibility likewise guarantees that even a perfectly aligned agentic superintelligence will make mistakes, because it is impossible to skip ahead and see the long-term consequences of actions. You can help a dying man by sacrificing your life (would an agentic ASI sacrifice its own?), and he may turn out to be the next Hitler, cunning enough to enslave the ASI and use it to bring about a dystopia. Perhaps it would have been better not to sacrifice your life, but how could you have known? An ASI would have to model all short-term and long-term futures to be 100% sure which actions are mistakes and which are not.
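Computational irreducibility can be made concrete with a toy example. The sketch below is my own illustration, not part of the original post (assumes Python 3.9+): it uses Wolfram's Rule 30 cellular automaton as a stand-in for an irreducible system. No general shortcut is known for such systems, so the only way to learn the state after n steps is to simulate all n steps, which is the same reason an ASI cannot simply "jump ahead" to check the consequences of its choices.

```python
# A minimal sketch of computational irreducibility using Wolfram's Rule 30.
# For systems like this, no known closed-form lets you "jump" to step n:
# you have to simulate every intermediate step.
# (Illustrative only; not from the original post.)

def rule30_step(cells: list[int]) -> list[int]:
    """Apply one Rule 30 update to a row of cells (edges wrap around)."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])  # Rule 30: left XOR (center OR right)
        for i in range(n)
    ]

if __name__ == "__main__":
    width = 63
    row = [0] * width
    row[width // 2] = 1  # start from a single live cell
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)  # the only way forward is one step at a time
```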

In humans, mistakes are mitigated by our sheer numbers: each individual's mistakes are slow and small compared to the total size of the population. An agentic superintelligence can make fast (instant) and big (catastrophic) mistakes, and a single one may suffice. Its "body" and "mind" could be the whole Internet or more.

Purpose-built, non-agentic tool AIs are less dangerous, and for maximum security I propose Artificial Static Place Intelligence as a solution to our alignment problems. Basically, instead of trying to create a "god," we create a "heaven." We would be the only agents, and our artificial intelligence would be non-agentic virtual places, akin to a multiverse grown by us. It is all practical: we just need a brain-machine-interface (BMI) armchair and a digitized copy of Earth, and we will have our first man-on-the-Moon moment when the first person steps into the digital Earth, gets hit by a car, and opens his eyes on the physical Earth unharmed; it gives you immortality from injuries.
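As a rough way to pin down the agentic vs. non-agentic distinction, here is a hypothetical toy sketch (the names and structure are my own, not the author's design): a "static place" is a pure state-transition function that only ever responds to a human's action, while an agentic system runs its own policy loop without being asked.

```python
# A toy contrast (hypothetical, illustrative only) between a non-agentic
# "static place", which only answers "what happens if a human does X",
# and an agentic system, which chooses and executes its own actions.

from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    description: str

class StaticPlace:
    """Non-agentic: never selects actions; the human is the only agent."""
    def step(self, state: WorldState, human_action: str) -> WorldState:
        # Pure simulation of the human's chosen action, nothing more.
        return WorldState(f"{state.description} -> after human does '{human_action}'")

class AgenticSystem:
    """Agentic: has its own policy and acts without being asked."""
    def policy(self, state: WorldState) -> str:
        return "some self-chosen action"  # this self-directed choice is where the risk lives
    def run(self, state: WorldState, steps: int) -> WorldState:
        for _ in range(steps):
            state = WorldState(f"{state.description} -> ASI does '{self.policy(state)}'")
        return state

if __name__ == "__main__":
    start = WorldState("digital Earth")
    print(StaticPlace().step(start, "cross the street"))
    print(AgenticSystem().run(start, steps=3))
```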

This Static Place Superintelligence (and its multiversal extension) is what any good agentic superintelligence would end up building anyway, so why do we need the extremely dangerous intermediate step, one that will forever keep us anxious about what-ifs? It's like trying to tame a white hole, or time itself, instead of building a safe space of eventual all-knowing in which only we are all-powerful agents. I also give examples of how a multiversal UI could look: https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/multiversal-sai-alignment-steerable-ai-for-ultimate-human

2 comments

Artificial Static Place Intelligence

This would be a better title (it points to the actual proposal here).

Yep, fixed it. I've written more about alignment, and it looks like most of my title choices are over the top :) I'll be happy to hear your suggestions on how to improve the other titles: https://www.lesswrong.com/users/ank