(If you want to downvote, please comment or DM me to say why; I always appreciate feedback. My sole goal is to decrease the probability of a permanent dystopia. I've edited this first post of mine to make it obvious that I don't propose building any dystopias; I'm actually trying to avoid building any. This is one post in a series about building a rational utopia, the result of three years of thinking about and modeling hyper-futuristic and current ethical systems. Sorry for the rough edges: I'm a newcomer and a non-native speaker, and my ideas might sound strange, so please steelman them and share your thoughts. I don't want to create dystopias; I personally wouldn't want to live in one, which is why I write. I'm a proponent of direct democracies and of new technologies being a choice rather than something enforced upon us.)

I consider thinking about the extremely long-term future important if we don't want to end up in a dystopia. The question is: if we eventually have almost infinite compute, what should we build?

I think it's hard or impossible to answer this 100% correctly, so it's better to build something that gives us as many choices as possible (including the choice to undo), something close to a multiverse (possibly virtual), because this is the only way to make sure we don't permanently paint ourselves into the corner of some dystopia.

A mechanism for instant, cost-free switching between universes/observers in a future human-made multiverse is essential for getting as close as possible to a perfect utopia. It would allow us to debug the future.

With instant switching, we won't need to be afraid to build infinitely many worlds in search of a utopia, because if some world we're building starts to look like a dystopia, we can instantly switch away from it. It's more realistic to build many worlds and find a good one than to try to reach a good one on the single first attempt and risk ending up in a single permanent dystopia.

The first post, long and perhaps overwhelming: https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-utopia-multiversal-ai-alignment-steerable-asi

The second long post, easier to understand: https://www.lesswrong.com/posts/Ymh2dffBZs5CJhedF/eheaven-1st-egod-2nd-multiversal-ai-alignment-and-rational

ank

4 comments

Building every possible universe seems like a very direct way of purposefully creating one of the biggest possible S-risks. There are almost certainly vastly more dystopias of unimaginable suffering than there are of anything like a utopia.

So to me this seems like not just "a bad idea" but actively evil.

[-]ank

I wrote a response; I'd be happy if you checked it out before I publish it as a separate post. Thank you! https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-utopia-and-multiversal-ai-alignment-steerable-asi

[-]ank

Fair enough, my writing was confusing; sorry. I didn't mean that we should purposefully create dystopias. I just think it's highly likely they will be created unintentionally, and the best solution is an instant switching mechanism between observers/verses plus an AI that genuinely likes to be changed. I'll edit the post to make it obvious that I don't want anyone to create dystopias.

[-]ank

Any criticism is welcome. It's my first post, and I'll post next on the implications for current and future AI systems. There are some obvious implications for political systems, too. Thank you for reading.