by [anonymous]

I seem to have a fairly unusual utopia idea (I've never seen anyone describe a similar utopia) which removes the need for fun theory, and with Alicorn's recent post about her uneasiness with Eliezer's 31 Laws of Fun, I think it's a good time to bring it up in the discussion.

This utopia is very simple; it has only four real laws, as follows (they would probably need to be phrased more carefully to prevent loopholes).

Thou shalt not physically harm others without their permission

Thou shalt not damage others' property without their permission

Thou shalt not restrict others' freedom

Thou shalt not plot to break the law with the intention that the plot will happen without their permission

Basically: don't harm others, don't damage their property, don't lock someone up in your basement, and don't hire someone else to do any of this for you. You may harm others with their permission (though most people won't give you permission). The reason for allowing harm with permission is to restrict freedom as little as possible (you wouldn't want to make boxing illegal, for example, and I personally believe suicide is an option people should have if ever they really want it). This seems unmaintainable, and it sounds like it would lead to chaos, but with good enough AI I expect it wouldn't be.

To maintain the law, AI police would be set up all over the place to catch people who break these few laws, but they wouldn't interfere unless someone breaks the law, is about to do so, or asks for their help.

Artificially intelligent overlords would deal with the entire, relatively simple economy. It's basically: ask the AI for something, and they will give it to you if it doesn't require too many resources relative to supply and demand, and if what you're asking for doesn't break the 4 laws. You could of course pool with a large group of others to get extra resources if the resources of an army of robots whose sole purpose is to provide you with what you're asking for aren't enough for you.
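As a rough illustration of the kind of granting rule I have in mind (purely a hypothetical sketch; the function names, the cost formula, and the threshold below are all invented for illustration and are not part of the proposal):

```python
# Hypothetical sketch of the request-granting rule described above.
# Names, the cost formula, and the budget threshold are invented for
# illustration; a real system would need far more careful definitions
# of "cost" and "breaks a law".

LAWS = [
    "no physical harm without permission",
    "no property damage without permission",
    "no restricting others' freedom",
    "no plotting to break the law",
]

def breaks_a_law(request) -> bool:
    """Placeholder check of the request against the four laws."""
    return any(law in request.get("violations", []) for law in LAWS)

def resource_cost(request, supply, demand) -> float:
    """Placeholder cost of the request relative to current supply and demand."""
    return request["resources"] * (demand / max(supply, 1))

def grant(request, supply, demand, budget) -> bool:
    """Grant the request if it is lawful and not too expensive
    relative to supply and demand (plus any pooled resources)."""
    if breaks_a_law(request):
        return False
    return resource_cost(request, supply, demand) <= budget

# Example: a single person's request, possibly backed by pooled resources.
request = {"resources": 50, "violations": []}
print(grant(request, supply=1000, demand=200, budget=20.0))  # True
```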

People would be allowed to make their own sub-societies on their land. These sub-societies would have their own laws, except for one: thou shalt not restrict others' freedom. If a sub-society has laws which someone does not want to follow, they are completely free not to follow them, though the sub-society is free to kick that person out. AIs would provide a free service of analyzing laws for potential dangers and loopholes. Sub-societies would also usually create protocols for things like roads and even social interactions to keep things running smoothly.

And to do away with fun theory, the AI would provide a free service recommending things you'll enjoy. And if you've got a problem such as extreme laziness, or if you want a photographic memory or something, you could just ask the AI to operate on you for that. And if you don't want to die, anti-aging technologies would probably be available, so you can live forever if you want to; but if for some reason you don't, you don't have to.

Most possible utopias would be subsets of this utopia, except those which restrict freedom. You could definitely live happily here, whatever a happy life would be like for you, and if you don't know what to do, the AI will be there to recommend things. And this utopia avoids the objection that everyone has about every utopia: the disagreement about what matters and what doesn't.

And to all the people who will say that no one should ever die, I reply that no one should ever die without their consent. Death is only bad if you don't want to die. Maybe all those who want to die in a utopia are crazy, but I consider forcibly changing them so that they do want to live an imposition of values no better than religions imposing their values on us all. And if you can't agree with me on this, please just ignore this part of my opinion when you comment on my utopia; I would hate for every comment to be about this small aspect of my views.

5 comments

with Alicorn's recent post about her uneasiness with Eliezer's 31 Laws of Fun, I think it's a good time to bring it up in the discussion.

I repudiate any association with this post. It reflects a complete misunderstanding of everything that has been said about the need for Fun Theory on the entire website and ignores a fair chunk of the intro to my post.

You clearly do not understand any post about Fun Theory that has been made here; the entire point of Fun Theory is to avoid errors like this. The fact that you think this idea is original also indicates a lack of awareness of utopian thought and philosophy in general. I would strongly recommend reading more before posting.

It is a genuine challenge for me to tell if this is a joke.

Is this post meant as satire?

Thou shalt not restrict others' freedom

But the AI police have no such axiomatic limitation on restricting our freedom. They are empowered to restrict our freedom if they think we are restricting the freedom of others. So if they reason that human communication restricts the freedom of others, because providing information sometimes reduces a person's range of likely actions, they will stop anyone from talking to anyone else.

That's one type of "bug in the system": you might even get the "values" or "laws" right, but they are implemented wrongly. Another problem is if you manage to forget something important: what if there were a fifth law that's needed for human happiness? Would the system allow you to add it later? This is why people talk about figuring out what humans really want, or really want to want, by studying neuroscience rather than just by private reasoning, and then using that as the basis for utopia.

Despite all that, I think it's appropriate to have some informal discussion of "how utopia would work". But that can only produce a first draft which then needs to be judged by much more sophisticated criteria. The first draft might turn out to be 90% right, but it also might turn out to be 90% wrong.

Thou shalt not plot to break the law with the intention that the plot will happen without their permission

Whom does “their” refer to?