I don't get it: any agent that fooms becomes superintelligent. Its values don't necessarily change at all, nor does its connection to its society.

UFAI is about singletons. If you have an AI society whose members compare notes and share information -- which is instrumentally useful for them anyway -- you reduce the probability of a singleton fooming.

And is that valid or not? If you can validly decide some systems are better than others, you are some of the way to deciding which is best.

I'm increasingly baffled as to why AI is always brought into discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren't sophisticated enough to live in their own societies. A wireheading AI isn't even going to be able to survive "in the wild". If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren't good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."?

Isn't the idea of moral progress based on one reference frame being better than another?
