Today's post, Value is Fragile, was originally published on 29 January 2009. A summary (taken from the LW wiki):


An interesting universe, one that would be incomprehensible to us today, is what the future looks like if things go right. There are a lot of things humans value such that, if you got everything else right when building an AI but left out that one thing, the future would wind up looking dull, flat, pointless, or empty. Any future not shaped by a goal system with detailed, reliable inheritance from human morals and metamorals will contain almost nothing of worth.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was 31 Laws of Fun, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

1 comment:

"minds don't have to be just like us to be embraced as valuable"

The key here is that just because we value alien minds doesn't mean they value us, or anything else we care about. A paperclip maximizer does not tile the universe with paperclip maximizers; it tiles it with paperclips. We may value the paperclip maximizers themselves, but unless we value paperclips, we must not give them free rein.