TheAncientGeek comments on Values at compile time - Less Wrong Discussion

7 Post author: Stuart_Armstrong 26 March 2015 12:25PM

Comments (17)

Comment author: TheAncientGeek 27 March 2015 06:43:02PM *  0 points [-]

In a sense you should be confused about qualia/TWAAFFTI, because we know next to nothing about the subject. It might be the case that "qualia" adds some extra level of confusion... although it might alternatively be the case that TWAAFFTI is something that sounds like an explanation without actually being an explanation. In particular, TWAAFFTI sets no constraints on what kind of algorithm would have morally relevant feelings, which reinforces my original point: if you think an embedded simulation of a human is morally relevant, how can you deny relevance to the host, even at times when it isn't simulating a human?

Comment author: tailcalled 27 March 2015 07:28:56PM 2 points [-]

Maybe it would be clearer if we looked at some already-existing maximization processes. Take, for instance, evolution. Evolution maximizes inclusive genetic fitness. You punish it by not donating sperm/eggs. I don't care, because evolution is not a person-like thing.