eli_sennesh comments on Confused as to usefulness of 'consciousness' as a concept - LessWrong

35 Post author: KnaveOfAllTrades 13 July 2014 11:01AM


Comment author: [deleted] 19 July 2014 08:26:18AM 3 points

Honestly, it would just be much better to open up "shared-value game theory" as a formal subject and then see how well that elaborated field actually matches our normal conceptions of ethics.

Comment author: TheAncientGeek 19 July 2014 05:54:11PM 2 points

Why assume some values have to be shared? If decision-theoretic ethics can be made to work without shared values, that would be interesting.

And decision-theoretic ethics is already extant.

Comment author: [deleted] 19 July 2014 06:54:36PM 1 point

> Why assume some values have to be shared?

Largely because, in my opinion, it explains the real world much, much better than a "selfish" game theory.

Using selfish game theories, "generous" or "altruistic" strategies can evolve to dominate in iterated games and evolved populations (there's a link somewhere upthread to the paper). You're still left with the question: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?

Using theories in which agents share some of their values, "generous" or "altruistic" strategies become the natural, obvious result: shared values are nonrivalrous in the first place. Evolution builds us to feel Good and Moral about creatures who share our values because that's a sign they probably have similar genes (though I just made that up, so it's probably totally wrong), and because nothing has had time to evolve to fake human moral behavior, so the kin-signal has remained reasonably strong.
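The claim that generous strategies prosper in iterated games can be sketched in a few lines. This is a toy illustration, not the paper referenced upthread: the payoff matrix, the `generous_tft` strategy, and the forgiveness probability are all assumptions chosen for the example, not anything established in this thread.

```python
import random

# Standard prisoner's dilemma payoffs (assumed values): (row, column)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def generous_tft(opponent_last, forgiveness=0.1):
    """Tit-for-tat that forgives a defection 10% of the time (hypothetical parameter)."""
    if opponent_last == "D" and random.random() > forgiveness:
        return "D"
    return "C"

def always_defect(opponent_last):
    """A purely 'selfish' baseline: defect no matter what."""
    return "D"

def play(strategy_a, strategy_b, rounds=1000):
    """Run an iterated game and return total scores for both players."""
    a_last, b_last = "C", "C"
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(b_last)
        move_b = strategy_b(a_last)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        a_last, b_last = move_a, move_b
    return score_a, score_b

random.seed(0)
# A pair of generous players sustains mutual cooperation (3 points/round each),
# while a pair of defectors locks into mutual punishment (1 point/round each).
print(play(generous_tft, generous_tft))   # (3000, 3000)
print(play(always_defect, always_defect)) # (1000, 1000)
```

The point of the toy model is only that generosity between generous players earns far more than mutual defection over repeated play; it says nothing about *why* we feel moral emotions about it, which is the question at issue.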

Comment author: satt 18 August 2014 07:25:19AM 1 point

> Using selfish game theories, "generous" or "altruistic" strategies can evolve to dominate in iterated games and evolved populations (there's a link somewhere upthread to the paper). You're still left with the question: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?

Because we're adaptation executors, not fitness maximizers. Evolution gets us to do useful things by having us derive emotional value directly from doing those things, not by introducing the extra indirect step of moulding us into rational calculators who first have to consciously compute what's most useful.

Comment author: Strange7 25 October 2014 12:36:14AM 0 points

> why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?

If you're running some calculation involving a lot of logarithms, and portable electronics haven't been invented yet, would you rather spend a week deriving the exact answer on an abacus (plus another three weeks hunting down a boneheaded sign error), or ten seconds getting the first two or three decimal places on a slide rule?

Rational selfishness is expensive to set up, expensive to run, and can break down catastrophically at the worst possible times. Evolution tends to prefer error-tolerant systems.
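The precision trade-off in the slide-rule analogy can be made concrete. This is a toy sketch (the `slide_rule_log10` helper and its three-significant-figure cutoff are my own illustration, not anything from the thread): a slide rule gives roughly three significant figures, and for most decisions the residual error is negligible.

```python
import math

def slide_rule_log10(x, sig_figs=3):
    """Hypothetical 'slide rule': the exact log10, truncated to a few
    significant figures, mimicking the instrument's limited precision."""
    exact = math.log10(x)
    if exact == 0:
        return 0.0
    # Number of decimal places needed to keep sig_figs significant figures.
    scale = sig_figs - 1 - math.floor(math.log10(abs(exact)))
    return round(exact, scale)

exact = math.log10(7.3)        # the "abacus": full precision
rough = slide_rule_log10(7.3)  # the "slide rule": three figures
print(exact, rough, abs(exact - rough))  # error well under 5e-4
```

The cheap answer lands within a few parts in ten thousand of the exact one, which is the analogy's point: the error-tolerant shortcut costs a tiny amount of accuracy and a huge amount less effort.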