jphaas comments on On Terminal Goals and Virtue Ethics - LessWrong

67 Post author: Swimmer963 18 June 2014 04:00AM




Comment author: jphaas 18 June 2014 02:15:29PM 5 points [-]

Funny, I always thought it was the other way around... consequentialism is useful on the tactical level once you've decided what a "good outcome" is, but on the meta-level, trying to figure out what a good outcome is, you get into questions that you need the help of virtue ethics or something similar to puzzle through. Questions like "is it better to be alive and suffering or to be dead", or "is causing a human pain worse than causing a pig pain", or "when does it become wrong to abort a fetus", or even "is there good or bad at all?"

Comment author: CCC 30 June 2014 09:49:37AM 7 points [-]

I think that the reason may be that consequentialism requires more computation; you need to re-calculate the consequences for each and every action.

The human brain is mainly a pattern-matching device; it uses pattern-matching to save on computation cycles. Virtues are patterns which lead to good behaviour. (Moreover, these patterns have gone through a few millennia of debugging; there are plenty of cautionary tales about people with poorly chosen virtues to serve as warnings.) The human brain is not good at quickly recalculating long-term consequences from small changes in behaviour.
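(The trade-off being described is essentially memoization: recomputing consequences for every action versus looking up a cached rule of thumb. A rough sketch of that analogy in Python, where the function names and the toy "consequence calculation" are illustrative inventions, not anything from the thread:)

```python
from functools import lru_cache

def evaluate_consequences(action):
    """Stand-in for an expensive long-term consequence calculation.
    The loop just burns cycles and produces an arbitrary verdict."""
    total = 0
    for i in range(200_000):
        total += (hash((action, i)) % 7) - 3
    return total > 0  # crude "good outcome?" verdict

@lru_cache(maxsize=None)
def virtue_verdict(action):
    """Same calculation, but memoized: computed once per action type,
    then answered by pattern-matching against the cache."""
    return evaluate_consequences(action)

# Daily life repeats the same decision types over and over.
actions = ["tell the truth", "keep a promise", "tell the truth"] * 3

for a in actions:
    virtue_verdict(a)

# Only the first encounter with each action type pays the full cost;
# the other seven lookups are near-free cache hits.
print(virtue_verdict.cache_info())
```

The cached "virtue" is cheap precisely because it ignores small situational changes, which is also why it can misfire when the situation genuinely differs; that is the failure mode the cautionary tales guard against.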

Comment author: Armok_GoB 29 June 2014 06:42:57PM 3 points [-]

What actually happens is you should be consequential at even-numbered meta-levels and virtue-based on the odd numbered ones... or was it the other way around? :p