wobster109 comments on Human errors, human values - Less Wrong

32 Post author: PhilGoetz 09 April 2011 02:50AM


Comment author: wobster109 09 April 2011 10:28:55AM 7 points [-]

I'm a bit skeptical of using majority survey response to determine "morality." After all, given a Bayesian probability problem (the exact problem involved a cancer-screening test with a chance of returning a false positive), most people will give the wrong answer, but we certainly don't want our computers to make this kind of error.
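The cancer-test problem alluded to here is the classic base-rate puzzle. A minimal sketch of the correct Bayesian calculation follows; the specific numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate) are illustrative assumptions, not figures from the comment:

```python
# Hypothetical screening-test numbers, assumed for illustration.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity                    # P(disease and positive)
    false_pos = (1 - prior) * false_positive_rate     # P(no disease and positive)
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.01, sensitivity=0.80, false_positive_rate=0.096)
print(round(p, 3))  # 0.078
```

With these numbers, a positive test implies only about an 8% chance of cancer, far below the 70-80% that most survey respondents guess, which is exactly the kind of systematic error the comment is worried about.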

As to the torture vs. dust specks: when I thought about it, I decided first that torture was unacceptable, and then tried to modify my utility function to round the specks' disutility to zero, etc. I was appalled to find that I had decided the answer in advance and then tried to make my utility function fit a predetermined answer. It felt an awful lot like rationalizing. I don't know whether everyone else is doing the same thing, but if you are, I urge you to reconsider. If we always go with what feels right, what's the point of using utility functions at all?

Comment author: [deleted] 09 April 2011 05:18:37PM 2 points [-]

I'm a bit skeptical of using majority survey response to determine "morality." After all, given a Bayesian probability problem (the exact problem involved a cancer-screening test with a chance of returning a false positive), most people will give the wrong answer, but we certainly don't want our computers to make this kind of error.

Morality may be the sort of thing that people are especially likely to get right. Specifically, morality may be a set of rules created, supported, and observed by virtually everyone. If so, then a majority survey response about morality may be much like a majority survey response about the rules of chess, restricted to avid chess players (i.e., that subset of the population which observes and supports the rules of chess as a nearly daily occurrence, just as virtually the whole of humanity observes and supports the rules of morality on a daily basis).

If you go to a chess tournament and ask the participants to demonstrate how the knight moves in chess, then (a) the vast majority will almost certainly give you the same answer, and (b) that answer will almost certainly be right.

Comment author: TheOtherDave 09 April 2011 12:52:06PM 1 point [-]

If we always go with what feels right, what's the point of using utility functions at all?

One point could be to formalize our feelings about what is right.

Comment author: David_Gerard 09 April 2011 01:41:21PM *  4 points [-]

As long as you take care not to overextend. Today my hypothesis is that moralities are sets of cached answers to game theory (possibly cached in our genes), and extending those rules beyond what they're tested against is likely to lead to trouble.

Humans try hard to formalise their moralities, but that doesn't make it a good idea per se. (On the other hand, it may require explanation as to why they do.)
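The "cached answers to game theory" hypothesis can be made concrete with an iterated Prisoner's Dilemma sketch: a simple reciprocity rule like tit-for-tat (cooperate first, then copy your partner's last move) does well against itself, while mutual defection does poorly. The payoff numbers and strategies are standard illustrative choices, not anything from the comment:

```python
# Standard illustrative Prisoner's Dilemma payoffs for the row player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Run an iterated game and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

On this reading, a norm like "reciprocate cooperation" is a cached answer that works well inside the repeated-interaction regime it evolved in, and the hypothesis predicts it can misfire when extended outside that regime.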

Comment author: TheOtherDave 09 April 2011 06:34:28PM 3 points [-]

Yes, part of an accurate description is identifying the boundary conditions within which that description applies, and applying it outside that boundary is asking for trouble. Agreed.

I don't see how this is any different for folk morality than for folk physics, folk medicine, folk sociology, or any other aspect of human psychology.

For my own part, I find that formalizing my intuitions (moral and otherwise) is a useful step towards identifying the biases that those intuitions introduce into my thinking.

I also find that I want to formalize other people's intuitions as a way of subverting the "tyranny of structurelessness" -- that is, the dynamic whereby a structure that remains covert is thereby protected from attack and can operate without accountability. Moral intuitions are frequently used this way.

Comment author: David_Gerard 10 April 2011 05:49:55PM *  4 points [-]

I don't see how this is any different for folk morality than for folk physics, folk medicine, folk sociology, or any other aspect of human psychology. For my own part, I find that formalizing my intuitions (moral and otherwise) is a useful step towards identifying the biases that those intuitions introduce into my thinking.

Oh yeah. My point - if I have a point, which I may or may not do - is that you can't do it at the level of the morality itself and get good results, as that's all cached derived results; you have to go to metamorality, i.e. game theory (at least), so as not to risk going over the edge into silliness. It's possible this says nothing and adds up to normality, which is the "may not do" bit.

I'm currently reading back through abstruse game theory posts on LessWrong and particularly this truly marvellous book and realising just how damn useful this stuff is going to be in real life.

the dynamic whereby a structure that remains covert is thereby protected from attack and can operate without accountability. Moral intuitions are frequently used this way.

Free will as undiscoverability?

Comment author: TheOtherDave 10 April 2011 05:52:27PM 1 point [-]

Oh!

(blink)

That's actually a very good point. I endorse having it, should you ever do.

Comment author: David_Gerard 18 April 2011 01:51:39PM -1 points [-]

Looks like proper philosophers have been working through the notion since the 1970s. It would be annoying to have come up with a workable version of libertarianism.

Comment author: David_Gerard 10 April 2011 08:46:59PM *  -1 points [-]

Found a bit of popular science suggesting I'm on the right track about the origins. (I'm ignoring the Liberal/Conservative guff, that just detracts from the actual point and leads me to think less of the researcher.) I don't want to actually have to buy a copy of this, but it looks along the right lines.

The implication that overextending the generated rules without firmly checking against the generator's reasons leads to trouble - and is what often leads to trouble - is mine, but would, I'd hope, follow fairly obviously.

Comment author: David_Gerard 10 April 2011 05:54:39PM *  -1 points [-]

That's actually a very good point. I endorse having it, should you ever do.

I'm hoping not to have to read the entirety of LessWrong (and I thought the sequences were long) before being able to be confident I have indeed had it :-)

May I particularly strongly recommend the Schelling book. Amazing. I'm getting useful results in such practical fields as dealing with four-year-olds and surly teenagers already.

Comment author: cousin_it 11 April 2011 01:08:07PM 1 point [-]

Same here. I think Schelling's book has helped me win at life more than all of LW did. That's why I gave it such a glowing review :-)

Comment author: David_Gerard 11 April 2011 02:05:17PM -1 points [-]

Now you need to find a book that similarly pwns the field of dog training.

Comment author: TheOtherDave 10 April 2011 05:57:28PM 1 point [-]

Awesome!

I also found "Don't Shoot The Dog" very useful in those fields, incidentally.

Comment author: David_Gerard 10 April 2011 06:05:04PM 1 point [-]

"Every parent needs to learn the basics of one, avoiding a nuclear holocaust and two, dog training."

Comment author: PhilGoetz 10 April 2011 06:27:05PM 2 points [-]

I don't see how this is any different for folk morality than for folk physics, folk medicine, folk sociology, or any other aspect of human psychology.

Can we use folk physics and the development of physics as a model for the proper relationship between "folk ethics" and ethics?

Comment author: [deleted] 09 April 2011 05:26:39PM 1 point [-]

Today my hypothesis is that moralities are sets of cached answers to game theory

In game theory, a stable solution such as a Nash equilibrium is not necessarily one that maximizes aggregate utility. A game-theoretic approach is for this reason probably at odds with a utilitarian approach to morality. If the game-theoretic approach to morality is right, then utilitarianism is probably wrong.
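The gap between equilibrium and aggregate utility can be checked directly in a one-shot Prisoner's Dilemma, where the unique Nash equilibrium (mutual defection) is not the outcome that maximizes total payoff. The payoff matrix below uses standard textbook values, assumed purely for illustration:

```python
# (row_action, col_action) -> (row_payoff, col_payoff); standard PD payoffs.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(profile):
    """True if no player gains by unilaterally deviating from profile."""
    for player in (0, 1):
        for alt in actions:
            deviant = list(profile)
            deviant[player] = alt
            if payoffs[tuple(deviant)][player] > payoffs[profile][player]:
                return False
    return True

nash = [p for p in payoffs if is_nash(p)]
best_total = max(payoffs, key=lambda p: sum(payoffs[p]))
print(nash)        # [('D', 'D')]
print(best_total)  # ('C', 'C')
```

Mutual defection is the only Nash equilibrium yet yields a total of 2, while mutual cooperation yields 6, which is the tension between stability and aggregate utility the comment is pointing at.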