This was covered here: http://lesswrong.com/lw/65/money_the_unit_of_caring/
"If the soup kitchen needed a lawyer, and the lawyer donated a large contiguous high-priority block of lawyering, then that sort of volunteering makes sense—that's the same specialized capability the lawyer ordinarily trades for money. But "volunteering" just one hour of legal work, constantly delayed, spread across three weeks in casual minutes between other jobs? This is not the way something gets done when anyone actually cares about it, or to state it near-equivalently, when money is involved."
That's correct.
"Contradictions do not exist. Whenever you think you are facing a contradiction, check your premises. You will find that one of them is wrong."
Thanks. Do you think the downvotes have to do with the content? Is this not a relevant topic for this forum?
I tried visualizing, but I don't know how that helps me construct a formula. I would imagine, in your example, the landscape would be mountainous. One movie may have both great suspense and great humor and be a great movie...another may have both great suspense and great humor and be just an okay movie. But then perhaps there is a movie with very low amounts of humor or suspense that is still a good movie for other reasons. So in that case neither of these metrics would be a good predictor for that movie.
That's kind of the core of the issue, as your exercise illustrates. Since, in any given case, any metric can be a complete non-predictor of the outcome, I don't know any way to construct the formula. It seems like you'd have to find some way to both include and exclude metrics based on (something).
So maybe the answer is the N/A idea I considered. Valuing movie metrics isn't about quantifying how much of each metric is packed into a film; it's about gauging how well those metrics are used. So you could give Schindler's List an "N/A" on the humor metric and give some other largely humorless movie a 2/10, because you felt that movie needed humor and didn't have much. In that way, every metric not marked N/A would carry value, and you would just need to figure out how to weight them. For instance:
A 9 9 9 9 wouldn't necessarily score a better total than a 9 9 9 N/A...but it might, if the last category were weighted more heavily than one or more of the others.
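Here's a minimal sketch of that N/A-plus-weights idea (the metric names, weights, and scores below are made up purely for illustration): an N/A metric is dropped and the remaining weights are renormalized, so N/A never counts against a film, but a low score on an applicable metric does.

```python
# Minimal sketch of the N/A-plus-weights idea (hypothetical metrics,
# weights, and scores, purely for illustration). An N/A metric is
# dropped and the remaining weights are renormalized, so N/A never
# hurts a film, but a low score on an applicable metric does.

def weighted_score(scores, weights):
    """scores: metric -> rating (0-10), or None for N/A.
    weights: metric -> relative importance."""
    applicable = [m for m, s in scores.items() if s is not None]
    total_weight = sum(weights[m] for m in applicable)
    return sum(scores[m] * weights[m] for m in applicable) / total_weight

weights = {"plot": 3, "acting": 3, "humor": 1, "suspense": 2}

print(weighted_score({"plot": 9, "acting": 9, "humor": 9, "suspense": 9}, weights))     # 9.0
print(weighted_score({"plot": 9, "acting": 9, "humor": 9, "suspense": None}, weights))  # 9.0  (N/A doesn't hurt)
print(weighted_score({"plot": 9, "acting": 9, "humor": 9, "suspense": 2}, weights))     # ~7.44 (a low score does)
```

Under this particular scheme a 9 9 9 N/A and a 9 9 9 9 come out the same; whether that's the behavior you want is exactly the weighting question.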
I liked this idea, which carried the added bonus of only taking a few seconds to implement. Better?
Yes! That helps. My question, then, is what to plug into that formula if a metric SOMETIMES matters.
E.g., if 9 9 9 9 isn't necessarily better than 9 9 9 0.
There are probably some additional questions to think of, but I'm not sure what they are. And I'm not entirely sure this is possible...that's why I brought it up.
I tried to acknowledge that the rankings in this case are completely subjective. Maybe it would help to think about it like this. Let's say instead we have a data set. We'll simplify to 4 metrics: Plot, Acting, Humor, and Suspense. We're given data for 3 movies: for each movie, a ranking for each of these 4 metrics, respectively:
Groundhog Day: 9, 9, 10, 5
Terminator: 8, 8, 6, 9
Anchorman: 6, 9, 10, 2
Based on this, what are some ways to evaluate this data? We're not satisfied that just summing the rankings for each metric produces an accurate overall ranking for the film. So how else can we do it?
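For what it's worth, here's a quick sketch of two of the simplest options on that data, a plain sum versus a weighted average. The weights are arbitrary placeholders, only there to show how the choice of weights can shift (or preserve) the overall ordering:

```python
# Plain sum vs. weighted average on the three movies above.
# The weights are hypothetical; the rankings are the ones given in the comment.

movies = {
    "Groundhog Day": {"plot": 9, "acting": 9, "humor": 10, "suspense": 5},
    "Terminator":    {"plot": 8, "acting": 8, "humor": 6,  "suspense": 9},
    "Anchorman":     {"plot": 6, "acting": 9, "humor": 10, "suspense": 2},
}

weights = {"plot": 3, "acting": 2, "humor": 1, "suspense": 1}  # hypothetical weights

for title, scores in movies.items():
    plain_sum = sum(scores.values())
    weighted_avg = sum(scores[m] * w for m, w in weights.items()) / sum(weights.values())
    print(f"{title}: sum={plain_sum}, weighted avg={weighted_avg:.2f}")

# Output:
#   Groundhog Day: sum=33, weighted avg=8.57
#   Terminator: sum=31, weighted avg=7.86
#   Anchorman: sum=27, weighted avg=6.86
# Same ordering here, but a different choice of weights could reorder the films.
```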
Just thought I would try to make it easier to follow. An alternative would have been to declare my terms, I guess. I haven't really developed a strategy for that -- just thought I'd try this.
Did the next few posts that Luke mentioned, the ones that were supposed to be about empathic metaethics, ever get written? I don't see them anywhere.