
Comment author: imbatman 01 May 2013 08:15:07PM 1 point [-]

Did the next few posts Luke mentioned, the ones that were supposed to be about empathic metaethics, ever get written? I don't see them anywhere.

Comment author: SRStarin 25 December 2010 02:23:13AM *  2 points [-]

The points made here are sound. I was particularly struck by the call-out of the overhead rule as wrong, since that rule has been a major factor in my charitable giving in the past.

However, if we imagine everyone behaving according to these rules, we wind up with very few (incompetent) people running a few charities with piles of cash. If no lawyers take time off and contribute their expertise to a charity, then how do charities protect themselves from lawsuits, for example? The optimal charity solution is not for everyone to follow your guidelines, but for almost everyone to follow your guidelines, and a few people to deviate. Yet, how do we know whether we should be the ones who deviate?

Comment author: imbatman 12 June 2012 06:02:35PM 1 point [-]

this was covered here: http://lesswrong.com/lw/65/money_the_unit_of_caring/

"If the soup kitchen needed a lawyer, and the lawyer donated a large contiguous high-priority block of lawyering, then that sort of volunteering makes sense—that's the same specialized capability the lawyer ordinarily trades for money. But "volunteering" just one hour of legal work, constantly delayed, spread across three weeks in casual minutes between other jobs? This is not the way something gets done when anyone actually cares about it, or to state it near-equivalently, when money is involved."

Comment author: MarkusRamikin 21 May 2012 06:29:41PM 4 points [-]

Pretty sure that was Francisco d'Anconia aka Superman, in Ayn Rand's Atlas Shrugged.

Comment author: imbatman 22 May 2012 09:48:37PM 2 points [-]

That's correct.

Comment author: imbatman 21 May 2012 04:28:05PM 3 points [-]

"Contradictions do not exist. Whenever you think you are facing a contradiction, check your premises. You will find that one of them is wrong."

Comment author: Risto_Saarelma 21 February 2012 06:39:00PM 0 points [-]

Looks good to me now.

Comment author: imbatman 21 February 2012 08:56:21PM 0 points [-]

Thanks. Do you think the downvotes have to do with the content? Is this not a relevant topic for this forum?

Comment author: faul_sname 21 February 2012 12:41:26AM 0 points [-]

It is entirely possible, and feel free to ask more questions.

I find that it's helpful to visualize the shape of the space I am operating in, which in this case is a 5-dimensional space (the dimensions are Plot, Acting, Humor, Suspense, and Overall Rating). However, many people find it difficult to visualize more than 3 dimensions, so I will describe only how Humor and Suspense interact with Overall Rating.

In this case, let Humor (H) be the east/west direction, Suspense (S) be the north/south direction, and Overall Rating (R) be the altitude. We can now visualize a landscape that corresponds to these variables. Here are some possible landscapes and what we can infer from them:

* Flat, with no slope or features (The audience doesn't care about either H or S)

* Sloped up as we go northeast (The audience likes humor and suspense together)

* Saddle-shaped with the high points to the northwest and southeast (The audience likes H or S independently, but not together)

* Mountainous (The audience has complex tastes)

You would then want to find the equation that best fits the terrain you have. Usually, the best fit is linear (which you would see as a sloped terrain), but when it isn't, you can sometimes find a better-fitting equation. You do have to be careful not to over-fit: a good rule of thumb is that if it takes more information to approximate your data than is contained in the data itself, you're doing something wrong.
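
If it helps to see that fitting step spelled out, here is a minimal sketch in Python; the H/S/R numbers are made up, and numpy's least-squares routine stands in for whichever fitting method you prefer:

```python
import numpy as np

# Made-up sample: Humor, Suspense, and Overall Rating for a handful of movies.
H = np.array([9.0, 2.0, 8.0, 3.0, 6.0, 5.0, 10.0])
S = np.array([2.0, 9.0, 8.0, 3.0, 6.0, 7.0, 1.0])
R = np.array([7.5, 7.0, 5.5, 4.0, 6.0, 6.5, 8.0])

# Fit a simple surface R ~ a*H + b*S + c*(H*S) + d over the H/S "terrain".
M = np.column_stack([H, S, H * S, np.ones_like(H)])
(a, b, c, d), _, _, _ = np.linalg.lstsq(M, R, rcond=None)

# Reading the landscape off the coefficients:
#   a and b near zero           -> flat (the audience doesn't care about H or S)
#   a, b positive, c near zero  -> plane sloping up toward the "northeast"
#   c clearly negative          -> saddle-ish (H or S alone, but not together)
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, d={d:.2f}")
```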

Comment author: imbatman 21 February 2012 04:56:44PM 0 points [-]

I tried visualizing but I don't know how that helps me construct a formula. I would imagine, in your example, the landscape would be mountainous. One movie may have both great suspense and great humor and be a great movie...another may have both great suspense and great humor and be just an okay movie. But then perhaps there is a movie with very low amounts of humor or suspense that is still a good movie for other reasons. So in that case neither of these metrics would be good predictors for that movie.

That's kind of the core of the issue, as your exercise illustrates. Since in any given case any metric can be a complete non-predictor of the outcome, I don't know of any way to construct the formula. It seems like you'd have to find some way to both include and exclude metrics based on (something).

So maybe the answer is the N/A thing I considered. Valuing movie metrics is not about quantifying how much of each metric is packed into a film. It is about gauging how well these metrics are used. So maybe you could give Schindler's List "N/A" in the humor metric and some other largely humorless movie a 2/10 based on the fact that you felt the other movie needed humor and didn't have much. In that way, it seems all metrics not stated as N/A would have value and you would just need to figure out how to weight them. For instance:

A 9 9 9 9 wouldn't necessarily score a better total than a 9 9 9 N/A...but it might, if the last category was weighted higher than one/some of the others.
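
One way to make the N/A idea concrete (a sketch only; the metric weights below are invented) is to treat N/A as "drop that metric and renormalize the remaining weights":

```python
# Hypothetical relative weights for each metric, invented purely for illustration.
WEIGHTS = {"plot": 3, "acting": 3, "humor": 2, "suspense": 2}

def weighted_score(ratings):
    """Weighted average that drops N/A metrics and renormalizes the remaining weights.

    `ratings` maps metric name -> score (0-10), or None for N/A.
    """
    used = {m: w for m, w in WEIGHTS.items() if ratings.get(m) is not None}
    total = sum(used.values())
    if total == 0:
        return None  # nothing to score
    return sum(ratings[m] * w for m, w in used.items()) / total

# With renormalized weights a 9/9/9/9 and a 9/9/9/N-A tie at 9.0; whether one
# "wins" depends on how N/A is handled (dropped vs. penalized) and on the weights.
print(weighted_score({"plot": 9, "acting": 9, "humor": 9, "suspense": 9}))     # 9.0
print(weighted_score({"plot": 9, "acting": 9, "humor": 9, "suspense": None}))  # 9.0
```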

Comment author: Risto_Saarelma 21 February 2012 06:49:32AM 4 points [-]

Bolding or italicizing each special term the first time it appears in the text and writing it in regular typeface afterwards would probably read better, while still drawing attention to the relevant special concept words. People can still pick the word out without the special typeface once the first mention has primed them to treat it as an important concept.

Comment author: imbatman 21 February 2012 04:28:13PM 1 point [-]

I liked this idea, which carried the added bonus of only taking a few seconds to implement. Better?

Comment author: faul_sname 20 February 2012 09:12:35PM *  2 points [-]

Empirically determine what formula most closely matches overall impressions in the real world, avoiding over-fitting by penalizing formulas for complexity. The "sum the scores" approach would simply be P+A+H+S. A weighted sum would be k1*P + k2*A + k3*H + k4*S. Perhaps humor and suspense are found to correlate positively with the rating when considered individually, but to interfere negatively with each other, so we might go with k1*P + k2*A + k3*H + k4*S - k5*(H*S). Each additional bit of complexity must double the formula's predictive power (halve its error).

We would start with the data and possible formulas (probably weighted by complexity). We would then plug in the data for each formula, seeing how well each one predicts it. The formula which most efficiently predicts movie ratings based on these dimensions is the one we would use.
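
A rough sketch of that selection step in Python (the ratings are invented, and the "halve the error per extra term" check is just the rule of thumb above, not a rigorous criterion):

```python
import numpy as np

# Invented ratings: columns are Plot, Acting, Humor, Suspense, Overall.
data = np.array([
    [9, 9, 10, 5, 8.5],
    [8, 8,  6, 9, 8.0],
    [6, 9, 10, 2, 6.5],
    [7, 6,  4, 8, 7.0],
    [5, 7,  8, 3, 5.5],
    [9, 8,  7, 7, 9.0],
    [4, 5,  6, 6, 5.0],
])
P, A, H, S, R = data.T

def fit_error(columns):
    """Least-squares fit of R against the given columns; returns the sum of squared errors."""
    M = np.column_stack(columns)
    coef, _, _, _ = np.linalg.lstsq(M, R, rcond=None)
    return float(np.sum((R - M @ coef) ** 2))

err_weighted = fit_error([P, A, H, S])         # k1*P + k2*A + k3*H + k4*S
err_interact = fit_error([P, A, H, S, H * S])  # ... - k5*(H*S)

print("weighted sum error:       ", err_weighted)
print("with H*S interaction term:", err_interact)

# Rule of thumb from above: the extra H*S term earns its keep only if it
# at least halves the error.
print("interaction justified?", err_interact <= err_weighted / 2)
```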

Comment author: imbatman 20 February 2012 11:01:53PM 0 points [-]

Yes! That helps. My question, then, is what to plug into that formula if a metric SOMETIMES matters.

e.g. If 9 9 9 9 isn't necessarily better than 9 9 9 0.

There are probably some additional questions to think of, but I'm not sure what they are. And I'm not entirely sure this is possible...that's why I brought it up.

Comment author: jimrandomh 20 February 2012 08:17:35PM *  1 point [-]

"How good a movie is" is not a question for which there is a fact of the matter, because expanding out the definition of the word "good" brings in all the complexity of human preference. That's not unambiguous until you specify a particular human and a particular priming state (or particular weighted combination thereof). Things like costumes and suspense correlate with movies being good, for most humans in most states, but that's a mere empirical fact, not part of the definition of goodness.

Comment author: imbatman 20 February 2012 08:49:13PM *  0 points [-]

I tried to acknowledge that the rankings in this case are completely subjective. Maybe it would help to think about it like this. Let's say instead we have a data set. We'll simplify to 4 metrics: Plot, Acting, Humor, and Suspense. We're given data for 3 movies: for each movie, a rating on each of these 4 metrics, in that order:

Groundhog Day: 9 9 10 5
Terminator: 8 8 6 9
Anchorman: 6 9 10 2

Based on this, what are some ways to evaluate this data? We're not satisfied that just summing the rankings for each metric comes up with an accurate ranking for the film overall. So how else can we do it?
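
For reference, here is a quick sketch of the plain sums plus one alternative (a weighted sum); the weights below are made up purely to show the mechanics:

```python
# The three movies above: Plot, Acting, Humor, Suspense.
movies = {
    "Groundhog Day": [9, 9, 10, 5],
    "Terminator":    [8, 8,  6, 9],
    "Anchorman":     [6, 9, 10, 2],
}

# Plain sum: the approach we're not satisfied with.
for name, scores in movies.items():
    print(name, "sum:", sum(scores))

# One alternative: a weighted sum (weights invented purely for illustration).
weights = [0.35, 0.35, 0.15, 0.15]
for name, scores in movies.items():
    print(name, "weighted:", round(sum(w * s for w, s in zip(weights, scores)), 2))
```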

Comment author: Alicorn 20 February 2012 07:50:55PM 4 points [-]

What is with the persistent bolding of your pet words?

Comment author: imbatman 20 February 2012 07:55:17PM 0 points [-]

Just thought I would try to make it easier to follow. An alternative would have been to declare my terms, I guess. I haven't really developed a strategy for that -- just thought I'd try this.
