Blueberry comments on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 - Less Wrong

Post author: AnnaSalamon 29 March 2012 08:48PM




Comment author: Blueberry 30 March 2012 12:09:57AM, 2 points

I agree that ranking the weights from 1 to N is idiotic because it doesn't respect the relative importance of each characteristic. However, shifting the ratings to run from 101-110 on every scale will just add a constant to each option's total:

  • Option A, strength 103, mass 106, total score 2(103) + 106 = 312
  • Option B, strength 105, mass 103, total score 2(105) + 103 = 313

(I changed 'weight' to 'mass' to avoid confusion with the other meaning of 'weight'.)
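A quick numeric sketch (hypothetical 1-10 ratings, with the weights 2 and 1 from the example above) confirming that a shared +100 offset adds the same constant to every option's total, so the ranking is unchanged:

```python
# Weighted linear score with the example's weights: 2*strength + 1*mass.
def score(strength, mass, w_strength=2, w_mass=1):
    return w_strength * strength + w_mass * mass

a_raw, b_raw = score(3, 6), score(5, 3)          # 12, 13
a_off, b_off = score(103, 106), score(105, 103)  # 312, 313

# Each total grows by (2 + 1) * 100 = 300, so the comparison is preserved.
assert a_off - a_raw == b_off - b_raw == 300
assert (a_raw < b_raw) == (a_off < b_off)
```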

Using something approximating a real-valued ranking (rank from 1-10) instead of rank indices reduces the problem to mere nonlinearity.

I assume you mean using values for the weights that correspond to importance, which isn't necessarily 1-10. For instance, if strength is 100 times more important than mass, we'd need to have weights of 100 and 1.

You're right that this assumes that the final quality is a linear function of the component attributes: we could have a situation where strength becomes less important when mass passes a certain threshold, for instance. But using a linear approximation is often a good first step at the very least.
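A minimal sketch (hypothetical ratings) of how importance-proportional weights can reverse a choice made with rank-index weights. Here strength is taken to be 100 times as important as mass, so the weights are 100 and 1 rather than the rank indices 2 and 1:

```python
def score(option, weights):
    # Weighted linear combination of the attribute ratings.
    return sum(w * option[attr] for attr, w in weights.items())

a = {"strength": 5, "mass": 1}
b = {"strength": 4, "mass": 9}

rank_index = {"strength": 2, "mass": 1}    # rank indices: understates strength
importance = {"strength": 100, "mass": 1}  # strength 100x as important as mass

assert score(a, rank_index) < score(b, rank_index)   # 11 < 17: B wins
assert score(a, importance) > score(b, importance)   # 501 > 409: A wins
```

The same ratings produce opposite decisions under the two weighting schemes, which is the sense in which rank-index weights fail to respect relative importance.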

Comment author: [deleted] 30 March 2012 12:22:45AM, 0 points

  • Option A, strength 103, mass 106, total score 2(103) + 106 = 312
  • Option B, strength 105, mass 103, total score 2(105) + 103 = 313

Oops, I might have to look at that more closely. I think you are right: the shared offset adds the same constant to every total, so it cancels out when comparing options.

I assume you mean using values for the weights that correspond to importance, which isn't necessarily 1-10. For instance, if strength is 100 times more important than mass, we'd need to have weights of 100 and 1.

Using 100 and 1 for something that is 100 times more important is correct (assuming you are able to estimate the weights; 100x is awfully suspicious). The idiotic procedure was using rank indices, not real-valued weights.

But using a linear approximation is often a good first step at the very least.

Agreed. Linearity is a valid assumption.

The error is using uncalibrated ratings from 0-10, or worse, rank indices. A linear-valued rating from 0-10 has the potential to carry the information properly, but that does not mean people can produce calibrated estimates there.