Comments

botogol

if management are doing that then they are neglecting a powerful tool in their tool-kit, because announcing a G* will surely cause G* to fall, and experience says that to begin with a well-chosen G* and G remain correlated (because many of the things you do to reduce G* also reduce G). It is only over time that G* and G detach.
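
To make that concrete, here is a toy simulation (a minimal sketch - the two-action model and every number in it are invented for illustration): genuine improvements reduce both G* and G, gaming reduces only G*, and the two measures track together until the pool of genuine improvements runs out.

    # Toy model: a finite pool of genuine improvements (each reduces both
    # G* and G) versus an inexhaustible supply of measure-gaming (which
    # reduces G* only). All quantities are invented for illustration.
    import random

    random.seed(0)
    genuine = sorted((random.uniform(0.5, 2.0) for _ in range(20)), reverse=True)
    G_star = G = 100.0

    for quarter in range(40):
        if genuine:
            effect = genuine.pop(0)   # real work: both measures fall together
            G_star -= effect
            G -= effect
        else:
            G_star -= 1.0             # gaming: G* keeps falling, G has detached
        if quarter % 8 == 0:
            print(f"quarter {quarter:2d}: G* = {G_star:6.1f}  G = {G:6.1f}")

For the first twenty quarters G* and G fall in lockstep; after that G* keeps improving while G stands still.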

botogol

At work a large part of my job involves choosing G*, and I can report that Goodhart's Law is very powerful and readily observable.
Further: rational players in the workplace know full well that management desires G, and that G* is not well correlated with G; but nonetheless, if they are rewarded on G*, then that's what they will focus on.

The best solution - in my experience - is mentioned in the post: the balanced scorecard. Define several measures G1, G2, G3 and G4 that are normally correlated with G. The correlation is then more persistent: if all four measures improve, it is likely that G will improve.

G1, G2, G3 and G4 may be presented as simultaneous measures, or, if setting four measures in one go is too confusing for people trying to prioritise (the fewer the measures, the more powerful), they can be sequential: i.e. if you hope to improve G over two years, measure G1 for two quarters, then switch the measurement to G2 for the next two, and so on (obviously you don't tell people in advance). NB this approach can be effective, but will make you very unpopular.
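
A rough sketch of why the scorecard helps (a toy model, not anything from the post: the action values, and the use of min() as a stand-in for "all four measures must improve", are my own assumptions). Gaming moves one measure a lot; genuine work moves every measure, and the real goal G, a little; the agent simply picks whichever action its reward rule pays for.

    # Sketch: an agent greedily picks the action that most improves its
    # rewarded score. "Gaming" adds 2.0 to a single measure; "genuine" work
    # adds 0.5 to all four measures and 0.5 to the real goal G. The numbers
    # and the min() scorecard rule are assumptions for illustration.

    def run(reward):
        measures = [0.0] * 4
        G = 0.0
        for _ in range(20):
            # gain from gaming each individual measure
            game_gains = []
            for i in range(4):
                trial = measures[:]
                trial[i] += 2.0
                game_gains.append(reward(trial) - reward(measures))
            best_i = max(range(4), key=lambda i: game_gains[i])
            genuine_gain = reward([m + 0.5 for m in measures]) - reward(measures)
            if game_gains[best_i] > genuine_gain:
                measures[best_i] += 2.0           # gaming wins
            else:
                measures = [m + 0.5 for m in measures]
                G += 0.5                          # only genuine work moves G
        return G

    print("single measure G1      -> real G:", run(lambda m: m[0]))
    print("scorecard min(G1..G4)  -> real G:", run(lambda m: min(m)))

Rewarded on a single measure, the agent games it forever and G never moves; rewarded on the worst of four, gaming any one number is pointless, so genuine work wins. The sequential version relies on the same effect, provided people can't predict the switch.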

botogol

That's true (that they have biases), although I understand the training is to attend to the nature of the injury and the practicalities of the situation - e.g. danger to the firefighter - rather than the age of the victim.

However, what one might expect to see in firefighters is that ethical dilemmas like the trolley problem trigger the cerebral cortex more, and the amygdala less, than in other people.

Perhaps.

Unless of course the training works by manipulating the emotional response. So firefighters are just as emotional, but their emotions have been changed by their training.

This is the sort of problem Kahane was talking about when he said it is very difficult to interpret brain scans.

botogol

A person in the audience suggested taking firefighters, who sometimes face dilemmas very like this (do I try to save life-threatened person A or seriously injured baby B?), hooking them up to scanners and seeing if their brains work differently - the hypothesis being that they would make decisions in such dilemmas more 'rationally' and less 'emotionally', as a result of their experience and training, or of the predisposition that led them to become firefighters in the first place.

botogol

The opening was deliberate - it's a common way that newspaper diarists start their entries.... but perhaps it's a common way that British newspaper diarists start their entries, and sounds wrong to American ears. So I have changed it. Nations divided by a common language, etc.

botogol

Yes. People get bogged down in the practical difficulties. Another common one is whether you have the strength to throw the stranger off the bridge (might he resist your assault and even throw you off?).

I think the problem is the phrasing of the question. People ask 'would you push the fat man', but they should ask 'SHOULD you push the fat man'. A thought experiment is like an opinion poll: the phrasing of the question has a large impact on the answers given. Another reason to be suspicious of them.

botogol

No, I wasn't declaring it meaningless.

My (perhaps trivial) points were that all hypothetical thought experiments are necessarily conducted in Far mode, even when the thought experiment is about simulating Near modes of thinking. Does that undermine it a little?

And

  • while all Thought Experiments are Far
  • Actual Experiments are Near.

I was illustrating that with what I hoped was an amusing anecdote -- the bizarre experience I had last week of having the trolley problem discussed with the fat man actually personified and present in the room, sitting next to me, and how that nudged the thought experiment into something just slightly closer to a real experiment.

It's easy to talk about sacrificing one person's life to save five others, but hurting his feelings by appearing to be rude or unkind, in order to get at a logical truth, was harder. This is somewhat relevant to the subject of the talk - decisions may be made emotionally and then rationalised afterwards.

Look, I wasn't hoping to provoke one of Eliezer's 'clicks', just to raise a weekend smile and to discuss a scenario where lesswrong readers had no cached thought to fall back on.

botogol

:-( no, not a draft! It was just supposed to be light-hearted - fun, even - and to make a small point along the way.... it's a shame if lesswrong articles must be earnest and deep.

botogol

no, not at all, I don't think rational = unemotional (and I liked EY's article explaining how it is perfectly rational to feel sad ... when something sad happens).

But rationality does seem to be strongly associated with a constant meta-analytical process: always thinking about a decision, then thinking about the way we were thinking about the decision, and then thinking about the self-imposed axioms we have used to model the way that we were thinking about the meta-thinking, and some angst about whether there are undetected biases in the way that... yada yada yada.

which is all great stuff,

but I wondered whether rationalists are like that all the time, or whether they ever come home late, open a beer or two and pick their nose while transfixed by Czechoslovakian wrestling on ESPN, without stopping to wonder why they are doing it, and wouldn't it be more rational to go to bed already.
