We get some evidence that people value openness and financial transparency. This is vaguely useful for SIAI (they get evidence that the derivative of donations with respect to openness is probably slightly higher than they previously thought), but useless to anyone who is considering donating. What donors need to know is how good openness is, not how much other people value it. It also does nothing to address the good reasons for SIAI not to publish its AGI progress, reasons which don't apply to normal charities.
Also, he says you shouldn't try to maximise expected utility. This could be a nit-pick, but your risk aversion should be factored into your utility function; you shouldn't be risk-averse over utility itself.
This could be a nit-pick, but your risk aversion should be factored into your utility function
Not according to "Against Discount Rates" ... and I agree, though this may be a tangent.
It is better if risk aversion is dynamically generated by the agent's circumstances than hard-coded into the utility function.
This could be a nit-pick, but your risk aversion should be factored into your utility function
Not according to "Against Discount Rates" ... and I agree, though this may be a tangent.
I think these are two different things. I'd agree in opposing temporal discounting, but I'm not sure there's anything (normatively) problematic about being risk-averse with money.
(Also, I believe Larks's statement "your risk aversion should be factored into your utility function" wasn't necessarily meant to imply "you should be risk averse"; I read it to mean: if you're risk-averse, you don't need to put that outside the framework of expected utility maximization and decide not to always maximize expected utility; rather, your utility function itself can represent however much risk aversion you have.)
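To make that concrete, here is a minimal sketch (my own illustration, with an assumed utility function u(x) = log(x); nothing here comes from Larks) of how a concave utility function makes a plain expected-utility maximizer decline a fair gamble over money:

```python
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, wealth) pairs."""
    return sum(p * utility(w) for p, w in lottery)

log_utility = math.log  # strictly concave, hence risk-averse over money

# A fair 50/50 gamble (end with $50 or $150) versus a sure $100.
gamble = [(0.5, 50.0), (0.5, 150.0)]
sure_thing = [(1.0, 100.0)]

print(expected_utility(gamble, log_utility))      # ~4.462
print(expected_utility(sure_thing, log_utility))  # ~4.605
```

Both options have the same expected dollar value, yet the sure thing wins on expected utility; the risk aversion lives inside u, not outside the expected-utility framework. (By Jensen's inequality this holds for any concave u.)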
you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.
Isn't that the same as maximizing expected utility, but with a different utility function?
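A sketch of why the two framings agree (assumed numbers, not Baez's: an even-money bet won with probability 0.6): a log-utility maximizer refuses to stake most of its bankroll on a single favorable bet, yet takes the same edge in small repeatable pieces, all while doing nothing but maximizing expected utility:

```python
import math

def log_eu_gain(fraction, p_win=0.6):
    """Expected change in log-wealth from staking `fraction` of current
    wealth on an even-money bet won with probability p_win."""
    return (p_win * math.log(1 + fraction)
            + (1 - p_win) * math.log(1 - fraction))

print(log_eu_gain(0.9))  # ~ -0.54: the big one-shot stake is declined
print(log_eu_gain(0.1))  # ~ +0.015: the same edge in a small stake is taken
```

So "be risk averse when you can't repeat the bet" falls out of ordinary expected-utility maximization once the utility function is concave, rather than requiring a departure from it.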
you shouldn’t always maximize expected utility if you only live once
I thought that by the von Neumann–Morgenstern utility theorem a rational person has to maximize his expected utility.
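For reference, the standard statement (paraphrased from the textbook form): if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function $u$ over outcomes such that

\[
L \succeq M \iff \sum_i p_i\, u(x_i) \;\ge\; \sum_j q_j\, u(y_j),
\]

where lottery $L$ gives outcome $x_i$ with probability $p_i$ and $M$ gives $y_j$ with probability $q_j$. The theorem says an agent satisfying the axioms behaves as if maximizing the expectation of some $u$; it does not say what $u$ must be, which is why risk aversion over money is compatible with it.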
Mathematician and climate activist John Baez has finally commented on charitable giving. I think the opinions of highly educated experts who are not closely associated with LessWrong or the SIAI, but who have read most of the available material, are important for estimating the public and academic perception of risks from AI and the effectiveness with which those risks are communicated by LessWrong and the SIAI.
Desertopa asked:
John Baez replied:
What can one learn from this?