lessdazed comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

75 Post author: HoldenKarnofsky 18 August 2011 11:34PM




Comment author: lessdazed 20 August 2011 11:10:41AM -2 points

It seems to me that with continuing support, SIAI will be able to hire as many of the right programmers as we can find and effectively integrate into a research effort.

ADBOC. SIAI should never be content with any funding level. It should acquire the resources to hire, bribe, or divert people who might otherwise make breakthroughs on UAI, and set them to doing anything else.

Comment author: MichaelVassar 27 August 2011 04:04:36PM 0 points

That intuition seems to me to follow from the probably false assumption that if behavior X would, under some circumstances, be utility-maximizing, then it is also likely to be utility-maximizing to fund a non-profit to engage in behavior X. SIAI isn't a "do whatever seems to us to maximize expected utility" organization, because such vague goals don't make for a good organizational culture. Organizing and funding research into FAI and research inputs into FAI, plus doing normal non-profit fund-raising and outreach: that is a feasible non-profit directive.

Comment author: lessdazed 27 August 2011 11:02:07PM 0 points

It also follows from the assumption that the claims in any comment submitted on August 20, 2011 are true. Yet I do not believe this.

I had, to the best of my ability, considered the specific situation when giving my advice.

Any advice can be dismissed by suggesting it came from a too generalized assumption.

If you thought someone was about to foom an unfriendly AI, you would do something about it, and you would not wait to properly update your 501(c) forms first.