Academian comments on Why CFAR? The view from 2015 - Less Wrong

46 Post author: PeteMichaud 23 December 2015 10:46PM


Comment author: Academian 19 December 2015 04:09:51AM *  9 points

I would expect not for a paid workshop! Unlike CFAR's core workshops, which are highly polished and get median 9/10 and 10/10 "are you glad you came" ratings, MSFP

  • was free and experimental,

  • produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and

  • produced several others who were willing hires by the end of the program, and whom I would absolutely vote to hire if more resources (both funding and personnel) were available to hire them.

Comment author: IlyaShpitser 19 December 2015 05:56:24PM *  1 point

I am not saying it wasn't a worthwhile effort (and I agreed to help look into this data, right?). I am just saying that if your definition of "resounding success" is one that cannot be used to market this workshop in the future, that definition is a little peculiar...

In general, it's hard to find effects of anything in the data.

Comment author: Benito 20 December 2015 06:56:18AM 1 point

The value of running a workshop and the things you can use to market a workshop are distinct, and that seems to explain it.

The fact that a workshop is in a lovely venue is a good thing for marketing, and irrelevant to the value of running it. That is not confusing.

Comment author: IlyaShpitser 20 December 2015 04:59:26PM 1 point

Sure, but by the same token, the things used to market a charity and the effectiveness of that charity are distinct.

People worry about "effectiveness." Is that going out the window in this case?

Comment author: Academian 20 December 2015 11:05:06PM *  5 points

See Nate's comment above:

http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/cz99

And, FWIW, I would also consider anything that spends less than $100k to cause a small number of top-caliber researchers to become full-time AI safety researchers to be extremely "effective".

[This is in fact a surprisingly difficult problem to solve. Aside from personal experience seeing the difficulty of causing people to become safety researchers, I have also been told by some rich, successful AI companies earnestly trying to set up safety research divisions (yay!) that they are unable to hire appropriately skilled people to work full-time on safety.]