MichaelVassar comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

Post author: HoldenKarnofsky 18 August 2011 11:34PM 75 points

Comment author: multifoliaterose 19 August 2011 04:20:22PM 0 points

"I can't endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual."

Agree here. Do you think there's a strong case for focusing directly on the FAI problem rather than working toward FAI indirectly via nuclear deproliferation [1] [2]? If so, I'd be interested in hearing more.

Comment author: MichaelVassar 20 August 2011 12:36:14AM 5 points

Given enough financial resources to actually endow research chairs and make a credible commitment to researchers, and given good enough researchers, I'd definitely focus SIAI more directly on FAI.

Comment author: multifoliaterose 20 August 2011 12:55:29AM 3 points

I totally understand holding off on hiring research faculty until more funding is available, but what would the researchers hypothetically do given such funding? Does anyone have any ideas for how to do Friendly AI research?

I think (but am not sure) that I would give top priority to FAI if I had the impression that there are viable paths for research that have yet to be explored (paths that are systematically more likely to reduce x-risk than to increase it), but I haven't seen a clear argument that this is the case.