Will_Newsome comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I haven't even finished reading this post yet, but it's worth making explicit (because of the obvious connections to existential risk strategies in general) that the philanthropy in this case should arguably go towards research that searches for and identifies things like lab universe scenarios; research into how to search for or research such things (e.g. policies about dealing with basilisks at the individual and group levels); research into how to structure brains such that those brains won't completely fail at said research, or at research generally; et cetera ad infinitum. Can someone please start a non-profit dedicated to the research and publication of "going meta"? Please?
ETA: I'm happy to see you talk about similar things in counterargument 3, but perhaps you could fuse an FAI (not necessarily CEV) argument with the considerations I mentioned above, e.g. put all of your money into building a passable oracle AI to help you think about how to be an optimal utilitarian (perhaps given some amount of information about what you think "your" "utility function(s)" might be, or what you think morality is), or something more meta than that.
Research into bootstrapping current research to ideal research; research into cognitive comparative advantage; research into how to convince people to research such things or support the research of such things; research into what to do given that practically no one can research any of these things, and even if they could, no one would pay them to...