JGWeissman comments on Should I believe what the SIAI claims? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm currently preparing for the Summit so I'm not going to hunt down and find links. Those of you who claimed they wanted to see me do this should hunt down the links and reply with a list of them.
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for themselves is important. But when they offer "don't put all your eggs in one basket" as an abstract argument, we will generally point out that this is, in fact, the diametrically wrong direction in which abstract argument should be pushing.
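As an illustrative sketch of the allocation rule above (the charities, probabilities, and utility curves here are all invented for the example): discount each option's utility by the probability that its claims are true, then send every marginal dollar wherever the discounted marginal utility per dollar is currently highest.

```python
# Toy model of "all eggs in the basket with the highest marginal
# expected utility per dollar". All numbers are invented.

# (name, probability the claims are true, scale of the utility curve)
charities = [
    ("A", 0.10, 1000.0),  # improbable claims, huge payoff if true
    ("B", 0.90, 10.0),    # probable claims, modest payoff
]

def marginal_eu_per_dollar(p, scale, dollars_so_far):
    # Diminishing returns: utility = scale * log(1 + dollars), so the
    # next dollar is worth p * scale / (1 + dollars) in expectation.
    return p * scale / (1.0 + dollars_so_far)

budget = 100
allocation = {name: 0 for name, _, _ in charities}
for _ in range(budget):
    # Each dollar goes wherever its marginal expected utility is highest.
    name, p, scale = max(
        charities,
        key=lambda c: marginal_eu_per_dollar(c[1], c[2], allocation[c[0]]),
    )
    allocation[name] += 1

print(allocation)
```

Even at only a 10% chance of its claims being true, charity A absorbs most of this budget, because its probability-discounted marginal utility per dollar stays higher until diminishing returns finally pull it down to B's level.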
Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)
Read Eric Drexler's Nanosystems. (Someone find an introduction by Foresight and link to it, that sort of thing is their job.) Also the term you want is not "grey goo", but never mind.
Exponentials are Kurzweil's thing. They aren't dangerous. See the Yudkowsky-Hanson Foom Debate.
Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Spending on charitable efforts that just makes you feel good should be counted as selfish. If you are entirely selfish but can think past a hyperbolic discount rate, then it's still possible you can get more hedons per dollar by donating to existential-risk projects.
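To unpack "think past a hyperbolic discount rate": a hyperbolic discounter values a reward of size A at delay D as roughly A / (1 + kD), which produces preference reversals an exponential discounter never shows. A minimal sketch with made-up numbers (the k value, payoffs, and dates are all invented):

```python
# Hyperbolic discounting: value = amount / (1 + k * delay_in_days).
# k and the payoffs are invented for illustration.
def hyperbolic_value(amount, delay_days, k=1.0):
    return amount / (1.0 + k * delay_days)

# Choice: $100 at day 365, or $110 at day 370.
# Viewed from day 0, both are distant and the larger-later reward wins.
far_small = hyperbolic_value(100, 365)
far_large = hyperbolic_value(110, 370)

# Viewed from day 364, the delays are 1 day vs 6 days, and the
# smaller-sooner reward now wins: a preference reversal.
near_small = hyperbolic_value(100, 1)
near_large = hyperbolic_value(110, 6)

print(far_large > far_small)    # True: takes $110 while both are distant
print(near_small > near_large)  # True: takes $100 once it is imminent
```

The same pair of rewards gets ranked both ways depending only on when the agent is asked, which is why merely having a discount rate isn't enough; it has to be one you can think past.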
Your difficulties in judgment should be factored into a probability estimate. Ambiguity aversion may interfere with warm glows, but we can demonstrate the preference reversals and inconsistent behaviors that result whenever ambiguity aversion doesn't cash out as a probability estimate and factor straight into expected utility.
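The standard demonstration that ambiguity aversion can't cash out as any probability estimate is the Ellsberg urn (a textbook example, not from this comment): 30 red balls and 60 balls that are black or yellow in unknown proportion. People typically prefer betting on red over black, yet also prefer betting on black-or-yellow over red-or-yellow. The sketch below checks that no single probability for "black" makes both choices expected-utility-consistent:

```python
# Ellsberg urn: 30 red balls, 60 black-or-yellow in unknown proportion.
# P(red) = 1/3; let p = P(black), so P(yellow) = 2/3 - p.
# Typical choices: bet-on-red over bet-on-black, AND
# bet-on-black-or-yellow over bet-on-red-or-yellow.
p_red = 1.0 / 3.0
consistent = []
for i in range(1, 1000):
    p_black = i * (2.0 / 3.0) / 1000  # scan p over (0, 2/3)
    p_yellow = 2.0 / 3.0 - p_black
    # First choice is EU-consistent only if P(red) > P(black).
    prefers_red = p_red > p_black
    # Second choice is EU-consistent only if
    # P(black) + P(yellow) > P(red) + P(yellow), i.e. P(black) > 1/3.
    prefers_black_or_yellow = (p_black + p_yellow) > (p_red + p_yellow)
    if prefers_red and prefers_black_or_yellow:
        consistent.append(p_black)

print(consistent)  # [] -- no probability rationalizes both choices
```

The two preferences jointly require P(black) < 1/3 and P(black) > 1/3, so the behavior cannot be represented as maximizing expected utility under any probability assignment; it is exactly the kind of inconsistency the comment is pointing at.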
Michael Vassar is leading. I'm writing a book. When I'm done writing the book I plan to learn math for a year. When I'm done with that I'll swap back to FAI research hopefully forever. I'm "leading" with respect to questions like "What is the form of the AI's goal system?" but not questions like "Do we hire this guy?"
Someone link to relevant introductions of ambiguity aversion as a cognitive bias and do the detailed explanation on the marginal utility thing.
Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can't be viewed as expected utility maximization?
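One route to the requested demonstration (my own toy numbers, not from the comment): a satisficer who maximizes the probability of clearing an aspiration level is, at best, an expected-utility maximizer with a step-function utility, blind to everything above the threshold. When the aspiration level shifts with context, its preference over the same two fixed gambles reverses outright, so the combined behavior can't be viewed as maximizing any one utility function:

```python
# A satisficer maximizes P(outcome >= aspiration) rather than expected
# utility. Gambles and thresholds below are invented for illustration.
gamble_a = [(0.51, 100), (0.49, 0)]          # 51% chance of exactly $100
gamble_b = [(0.50, 1_000_000), (0.50, 99)]   # 50% chance of $1,000,000

def p_success(gamble, aspiration):
    return sum(p for p, payoff in gamble if payoff >= aspiration)

def satisficer_choice(aspiration):
    a = p_success(gamble_a, aspiration)
    b = p_success(gamble_b, aspiration)
    return "A" if a > b else "B"

# With aspiration $100, A barely wins (0.51 vs 0.50) even though B's
# upside is four orders of magnitude larger.
print(satisficer_choice(100))   # "A"
# Raise the aspiration to $1,000 and the preference reverses, with the
# gambles themselves unchanged.
print(satisficer_choice(1000))  # "B"
```

Since the two gambles never changed, strictly preferring A in one context and B in the other is a preference reversal: no single preference ordering, and hence no single expected-utility representation, produces both choices.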
Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don't look at just one side and think about how much you doubt it and can't guess. Look at both of them. Also, read the FOOM debate.
Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.
Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)
It sounds like you haven't done enough reading in key places to be able to judge the overall credence from your own estimates.
You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There's no prestige payoff for them in it, so why would they?
You have a sense of inferential distance. That's not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.
Index to the FOOM debate
Antipredictions