Major update here.
The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is insufficiently clear.
Most of the arguments involve a few propositions and the use of probability and utility calculations to justify action. So much here is uncertain that I am unable to judge any of the nested probability estimates. And even if you tell me the numbers, where is the data on which you base those estimates?
There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.
I know that what I'm saying may simply be due to a lack of knowledge and education; that is why I am inquiring about it. How many of you who currently support the SIAI are able to analyse the reasoning that led you to support it in the first place, or at least to substantiate your estimates with evidence other than a coherent internal logic?
I can follow much of the reasoning and arguments on this site, but I'm currently unable to judge their overall credibility. Are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party, and no peer review. Yet people are willing to donate considerable amounts of money.
I'm concerned that the SIAI and its supporters are updating on fictional evidence, however consistently they do so. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models, or are your propositions based on fact?
An example here is the use of the Many-worlds interpretation. Itself a logical implication, can it be used to make further inferences and estimates without additional evidence? MWI might be the only consistent non-magical interpretation of quantum mechanics. The problem is that such conclusions are, I believe, widely considered insufficient ground for further speculation and estimation. Isn't that similar to what you are doing when speculating about the possibility of superhuman AI and its consequences? What I'm trying to say is that if the cornerstone of your argument, one of your basic tenets, is the likelihood of superhuman AI, then, valid as that speculation may be given what we know about reality, you are already in over your head with debt: debt in the form of other kinds of evidence. This is not to say that the hypothesis is false, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supporting argumentation on such premises, on ideas that are themselves not based on firm ground.
The gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies: it is fiction. Imagination allows for endless possibilities, while scientific evidence provides hints about what might be possible and what impossible. Science gives you the ability to assess your data. Any hint that empirical criticism provides is new information on which you can build, not because it bears truth value but because it gives you an idea of what might be possible, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW rests on a consistent internal logic alone, i.e. imagination or fiction, or on something sufficiently grounded in empirical criticism to firmly substantiate the strong calls for action proclaimed by the SIAI.
Further, do you have an explanation for the fact that Eliezer Yudkowsky is the only semi-popular person who is aware of something that might shatter the universe? Why are people like Vernor Vinge, Robin Hanson or Ray Kurzweil not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI? Why aren't Eric Drexler, Gary Drescher or AI researchers like Marvin Minsky worried to the extent that they signal their support?
I'm talking to quite a few educated people outside this community. They do not doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to focus on the possibilities depicted by the SIAI while neglecting other near-term risks that might wipe us out as well.
I believe that many people out there know a lot more than I do about related topics, and yet they seem not nearly as concerned about the relevant issues as the average Less Wrong member. I could have named other people; that's beside the point. It's not just Hanson or Vinge but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as mathematically literate as Eliezer Yudkowsky, or do they somehow teach their own methods of reasoning and decision making without using them?
What do you expect me to do, just believe Eliezer Yudkowsky? The way I believed so much in the past that made sense but turned out to be wrong? Maybe after a few years of study I'll know more.
...
2011-01-06: As this post received over 500 comments I am reluctant to delete it, but I feel that it is outdated and that I could do much better today. The post has been slightly improved to address some shortcomings; it has not been completely rewritten, nor have its conclusions been changed. Please keep this in mind when reading comments that were written before this update.
2012-08-04: A list of some of my critical posts can be found here: SIAI/lesswrong Critiques: Index
I'm currently preparing for the Summit so I'm not going to hunt down and find links. Those of you who claimed you wanted to see me do this should hunt down the links and reply with a list of them.
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for themselves is important. But when they talk about "putting all eggs in one basket" as an abstract argument, we will generally point out that this is, in fact, the diametrically wrong direction in which abstract argument should be pushing.
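To make the arithmetic concrete, here is a minimal sketch of that comparison with entirely invented numbers and hypothetical charity names; none of these figures are actual SIAI estimates:

```python
# Toy comparison of probability-discounted expected utility per dollar.
# All numbers and charity names are invented for illustration.

charities = {
    # name: (probability the core claim is true, utility produced per dollar if true)
    "charity_A": (0.01, 10_000.0),  # low probability, huge payoff if right
    "charity_B": (0.90, 50.0),      # high probability, modest payoff
}

def discounted_eu_per_dollar(p_true, utility_per_dollar):
    """Discount the per-dollar utility by the probability that the claim is true."""
    return p_true * utility_per_dollar

scores = {name: discounted_eu_per_dollar(p, u) for name, (p, u) in charities.items()}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f} expected utils per dollar")

# As long as the marginal utility per dollar of the best option does not drop
# below the runner-up over the amount you can donate, the whole budget goes to
# that single best option rather than being split.
best = max(scores, key=scores.get)
print(f"Donate the whole budget to: {best}")
```

The point is purely structural: whichever option scores highest keeps scoring highest for each additional dollar until its marginal utility falls, so splitting the budget before that point lowers the total.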
Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)
Read Eric Drexler's Nanosystems. (Someone find an introduction by Foresight and link to it; that sort of thing is their job.) Also, the term you want is not "grey goo", but never mind.
Exponentials are Kurzweil's thing. They aren't dangerous. See the Yudkowsky-Hanson Foom Debate.
Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Things you spend on charitable efforts that just make you feel good should be considered selfish. If you are entirely selfish but you can think past a hyperbolic discount rate then it's still possible you can get more hedons per dollar by donating to existential risk projects.
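As a toy illustration of the hyperbolic-discounting point (arbitrary numbers, a sketch rather than anyone's actual model):

```python
# Toy illustration with arbitrary numbers: hyperbolic discounting reverses the
# preference between a smaller-sooner and a larger-later reward as they draw
# near; exponential discounting never does.

def hyperbolic(amount, delay, k=1.0):
    """Hyperbolic discounting: value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay)

def exponential(amount, delay, delta=0.8):
    """Exponential discounting: value = amount * delta ** delay."""
    return amount * delta ** delay

small, large = 50, 100              # smaller-sooner vs. larger-later reward
for delay_to_small in (5, 0):       # far in advance, then with the small reward imminent
    delay_to_large = delay_to_small + 2
    hyp = "large" if hyperbolic(large, delay_to_large) > hyperbolic(small, delay_to_small) else "small"
    exp = "large" if exponential(large, delay_to_large) > exponential(small, delay_to_small) else "small"
    print(f"delay to small reward = {delay_to_small}: hyperbolic prefers {hyp}, exponential prefers {exp}")

# The hyperbolic agent flips from "large" to "small" as the rewards approach;
# the exponential agent's preference never flips, whatever the delay.
```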
Your difficulties in judgment should be factored into a probability estimate. Your sense of aversion to ambiguity may interfere with warm glows, but we can demonstrate preference reversals and inconsistent behaviors that result from ambiguity aversion which doesn't cash out as a probability estimate and factor straight into expected utility.
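One standard demonstration of such a reversal is the Ellsberg urn; the sketch below uses the usual textbook setup rather than anything specific to this discussion:

```python
# Sketch of the standard Ellsberg urn (a textbook example, not from this thread):
# 90 balls, 30 red, 60 black or yellow in unknown proportion. People typically
# bet on red rather than black, yet bet on "black or yellow" rather than
# "red or yellow". No single probability for "black" rationalizes both choices.

def expected_payoffs(p_black):
    """Expected payoffs of the four $100 bets, given an assumed P(black)."""
    p_red = 30 / 90
    p_yellow = 60 / 90 - p_black
    return {
        "red": 100 * p_red,
        "black": 100 * p_black,
        "red_or_yellow": 100 * (p_red + p_yellow),
        "black_or_yellow": 100 * (p_black + p_yellow),
    }

# Look for any P(black) that makes the typical pattern of choices consistent
# with maximizing expected utility.
consistent = []
for i in range(61):
    p_black = i / 90
    eu = expected_payoffs(p_black)
    if eu["red"] > eu["black"] and eu["black_or_yellow"] > eu["red_or_yellow"]:
        consistent.append(round(p_black, 3))

print(consistent)  # [] -- the choice pattern cannot be cashed out as a probability estimate
```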
Michael Vassar is leading. I'm writing a book. When I'm done writing the book I plan to learn math for a year. When I'm done with that I'll swap back to FAI research hopefully forever. I'm "leading" with respect to questions like "What is the form of the AI's goal system?" but not questions like "Do we hire this guy?"
Someone link to relevant introductions of ambiguity aversion as a cognitive bias and do the detailed explanation on the marginal utility thing.
Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can't be viewed as expected utility maximization?
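One way that demonstration might go, assuming a satisficer who simply accepts the first option meeting an aspiration level (a toy model, not anything anyone here has proposed):

```python
# Toy model (my own construction, not from the thread): a satisficer accepts the
# first option whose utility meets an aspiration level.

ASPIRATION = 50.0

def satisficer_choice(options):
    """Pick the first option meeting the aspiration level, else the best one."""
    for name, utility in options:
        if utility >= ASPIRATION:
            return name
    return max(options, key=lambda o: o[1])[0]

menu_1 = [("good_enough", 60.0), ("best", 90.0)]
menu_2 = [("best", 90.0), ("good_enough", 60.0)]

# Same two options, different presentation order, different choice:
print(satisficer_choice(menu_1))  # good_enough
print(satisficer_choice(menu_2))  # best

# An expected-utility maximizer would pick the same option in both menus, so the
# order-dependent pattern above cannot be represented as maximizing any fixed
# utility function over the options.
```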
Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don't look at just one side and think about how much you doubt it and can't guess. Look at both of them. Also, read the FOOM debate.
Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.
Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)
It sounds like you haven't done enough reading in key places to expect to be able to judge the overall credibility from your own estimates.
You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There's no prestige payoff for them in it, so why would they?
You have a sense of inferential distance. That's not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.
Where are the formulas? What are the variables? Where is this method exemplified, reflecting the decision process of someone who is already convinced, preferably someone within the SIAI?
That is part of what I call transparency and a foundational and reproducible corroboration o...