(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)
Although this may not answer your questions, here are my reasons for supporting SIAI:
I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.
It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"
No one else cares about the big
I'm currently preparing for the Summit so I'm not going to hunt down and find links. Those of you who claimed they wanted to see me do this should hunt down the links and reply with a list of them.
Given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool, the main thing is the variance in the tiny fraction of their income people donate to charity in the first place and so the amount of warm glow people generate for th...
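The allocation logic in the quoted argument can be sketched numerically. This is a toy illustration with made-up probabilities and utilities (none of these numbers come from SIAI or the thread): each cause's payoff is discounted by the probability its claims are true, and with locally constant marginal utility the whole budget goes to the single option with the highest expected utility per dollar.

```python
# Toy model of "discount by probability, then fund the highest marginal EU
# per dollar" -- all numbers are hypothetical.
causes = {
    "cause_A": {"p_true": 0.30, "utility_per_dollar_if_true": 10.0},
    "cause_B": {"p_true": 0.02, "utility_per_dollar_if_true": 500.0},
    "cause_C": {"p_true": 0.90, "utility_per_dollar_if_true": 1.0},
}

def marginal_eu_per_dollar(cause):
    """Expected utility per dollar: probability times conditional payoff."""
    return cause["p_true"] * cause["utility_per_dollar_if_true"]

budget = 1000.0

# With (locally) constant marginal utility, the optimum is a corner solution:
# everything goes to the argmax, nothing is split across baskets.
best = max(causes, key=lambda name: marginal_eu_per_dollar(causes[name]))
allocation = {name: (budget if name == best else 0.0) for name in causes}

print(best)        # cause_B wins: 0.02 * 500 = 10 > 3.0 > 0.9
print(allocation)
```

Scope insensitivity is why this all-in answer feels wrong: the warm glow doesn't scale with the numbers, but the expected utilities do.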
An example here is the treatment and use of MWI (a.k.a. the "many-worlds interpretation") and the conclusions, arguments and further estimations based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's it, an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that's beside the point. The problem here is that such conclusions are, I believe, widely considered to be weak evidence to base further speculations and estimations on.
Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.
...What I'm trying to argue here is that if the cornerstone of your argumentation, if one of your basic tenets is the likelihood of exponential evolving superhuman AI, although a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence
Questionable. Is smarter-than-human intelligence possible, in a sense comparable to the difference between chimps and humans? To my knowledge we have no evidence to this end.
What would you accept as evidence?
Would you accept sophisticated machine learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can't work with such high-dimensional data?
Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
Would you accept a chess program which could crush any human chess player who ever lived? Kasparov at ELO 2851, Rybka at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now even further beyond Kasparov at his peak than Kasparov was beyond a new grandmaster. And it's not like Rybka or the other chess AIs will weaken with age.
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
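To put the chess numbers above in perspective: under the standard Elo model, expected score depends only on the rating difference. A quick sketch using the ratings quoted above (engine and human ratings come from different pools, so this is only illustrative):

```python
# Standard Elo expected-score formula for player A against player B.
def elo_expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

kasparov, rybka, new_grandmaster = 2851, 3265, 2500

# Kasparov (2851) vs. a fresh grandmaster (2500): ~0.88 expected score.
print(elo_expected_score(kasparov, new_grandmaster))

# Rybka (3265) vs. peak Kasparov (2851): ~0.92 -- an even larger gap,
# matching the comparison in the comment above.
print(elo_expected_score(rybka, kasparov))
```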
Can I say, first of all, that if you want to think realistically about a matter like this, you will have to find better authorities than science-fiction writers. Their ideas are generally not their own, but come from scientific and technological culture or from "futurologists" (who are also a very mixed bunch in terms of intellect, realism, and credibility); their stories present speculation or even falsehood as fact. It may be worthwhile going "cold turkey" on all the SF you have ever read, bearing in mind that it's all fiction that was ground out, word by word, by some human being living a very ordinary life, in a place and time not very far from you. Purge all the imaginary experience of transcendence from your system and see what's left.
Of course science-fictional thinking, treating favorite authors as gurus, and so forth is endemic in this subculture. The very name, "Singularity Institute", springs from science fiction. And SF occasionally gets things right. But it is far more a phenomenon of the time, a symptom of real things, rather than a key to understanding reality. Plain old science is a lot closer to being a reliable guide to reality, thou...
Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
This claim can be broken into two separate parts:
For 1: looking at current technology trends, Sandberg & Bostrom estimate that we should have the technology needed for whole brain emulation around 2030-2050 or so, at least assuming that it gets enough funding and that Moore's law keeps up. Even if there isn't much of an actual interest in whole brain emulations, improving scanning tools are likely to revolutionize neuroscience. Of course, respected neuroscientists are already talking about reverse-engineering of the brain as being within reach. If we are successful at reverse engineering the brain, then AI is a natural result.
As for 2: as Eliezer mentioned, this is pretty much an antiprediction. Human minds are a particular type of architecture, running on a particular type of hardware: it would be an amazing coincidence if it just happened that our intelligence couldn't be drastically improved upon. We already know that we're insanely biased, to the point of people ...
Is there more to this than "I can't be bothered to read the Sequences - please justify everything you've ever said in a few paragraphs for me"?
Criticism is good, but this criticism isn't all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.
Getting annoyed (at one of your own donors!) for such a request is not a way to win.
I don't begrudge SIAI at all for using Less Wrong as a platform for increasing its donor base, but I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor. You can ask Eliezer to not get annoyed, but is it fair to expect all the other LW regulars to do the same as well?
I'm not sure what the solution is to this problem, but I'm hoping that somebody is thinking about it.
These are reasonable questions to ask. Here are my thoughts:
- Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
- Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).
Virtually certain that these things are possible in our physics. It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it's hard to imagine that recursive improvement would cap out any time soon. At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.
- The likelihood of exponential growth versus a slow development over many centuries.
- That it is worth it to spend most on a future whose likelihood I cannot judge.
These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn't actually possible for a long time, then funding ...
I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:
...If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others
David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.
He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave that. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field, thanks to public ignorance and the affect heuristic. Comparisons are made to genetic engineering of agricultural crops, where public attention seems to be harmful on net in unduly slowing the development of more productive plants.
I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don't seem good.
I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.
The questions of speed/power of AGI and possibility of its creation in the near future are not very important. If AGI is fast and near, we must work on FAI faster, but we must work on FAI anyway.
The reason to work on FAI is to prevent any non-Friendly process from eventually taking control over the future, however fast or slow, suddenly powerful or gradual it happens to be. And the reason to work on FAI now is because the fate of the world is at stake. The main anti-prediction to get is that the future won't be Friendly if it's not specifically made Friendly, even if it happens slowly. We can as easily slowly drift away from things we value. You can't optimize for something you don't understand.
It doesn't matter if it takes another thousand years, we still have to think about this hugely important problem. And since we can't guarantee that the deadline is not near, expected utility calculation says we must still work as fast as possible, just in case. If AGI won't be feasible for a long while, that's great news, more time to prepare, to understand what we want.
(To be clear, I do believe that AGIs FOOM, and that we are at risk in the near future, but the arguments for that are informal and difficult to communicate, while accepting these claims is not necessary to come to the same conclusion about policy.)
I'm not exactly an SIAI true believer, but I think they might be right. Here are some questions I've thought about that might help you out. I think it would help others out if you told us exactly where you'd be interested in getting off the boat.
Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.
Dawkins agrees with EY
Richard Dawkins states that he is frightened by the prospect of superhuman AI and even mentions recursion and intelligence explosion.
I was disappointed watching the video relative to the expectations I had from your description.
Dawkins talked about recursion as in a function calling itself, as an example of the sort of the thing that may be the final innovation that makes AI work, not an intelligence explosion as a result of recursive self-improvement.
I was not sure whether to downvote this post for its epistemic value or upvote it for its instrumental value (stimulating good discussion).
I ended up downvoting; I think this forum deserves better epistemic quality (I paused top-level posting myself for this reason). I also donated to SIAI, because its value was once again validated for me by the discussion (though I have some reservations about the apparent eccentricity of the SIAI folks, which is understandable (dropping out of high school is to me evidence of high rationality) but counterproductive (not having enough accepted a...
I don't think this post was well written, at the least. I didn't even understand the tl;dr.
tldr; Is the SIAI evidence-based or merely following a certain philosophy? I'm currently unable to judge if the Less Wrong community and the SIAI are updating on fictional evidence or if the propositions, i.e. the basis for the strong arguments for action that are proclaimed on this site, are based on fact.
I don't see much precise expansion on this, except for MWI? There's a sequence on it.
...And that is my problem. Given my current educational background and know
I don't understand why this post has upvotes.
I think the obvious answer to this is that there are a significant number of people out there, even out there in the LW community, who share XiXiDu's doubts about some of SIAI's premises and conclusions, but perhaps don't speak up with their concerns either because a) they don't know quite how to put them into words, or b) they are afraid of being ridiculed/looked down on.
Unfortunately, the tone of a lot of the responses to this thread lead me to believe that those motivated by the latter option may have been right to worry.
Yeah, I agree (no offense XiXiDu) that it probably could have been better written, cited more specific objections etc. But the core sentiment is one that I think a lot of people share, and so it's therefore an important discussion to have. That's why it's so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, "low g-factor") if they have trouble swallowing certain parts of the SIAI position.
What are you considering as pitching in? That I'm donating as I am, or that I am promoting you, LW and the SIAI all over the web, as I am doing?
You simply seem to take my post as hostile attack rather than the inquiring of someone who happened not to be lucky enough to get a decent education in time.
All right, I'll note that my perceptual system misclassified you completely and consider that concrete reason to doubt it from now on.
Sorry.
If you are writing a post like that one it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I'm dealing with "the sort of thing said by someone who actually helps" and not "the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them", which is how my perceptual system classified that post.
The Summit is coming up and I've got lots of stuff to do right at this minute, but I'll top-comment my very quick attempt at pointing to information sources for replies.
Clippy, you represent a concept that is often used to demonstrate what a true enemy of goodness in the universe would look like, and you've managed to accrue 890 karma. I think you've gotten a remarkably good reception so far.
On the other hand, I suspect Scientology not to be a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I'd join.
SCIENTOLOGY IS DANGEROUS. Scientology is not a joke and joining them is not something to be joked about. The fifth level of precaution is absolutely required in all dealings with the Church of Scientology and its members. A few minutes of research with Google will turn up extraordinarily serious allegations against the Church of Scientology and its top leadership, including allegations of brainwashing, abducting members into slavery in their private navy, framing their critics for crimes, and large-scale espionage against government agencies that might investigate them.
I am a regular Less Wrong commenter, but I'm making this comment anonymously because Scientology has a policy of singling out critics, especially prominent ones but also some simply chosen at random, for harassment and attacks. They are very clever and vicious in the nature of the attacks they use, which have included libel, abusing the legal system,...
It has seemed to me for a while that a number of people will upvote any post that goes against the LW 'consensus' position on cryonics/Singularity/Friendliness, so long as it's not laughably badly written.
I don't think anything Eliezer can say will change that trend, for obvious reasons.
However, most of us could do better in downvoting badly argued or fatally flawed posts. It amazes me that many of the worst posts here won't stay below 0 for any length of time, and even when they dip, it isn't very far. Docking someone's karma isn't going to kill them, folks. Do everyone a favor and use those downvotes.
"Eliezer Yudkowsky facts" is meant to be fun and entertainment. Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes? Obviously many people did find the post amusing (judging from its 47 votes), even if you didn't. If those jokes were not posted, then something of real value would have been lost.
The situation with XiXiDu's post is different because almost everyone seems to agree that it's bad, and those who voted it up did so only to "stimulate discussion". But if they didn't vote up XiXiDu's post, it's quite likely that someone would eventually write up a better post asking similar questions and generating a higher quality discussion, so the outcome would likely be a net improvement. Or alternatively, those who wanted to "stimulate discussion" could have just looked in the LW archives and found all the discussion they could ever hope for.
This post makes very weird claims regarding what SIAI's positions would be.
"Spend most on a particular future"? "Eliezer Yudkowsky is the right and only person who should be leading"?
It doesn't seem to me at all that claims like these would be SIAI's position. Why doesn't the poster provide references for these weird claims?
Here's a good reference for what SIAI's position actually is:
I think there are very good questions in here. Let me try to simplify the logic:
First, the sociological logic: if this is so obviously serious, why is no one else proclaiming it? I think the simple answer is that a) most people haven't considered it deeply and b) someone has to be first in making a fuss. Kurzweil, Stross, and Vinge (to name a few that have thought about it at least a little) seem to acknowledge a real possibility of AI disaster (they don't make probability estimates).
Now to the logical argument itself:
a) We are probably at risk from the...
The Charlie Stross example seems to be less than ideal. Much of what Stross has written touches upon or deals intensely with issues connected to runaway AI. For example, the central premise of "Singularity Sky" involves an AI in the mid 20th century going from stuck in a lab to godlike in possibly a few seconds. His short story "Antibodies" focuses on the idea that very bad fast burns occur very frequently. He also has at least one (unpublished) story the central premise of which is that Von Neumann and Turing proved that P=NP and ...
This is an attempt (against my preference) to defend SIAI's reasoning.
Let's characterize the predictions of the future into two broad groups: (1) business as usual, or steady-state; (2) aware of various alarmingly exponential trends, broadly summarized as "Moore's law". Let's subdivide the second category into two subgroups: (2a) attempting to take advantage of the trends in a roughly (kin-)selfish manner; (2b) attempting to behave extremely unselfishly.
If you study how the world works, the lack of steady-state-ness is everywhere. We cannot use fossi...
- That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.
I don't believe that is necessarily true, just that no one else is doing it. I think other teams working specifically on FAI would be a good thing, provided they were competent enough not to be dangerous.
Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously wrong way. When I arrived I had a different view on mora...
That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.
That's just a weird claim. When Richard Posner or David Chalmers does writing in the area SIAI folk cheer, not boo. And I don't know anyone at SIAI who thinks that the Future of Humanity Institute's work in the area isn't a tremendously good thing.
Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously wrong way.
Have you looked into the philosophical literature?
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) (Thanks Kevin)
...SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that: progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the
At least one of those people is a member on this site.
If you're referring to Gary Drescher, I forwarded him a link of your post, and asked him what his views of SIAI actually are. He said that he's tied up for the next couple of days, but will reply by the weekend.
You know what I would call bizarre? That someone writes in bold and all caps calling someone an idiot and afterwards banning his post. All that based on ideas that themselves are resulting from and based on unsupported claims. That is what EY is doing and I am trying to assess the credibility of such reactions.
EY is one of the smartest people on the planet and this has been his life's work for about 14 years. (He started SIAI in 2000.) By your own admission, you do not have the educational achievements necessary to evaluate his work, so it is not surprising that a small fraction of his public statements will seem bizarre to you because 14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at very great inferential distance from any of your beliefs.
Humans are designed (by natural selection) to mistrust statements at large inferential distances from what they already believe. Humans were not designed for a world (like the world of today) where there exists so much accurate knowledge of reality that no one can know it all, and people have to specialize. Part of the process of becoming educated is learning to ignore your natural human incredulity at stateme...
Good thing at least some people here are willing to think critically.
I know these are unpopular views around here, but for the record:
I recently followed a link hole that got me here.
MIRI is not now what SIAI was then, so this isn't the most pressing of questions, but: what was the major update? It is, I'm sad to say, a victim of link rot.
Greg Egan and the SIAI?
I completely forgot about this interview, so I already knew why Greg Egan isn't that worried:
… I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any othe
Here's the Future of Humanity Institute's survey results from their Global Catastrophic Risks conference. The median estimate of extinction risk by 2100 is 19%, with 5% for AI-driven extinction by 2100:
http://www.fhi.ox.ac.uk/selected_outputs/fohi_publications/global_catastrophic_risks_survey
Unfortunately, the survey didn't ask for probabilities of AI development by 2100, so one can't get probability of catastrophe conditional on AI development from there.
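The missing conditional can at least be bracketed. Since AI-driven extinction presupposes that AI is developed, P(extinction | AI developed) = P(AI-driven extinction) / P(AI developed by 2100). A quick sketch with hypothetical values for the unsurveyed P(AI by 2100):

```python
# From the FHI survey: median P(AI-driven extinction by 2100) = 5%.
p_ai_extinction = 0.05

# P(extinction | AI developed) = P(extinction) / P(AI developed),
# because the extinction event here implies AI was developed.
for p_ai_developed in (0.9, 0.5, 0.1):  # hypothetical values, not from the survey
    conditional = p_ai_extinction / p_ai_developed
    print(f"P(AI by 2100) = {p_ai_developed:.1f} -> "
          f"P(extinction | AI) = {conditional:.2f}")
```

The lower one's probability of AI development, the more alarming the implied conditional risk becomes, which is why the survey's omission matters.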
Robert A. Heinlein was an engineer and SF writer who created many stories that hold up quite well. He put his understanding of human interaction and of engineering into stories that are somewhat realistic. But no one should confuse him with someone researching the actual likelihood of any particular future. He did not build anything that improved the world, but he wrote interestingly about the possibilities and encouraged many others to pursue technical careers. SF often makes bad use of logic, and has the well-known hero bias, or scientists that put to...
This comment is my last comment for at least the rest of 2010.
You can always e-Mail me: ak[at]xixidu.net
Therefore I perceive it as unreasonable to put all my eggs in one basket.
It doesn't sound to me as though you're maximizing expected utility. If you were maximizing expected utility, you would put all of your eggs in the most promising basket.
Or perhaps you are maximizing expected utility, but your utility function is equal to the number of digits in some number representing the amount of good you've done for the world. This is a pretty selfish/egotistical utility function to have, and it might be mine as well, but if you have it it's better to be honest and admit it. We're hardly the only ones:
This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimation that Eliezer will be able to do something similar with turning his blog posts into a book.
Two key propositions seem to be:
The world is at risk from a superintelligence-gone-wrong;
The SIAI can help to do something about that.
Both propositions seem debatable. For the first point, certainly some scenarios are better than others - but the superintelligence causing widespread havoc by turning on its creators hypothesises substantial levels of incompetence, followed up by a complete failure of the surrounding advanced man-machine infrastructure to deal with the problem. Most humans may well have more to fear from a superintelligence-gone-right, but in dubious hands.
I'm pretty sure that a gray goo nanotech disaster is generally not considered plausible-- if nothing else, it would generate so much heat the nanotech would fail.
This doesn't address less dramatic nanotech disasters-- say, a uFAI engineering viruses to wipe out the human race so that it can build what it wants without the risk of interference.
I'm pretty sure that a gray goo nanotech disaster is generally not considered plausible--if nothing else, it would generate so much heat the nanotech would fail.
This argument can't be valid, because it also implies that biological life can't work either. At best, this implies a limit on the growth rate; but without doing the math, there is no particular reason to think that limit is slow.
Did you actually read through the MWI sequence before deciding that you still can't tell whether MWI is true because of (if I understand your post correctly) the state of the social evidence? If so, do you know what pluralistic ignorance is, and Asch's conformity experiment?
If you know all these things and you still can't tell that MWI is obviously true - a proposition far simpler than the argument for supporting SIAI - then we have here a question that is actually quite different from the one you seem to try to be presenting:
- I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?
I respectfully disagree. I am someone who was convinced by your MWI explanations but even so I am not comfortable with outright associating reserved judgement with lack of g.
This is a subject that relies on an awful lot of crystallized knowledge about physics. For someone to come to a blog knowing only what they can recall of high-school physics and be persuaded to accept a contrarian position on what is colloquially considered the most difficult part of science is a huge step.
The trickiest part is correctly accounting for meta-uncertainty. There are a lot of things that seem extremely obvious but turn out to be wrong. I would even suggest that the trustworthiness of someone's own thoughts is not always proportionate to g-factor. That leaves people with some situations where they need to trust social processes more than their own g. That may prompt them to go and explore the topic from various other sources until such time that they can trust that their confidence is not just naivety.
On a subject like physics and MWI, I wouldn't take the explanation of any non-professional as enough to establish that a contrarian position is "obviously correct". Even if they genuinely believed in what they said, they'll still only be presenting the evidence from their own point of view. Or they might be missing something essential and I wouldn't have the expertise to realize that. Heck, I wouldn't even go on the word of a full-time researcher in the field before I'd heard what their opponents had to say.
On a subject matter like cryonics I was relatively convinced from simply hearing what the cryonics advocates had to say, because it meshed with my understanding of human anatomy and biology, and it seemed like nobody was very actively arguing the opposite. But to the best of my knowledge, people are arguing against MWI, and I simply wouldn't have enough domain knowledge to evaluate either sort of claim. You could argue your case of "this is obviously true" with completely made-up claims, and I'd have no way to tell.
This is probably the best comment so far:
You could argue your case of "this is obviously true" with completely made-up claims, and I'd have no way to tell.
Sums it up pretty well. Thank you.
In general, I don't see much evidence that physicists who argue against MWI actually have the kind of understanding of probability theory necessary to make their arguments worth anything.
Physicists have something else, however, and that is domain expertise. As far as I am concerned, MWI is completely at odds with the spirit of relativity. There is no model of the world-splitting process that is relativistically invariant. Either you reexpress MWI in a form where there is no splitting, just self-contained histories each of which is internally relativistic, or you have locally propagating splitting at every point of spacetime in every branch, in which case you don't have "worlds" any more, you just have infinitely many copies of infinitely many infinitesimal patches of space-time which are glued together in some complicated way. You can't even talk about extended objects in this picture, because the ends are spacelike separated and there's no inherent connection between the state at one end and the state at the other end. It's a complete muddle, even before we try to recover the Born probabilities.
Rather than seeing MWI as the simple and elegant way to understand QM, I...
I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?
This is rude (although I realize there is now name-calling and gratuitous insult being mustered on both sides), and high g-factor does not make those MWI arguments automatically convincing. High g-factor combined with bullet-biting, a lack of what David Lewis called the argument of the incredulous stare, does seem to drive MWI pretty strongly. I happen to think that weighting the incredulous stare as an epistemic factor independent of its connections with evolution, knowledge in society, etc., is pretty mistaken, but bullet-dodgers often don't. Accusing someone of being low-g rather than a non-bullet-biter is the insulting possibility.
Just recently I encountered someone with very high IQ/SAT/GRE scores who bought partial quantitative-parsimony/Speed Prior-type views, and biases against the unseen. This person claimed that the power of parsimony was not enough to defeat the evidence for galaxies and quarks, but was sufficient to defeat a Big World much beyond our Hubble Bubble, and to favor Bohm's interpretation over MWI. I think that view isn't quite consistent without a lot of additional jury-rigging, but it isn't reliably prevented by high g and exposure to the arguments from theoretical simplicity, non-FTL, etc.
It seems to me that a sufficiently cunning arguer can come up with what appears to be a slam-dunk argument for just about anything. As far as I can tell, I follow the arguments in the MWI sequence perfectly, and the conclusion does pretty much follow from the premises. I just don't know if those premises are actually true. Is MWI what you get if you take the Schrodinger equation literally? (Never mind that the basic Schrodinger equation is non-relativistic; I know that there are relativistic formulations of QM.) I can't tell you, because I don't know the underlying math. And, indeed, the "Copenhagen interpretation" seems like patent nonsense, but what about all the others? I don't know enough to answer the question, and I'm not going to bother doing much more research because I just don't really care what the answer is.
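For reference, the (non-relativistic) equation whose literal reading is at issue can be written as:

\[
i\hbar\,\frac{\partial}{\partial t}\,\Psi(x,t) = \hat{H}\,\Psi(x,t)
\]

Because the equation is linear, any superposition \(a\Psi_1 + b\Psi_2\) of solutions is itself a solution. "Taking the equation literally" means letting such superpositions evolve indefinitely without ever collapsing them, which is the formal core of the MWI claim; the interpretive dispute is over whether anything beyond this bare dynamics is needed.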
It looks to me as though you've focused in on one of the weaker points in XiXiDu's post rather than engaging with the (logically independent) stronger points.
Like what? Why he should believe in exponential growth? When by "exponential" he actually means "fast" and no one at SIAI actually advocates for exponentials, those being a strictly Kurzweilian obsession and not even very dangerous by our standards? When he picks MWI, of all things, to accuse us of overconfidence (not "I didn't understand that" but "I know something you don't about how to integrate the evidence on MWI, clearly you folks are overconfident")? When there's lots of little things scattered through the post like that ("I'm engaging in pluralistic ignorance based on Charles Stross's nonreaction") it doesn't make me want to plunge into engaging the many different little "substantive" parts, get back more replies along the same line, and recapitulate half of Less Wrong in the process. The first thing I need to know is whether XiXiDu did the reading and the reading failed, or did he not do the reading? If he didn't do the reading, then my answer is simply, "If you haven't done enough reading to notice that Stross isn't in our league, then of course you don't trust SIAI". That looks to me like the real issue. For substantive arguments, pick a single point and point out where the existing argument fails on it - don't throw a huge handful of small "huh?"s at me.
Like what?
Castles in the air. Your claims are based on long chains of reasoning that you do not write down in a formal style. Is the probability of correctness of each link in that chain of reasoning so close to 1, that their product is also close to 1?
I can think of a couple of ways you could respond:
Yes, you are that confident in your reasoning. In that case you could explain why XiXiDu should be similarly confident, or why it's not of interest to you whether he is similarly confident.
It's not a chain of reasoning, it's a web of reasoning, and robust against certain arguments being off. If that's the case, then we lay readers might benefit if you would make more specific and relevant references to your writings depending on context, instead of encouraging people to read the whole thing before bringing criticisms.
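The arithmetic behind the "long chain of reasoning" worry is easy to make concrete. A minimal sketch, where the per-link probability and the chain lengths are purely illustrative and not anyone's actual estimates:

```python
# If an argument requires n independent links, each believed with
# probability p, the probability that every link holds is p ** n.
def chain_confidence(p: float, n_links: int) -> float:
    """Probability that all n_links independent links are correct."""
    return p ** n_links

for n in (5, 10, 20):
    print(f"{n} links at p=0.9 each -> {chain_confidence(0.9, n):.3f}")
# 5 links at p=0.9 each -> 0.590
# 10 links at p=0.9 each -> 0.349
# 20 links at p=0.9 each -> 0.122
```

A "web" of reasoning behaves differently from a chain: with redundant, mutually supporting arguments, the failure of any single link need not sink the conclusion, which is why the chain/web distinction matters to this exchange.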
Most of the long arguments are concerned with refuting fallacies and defeating counterarguments, which flawed reasoning will always be able to supply in infinite quantity. The key predictions, when you look at them, generally turn out to be antipredictions, and the long arguments just defeat the flawed priors that concentrate probability into anthropomorphic areas. The positive arguments are simple, only defeating complicated counterarguments is complicated.
"Fast AI" is simply "Most possible artificial minds are unlikely to run at human speed, the slow ones that never speed up will drop out of consideration, and the fast ones are what we're worried about."
"UnFriendly AI" is simply "Most possible artificial minds are unFriendly, most intuitive methods you can think of for constructing one run into flaws in your intuitions and fail."
MWI is simply "Schrodinger's equation is the simplest fit to the evidence"; there are people who think that you should do something with this equation other than taking it at face value, like arguing that gravity can't be real and so needs to be interpreted differently, and the long arguments are just th...
One problem I have with your argument here is that you appear to be saying that if XiXiDu doesn't agree with you, he must be stupid (the stuff about low g etc.). Do you think Robin Hanson is stupid too, since he wasn't convinced?
"There is no intangible stuff of goodness that you can divorce from life and love and happiness in order to ask why things like that are good. They are simply what you are talking about in the first place when you talk about goodness."
And then the long arguments are about why your brain makes you think anything different.
Okay, I can see how XiXiDu's post might come across that way. I think I can clarify what I think that XiXiDu is trying to get at by asking some better questions of my own.
[This comment is a response to the original post, but seemed to fit best here.] I upvoted the OP for raising interesting questions that will come up often and deserve an accessible answer. It would help if someone could put together, or point to, a reading guide with references.
On the crackpot index, the claim that everyone else got it wrong deserves to raise a red flag, but that does not mean it is wrong. There are too many examples of that in the world. (To quote Eliezer: 'yes, people really are that stupid.') Read "The Checklist Manifesto" by Atul Gawande for a real-life example that is ridiculously simple to understand. (Really, read it. It is also entertaining!) Look at the history of science. Consider the treatment Semmelweis got for suggesting that doctors wash their hands before operations. You find plenty of examples where plain, simple ideas were ridiculed. So yes, it can happen that a whole profession goes blind in one spot, and for every change there has to be someone trying it out in the first place. The degree to which research is done badly is a matter of judgment.

Now it might be helpful to start out with more applicable ideas, like improving the tool set for real-life problems. You don't have to care about the singularity to care about other LW content like self-debiasing, or winning.
Regarding the donation aspect, it seems like rationalists are particularly bad at supporting their own causes. You might estimate how much effort you spend checking out any charity you do support, and then try not to demand higher standards of this one.
Major update here.
The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is insufficiently clear.
Most of the arguments involve a few propositions and the use of probability and utility calculations to legitimize action. Much here is uncertain to an extent that I am not able to judge any of the nested probability estimates. And even if you tell me the estimates, where is the data on which you base them?
There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what this is called in English, but in German I'd call it a castle in the air.
I know that what I'm saying may simply be due to a lack of knowledge and education; that is why I am inquiring about it. How many of you who currently support the SIAI are able to analyse the reasoning that led you to support the SIAI in the first place, or at least substantiate your estimates with kinds of evidence other than a coherent internal logic?
I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. Are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.
I'm concerned that, although consistently so, the SIAI and its supporters are updating on fictional evidence. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models or are your propositions based on fact?
An example here is the use of the many-worlds interpretation. Itself a logical implication, can it be used to make further inferences and estimates without additional evidence? MWI might be the only consistent non-magical interpretation of quantum mechanics. The problem is that such conclusions are, I believe, widely considered insufficient to base further speculations and estimates on. Isn't that similar to what you are doing when speculating about the possibility of superhuman AI and its consequences? What I'm trying to say is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.
The gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies. It is fiction. Imagination allows for endless possibilities while scientific evidence provides hints of what might be possible and what impossible. Science does provide the ability to assess your data. Any hint that empirical criticism provides gives you new information on which you can build on. Not because it bears truth value but because it gives you an idea of what might be possible. An opportunity to try something. There’s that which seemingly fails or contradicts itself and that which seems to work and is consistent.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic, i.e. imagination or fiction, or something sufficiently based on empirical criticism to provide a firm substantiation of the strong calls for action proclaimed by the SIAI.
Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who is aware of something that might shatter the universe? Why is it that people like Vernor Vinge, Robin Hanson or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI? Why aren't Eric Drexler, Gary Drescher or AI researchers like Marvin Minsky worried to the extent that they signal their support?
I'm talking to quite a few educated people outside this community. They do not doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to focus on the possibilities depicted by the SIAI while neglecting other near-term risks that might wipe us out as well.
I believe that many people out there know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have named other people. That's beside the point, though; it's not just Hanson or Vinge but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as mathematically literate as Eliezer Yudkowsky, or do they somehow teach but not use their own methods of reasoning and decision making?
What do you expect me to do, just believe Eliezer Yudkowsky? Like I believed so much in the past which made sense but turned out to be wrong? Maybe after a few years of study I'll know more.
...
2011-01-06: As this post received over 500 comments I am reluctant to delete it. But I feel that it is outdated and that I could do much better today. This post has however been slightly improved to account for some shortcomings but has not been completely rewritten, neither have its conclusions been changed. Please account for this when reading comments that were written before this update.
2012-08-04: A list of some of my critical posts can be found here: SIAI/lesswrong Critiques: Index