No, our paragraph does not rebut everything we disagree with in McDermott's paper. Chalmers' forthcoming reply, "The Singularity: A Reply," addresses those points adequately.
I suppose I'd like to hear Solvent ask him about those.
The response Anna and I give in our forthcoming chapter "Intelligence Explosion: Evidence and Import" is the following:
Chalmers (2010) suggested that AI will lead to intelligence explosion if an AI is produced by an "extendible method," where an extendible method is "a method that can easily be improved, yielding more intelligent systems." McDermott (2012a, 2012b) replies that if P≠NP (see Goldreich 2010 for an explanation) then there is no extendible method. But McDermott's notion of an extendible method is not the one essential to the possibility of intelligence explosion. McDermott's formalization of an "extendible method" requires that the program generated by each step of improvement under the method be able to solve in polynomial time all problems in a particular class — the class of solvable problems of a given (polynomially step-dependent) size in an NP-complete class of problems. But this is not required for an intelligence explosion in Chalmers' sense (and in our sense). What intelligence explosion (in our sense) would require is merely that a program self-improve to vastly outperform humans, and we argue for the plausibility of this in section 3 of our chapter. Thus while we agree with McDermott that it is probably true that P≠NP, we do not agree that this weighs against the plausibility of intelligence explosion. (Note that due to a miscommunication between McDermott and the editors, a faulty draft of McDermott (2012a) was published in Journal of Consciousness Studies. We recommend reading the corrected version at http://cs-www.cs.yale.edu/homes/dvm/papers/chalmers-singularity-response.pdf.)
I sent this to Drew and he said he agreed with our rebuttal.
Do you feel this is a full rebuttal to McDermott's paper? I agree that his generalized argument against "extendible methods" is a straw man; however, he has other points about Chalmers' failure to argue for existing extendible methods being "extendible enough."
Thanks for the link, John.
When most of SI (and a few others) discussed the issue for 10+ hours, we tentatively concluded that WBE progress should not be accelerated. (Context: see differential technological development.)
I just noticed from that document that you listed Alexander Funcke as owner of "Zelta Deta." Googling his name, I think you meant "Zeta Delta"?
I love that quote, but if it carries a rationality lesson, I fail to see it. Seems more like an appeal to the tastes of the audience here.
Yeah, you're correct. Wasn't thinking very hard.
I tell you that as long as I can conceive something better than myself I cannot be easy unless I am striving to bring it into existence or clearing the way for it.
-- G.B. Shaw, "Man and Superman"
Shaw evinces a really weird, teleological view of evolution in that play, but in doing so expresses some remarkable and remarkably early (1903) transhumanist sentiments.
What should I read to get a good defense of Bayesianism--that isn't just pointing out difficulties with frequentism, NHST, or whatever? I understand the math, but am skeptical that it can be universally applied, due to problems with coming up with the relevant priors and likelihoods.
It's like the problem with simple deduction in philosophy. Yes, if your premises are right, valid deductions will lead you to true conclusions, but the problem is knowing whether the premises used by the old metaphysicians (or modern ones, for that matter) are true. Bayesianism fails to solve this problem for many cases (though I'm not denying that you do sometimes know the relevant probabilities).
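For concreteness, the update itself is mechanically simple; the sticking point raised above is where its inputs come from. Here is a minimal sketch of a single Bayesian update for a binary hypothesis, where every numerical input is made up purely for illustration:

```python
# Bayes' theorem for a binary hypothesis H and evidence E:
#   P(H|E) = P(E|H) * P(H) / P(E)
# The arithmetic is trivial; justifying the three inputs is the hard part.

def posterior(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Posterior P(H|E) given P(H), P(E|H), and P(E|not-H)."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers only -- the objection is precisely that such
# priors and likelihoods are often unavailable in real cases.
p = posterior(prior=0.01, likelihood=0.95, likelihood_if_false=0.05)
print(round(p, 4))  # about 0.161
```

The formula never tells you whether 0.01 was the right prior to start from, which is the analogue of not knowing whether the old metaphysicians' premises were true.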
I do definitely plan on getting my hands on a copy of Richard Carrier's new book when it comes out, so if that's currently the best defense of Bayesianism out there, I'll just wait another two months.
You may want to check out John Earman's Bayes or Bust?
It is hard to imagine improving on Aristotle without resorting to methods that were simply unavailable to him.
Just wanted to remind everyone that this is straight-up false. Aristotle assumed a lot of wrong things in order to prove other wrong things. For example, he proved lots of results from the premise that sensation is infallible, but since that premise is false, those proofs establish nothing. It's not at all impossible to surpass historical figures: the discipline of not simply believing whatever you like is fairly rare, and it gets rarer the further back in the history of science you go.
For example, Aristotle proved lots of stuff based on the infallibility of sensation
I don't know much about Aristotle, but this claim sounds to me like a distortion of something Aristotle might have said.
Is the apparent reference to David Stove's "What is Wrong with Our Thoughts?" intentional?
So I shouldn't refer people to death spirals and baby-eating right away?
Don't mindkill their cached thoughts.