Upvoted, but it wasn't nearly as fascinating as I'd hoped, because it was all on our home turf. Eliezer reiterated familiar OB/LW arguments, while Aaronson fought a rearguard action without saying anything game-changing. Supporting link for the first (and most interesting to me) disagreement: Aaronson's "The Singularity Is Far".
I'd like you to talk about subjects that you firmly disagree on but think the other party has the best chance of persuading you of. To my mind, debates are more useful (and interesting) when arguments are conceded than when the debaters agree to disagree. Plus, I think that when smart, rational people are disadvantaged in a discussion, they are more likely to come up with fresh and compelling arguments. Find out where your weaknesses and Scott's strengths coincide (and vice versa), and you'll both come out of the debate stronger for it. I wouldn't suggest this to just anyone, but I know that (unlike most debaters, unlike most people) you're both eager to admit when you're wrong.
(I dearly love to argue, and I'm probably too good at it for my own good, but oh how difficult it can be to admit defeat at the end of an argument, even when I started silently agreeing with my opponent halfway through! I grew up in an argumentative household where winning the debate was everything, and it was a big step for me when I started admitting I was wrong, and a bigger one when I started doing it the moment I knew it, not a half hour and two thousand words of bullshit later. I was having an argument with my fa...
At one point in the dialog, Scott raises what I think is a valid objection to the "nine people in the basement" picture of FAI's development. He points out that it's not how science progresses, and so not how he expects this novel development to happen.
If we consider FAI as a mathematical problem that requires, to get right, a substantial depth of understanding beyond what's already there, any isolated effort is likely hopeless. Mathematical progress is a global effort. I can sorta expect a basement scenario if most of the required math happen...
Well, that was interesting, if a little bland. I think the main problem was that Scott is the kind of guy who likes to find points of agreement more than points of disagreement, which works fine for everyday life, but not so well for this kind of debate.
By the way, I noticed that this was "sponsored" by the Templeton Foundation, which I and many other people who care about the truth find deeply repulsive.
It's interesting to compare the 1996 Templeton site:
The Templeton Prize for Progress in Religion (especially spiritual information through science) is awarded each year to a living person who shows extraordinary originality in advancing humankind's understanding of God.
to the current site:
The Prize is intended to recognize exemplary achievement in work related to life's spiritual dimension.
Another one. Old:
- Create and fund projects forging stronger relationships and insights linking the sciences and all religions
- Apply scientific methodology to the study of religious and spiritual subjects
- Support progress in religion by increasing the body of spiritual information through scientific research
- Encourage a greater appreciation of the importance of the free enterprise system and the values that support it
- Promote character and value development in educational institutions
New:
...Established in 1987, the Foundation’s mission is to serve as a philanthropic catalyst for discovery in areas engaging life’s biggest questions. These questions range from explorations into the laws of nature and the universe to questions on the nature of love, gratitude, forgiveness and creativity...
Saying that something is better than optimality is an abuse of the term "optimality". There's an idea missing -- optimal what, exactly?
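To make that concrete, here is a minimal sketch (hypothetical data and objectives, my addition): the same candidates come out "optimal" under different objectives, so "optimal" with no stated objective is empty, and "better than optimal" is incoherent.

```python
# Minimal sketch (hypothetical numbers): "optimal" is only defined
# relative to an objective. The same candidates rank differently under
# different scoring functions, so "better than optimal" says nothing
# until you answer: optimal *what*?

candidates = [(1, 9), (5, 6), (9, 1)]  # e.g. (speed, accuracy) pairs

def best(items, objective):
    """Return the item that maximizes the given objective."""
    return max(items, key=objective)

print(best(candidates, lambda c: c[0]))         # maximize speed     -> (9, 1)
print(best(candidates, lambda c: c[1]))         # maximize accuracy  -> (1, 9)
print(best(candidates, lambda c: c[0] + c[1]))  # maximize the sum   -> (5, 6)
```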
I liked the discussion, especially the final part on the many-worlds interpretation (MWI).
I had the impression that Eliezer had the better understanding of quantum mechanics (QM); however, I found one of his remarks very misleading (and it rightly confused Scott as well): Eliezer seemed to argue that MWI somehow resolves the difficulty of unifying QM with general relativity (GR) by resolving non-locality.
It is true that non-locality is resolved by Everett's interpretation, but the real problem with QM+GR is that the renormalization of the gravity wave function do...
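Since the comment is cut off, here is my gloss on the standard way of stating the QM+GR difficulty: perturbatively quantized gravity is non-renormalizable, because Newton's constant has negative mass dimension.

```latex
% My gloss (not from the comment): in natural units (hbar = c = 1)
% Newton's constant is set by the Planck mass,
\[
  G_N \sim \frac{1}{M_{\mathrm{Pl}}^{2}}, \qquad [G_N] = (\text{mass})^{-2},
\]
% so higher orders of perturbation theory produce new divergences
% requiring infinitely many counterterms: gravity is perturbatively
% non-renormalizable. Choosing an interpretation of QM does not, by
% itself, touch this problem.
```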
I picked up a copy of Jaynes on eBay for a good price ($35.98). There are two copies left in that auction. Someone here might be interested:
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=280380684353
No need to vote this comment up or down.
I note that the Born probabilities were claimed to have been derived from decision theory for the MWI in 2007 by Wallace and Deutsch:
“Probabilities used to be regarded as the biggest problem for Everett, but ironically, they are now its most powerful success” - David Deutsch.
"In a September 2007 conference David Wallace reported on what is claimed to be a proof by Deutsch and himself of the Born Rule starting from Everettian assumptions. The status of these arguments remains highly controversial."
Robot ant: http://www.youtube.com/watch?v=0jyBiECoS3Q
I would say real ants are currently waaay ahead of robot ant controllers.
On the other hand - like EY says - there's a whole bunch of things that we can do which ants can't. So it is not trivial to compare.
Thumbs up to Eliezer Yudkowsky for getting around to giving some actual timescales. They are incredibly vague timescales - but AI is a tricky thing to estimate the difficulty of - so that's OK, I guess.
On the issue of many-worlds, I must just be slow, because I can't see how it is "obviously" correct. It certainly seems both self-consistent and consistent with observation, but I don't see how this in particular puts it so far ahead of other ways of understanding QM as to be the default view. If anyone knows of a really good summary, aimed at somebody who's actually studied physics, of why MWI is so great (and sadly, Eliezer's posts here and on overcomingbias don't do it for me), I would greatly appreciate the pointer.
In particular, two things that I ha...
Dennett and Hofstadter have "extremely large" estimates of the time to intelligent machines as well. I expect such estimates will prove to be wrong - but it is true that we don't know much about the size of the target in the search space - or how rough that space is - so almost any estimate is defensible.
Time symmetry is probably not a big selling point of the classical formulation of the MWI - what with all those worlds in the future that don't exist in the past.
OK - no information is created or destroyed - so it's technically reversible - but that's not quite the same thing as temporal symmetry.
It would be better if it were formulated so there were lots of worlds in the past too. You don't lose anything that way - AFAICS.
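To spell out the distinction the last two comments are drawing (my gloss): unitarity gives reversibility, which is weaker than time-reversal symmetry.

```latex
% My gloss: unitary evolution is invertible (information-preserving),
\[
  U(t)^{-1} = U(t)^{\dagger},
\]
% but time-reversal symmetry is a separate claim: that there is an
% antiunitary operator T with
\[
  T\,U(t)\,T^{-1} = U(-t).
\]
% Reversible dynamics can still look asymmetric at the coarse-grained
% level, e.g. branching toward the future with no branching toward the
% past.
```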
The discussion got a bit sidetracked around the point when EY asked something like:
If you are assuming that you can give the machine one value and have it stable, why assume that there are all these other values coming into it which you can't control?
...about 27 minutes in.
Scott said something about that being how humans work. That could be expanded on a bit:
In biology, it's hard to build values in explicitly, since the genes have limited control over the brain, which is a big self-organising system. It's as though the genes can determine the initia...
I'm not sure the halved doubling time for quantum computers is right.
Maybe I'm not getting into the spirit of accepting the proposed counterfactuals - but is quantum computer performance doubling regularly at all? It seems more as though it is jammed up against decoherence problems already.
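For what it's worth, here is the arithmetic behind a "halved doubling time", assuming the claim just means the growth exponent doubles (the figures below are illustrative, not measured):

```python
# Illustrative arithmetic only: what a "halved doubling time" would mean
# if it held. With doubling time T (years), capacity after t years grows
# by 2**(t / T); halving T squares the growth factor over any fixed window.

T_classical = 1.5   # assumed Moore's-law-style doubling time, in years
T_halved = T_classical / 2

for t in (3, 6, 15):
    growth_classical = 2 ** (t / T_classical)
    growth_halved = 2 ** (t / T_halved)
    # growth_halved == growth_classical ** 2 for every t
    print(f"t={t:>2}y: x{growth_classical:,.0f} vs x{growth_halved:,.0f}")
```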
Scott cites the Doomsday Argument in his "The Singularity Is Far":
http://scottaaronson.com/blog/?p=346
Surely that is a mistake. The Doomsday Argument may suggest that the days of humans like us are numbered, but it doesn't say much more than that - in particular, it can't be used to argue against a long and rich future filled with angelic manifestations. So: it is poor evidence against a relatively near era of transcension.
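For readers who haven't seen it, the toy version of the argument runs as follows (my summary, not Aaronson's):

```latex
% Toy Doomsday Argument (my summary): let N be the total number of
% humans who will ever live and n your birth rank. Self-sampling treats
% n as uniform on {1,...,N}; with a vague prior P(N) proportional to 1/N,
\[
  P(N \mid n) \;\propto\; P(n \mid N)\,P(N)
  \;=\; \frac{1}{N}\cdot\frac{1}{N}
  \;\propto\; \frac{1}{N^{2}}, \qquad N \ge n,
\]
% which concentrates near N of order n. Crucially, N counts observers in
% your reference class ("humans like us"); successors outside that class
% are untouched, which is the point being made above.
```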
Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule-of-thumb, quantum computing, Deep Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Deep Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here, so this has probably already been discussed, but what about free will? How do AI researchers address that issue?
I'm with SA on the MWI of QM. I think EY is throwing ...
Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics
Sections of the diavlog: