
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds

17 Post author: Vladimir_Nesov 16 August 2009 04:06PM

Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics

Sections of the diavlog:

  • When will we build the first superintelligence?
  • Why quantum computing isn’t a recipe for robot apocalypse
  • How to guilt-trip a machine
  • The evolutionary psychology of artificial intelligence
  • Eliezer contends many-worlds is obviously correct
  • Scott contends many-worlds is ridiculous (but might still be true)

 

Comments (102)

Comment author: Vladimir_Nesov 16 August 2009 04:23:05PM *  8 points [-]

At one point in the dialog, Scott raises what I think is a valid objection to the "nine people in the basement" picture of FAI's development. He points out that it's not how science progresses, and so not how he expects this novel development to happen.

If we consider FAI as a mathematical problem that requires a substantial depth of understanding beyond what's already there to get right, any isolated effort is likely hopeless. Mathematical progress is a global effort. I can sorta expect a basement scenario if most of the required math happens to be already developed, and so the remaining challenge is to find the relevant math, assemble it in the right way, and see the answer. But that doesn't sound very likely.

Alternatively, a "team in the basement" could wait for the right breakthrough in mainstream mathematics and, being prepared, apply it faster than anyone else to the problem. This seems more realistic, but may require the mainstream to know what to look for. Which involves playing with existential risk.

Comment author: timtyler 16 August 2009 11:30:19PM 2 points [-]

"Breakthroughs" are not really how synthetic intelligence has progressed so far. Look at speech recognition, for example. So far, that has mostly been a long, gradual slog. Maybe we are doing it wrong - and there is an easier way. However, that's not an isolated example - and if there are easier ways, we don't seem to be very good at finding them.

Comment author: Vladimir_Nesov 16 August 2009 11:52:41PM 3 points [-]

Of course, "breakthroughs" is a cumulative impression: now you don't know how to solve the problem or even how to state it, and 10 years later you do.

Comment author: timtyler 17 August 2009 12:19:42AM *  2 points [-]

The idea of a "breakthrough" denotes a sudden leap forwards. There have been some of those.

One might cite backpropagation, for example - but big breakthroughs seem rare, and most progress seems attributable to other factors - much as Robin Hanson claims happens in general: "in large systems most innovation value comes from many small innovations".

Comment author: wedrifid 29 August 2009 02:00:14PM 4 points [-]

At one point in the dialog, Scott raises what I think is a valid objection to the "nine people in the basement" picture of FAI's development. He points out that it's not how science progresses, and so not how he expects this novel development to happen.

I would like to hear more from Eliezer on just how likely he thinks the 'nine people in the basement' development scenario is.

My own impression would be that a more gradual development of GAI is more likely, but that 'basement development' is the only way there is even a remote possibility that the development will not lead to rapid human extinction. That would make the 'nine people in the basement' picture either wishful thinking or 'my best plan of action', depending on whether or not we are Eliezer.

Comment author: Liron 16 August 2009 05:27:48PM 2 points [-]

This is gonna be so awesome, I just took a moment to try to make sure I'm not dreaming.

Comment author: Liron 16 August 2009 07:01:52PM 2 points [-]

Good stuff. I was surprised that SA seemed so uncomfortable with the thought that reality should have a many-worlds structure, and I thought Eliezer made a powerful reply by pointing out that we might be simulated on multiple computers.

Comment author: cousin_it 16 August 2009 06:02:47PM *  8 points [-]

Upvoted, but it wasn't nearly as fascinating as I'd hoped, because it was all on our home turf. Eliezer reiterated familiar OB/LW arguments, Aaronson fought a rearguard action without saying anything game-changing. Supporting link for the first (and most interesting to me) disagreement: Aaronson's "The Singularity Is Far".

Comment author: Andrew 16 August 2009 06:25:19PM 4 points [-]

I agree. I stopped watching about five minutes into it when it became clear that EY and Scott were just going to spend a lot of time going back and forth.

Nothing game-changing indeed. Debate someone who substantially disagrees with you, EY.

Comment author: Eliezer_Yudkowsky 16 August 2009 08:46:59PM 4 points [-]

Sorry about that. Our first diavlog was better, IMHO, and included some material about whether rationality benefits a rationalist - but that diavlog was lost due to audio problems. Maybe we should do another for topics that would interest our respective readers. What would you want me to talk about with Scott?

Comment author: eirenicon 16 August 2009 10:11:28PM 10 points [-]

I'd like you to talk about subjects that you firmly disagree on but think the other party has the best chance of persuading you of. To my mind, debates are more useful (and interesting) when arguments are conceded than when the debaters agree to disagree. Plus, I think that when smart, rational people are disadvantaged in a discussion, they are more likely to come up with fresh and compelling arguments. Find out where your weaknesses and Scott's strengths coincide (and vice versa) and you'll both come out of the debate stronger for it. I wouldn't suggest this to just anyone but I know that (unlike most debaters, unlike most people) you're both eager to admit when you're wrong.

(I dearly love to argue, and I'm probably too good at it for my own good, but oh how difficult it can be to admit defeat at the end of an argument even when I started silently agreeing with my opponent halfway through! I grew up in an argumentative household where winning the debate was everything, and it was a big step for me when I started admitting I was wrong, and even bigger when I started doing it when I knew it, not a half hour and two thousand words of bullshit later. I was having an argument with my father about astrophysics a couple months ago, and it had gotten quite heated even though I suspected he was right. I hadn't followed up, but the next time I saw him he showed me a couple diagrams he'd worked out. It took me thirty seconds to say, "Wow, I really was totally wrong about that. Well done." He looked at me like a boxer who enters the ring ready for ten rounds and then flattens his opponent while the bell's still ringing. No particular reason for this anecdote, just felt like sharing.)

Comment author: Andrew 17 August 2009 01:43:23AM 1 point [-]

It's okay.

What do you disagree with Scott over? I don't regularly read Shtetl-Optimized, and the only thing I associate with him is a deep belief that P != NP.

I don't really know much about his FAI/AGI leanings. I guess I'll go read his blog a bit.

Comment author: psb 17 August 2009 10:45:04PM 2 points [-]

OK, that's a weird side-effect of watching the diavlog: now when I read your comments I can hear your voice in my mind.

Comment author: marks 18 August 2009 03:27:55PM 1 point [-]

I would like to see more discussion on the timing of artificial super intelligence (or human level intelligence). I really want to understand the mechanics of your disagreement.

Comment author: billswift 17 August 2009 02:38:20AM *  5 points [-]

I have a significant disagreement with this from that link:

I see a few fragile and improbable victories against a backdrop of malice, stupidity, and greed—the tiny amount of good humans have accomplished in constant danger of drowning in a sea of blood and tears

Since destroying things is MUCH easier than building, if humans weren't substantially inclined toward helpful and constructive values, civilization would never have existed in the first place nor could it continue to exist at all.

Comment author: MichaelGR 19 August 2009 11:01:47PM 4 points [-]

Maybe I'm the only one, but I'd like to see a video of Eliezer alone. Just him talking about whatever he finds interesting these days.

I'm suggesting this because so far all the 2-way dialogs I've seen end up with Eliezer talking about 1/4 of the time, and most of what he's saying is correcting what the other person has said. So we end up with not much original Eliezer, which is what I'd really be interested in hearing.

Comment author: Furcas 16 August 2009 08:20:02PM *  4 points [-]

Well, that was interesting, if a little bland. I think the main problem was that Scott is the kind of guy who likes to find points of agreement more than points of disagreement, which works fine for everyday life, but not so well for this kind of debate.

By the way, I noticed that this was "sponsored" by the Templeton Foundation, which I and many other people who care about the truth find deeply repulsive.

Comment author: byrnema 16 August 2009 09:08:47PM *  4 points [-]

In response to this comment, I just spent some time on the Templeton Foundation web page to see why you don't like them. Wow, interesting. It's clear why you wouldn't like them. They're like the Less Wrong antithesis. They seem to have a completely opposite POV, but judging from the comments I've read so far, are quite intellectual as well. I spent the summer reading Less Wrong... I think I'll give these guys some time (and form my own conclusion).

Comment author: timtyler 16 August 2009 11:31:51PM 0 points [-]

It was whaaaat? Where are you getting that from?

Comment author: Furcas 16 August 2009 11:35:39PM 1 point [-]

Click on the link and look to the right side of the video.

Comment author: timtyler 16 August 2009 11:54:24PM *  1 point [-]

Thanks! I see! So that's these videos:

http://www.bloggingheads.tv/percontations/

Ironically, the participants discuss the Templeton Foundation 18 minutes in - did they know? ;-)

John Horgan explains how he rationalises taking the Templeton Foundation's money here:

http://www.edge.org/3rd_culture/horgan06/horgan06_index.html

Comment author: timtyler 17 August 2009 12:34:52AM *  1 point [-]

Wow! Are these folks all on the Templeton Foundation's payroll?

http://www.templeton.org/evolution/

I wondered why Robert Wright had bothered to write a whole book about god! ;-)

Comment author: thomblake 17 August 2009 02:41:58PM 1 point [-]

Why be repulsed at the Templeton Foundation?

It seems like they're mostly on the up-and-up.

Comment author: Furcas 17 August 2009 05:43:44PM *  5 points [-]

The purpose of the Templeton Foundation is to blur the line (in people's minds) between science and religion. I'm sure you know how it goes: Science and religion are Different Ways Of Knowing The Same Truth, Blah Blah Blah™.

A few years ago they were fairly straightforward about it (it was practically spelled out on their website), but after being subjected to a lot of criticism by secular scientists and philosophers, they've been going about it much more sneakily. They sponsor respectable events and fund respectable science to earn credibility, and spend that credibility on stuff like this and other sneaky attempts to make it seem like among first-rate scientists/philosophers/epistemic authority figures, there's only a small minority that views religion as utter hogwash. There's also the Templeton Prize, which rewards scientists who've said something appropriately respectful about religion, and many other lesser brib... I mean gifts. All of this hidden behind an interest in what they call "The Big Questions", by which they mean, "Questions to which the answer is God".

Comment author: thomblake 17 August 2009 06:01:09PM 3 points [-]

It doesn't seem to me they're doing anything terribly subversive. Even the thing you linked to didn't look too bad - they even have Christopher Hitchens up there.

It seems like some sort of newagey softboiled ecumenical pantheism might just be the way to cut the knot between angry atheists and angry theists. Pragmatism moves me to think they're on the right side here.

Comment author: Furcas 17 August 2009 06:22:16PM *  5 points [-]

It doesn't seem to me they're doing anything terribly subversive. Even the thing you linked to didn't look too bad - they even have Christopher Hitchens up there.

Like I said, they're a sneaky bunch. Out of 13 contributors, they invite three or four forthright atheists, just to make it seem like they're being fair. The rest are theists (one Muslim and lots of Christians) or 'faitheists', agnostics and pantheists who believe in belief.

It seems like some sort of newagey softboiled ecumenical pantheism might just be the way to cut the knot between angry atheists and angry theists. Pragmatism moves me to think they're on the right side here.

First, the Templeton Foundation's current president, John Templeton Jr., is an evangelical Christian. The softboiled pantheism you think you're seeing is Christianity hidden by prodigious volumes of smoke.

Second, whatever happened to caring about the truth? Would you also say that belief in a cube-shaped Earth might just be the way to cut the knot between angry round-Earthers and angry flat-Earthers?

Comment author: thomblake 17 August 2009 06:32:36PM 3 points [-]

First, the Templeton Foundation's current president, John Templeton Jr., is an evangelical Christian. The softboiled pantheism you think you're seeing is Christianity hidden by prodigious volumes of smoke.

Well, most of the pantheism I've encountered comes from the Christian worldview. And that sounds like an ad hominem to me... the Foundation doesn't seem to be coming from an evangelical Christian viewpoint in general, and it's certainly not its stated mission.

Second, whatever happened to caring about the truth? Would you also say that belief in a cube-shaped Earth might just be the way to cut the knot between angry round-Earthers and angry flat-Earthers?

If nothing really turned on the question of the Earth's shape, then sure.

To give the classic Pragmatist example, people used to kill each other over the question of transubstantiation of the Eucharist. One side said that the Eucharist is just bread, symbolizing the body and blood of Christ. The other side said that the Eucharist is really the body and blood of Christ, but for all practical purposes (and under any scientific scrutiny) is indistinguishable from bread. It seems like insisting that one side or the other was wrong on this question is the wrong way to go, as nothing really turns on it and they're both saying roughly the same thing.

Better to just 'live and let live' and let 'truth' go this time, in favor of actually making things better. If people do end up making 'God' mean something vacuous, then there's no harm in letting them say it.

Comment author: Furcas 17 August 2009 07:05:29PM *  3 points [-]

And that sounds like an ad-hominem to me... the Foundation doesn't seem to be coming from an evangelical Christian viewpoint in general, and it's certainly not its stated mission.

Taking a person's most fundamental beliefs into account when trying to figure out what their true intentions are is not an ad hominem, it's common sense.

To give the classic Pragmatist example, people used to kill each other over the question of transubstantiation of the Eucharist. (...) It seems like insisting that one side or the other was wrong on this question is the wrong way to go, as nothing really turns on it and they're both saying roughly the same thing.

That's short-sighted. Nothing may really turn on the question of transubstantiation, but there's a lot that turns on the cognitive processes that led millions of people to believe that a cracker is the body of a magical Jewish half-deity.

I'm all in favor of "actually making things better", but the middle-of-the-road solution that the Templeton Foundation is (outwardly, deceitfully) espousing won't do that. Middle-of-the-road solutions are easy, they allow us to avoid sounding shrill, strident, and militant, but easiness is not effectiveness.

If people do end up making 'God' mean something vacuous, then there's no harm in letting them say it.

There is harm, because people who don't mean something vacuous by 'God' like to give the impression that they do to shield themselves against criticism. And thanks to 'pragmatism', it usually works.

Comment author: thomblake 17 August 2009 07:11:55PM 5 points [-]

There is harm, because people who don't mean something vacuous by 'God' like to give the impression that they do to shield themselves against criticism. And thanks to 'pragmatism', it usually works.

If theists need to pretend to be atheists to be taken seriously, then we've already won.

Comment author: Furcas 17 August 2009 07:34:07PM 2 points [-]

I didn't think that by a vacuous God you meant a non-existent God.

Obviously, theists don't need to pretend to be atheists: Theism is respected by everyone except a small minority of neo-militant ultra-materialist fundamentalist atheists. To be taken seriously, theists merely need to be (or pretend to be, in the presence of critics) moderates, i.e. believers in a God that acts in a very subtle way and conforms to modern secular morality.

So no, "we" haven't won. The limited form of insanity we call faith is still the norm and is still respected.

Comment author: Alicorn 17 August 2009 06:41:02PM *  4 points [-]

I did a little poking on Wikipedia.

  • An atheist, culturally Jewish
  • A Dominican friar
  • A Methodist
  • A possible Muslim, although the Wikipedia page doesn't come out and actually say it and there's some evidence that he is a non-theist and critical of Islam
  • A non-theist with a Christian upbringing and general theist sympathies
  • An atheist raised Orthodox Jewish
  • Christopher-freakin'-Hitchens
  • A Church of England priest
  • Another atheist
  • Unclear what Jerome Groopman is
  • Another atheist
  • A Catholic
  • A guy with a very nontraditional definition of God, sort of reminiscent of what byrnema has said

Given the demographics of the population at large and the content of the question the contributors were answering, I think four actual Christians out of thirteen contributors is very modest.

Comment author: Furcas 17 August 2009 07:20:21PM 1 point [-]

Like I said, three or four forthright atheists (depending on what you think of Michael Shermer), the rest are theists or faitheists.

I mean, just take a quick look at the essays (not the titles). Only three answer the question, "Does science make belief in God obsolete?" with a clear Yes. Shermer is less clear, but let's count him as a Yes. The remaining nine answer with No.

Comment author: Alicorn 17 August 2009 07:25:24PM *  3 points [-]

The question was not, "Does science make it clear that it is an error to believe in God?" I have not read the essays, but if I were answering the question about whether religion is obsolete, I doubt my answer would be interpreted as an unambiguous Yes. Obsolescence isn't about accuracy, it's about consensus of historicity over contemporary usefulness.

Comment author: thomblake 17 August 2009 07:29:30PM 4 points [-]

I must say, I'd answer "No" straightforwardly to that question. While it may be the case that belief in God is 'obsolete', I think what that question means at least needs some unpacking (How is a belief obsolete? Is that a category mistake?), and I don't think science is necessarily what makes that belief 'obsolete'.

Reason, perhaps, or good philosophy, might do the trick.

Comment author: PhilGoetz 17 August 2009 09:07:10PM 3 points [-]

Look at the past winners of the Templeton Prize. A lot of the winners before 2000 were evangelists who had nothing to do with science+religion: Pandurang Shastri Athavale, Bill Bright, Billy Graham, Chuck Colson, Kyung-Chik Han, and Mother Teresa.

Comment author: Wei_Dai 17 August 2009 06:51:33PM *  14 points [-]

It's interesting to compare the 1996 Templeton site:

The Templeton Prize for Progress in Religion (especially spiritual information through science) is awarded each year to a living person who shows extraordinary originality in advancing humankind's understanding of God.

to the current site:

The Prize is intended to recognize exemplary achievement in work related to life's spiritual dimension.

Another one. Old:

  • Create and fund projects forging stronger relationships and insights linking the sciences and all religions
  • Apply scientific methodology to the study of religious and spiritual subjects
  • Support progress in religion by increasing the body of spiritual information through scientific research
  • Encourage a greater appreciation of the importance of the free enterprise system and the values that support it
  • Promote character and value development in educational institutions

New:

Established in 1987, the Foundation’s mission is to serve as a philanthropic catalyst for discovery in areas engaging life’s biggest questions. These questions range from explorations into the laws of nature and the universe to questions on the nature of love, gratitude, forgiveness and creativity.

ETA: I wonder what LessWrong will look like in 13 years. :)

Comment author: Furcas 17 August 2009 07:22:41PM 1 point [-]

Thank you!!! That's exactly what I've been looking for (on and off) for the last 20 minutes.

Comment author: PhilGoetz 17 August 2009 09:01:44PM 4 points [-]

If you look at the history of the Templeton Prize and their other endeavors, you will find that they never gave an award or a grant to anybody who came up with the "wrong answers". I mean, if they were really interested in "engaging life's biggest questions" they would have given a Templeton to Dawkins for "The God Delusion".

Comment author: timtyler 17 August 2009 08:09:08PM 1 point [-]

They seem like dark forces to me. All the more dangerous for conveying an innocuous appearance. Religion in scientific clothing.

Comment author: thomblake 17 August 2009 08:15:51PM 3 points [-]

If 'seem like dark forces' is the best you can come up with, then it sounds like you're on no better ground than the theists.

It doesn't seem to me that they're "religion in scientific clothing", but rather an institution that cares about lots of big questions, some of which have traditionally been (and are still) answered primarily by religious sources.

You can't just excise a whole part of the human experience and not expect to lose something good. Diversity is sometimes far more valuable than optimality.

Comment author: timtyler 17 August 2009 08:27:26PM 2 points [-]

Right, well, I have limited resources to spend on criticising their particular perversion of science. The purpose of the Templeton Foundation is to blur the line between straightforward science and explicitly religious activity, making it seem like the two enterprises are part of one big undertaking. It's an enterprise I find noxious.

Comment author: Cyan 17 August 2009 08:27:38PM 10 points [-]

Saying that something is better than optimality is abuse of the term "optimality". There's an idea missing -- optimal what, exactly?

Comment author: timtyler 16 August 2009 11:24:37PM 3 points [-]

I note that the Born probabilities were claimed to have been derived from decision theory for the MWI in 2007 by Wallace and Deutsch:

“Probabilities used to be regarded as the biggest problem for Everett, but ironically, they are now its most powerful success” - David Deutsch.

"In a September 2007 conference David Wallace reported on what is claimed to be a proof by Deutsch and himself of the Born Rule starting from Everettian assumptions. The status of these arguments remains highly controversial."
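For readers who haven't met the term: the Born rule is the postulate connecting quantum amplitudes to observed frequencies, and it is what the Deutsch-Wallace decision-theoretic argument tries to derive from Everettian assumptions rather than assume. A minimal statement:

```latex
% Born rule: for a state expanded in a measurement basis,
% with normalized amplitudes \alpha_i,
\[
  |\psi\rangle = \sum_i \alpha_i \, |i\rangle ,
  \qquad
  \sum_i |\alpha_i|^2 = 1 ,
\]
% the probability of observing outcome i is the squared modulus
% of its amplitude:
\[
  P(i) = |\alpha_i|^2 .
\]
```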

Comment author: dmfdmf 17 August 2009 07:51:13AM -1 points [-]

Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule-of-thumb, quantum computing, Deep Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Deep Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here so this has probably already been discussed, but what about free will? How do AI researchers address that issue?

I'm with SA on the MWI of QM. I think EY is throwing the scientific baby out with the physics bath water. It seems to me that the MWI is committing the mind projection fallacy or the fallacy of the primacy of consciousness. I also agree with whoever said (paraphrased) that all these interpretations of QM just differ on where they hide the contradictions... they are all unsatisfactory and it will take a genius to figure it out.

Comment author: timtyler 17 August 2009 07:59:08AM 2 points [-]

Neither consciousness nor mind is primary in the MWI - so I can't see where you are getting that from.

Comment author: dmfdmf 17 August 2009 07:44:17PM 0 points [-]

It's not an explicit form of Primacy of Consciousness like prayer or wishing. It's implicit in QM and its basic premises. One example of an implicit form of PoC is to project properties or aspects of consciousness onto reality and treating them as metaphysical and not epistemological factors. I think the ancient philosophers got hung up on this when debating whether a color like "red" was in the object or subject. This went round and round for a few hundred years until someone pointed out that it's both (form/object distinction).

Jaynes covers a similar idea in his book and articles, where he ascribes this error to traditional frequentists who hold probabilities as a property of things (a metaphysical concept) instead of a measure or property of our lack of knowledge (an epistemological, bayesian concept). Moreover, committing the PoC error will lead you to supernaturalism eventually, so MWI is just a logical outcome of that error.

Comment author: timtyler 17 August 2009 08:46:04PM *  0 points [-]

So: you know all about the mind projection fallacy - but don't seem to be able to find a coherent way to link it to the MWI, even though you seem to want to do that. I don't know what your motives are - and so don't see the point.

Comment author: dmfdmf 17 August 2009 10:41:02PM 1 point [-]

Of course my motives are irrelevant here, but for the record I am trying to understand epistemology and its application to myself and, ultimately, to AI. How about you, what are your motives?

Not knowing the exact details of where the PoC flaw is in QM is not a devastating criticism of my point, though your tone seems to suggest that you think it is. Why does the USPTO no longer accept applications for perpetual motion machines? Because they violate the first and/or second laws of thermo; no need to dig further into the details. This is just how principles work, and once a fundamental error is identified, that's it, end of discussion... unless I were a physicist and wanted to dig in and take a crack at resolving the QM quandaries, which I do not. Jaynes left us a pretty large clue that the PoC error probably lies in the misuse of probability theory as he described. As a non-physicist, that's all (and more than) I need to know.

Comment author: Cyan 17 August 2009 10:51:18PM 6 points [-]

If you can't tell us why Primacy of Consciousness is necessary for MWI, then we have no grounds for doubting MWI on the basis of your argument. It's like saying that X is a perpetual motion machine and therefore impossible, and then when asked in what way is X a perpetual motion machine, replying that it's implicitly a perpetual motion machine and you can't relate the exact details.

Comment author: timtyler 18 August 2009 07:14:48AM *  0 points [-]

Probably mostly to learn things - though you would have to consult my shrink for more details. Of course I'm not doing that in this thread - I guess that here I'm trying to help you out on this issue while showing that I know what I'm talking about. Maybe someday someone can return the favour - if they see me talking nonsense.

Or maybe it's just a case of:

http://mohel.dk/grafik/andet/Someone_Is_Wrong_On_The_Internet.jpg

Jaynes' criticism doesn't apply to the MWI. The MWI doesn't involve probabilities - it's a deterministic theory:

http://www.hedweb.com/manworld.htm#deterministic

Comment author: dmfdmf 21 August 2009 01:15:44AM 1 point [-]

Shouldn't this cartoon be revised "Someone is more wrong on the Internet" ?

BTW, got slammed with work but as soon as I get the chance I am going to reply to comments. Thank you for your patience.

Comment author: JGWeissman 21 August 2009 01:55:36AM 1 point [-]

What principle do you believe that MWI is violating that is analogous to a perpetual motion machine violating conservation of energy?

In the case of the perpetual motion machine, it is easy to see that the described system violates energy conservation, because you can compare the energy in the system at different times. From this global violation, one can deduce that there was a mistake somewhere in the calculations that predicted it for a system that follows the physical laws that imply conservation of energy.

So, what is the global problem with MWI that leads you to believe that it has a PoC flaw?

Comment author: Douglas_Knight 18 August 2009 04:24:57AM 2 points [-]

One example of an implicit form of PoC is to project properties or aspects of consciousness onto reality and treating them as metaphysical and not epistemological factors.

you mean like collapse?

Comment author: dmfdmf 18 August 2009 06:04:32AM -3 points [-]

Could be but I don't know QM well enough to say for sure.

If I understand it correctly, the collapse of the wave function is when the probabilities change at the moment of observation or measurement. So if one holds that wave collapse is a metaphysical event (and you agree with Jaynes that probabilities are epistemological), then that would be a case of what Jaynes called the mind projection fallacy. Much of the debate in QM regarding wave collapse revolves around exactly this point. Of course, camps have formed on both sides of the dichotomy, and I don't think it can be resolved by just asserting that probabilities are epistemological. The error is deeper than that, and I suspect QM needs to be derived from bayesian principles, but I am not sure that bayesian probability theory is yet up to the task. The situation is very similar to the ancient debates on whether color was a property of the object or in the mind, which makes me think there is an object/subject distinction that is being missed.

Comment author: Eliezer_Yudkowsky 18 August 2009 07:39:36AM *  6 points [-]

Just read the Less Wrong sequence on QM. All the answers to your questions may be found there. I consider myself an aspiring disciple of Jaynes, probably as versed as any living human being in the ways of the Mind Projection Fallacy, and MWI is the version of QM which does not have such difficulties.

You've certainly arrived at the correct website to find the answers that you in particular seek, fellow Bayesian and Jaynesian; but you're being voted down because you haven't read the existing material.

Comment author: timtyler 18 August 2009 07:42:19AM 0 points [-]

Right - but there's no collapse in the MWI. Everything remains in superposition forever - thus the "many worlds".

Comment author: timtyler 17 August 2009 07:51:28AM *  2 points [-]

Thumbs up to Eliezer Yudkowsky for getting around to giving some actual timescales. They are incredibly vague timescales - but it is still a tricky thing to estimate the difficulty of - so that's OK, I guess.

Comment author: timtyler 17 August 2009 07:54:53AM *  1 point [-]

Dennett and Hofstadter have "extremely large" estimates of the time to intelligent machines as well. I expect such estimates will prove to be wrong - but it is true that we don't know much about the size of the target in the search space - or how rough that space is - so almost any estimate is defensible.

Comment author: timtyler 17 August 2009 08:10:14AM *  0 points [-]

Scott cites the Doomsday Argument in his "The Singularity Is Far":

http://scottaaronson.com/blog/?p=346

Surely that is a mistake. The Doomsday Argument may suggest that the days of humans like us are numbered, but it doesn't say much more than that - in particular, it can't be used to argue against a long and rich future filled with angelic manifestations. So: it is poor evidence against a relatively near era of transcension.

Comment author: timtyler 17 August 2009 09:19:59AM 0 points [-]

I'm not sure the halved doubling time for quantum computers is right.

Maybe I'm not getting into the spirit of accepting the proposed counterfactuals - but is quantum computer performance doubling regularly at all? It seems more as though it is jammed up against decoherence problems already.

Comment author: marc 17 August 2009 11:23:56PM 2 points [-]

I guess that quantum computers halve the doubling time, as compared to a classical computer, because every extra qubit doubles the dimension of the available state space (so doubling the number of qubits squares it). This could give the factor two in the exponent of Moore's law.

Quantum computing performance currently isn't doubling, but it isn't jammed either. Decoherence is no longer considered a fundamental limit; it's more a practical inconvenience. The change that brought this about was the invention of quantum error-correcting codes.

However, experimental physicists are still searching for the ideal practical implementation. You might compare the situation to the pre-silicon days of classical computing. Until this gets sorted out, I doubt there will be any Moore's-law-type growth.
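A two-line check of the exponent arithmetic behind marc's guess (my sketch, not marc's; it only restates the 2^n dimension count, not any claim about clock speed):

```python
# The state space of n qubits has dimension 2**n: each added qubit doubles
# the dimension, and doubling the number of qubits squares it.
def dim(n_qubits):
    return 2 ** n_qubits

print(dim(11) == 2 * dim(10))   # one extra qubit: dimension doubles -> True
print(dim(20) == dim(10) ** 2)  # twice the qubits: dimension squared -> True
```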

Comment author: timtyler 18 August 2009 07:55:27AM 0 points [-]

I looked at:

http://en.wikipedia.org/wiki/Quantum_error_correction

The bit about the threshold theorem looks interesting.

However, I would be more impressed by a working implementation ;-)

Comment author: Douglas_Knight 18 August 2009 04:36:31AM 2 points [-]

It's a purely theoretical counterfactual about the combination of Moore's law and Grover's algorithm.

Moore's law says that the computer becomes twice as efficient in 18 months. Grover's algorithm says that the time taken by a quantum computer to solve SAT is the square root of the time required by a classical computer. Thus in 18 months, Moore's law of hardware should make the quantum computer 4 times as fast.

Comment author: timtyler 18 August 2009 07:39:55AM *  1 point [-]

Well, I can see what math was done. The problem is the false assertion. I learned in math classes that if you accept one false thing, you can prove everything, and consequently your understanding of the difference between what's true and what's not dwindles to zero. You can't just believe one false thing.

If we actually "switched to quantum computers" it isn't clear we would get an exponential trajectory at all - due to the proximity of physical limits. If we did get an exponential trajectory, I can see no coherent reason for thinking the doubling time would relate to that of classical computers - because the technology is quite different. Currently, quantum computers grow mostly by adding qubits - not by the shrinking in component size that drives Moore's law in classical computers. That increases their quantum-parallelism, but doesn't affect their speed.

Comment author: pengvado 18 August 2009 08:19:58AM *  1 point [-]

Assume the number of quantum gate-ops per second doubles every 18 months. Assume SAT is O(2^n) on a classical computer and O(2^(n/2)) by Grover's. Then the maximum feasible problem size on a classical computer increases by 1 every 18 months, and on a quantum computer increases by 2. No factors of anything involved.
Alternately, if you measure a fixed problem size, then by assumption speed doubles for both.
So where does 4x come from?
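pengvado's count is easy to verify numerically (my sketch, assuming - as stated above - brute-force SAT at roughly 2^n classical operations and roughly 2^(n/2) via Grover's algorithm):

```python
import math

def max_sat_vars(budget_ops, quantum=False):
    # Largest n such that the search finishes within budget_ops operations:
    # classical brute force ~2**n ops, Grover ~2**(n/2) ops.
    n = math.log2(budget_ops)
    return 2 * n if quantum else n

budget = 1e12
classical_gain = max_sat_vars(2 * budget) - max_sat_vars(budget)
quantum_gain = (max_sat_vars(2 * budget, quantum=True)
                - max_sat_vars(budget, quantum=True))
# One Moore's-law doubling of the op budget buys ~1 extra variable
# classically and ~2 extra variables on the quantum machine.
print(round(classical_gain, 6), round(quantum_gain, 6))
```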

Comment author: Douglas_Knight 18 August 2009 05:59:14PM 1 point [-]

It just comes from treating classical computers as the correct measuring stick. It would be more precise to refer, as you do, to 18 months as the add-one time rather than the doubling time. But if you do call it the doubling time, then for quantum computers it becomes the 4x time. Of course, it's not uniform - it doesn't apply to problems in P.

Comment author: timtyler 18 August 2009 06:28:15PM 0 points [-]

With classical computers, Moore's law improves serial and parallel performance simultaneously - by making components smaller.

With quantum computers, serial and parallel performance are decoupled: more qubits improve parallel performance, while miniaturisation has no effect on the number of qubits but improves serial processing performance. So there are two largely independent means of speeding up quantum computing. Which one supposedly doubles twice as fast as in classical computers? Neither - AFAICS.

Comment author: Douglas_Knight 18 August 2009 09:25:25PM 1 point [-]

Sorry, my original response should have been "yes, you aren't getting into the spirit of the counterfactual."

Comment author: shirisaya 17 August 2009 08:16:25PM 1 point [-]

On the issue of many-worlds, I must just be slow, because I can't see how it is "obviously" correct. It certainly seems both self-consistent and consistent with observation, but I don't see how this in particular puts it so far ahead of other ways of understanding QM as to be the default view. If anyone knows of a really good summary for somebody who's actually studied physics on why MWI is so great (and sadly, Eliezer's posts here and on overcomingbias don't do it for me) I would greatly appreciate the pointer.

In particular, two things that I have a hard time wrapping my head around are:

- If multiple worlds really are "splitting" from our own, how is this accomplished without serious violations of mass and energy conservation? (I'm sure somebody has treated this somewhere since it's so basic, but I've never seen it.)
- Even assuming everything else is fine, the actual mechanism by which worlds diverge has to be spelled out. (Maybe it is somewhere; if so, please help me end my ignorance.)

I'll admit that I haven't actually spent a great deal of time considering the issue, but I've never come across answers to basic questions of this sort.

Comment author: timtyler 17 August 2009 09:05:46PM *  3 points [-]

For energy conservation see:

http://www.hedweb.com/manworld.htm#violate

The main reason for following the MWI is Occam's razor:

http://www.hedweb.com/manworld.htm#ockham%27s

Comment author: shirisaya 18 August 2009 03:21:33AM 0 points [-]

Thank you, this is exactly the type of linking that I was looking for. Unfortunately, the FAQ that you so kindly provided isn't providing the rigor that I'm looking for. In fact, for the energy conservation portion, I think (although I'm by no means certain) that the argument has been simplified to the point that the explanation being offered isn't true.

I guess what I'd really like is an explanation of MWI that actually ties the math and the explanations together closely. (I think that I'm expressing myself poorly, so I'm sorry if my point seems muddled, but I'd actually like to really understand what Eliezer seems to find so obvious.)

Comment author: timtyler 18 August 2009 08:02:19AM *  0 points [-]

The first sentence lays out the issue:

"the law of conservation of energy is based on observations within each world. All observations within each world are consistent with conservation of energy, therefore energy is conserved."

Conservation of energy takes place within worlds, not between them.

FWIW, I first learned about the MWI from: Paul C.W. Davies' book: "Other Worlds" - waay back in the 1980s. It was quite readable - and one of the better popular books on QM from that era. It succeeded in conveying the "Occam" advantage of the theory.

Comment author: shirisaya 18 August 2009 02:56:16PM 1 point [-]

OK, if that's really what it takes I guess I'll leave it at that. But I don't see the loss of generality from conservation laws operating on any closed system as a good thing, and I can't understand how weighting a world (that is claimed to actually exist) by a probability measure (that I've seen claimed to be meant as observed frequencies) is actually a reasonable thing to do.

I would actually like to understand this, and I suspect strongly that I'm missing something basic. Unfortunately, I don't have the time to make my ignorance suitable for public consumption, but if anyone would like to help enlighten me privately, I'd be delighted.

Comment author: byrnema 18 August 2009 12:23:24AM *  2 points [-]

I think that the many-worlds hypothesis is aesthetically appealing because it doesn't break symmetry. Suppose that in some set-up a particle can move down one path to the right or another path to the left, and there are exactly equal probabilities of either path being taken. Choosing one of the paths - by any mechanism - seems arbitrary. It is more logical that both paths are taken. But the two possibilities can't interact: two different worlds.

In the world we experience, objects do occasionally move to the right. If there is no alternate reality in which the object moved to the left, then eventually - with that object's movement, or the object that pushed it, or the object that pushed that, and so on - you have to explain how the symmetry was ever broken in the first place.

Physicists don't like spontaneous breaking of symmetry. So much so, that the idea of many worlds suddenly seems totally reasonable.

Later edit: This is similar to the argument Eliezer made, in more detail and with more physics here.

Comment author: shirisaya 18 August 2009 03:13:57AM 0 points [-]

In my understanding, what you have presented is an argument for why MWI is interesting (it has strong aesthetic appeal) and why it's worth looking into seriously (it doesn't seem to have spontaneous breaking of symmetry).

What I'm looking for is a compilation of reasons that I should believe it is true: basically a list of problems with other interpretations and how MWI fixes them, along with refutations of common objections to MWI. I should also note that I'm explicitly asking for rigorous arguments (I actually am a physicist and I'd like to see the math) and not just casual arguments that make things seem plausible.

Comment author: byrnema 18 August 2009 01:11:45PM *  1 point [-]

I should also note that I'm explicitly asking for rigorous arguments

Many worlds is an interpretation of quantum mechanics. QM stays exactly the same; mathematics, evidence and everything. Whether an interpretation is plausible really just depends on what is aesthetic and what makes sense to you. I explained why some other physicists find Many Worlds reasonable. It's always going to be this nebulous opinion-based "support" because it's not a matter of empirical fact -- unless it ever turned out there is some way the worlds interact.

In my understanding, what you have presented is an argument for why MWI is interesting (it has strong aesthetic appeal) and why it's worth looking into seriously (it doesn't seem to have spontaneous breaking of symmetry).

You've made a distinction between MWI being aesthetic and MWI being worth looking into seriously, which makes it sound like you view the argument to avoid spontaneous breaking of symmetry as more than just an aesthetic one. Can you pinpoint the physical reason why we like to avoid it? (I was wondering before.)

And then a question for the physical materialists: Why do you feel comfortable discussing multiple worlds; with it being an interpretation rather than an empirical fact? Or do you think there could ever be evidence one way or the other? (I just read Decoherence is Falsifiable and Testable and I believe Eliezer is saying that Many Worlds is a logical deduction of QM, so that having a non-many-world-theory would require additional postulates and evidence.)

Comment author: timtyler 18 August 2009 01:40:48PM 1 point [-]

Uh huh. See:

"What unique predictions does many-worlds make?"

"Could we detect other Everett-worlds?"

"Many worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett)"

Comment author: byrnema 18 August 2009 01:54:32PM *  1 point [-]

OK, thanks. I see that many-worlds could be falsifiable, if the many-worlds interact (or interfere). I really didn't know that was on the table.

Comment author: Z_M_Davis 18 August 2009 03:56:33AM *  2 points [-]

If anyone knows of a really good summary for somebody who's actually studied physics on why MWI is so great (and sadly, Eliezer's posts here and on overcomingbias don't do it for me) I would greatly appreciate the pointer.

You say Eliezer's posts didn't do it for you, but how much of it did you read? In particular, the point about parsimony favoring MWI is explained in "Decoherence is Simple". As for the mechanism of world divergence, I think the answer is that "worlds" are not an ontologically basic element of the theory. Rather, the theory is about complex amplitude in configuration space, and then from our perspective embedded within the physics, the evolution of the wavefunction seems like "worlds" "splitting."

Comment author: shirisaya 18 August 2009 03:04:37PM *  1 point [-]

You say Eliezer's posts didn't do it for you, but how much of it did you read?

I have read every post on overcomingbias and I'm pretty sure I've read every top-level post by Eliezer on Less Wrong. Although I very much enjoyed Eliezer's posts on the issue, they were intended for a wide audience and I'm looking for a technical discussion.

Comment author: Douglas_Knight 18 August 2009 04:18:54AM 3 points [-]

if multiple worlds really are "splitting"

What if instead of talking about "many worlds" we just said "no collapse"? There's just this state, and it evolves according to Schroedinger's equation. Then of course there's conservation of energy.

Comment author: shirisaya 18 August 2009 02:34:05PM 0 points [-]

Sure, I'm certainly not saying that the Copenhagen interpretation is correct, and my understanding is that a decoherence view is both more useful and simpler. MWI (at least as I understand it) is a significantly stronger claim. When we take the probabilities that come from wavefunction amplitudes as observed frequencies among actually existing "worlds", then we are claiming that there are many different versions of me that actually exist. It's this last part that I find a bit of a stretch.

Comment author: Douglas_Knight 18 August 2009 06:13:40PM 2 points [-]

If many different versions of you existing bothers you, does Schroedinger's cat bother you?

To the extent that MWI is a stronger claim than "no collapse," it's purely interpretative. It certainly doesn't posit any "splitting" beyond vanilla QM. Questions about conservation of energy suggest that you don't get this.

Comment author: timtyler 18 August 2009 10:38:21AM *  0 points [-]

It mostly revolves around the idea of collapse. There's no experimental evidence for a collapse. In the MWI, there's no collapse. If we find evidence for a collapse someday, we will have to discard the MWI. However, people have been looking for a while now - and there's no sign of a collapse so far. So, applying Occam's razor, you get the MWI - or something similar.

Comment author: timtyler 18 August 2009 09:40:42AM *  0 points [-]

The discussion got a bit sidetracked around about when EY asked something like:

If you are assuming that you can give the machine one value and have it stable, why assume that there are all these other values coming into it which you can't control.

...about 27 minutes in.

Scott said something about that being how humans work. That could be expanded on a bit:

In biology, it's hard to build values in explicitly, since the genes have limited control over the brain, which is a big self-organising system. It's as though the genes can determine the initial developmental trajectory - but then there's the wind to deal with.

If machine intelligence turns out to work much like that, then we may have similar difficulties building in machine values. If we can find a way of getting the machines to absorb values from surrounding agents, then that might save a lot of trouble.

Humans get many of their values from surrounding humans - via human culture. Were it not for that, we would be like our cannibal ancestors from a million years ago. Conscience and guilt are some of the mechanisms used to absorb those values. Evolution built those in - rather than all the details of the values of human society. It would have been technically difficult to build those details in - and the result would have been inflexible. Instead it built a learning machine - and allowed the details of the values of human society to be one of the things learned.

Machine intelligence is quite likely to work along those lines if it is built on a connectionist model - where the brain grows from a simple initial state. There, we can't easily wire in the details of particular values - since it is so hard to understand the details of what is going on. However, we can wire in some gross values - pain, suffering, irritation, etc. Guilt is basically a way of applying negative reinforcement to past actions. It's a fairly primitive value - the kind that it is easier to build in.
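As a toy illustration of that last point (entirely my sketch - the action names and the scalar "guilt" reward are made up for the example): one crude built-in signal plus learning can do the work of many hand-wired values.

```python
import random

def learn_values(disapproved, actions, trials=2000, lr=0.1, seed=0):
    # The agent starts with no opinions; "guilt" is just a -1 reinforcement
    # delivered after any action the surrounding society disapproves of.
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}
    for _ in range(trials):
        a = rng.choice(actions)
        reward = -1.0 if a in disapproved else 0.0
        value[a] += lr * (reward - value[a])  # exponential moving average
    return value

v = learn_values({"defect"}, ["defect", "cooperate"])
# "defect" is driven toward -1 while "cooperate" stays near 0: the details of
# the values were learned, only the guilt mechanism itself was built in.
```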

Comment author: timtyler 18 August 2009 09:59:30AM 0 points [-]

Time symmetry is probably not a big selling point of the classical formulation of the MWI. What with all those worlds in the future that don't exist in the past.

OK - no information is created or destroyed - so it's technically reversible - but that's not quite the same thing as temporal symmetry.

It would be better if it were formulated so there were lots of worlds in the past too. You don't lose anything that way - AFAICS.

Comment author: timtyler 18 August 2009 10:26:57AM 2 points [-]

Robot ant: http://www.youtube.com/watch?v=0jyBiECoS3Q

I would say real ants are currently waaay ahead of robot ant controllers.

On the other hand - like EY says - there's a whole bunch of things that we can do which ants can't. So it is not trivial to compare.

Comment author: wedrifid 29 August 2009 01:36:24PM 1 point [-]

And robot ant controllers are not examples of our most powerful AI creations to date.

Comment author: BrandonReinhart 18 August 2009 07:30:49PM *  3 points [-]

I picked up a copy of Jaynes off of ebay for a good price ($35.98). There are 2 copies left in that auction. Someone here might be interested:

http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=280380684353

No need to vote this comment up or down.

Comment author: Christian_Szegedy 18 August 2009 08:07:49PM *  3 points [-]

I liked the discussion, especially the final part on the many world interpretation (MWI).

I had the impression that Eliezer had a better understanding of quantum mechanics (QM); however, I found one of his remarks very misleading (and it rightly confused Scott too): Eliezer seemed to argue that MWI somehow resolves the difficulty of unifying QM with general relativity (GR) by resolving non-locality.

It is true that non-locality is resolved by Everett's interpretation, but the real problem with QM+GR is that the renormalization of quantized gravity does not seem to work out mathematically - at least not in a straightforward manner. However, MWI requires gravity to be quantized and has therefore forced physicists to come up with more elaborate solutions.

Anyway, I agree with Eliezer on the other arguments in favor of MWI (linearity, locality, objectivity, etc.), but I think that making overreaching remarks rendered his position a bit suspect for no good reason.

To be fair: MWI has its own technical quirks (e.g. choice of basis, explanation of probabilities,...) but they don't seem to be as fundamental as those of the classical interpretation. However the discussion would have been more interesting if Scott could have brought up those points rather than the purely philosophical issues.

Comment author: Eliezer_Yudkowsky 18 August 2009 09:06:31PM 4 points [-]

"relativity" was meant to refer to SR not GR

Comment author: Christian_Szegedy 19 August 2009 06:04:39AM 2 points [-]

Sorry, it seems I was too sloppy. I must even revise my opinion of Scott, who seemed to represent a very reasonable point of view - although (I agree with you) he tries to conform a bit too much for my taste as well.

Still, I have a special intuitive suspicion about the MWI: if physics is so extremely generous and powerful that it spits out all those universes with ease, why does it not allow us to solve exponential problems?

How come our world has such special physics that it allows us to construct machines slightly more powerful than Turing machines (in an asymptotic sense), yet still does not make exponential (or even NP-complete) problems tractable?

It looks like a strange twist of nature that we have this really special physics, allowing us to construct computational processes only in this very narrow middle ground of asymptotic complexity: generating an exponentially increasing number of universes, but not allowing their inhabitants to exploit them algorithmically to the full extent.

Can't it be that our world still has to obey certain complexity limits, and some of the universes have to be pruned away for some reason?

Comment author: timtyler 19 August 2009 02:43:04PM *  -3 points [-]

You're asking why we can't yet build quantum computers?

It may be down to inexperience.

Comment author: Eliezer_Yudkowsky 19 August 2009 03:39:13PM *  2 points [-]

This is a fascinating way of looking at it.

My first thought was to reply, "Yes, most worlds may need to be pruned a la Hanson's mangled worlds, but that doesn't mean you can end up with a single global world without violating Special Relativity, linearity, unitarity, continuity, CPT invariance, etc."

But on second thought this seems to be arguing even further, for the sort of deep revolution in QM that Scott wants - a reformulation that would nakedly expose the computational limits, and make the ontology no more extravagant than the fastest computation it can manage within a single world's quantum computer. So this would have to reduce the proliferation of worlds to sub-exponential, if I understand it correctly, based on the strange reasoning that if we can't do exponential computations in one world then this should be nakedly revealed in a sub-exponential global universe.

But you still cannot end up with a single world, for all the reasons already given - and quantum computers do not seem to be merely as powerful as classical computers, they do speed things up. So that argues that the ontology should be more than polynomial, even if sub-truly-exponential.

Comment author: Christian_Szegedy 20 August 2009 09:40:14PM *  3 points [-]

Thanks. I was not aware that Scott has the same concerns based on computational complexity that I have.

I am not even sure that the ontology needs to rely on non-classical capabilities. If our multiverse is a super-sophisticated branch-and-bound type algorithm for some purpose, then it could still be the fastest, albeit super-polynomial, algorithm.

Comment author: Eliezer_Yudkowsky 20 August 2009 10:14:18PM 2 points [-]

I was not aware that Scott has the same concerns based on computational complexity that I have.

Don't know if he does. I just mean that Scott wants a deep revolution in general, not that particular deep revolution.

Comment author: Christian_Szegedy 21 August 2009 08:16:15PM *  2 points [-]

Some other thoughts about the MWI that come to mind after a bit more thinking:

Here is a version of the Schroedinger's cat experiment that would let anyone test the MWI for himself: devise a quantum process that has a 99 percent probability of releasing into a room a nerve gas that kills humans without any pain. If I were really sure of the MWI, I would have no problem going into the room and pressing the button to start the experiment. In my own experience I would simply come out of the room unscathed, for certain, since that would be the only world I would experience. OTOH, if I really did get out of the room as if nothing had happened, I could deduce with high probability that the MWI is correct. (If not: just repeat the experiment a couple of times...)

I must admit, I am not really keen on doing the experiment. Why? Am I really so unconvinced about the MWI? What are my reasons for not performing it, even if I were 100% sure?

Another variation on the above line of thought: suppose it is 2020, and year after year since 2008 the Large Hadron Collider has had all kinds of random-looking technical defects that prevented it from performing the planned experiments at the 7 TeV scale. Finally a physicist comes up with a convincing calculation showing that the probability that the collider will produce a black hole is much, much higher than anticipated, and that the chances that the earth is destroyed are significant.

Would it be a convincing demonstration of the MWI? Even without the calculation, should we insist on trying to fix the LHC if we experience this pattern of its breaking down for years?
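The "just repeat the experiment" step can be made quantitative with a toy Bayes update (my sketch, not anything from the thread; whether a survivor is even entitled to condition on their own survival this way is precisely the contested anthropic point):

```python
def posterior_mwi(prior_mwi, p_survive_single_world, k_trials):
    # Conditional on your own survival: under MWI-style reasoning you always
    # find yourself in a surviving branch (likelihood 1); under a single-world
    # reading, surviving k independent 99%-lethal trials has likelihood p**k.
    like_mwi = 1.0
    like_single = p_survive_single_world ** k_trials
    num = prior_mwi * like_mwi
    return num / (num + (1.0 - prior_mwi) * like_single)

print(posterior_mwi(0.5, 0.01, 3))  # ~0.999999 after surviving three trials
```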

Comment author: bstark 23 August 2009 11:18:32PM 2 points [-]