ChristianKl comments on MIRI strategy - Less Wrong

5 Post author: ColonelMustard 28 October 2013 03:33PM


Comment author: ChristianKl 28 October 2013 04:32:34PM -2 points [-]

HPMOR could end with Harry destroying the world through a UFAI. The last chapters have already pointed toward Harry destroying the world.

Strategically, that seems like the best choice. HPMOR is more viral than some technical document, and there has already been a lot of effort invested in getting people to read it.

People bond with the characters. Ending the book with "now everyone is dead because an AGI went FOOM" lets people take that scenario seriously, and that's exactly the right time to tell them: "Hey, this scenario could also happen in our world, so let's do something to prevent it from happening."

Comment author: shminux 28 October 2013 05:59:31PM 7 points [-]

HPMOR could end with Harry destroying the world through a UFAI.

I would consider that probably the worst possible ending for HPMoR. I assume that Eliezer is smart enough to avoid overt propaganda.

Comment author: Coscott 28 October 2013 06:03:58PM 0 points [-]

What do you mean by "smart enough"? Do you think that ending would do harm to FAI?

Comment author: shminux 28 October 2013 06:09:12PM 5 points [-]

It would likely "do harm" to the story and consequently reduce its appeal and influence.

Comment author: Mitchell_Porter 29 October 2013 12:16:38AM 3 points [-]

Even more people have read the Bible, the Quran, and the Vedas, so why not put out pamphlets in which Jesus, Muhammad and Krishna discuss AGI?

Comment author: Lumifer 29 October 2013 12:51:59AM *  0 points [-]

why not put out pamphlets in which Jesus, Muhammad and Krishna discuss AGI?

Jesus: We excel at absorbing external influences and have no problems with setting up new cults (just look at Virgin Mary) -- so we'll just make a Holy Quadrinity! Once you go beyond monotheism there's no good reason to stop at three...

Muhammad: Ah, another prophet of Allah! I said I was the last but maybe I was mistaken about that. But one prophet more, one prophet less -- all is in the hands of Allah.

Krishna: Meh, Kali is more impressive anyways. Now where are my girls?

Comment author: BaconServ 29 October 2013 01:08:11AM 1 point [-]

That would probably upset many existing Christians. Clearly Jesus' second coming is in AI form.

Comment author: Lumifer 29 October 2013 01:24:13AM 5 points [-]

Robot Jesus! :-) And rapture is clearly just an upload.

Comment author: ChristianKl 29 October 2013 12:37:21AM -1 points [-]

I would be interested in reading them.

Comment author: ArisKatsaris 28 October 2013 06:58:14PM 2 points [-]

HPMOR could end with Harry destroying the world through a UFAI.

No, it couldn't.

Comment author: ChristianKl 28 October 2013 07:16:12PM 3 points [-]

There are multiple claims in the book that Harry will destroy the world. It starts in the first chapter with "The world will end". Interestingly, that line wasn't there when the chapter was first published; it was added retrospectively.

Creating an AI in that world is just a matter of creating a magical item. Harry knows how to make items self-aware. Harry knows that magical creatures like trolls constantly self-modify through magic. Harry is into inventing powerful new spells.

All the pieces for building an AGI that goes FOOM are there in the book.

Comment author: ArisKatsaris 28 October 2013 07:29:10PM 1 point [-]

All the pieces for building an AGI that goes FOOM are there in the book.

I assign 2% probability to this scenario. What probability do you assign?

Comment author: ChristianKl 28 October 2013 07:34:03PM 0 points [-]

Given that the pieces were all there the last time I read it, p=.99 for that claim.

The more interesting claim is that an AGI actually goes FOOM. I say p=.65.

Comment author: ArisKatsaris 28 October 2013 07:53:01PM 0 points [-]

The more interesting claim is that an AGI actually goes FOOM.

Yeah, that was the claim I meant.

I say p=.65.

Would you be willing to bet on this? I'd be willing to bet 2 of my dollars against 1 of yours, that no AGI will go FOOM in the remainder of the HPMoR story (for a maximum of 200 of my dollars vs 100 of yours)

Comment author: gwern 28 October 2013 10:50:17PM 2 points [-]

I'd be willing to bet 2 of my dollars against 1 of yours, that no AGI will go FOOM in the remainder of the HPMoR story (for a maximum of 200 of my dollars vs 100 of yours)

Even in early 2012, I didn't think 2:1 was the odds for an AGI fooming in MoR...

How would you like to bet 1 of your dollars against 3 of my dollars that an AGI will go FOOM? Up to a max of 120 of my dollars and 40 of yours; i.e. if an AGI goes FOOM, I pay you $120, and if it doesn't, you pay me $40. (Payment through Paypal.) Given your expressed odds, this should look like a good deal to you.
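Whether the offer "should look like a good deal" follows directly from the stated stakes and probabilities. A minimal sketch (the function name is mine; the $120/$40 stakes and the 2%/65% figures are from the thread):

```python
def ev(p, win=120, lose=40):
    """Expected value of accepting gwern's offer,
    given belief p that an AGI goes FOOM in the story."""
    return p * win - (1 - p) * lose

print(ev(0.65))  # positive (about +64): profitable at ChristianKl's 65%
print(ev(0.02))  # negative (about -36.8): a loss at ArisKatsaris's 2%
```

The break-even belief is where p * 120 = (1 - p) * 40, i.e. p = 40/160 = 25%, so the offer is attractive only to someone well above ArisKatsaris's stated probability.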

Comment author: ArisKatsaris 28 October 2013 11:10:41PM 0 points [-]

i.e. if an AGI goes FOOM, I pay you $120, and if it doesn't, you pay me $40. (Payment through Paypal.) Given your expressed odds, this should look like a good deal to you.

I said I assign 2% probability to an AGI going FOOM in the story. So how would this be a good deal for me?

The odds I offered to ChristianKl were meant to express a middle ground between the odds I expressed (2%) and the odds he expressed (65%), so that the bet would seem about equally profitable to both of us, given our stated probabilities.

Comment author: gwern 28 October 2013 11:25:27PM 1 point [-]

Bah! Fine then, we won't bet. IMO, you should have offered more generous terms. If your true probability is 2%, then that's an odds against of 1:49, while his 65% would be 1:0.53, if I'm cranking the formula right. So a 1:2 doesn't seem like a true split.

Comment author: ArisKatsaris 29 October 2013 02:58:42PM 0 points [-]

You are probably right that it's not a true split -- I just did a stupid "add and divide by 2" on the percentages, but it doesn't really work like that. He would anticipate losing once every 3 times, but given my percentages I anticipated losing once every 50 times. (I'm not very mathy at all)
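The conversion gwern describes can be written out. A small sketch (variable names mine; the 2% and 65% figures are from the thread, and the geometric mean is one conventional way of splitting two sets of odds multiplicatively rather than averaging the percentages):

```python
import math

def odds_against(p):
    """Odds against an event of probability p, as losses per win."""
    return (1 - p) / p

print(odds_against(0.02))  # ~49   -> 49:1 against (ArisKatsaris)
print(odds_against(0.65))  # ~0.54 -> roughly 0.54:1 against (ChristianKl)

# Splitting the difference multiplicatively gives a stake ratio of
# roughly 5:1, noticeably steeper than the offered 2:1:
fair = math.sqrt(odds_against(0.02) * odds_against(0.65))
print(round(fair, 2))  # ~5.14
```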

Comment author: ChristianKl 28 October 2013 08:19:47PM 0 points [-]

Would you be willing to bet on this? I'd be willing to bet 2 of my dollars against 1 of yours, that no AGI will go FOOM in the remainder of the HPMoR story (for a maximum of 200 of my dollars vs 100 of yours)

At the moment I unfortunately don't have enough cash to invest in betting projects.

Additionally, I don't know Eliezer personally, and there are people on LessWrong who do and who might have access to nonpublic information. As a result it's not a good topic for betting money.

Comment author: gwern 28 October 2013 10:20:01PM *  2 points [-]

At the moment I unfortunately don't have enough cash to invest in betting projects.

Fortunately, that's why we have PredictionBook! Looking through my compilation of predictions (http://www.gwern.net/hpmor-predictions), I see we already have two relevant predictions:

(I've added a new more general one as well.)

Comment author: ChristianKl 29 October 2013 03:26:25AM 0 points [-]

I added my prediction to that.

Comment author: TheOtherDave 28 October 2013 07:17:40PM 1 point [-]

This comment from a few years back and the associated discussion seems vaguely relevant.

Comment author: gattsuru 28 October 2013 04:49:25PM *  1 point [-]

That strikes me as incredibly likely to backfire. Most obviously, a paper of more than half a million words is a little much as an introductory work, especially with things like the War of the Three Armies (because Death Note wasn't complicated enough!). Media where our heroes destroy a planet also tend to have word-of-mouth problems when not a comedy or written by Tomino.

More subtly, there are some serious criticisms of the idea of the Singularity, and more generally of transhumanism, that rest on points which would be obviated in HPMoR by the Harry Potter series having started as a fantasy series for young teens, and by the genre conventions of fantasy, rather than by the strength of MIRI's arguments. Many of these criticisms are not terribly strong. They are still shouted as if strong AI were Rumpelstiltskin, unable to stand the sound of an oddly formed name, and HPMoR would have to be twisted very hard to counter them.

Comment author: ChristianKl 28 October 2013 04:57:37PM 3 points [-]

A lot of people think of strong AI as something like C-3PO from Star Wars. Science fiction has the power to give people mental models even if it isn't realistic.

The magical environment of the Matrix movies shapes how people think about the simulation argument.

Comment author: gattsuru 28 October 2013 05:18:17PM *  2 points [-]

Very true. I'd recommend against using Star Wars as a setting for cautionary tales about the Singularity, as well. The Harry Potter setting is just particularly bad, because we've already seen and encountered methods for producing human-intelligence artificial constructs that think just like a human. If Rationalist!Harry ends up having the solar system wallpapered with smiley faces, it's a lot less believable that he did it because The Machine Doesn't Care when quite a number of other machines already have.

You'll have to fight assumptions like metaphysical dualism or what limitations self-reinforcing processes might have, no matter what you do, because those mental models apply in fairly broad strokes, but it's a lot easier to do so when the setting isn't fighting you at the same time.

Comment author: ChristianKl 28 October 2013 09:18:17PM -1 points [-]

You'll have to fight assumptions like metaphysical dualism or what limitations self-reinforcing processes might have, no matter what you do, because those mental models apply in fairly broad strokes, but it's a lot easier to do so when the setting isn't fighting you at the same time.

I don't think that you have to fight assumptions of metaphysical dualism. I think that the people who don't believe in UFAI as a risk on that basis are not the ones that are dangerous and might develop an AGI.

Comment author: gattsuru 29 October 2013 04:39:38PM 2 points [-]

That's an appealing thought, but I'm not sure it's a true one.

For one, if we're talking about appealing to general audiences, many folk won't be trying to develop an AGI but will still be relevant to our interests. Thinking that AGIs cannot invent because they lack souls, or that AGIs will be friendly if annoying golden translation droids, may be inconsistent with writing evolutionary algorithms, but is not necessarily inconsistent with having investment or political capital.

At a deeper level, a lot of folk do hold such beliefs while simultaneously having inconsistent belief structures, which may still leave them dangerous. It is demonstrably possible to have incorrect beliefs about evolution yet run a PCR, or to think it's easy to preserve semantic significance yet be a computer programmer. It's tempting to dismiss people who hold irrational beliefs, since rationality strongly correlates with long-term success, but from an absolute safety perspective that gets increasingly risky.

Comment author: ChristianKl 29 October 2013 11:46:45PM -1 points [-]

You need a bit more to develop an AGI than to run a PCR someone else invented. I don't think you can develop an AGI when you believe AGI is impossible due to metaphysical dualism.

You can believe that humans have souls and still design AGIs that have minds but no souls, but you won't get far at developing an AGI with something like a mind if you think that task is impossible.

Comment author: ChrisHallquist 29 October 2013 07:08:44PM 0 points [-]

I don't expect AI itself to show up, but I think it's clear that in the story magic is serving as a sort of metaphor for AI, with Harry playing the role of an ambitious AI researcher: Harry wants to use magic to solve death and make everything perfect, but we've gotten a lot of warning that Harry's plans could go horribly wrong and possibly destroy the world.

Eliezer once mentioned he was considering a "solve this puzzle or the story ends sad" conclusion for HPMOR like he did for Three Worlds Collide. If Eliezer goes through with that, I expect the "sad" ending to be "Harry destroys the world." Or if Eliezer doesn't do that, he may just make it clear how Harry came very close to destroying the world before finding another solution.

Comment author: ChrisHallquist 29 October 2013 07:10:32PM 0 points [-]

EDIT: So how does Harry almost destroy the world? My own personal theory is "conservation law arbitrage." Or maybe some plan involving Dementors going horribly wrong.

Comment author: fubarobfusco 29 October 2013 05:20:47PM 0 points [-]

In fiction, deus (or diabolus) ex machina is considered an anti-pattern.