
BOOK DRAFT: 'Ethics and Superintelligence' (part 1)

Post author: lukeprog 13 February 2011 10:09AM

I'm researching and writing a book on meta-ethics and the technological singularity. I plan to post the first draft of the book, in tiny parts, to the Less Wrong discussion area. Your comments and constructive criticisms are much appreciated.

This is not a book for a mainstream audience. Its style is that of contemporary Anglophone philosophy. Compare to, for example, Chalmers' survey article on the singularity.

Bibliographic references are provided here.

Part 1 is below...


Chapter 1: The technological singularity is coming soon.


The Wright Brothers flew their spruce-wood plane for 200 feet in 1903. Only 66 years later, Neil Armstrong walked on the moon, more than 240,000 miles from Earth.

The rapid pace of progress in the physical sciences drives many philosophers to science envy. Philosophers have been researching the core problems of metaphysics, epistemology, and ethics for millennia, yet they have not reached the kind of consensus that scientists have reached on so many core problems in physics, chemistry, and biology.

I won’t argue about why this is so. Instead, I will argue that maintaining philosophy’s slow pace and not solving certain philosophical problems in the next two centuries may lead to the extinction of the human species.

This extinction would result from a “technological singularity” in which an artificial intelligence (AI) of human-level general intelligence uses its intelligence to improve its own intelligence, which would enable it to improve its intelligence even more, which would lead to an “intelligence explosion” feedback loop that would give this AI inestimable power to accomplish its goals. If such an intelligence explosion occurs, then it is critically important to program the AI's goal system wisely. This project could mean the difference between a utopian solar system of unprecedented harmony and happiness, and a solar system in which all available matter is converted into parts for a planet-sized computer built to solve difficult mathematical problems.

The technical challenges of designing the goal system of such a superintelligence are daunting.[1] But even if we can solve those problems, the question of which goal system to give the superintelligence remains. It is a question of philosophy; it is a question of ethics.

Philosophy has impacted billions of humans through religion, culture, and government. But now the stakes are even higher. When the technological singularity occurs, the philosophy behind the goal system of a superintelligent machine will determine the fate of the species, the solar system, and perhaps the galaxy.

***

Now that I have laid my positions on the table, I must argue for them. In this chapter I argue that the technological singularity is likely to occur within the next 200 years unless a worldwide catastrophe drastically impedes scientific progress. In chapter two I survey the philosophical problems involved in designing the goal system of a singular superintelligence, which I call the “singleton.”

In chapter three I show how the singleton will produce very different future worlds depending on which normative theory is used to design its goal system. In chapter four I describe what is perhaps the most developed plan for the design of the singleton’s goal system: Eliezer Yudkowsky’s “Coherent Extrapolated Volition.” In chapter five, I present some objections to Coherent Extrapolated Volition.

In chapter six I argue that we cannot decide how to design the singleton’s goal system without considering meta-ethics, because normative theory depends on meta-ethics. In chapter seven I argue that we should invest little effort in meta-ethical theories that do not fit well with our emerging reductionist picture of the world, just as we quickly abandon scientific theories that don’t fit the available scientific data. I also specify several meta-ethical positions that I think are good candidates for abandonment.

But the looming problem of the technological singularity requires us to have a positive theory, too. In chapter eight I propose some meta-ethical claims about which I think naturalists should come to agree. In chapter nine I consider the implications of these plausible meta-ethical claims for the design of the singleton’s goal system.

***
[1] These technical challenges are discussed in the literature on artificial agents in general and Artificial General Intelligence (AGI) in particular. Russell and Norvig (2009) provide a good overview of the challenges involved in the design of artificial agents. Goertzel and Pennachin (2010) provide a collection of recent papers on the challenges of AGI. Yudkowsky (2010) proposes a new extension of causal decision theory to suit the needs of a self-modifying AI. Yudkowsky (2001) discusses other technical (and philosophical) problems related to designing the goal system of a superintelligence.


Comments (107)

Comment author: XiXiDu 13 February 2011 11:42:39AM 6 points [-]

In chapter two I survey the philosophical problems involved in designing the goal system of a singular superintelligence, which I call the “singleton.”

That sounds a bit like you invented the term "singleton". I suggest clarifying that with a footnote.

Comment author: Perplexed 13 February 2011 02:11:59PM *  7 points [-]

You (Luke) also need to provide reasons for focusing on the 'singleton' case. To the typical person first thinking about AI singularities, the notion of AIs building better AIs will seem natural, but the idea of an AI enhancing itself will seem weird and even paradoxical.

Comment author: lukeprog 13 February 2011 04:25:32PM 1 point [-]

Indeed I shall.

Comment author: gwern 13 February 2011 06:44:25PM *  4 points [-]

It's also worth noting that more than one person thinks the singleton wouldn't exist and alternative models are more likely. For example, Robin Hanson's em model (crack of a future dawn) is fairly likely given that we have a decent Whole Brain Emulation Roadmap, but nothing of the sort for a synthetic AI, and people like Nick Szabo emphatically disagree that a single agent could outperform a market of agents.

Comment author: Perplexed 13 February 2011 09:57:47PM 1 point [-]

Of course, people can be crushed by impersonal markets as easily as they can by singletons. The case might be made that we would prefer a singleton because the task of controlling it would be less complex and error-prone.

Comment author: gwern 13 February 2011 10:16:11PM 1 point [-]

A reasonable point, but I took Luke to be discussing the problems of designing a good singleton because a singleton seemed like the most likely outcome, not because he likes the singleton aesthetically or because a singleton would be easier to control.

Comment author: Perplexed 13 February 2011 10:57:06PM *  2 points [-]

In the context of CEV, Eliezer apparently thinks that a singleton is desirable, not just likely.

Only one superintelligent AMA (Artificial Moral Agent) is to be constructed, and it is to take control of the entire future light cone with whatever goal function is decided upon. Justification: a singleton is the likely default outcome for superintelligence, and stable co-existence of superintelligences, if achievable, would offer no inherent advantages for humans.

I'm not convinced, but since Luke is going to critique CEV in any case, this aspect should be addressed.

ETA: I have been corrected - the quotation was not from Eliezer. Also, the quote doesn't directly say that a singleton is a desirable outcome; it says that the assumption that we will be dealing with a singleton is a desirable feature of an FAI strategy.

Comment author: Nick_Tarleton 14 February 2011 11:59:09PM *  0 points [-]

I don't know how much you meant to suggest otherwise, but just for context, the linked paper was written by Roko and me, not Eliezer, and doesn't try to perfectly represent his opinions.

Comment author: Perplexed 15 February 2011 12:09:05AM 0 points [-]

No, I didn't realize that. Thx for the correction, and sorry for the misattribution.

Comment author: lukeprog 14 February 2011 06:21:48AM 0 points [-]

I have different justifications in mind, and yes I will be explaining them in the book.

Comment author: lukeprog 13 February 2011 07:25:43PM 0 points [-]

Yup, thanks.

Comment author: lukeprog 13 February 2011 04:33:19PM 0 points [-]

Done. It's from Bostrom (2006).

Comment author: Kevin 13 February 2011 09:36:34PM *  5 points [-]

Luke, as an intermediate step before writing a book you should write a book chapter for Springer's upcoming edited volume on the Singularity Hypothesis. http://singularityhypothesis.blogspot.com/p/about-singularity-hypothesis.html I'm not sure how biased they are against non-academics... probably depends on how many submissions they get.

Maybe email Louie and me and we can brainstorm about topics; meta-ethics might not be the best thing compared to something like making an argument about how we need to solve all of philosophy in order to safely build AI.

Comment author: mwaser 16 February 2011 12:05:48PM 2 points [-]

I know the individuals involved. They are not biased against non-academics and would welcome a well-thought-out contribution from anyone. You could easily have a suitable abstract ready by March 1st (two weeks early) if you believed that it was important enough -- and I would strongly urge you to do so.

Comment author: lukeprog 19 February 2011 05:53:16PM 3 points [-]

Thanks for this input. I'm currently devoting all my spare time to research on a paper for this volume so that I can hopefully have an extended abstract ready by March 15th.

Comment author: lukeprog 13 February 2011 10:18:21PM *  2 points [-]

I will probably write papers and articles in the course of developing the book. Whether or not I could have an abstract ready by March 15th is unknown; at the moment, I still work a full-time job. Thanks for bringing this to my attention.

Comment author: James_Miller 13 February 2011 07:17:27PM *  5 points [-]

The first sentence is the most important of any book because if a reader doesn't like it he will stop. Your first sentence contains four numbers, none of which are relevant to your core thesis. Forgive me for being cruel but a publisher reading this sentence would conclude that you lack the ability to write a book people would want to read.

Look at successful non-fiction books to see how they get started.

Comment author: lukeprog 13 February 2011 07:41:24PM *  6 points [-]

This is not a book for a popular audience. Also, it's a first draft. That said, you needn't apologize for saying anything "cruel."

But, based on your comments, I've now revised my opening to the following...

Compared to science, philosophy moves at a slow pace. A few decades after the Wright Brothers flew their spruce-wood plane for half the length of a football field, Neil Armstrong walked on the moon. Meanwhile, philosophers are still debating the questions Plato raised more than two millennia ago.

But the world is about to change. Maintaining philosophy’s slow pace and not solving certain philosophical problems in the next two centuries may lead to the extinction of the human species.

This extinction would result from...

It's still not like the opening of a Richard Dawkins book, but it's not supposed to be like a Richard Dawkins book.

Comment author: James_Miller 14 February 2011 12:00:57AM 7 points [-]

Better, but how about this:

"Philosophy's pathetic pace could kill us."

Comment author: [deleted] 13 February 2011 02:39:55PM 4 points [-]

I'm glad you're writing a book!

Comment author: Perplexed 13 February 2011 01:44:52PM 4 points [-]

Bibliographic references are provided here.

I notice some of the references you suggest are available as online resources. It would be a courtesy if you provided links.

Comment author: lukeprog 13 February 2011 07:24:59PM 0 points [-]

Done.

Comment author: Unnamed 13 February 2011 06:40:56PM 3 points [-]

I tried reading this through the eyes of someone who wasn't familiar with the singularity & LW ideas, and you lost me with the fourth paragraph ("This extinction..."). Paragraph 3 makes the extremely bold claim that humanity could face its extinction soon unless we solve some longstanding philosophical problems. When someone says something outrageous-sounding like that, they have a short window to get me to see how their claim could be plausible and is worth at least considering as a hypothesis, otherwise it gets classified as ridiculous nonsense. You missed that chance, and instead went with a dense paragraph filled with jargon (which is too inferentially distant to add plausibility) and more far-fetched claims (which further activate my bullshit detector).

What I'd like to see instead is a few paragraphs sketching out the argument in a way that's as simple, understandable, and jargon-free as possible. First why to expect an intelligence explosion (computers getting better and more domain general, what happens when they can do computer science?), then why the superintelligences could determine the fate of the planet (humans took over the planet once we got smart enough, what happens when the computers are way smarter than us?), then what this has to do with philosophy (philosophical rules about how to behave aren't essential for humans to get along with each other since we have genes, socialization, and interdependence due to limited power, but these computers won't have that so the way to behave will need to be programmed in).

Comment author: lukeprog 13 February 2011 07:27:51PM 1 point [-]

This is a difference between popular writing and academic writing. The opening is my abstract. See here.

Comment author: Unnamed 13 February 2011 09:02:33PM 3 points [-]

The problem that I described in my first paragraph is there regardless of how popular or academic a style you're aiming for. The bold, attention-grabbing claims about extinction/utopia/the fate of the world are a turnoff, and they actually seem more out of place for academic writing than for popular writing.

If you don't want to spend more time elaborating on your argument in order to make the bold claims sound plausible, you could just get rid of those bold claims. Maybe you could include one mention of the high stakes in your abstract, as part of the teaser of the argument to come, rather than vividly describing the high stakes before and after the abstract as a way to shout out "hey this is really important!"

Comment author: lukeprog 13 February 2011 09:27:27PM *  7 points [-]

Thanks for your comment, but I'm going with a different style. This kind of opening is actually quite common in Anglophone philosophy, as the quickest route to tenure is to make really bold claims and then come up with ingenious ways of defending them.

I know that Less Wrong can be somewhat averse to the style of contemporary Anglophone philosophy, but that will not dissuade me from using it. To drive home the point that my style here is common in Anglophone philosophy (I'm avoiding calling it analytic philosophy), here are a few examples...

The opening paragraphs of David Lewis' On the Plurality of Worlds, in which he defends a radical view known as modal realism, that all possible worlds actually exist:

This book defends modal realism: the thesis that the world we are part of is but one of a plurality of worlds, and that we who inhabit this world are only a few out of all the inhabitants of all the worlds.

I begin the first chapter by reviewing the many ways in which systematic philosophy goes more easily if we may presuppose modal realism...

In the second chapter, I reply to numerous objections...

In the third chapter, I consider the prospect that a more credible ontology might yield the same benefits...

Opening paragraph (abstract) of Neil Sinhababu's "Possible Girls" for the Pacific Philosophical Quarterly:

I argue that if David Lewis’ modal realism is true, modal realists from different possible worlds can fall in love with each other. I offer a method for uniquely picking out possible people who are in love with us and not with our counterparts. Impossible lovers and trans-world love letters are considered. Anticipating objections, I argue that we can stand in the right kinds of relations to merely possible people to be in love with them and that ending a transworld relationship to start a relationship with an actual person isn’t cruel to one’s otherworldly lover.

Opening paragraph of Peter Klein's "Human Knowledge and the Infinite Regress of Reasons" for Philosophical Perspectives:

The purpose of this paper is to ask you to consider an account of justification that has largely been ignored in epistemology. When it has been considered, it has usually been dismissed as so obviously wrong that arguments against it are not necessary. The view that I ask you to consider can be called "Infinitism." Its central thesis is that the structure of justificatory reasons is infinite and non-repeating. My primary reason for recommending infinitism is that it can provide an acceptable account of rational beliefs, i.e., beliefs held on the basis of adequate reasons, while the two alternative views, foundationalism and coherentism, cannot provide such an account.

And, the opening paragraph of Steven Maitzen's paper arguing that a classical theistic argument actually proves atheism:

Chapter 15 of Anselm's Prosblogion contains the germ of an argument that confronts theology with a serious trilemma: atheism, utter mysticism, or radical anti-Anselmianism. The argument establishes a disjunction of claims that Anselmians in particular, but not only they, will find disturbing: (a) God does not exist, (b) no human being can have even the slightest conception of God, or (c) the Anselmian requirement of maximal greatness in God is wrong. Since, for reasons I give below, (b) and (c) are surely false, I regard the argument as establishing atheism.

And those are just the first four works that came to mind. This kind of abrupt opening is the style of Anglophone philosophy, and that's the style I'm using. Anyone who keeps up with Anglophone philosophy lives and breathes this style of writing every week.

Anglophone philosophy is not written for people who are casually browsing for interesting things to read. It is written for academics who have hundreds and hundreds of papers and books we might need to read, and we need to know right away in the opening lines whether or not a particular book or paper addresses the problems we are researching.

Comment author: JohnD 13 February 2011 11:26:13AM 3 points [-]

There's not much to critically engage with yet, but...

I find it odd that you claim to have "laid [your] positions on the table" in the first half of this piece. As far as I can make out, the first half only describes a set of problems and possibilities arising from the "intelligence explosion". It doesn't say anything about your response or proposed solution to those problems.

Comment author: Eliezer_Yudkowsky 13 February 2011 05:14:34PM 6 points [-]

You say you'll present some objections to CEV. Can you describe a concrete failure scenario of CEV, and state a computational procedure that does better?

Comment author: lukeprog 13 February 2011 07:22:59PM 2 points [-]

As for concrete failure scenarios, yes - that will be the point of that chapter.

As for a computational procedure that does better, probably not. That is beyond the scope of this book. The book will be too long merely covering the ground that it does. Detailed alternative proposals will have to come after I have laid this groundwork - for myself as much as for others. However, I'm not convinced at all that CEV is a failed project, and that an alternative is needed.

Comment author: Eliezer_Yudkowsky 13 February 2011 08:59:33PM 4 points [-]

Can you give me one quick sentence on a concrete failure mode of CEV?

Comment author: cousin_it 13 February 2011 11:08:01PM *  6 points [-]

I'm confused by your asking such questions. Roko's basilisk is a failure mode of CEV. I'm not aware of any work by you or other SIAI people that addresses it, never mind work that would prove the absence of other, yet undiscovered "creative" flaws.

Comment author: Eliezer_Yudkowsky 14 February 2011 06:43:09AM 4 points [-]

Roko's original proposed basilisk is not and never was the problem in Roko's post. I don't expect it to be part of CEV, and it would be caught by generic procedures meant to prevent CEV from running if 80% of humanity turns out to be selfish bastards, like the Last Jury procedure (as renamed by Bostrom) or extrapolating a weighted donor CEV with a binary veto over the whole procedure.

EDIT: I affirm all of Nesov's answers (that I've seen so far) in the threads below.

Comment author: cousin_it 14 February 2011 01:56:17PM *  10 points [-]

wedrifid is right: if you're now counting on failsafes to stop CEV from doing the wrong thing, that means you could apply the same procedures to any other proposed AI, so the real value of your life's work is in the failsafe, not in CEV. What happened to all your clever arguments saying you can't put external chains on an AI? I just don't understand this at all.

Comment author: wedrifid 15 February 2011 08:53:11AM *  5 points [-]

wedrifid is right: if you're now counting on failsafes to stop CEV from doing the wrong thing, that means you could apply the same procedures to any other proposed AI, so the real value of your life's work is in the failsafe, not in CEV.

Since my name was mentioned I had better confirm that I generally agree with your point but would have left out this sentence:

What happened to all your clever arguments saying you can't put external chains on an AI?

I don't disagree with the principle of having a failsafe - and don't think it is incompatible with the aforementioned clever arguments. But I do agree that "but there is a failsafe" is an utterly abysmal argument in favour of preferring CEV<humanity> over an alternative AI goal system.

I just don't understand this at all.

Tell me about it. With most people if they kept asking the same question when the answer is staring them in the face and then act oblivious as it is told to them repeatedly I dismiss them as either disingenuous or (possibly selectively) stupid in short order. But, to borrow wisdom from HP:MoR:

.... that just doesn't sound like /Eliezer's/ style.

...but you can only think that thought so many times, before you start to wonder about the trustworthiness of that whole 'style' concept.

Comment author: Vladimir_Nesov 14 February 2011 02:53:58PM *  6 points [-]

Any given FAI design can turn out to be unable to do the right thing, which corresponds to tripping failsafes, but to be a FAI it must also be potentially capable (for all we know) of doing the right thing. Adequate failsafe should just turn off an ordinary AGI immediately, so it won't work as an AI-in-chains FAI solution. You can't make AI do the right thing just by adding failsafes, you also need to have a chance of winning.

Comment author: Eliezer_Yudkowsky 14 February 2011 04:29:26PM 1 point [-]

Affirmed.

Comment author: ciphergoth 14 February 2011 08:14:27AM 5 points [-]

Is the Last Jury written up anywhere? It's not in the draft manuscript I have.

Comment author: gwern 18 July 2011 03:35:49AM 3 points [-]

I assume Last Jury is just the Last Judge from CEV but with majority voting among n Last Judges.

Comment author: wedrifid 14 February 2011 08:16:00AM *  5 points [-]

it would be caught by generic procedures meant to prevent CEV from running if 80% of humanity turns out to be selfish bastards

I too am confused by your asking of such questions. Your own "80% of humanity turns out to be selfish bastards" gives a pretty good general answer to the question already.

"But we will not run it if it is bad" seems like it could be used to reply to just about anything. Sure, it is good to have safety measures no matter what you are doing, but not running it doesn't make CEV<humanity> desirable.

Comment author: XiXiDu 14 February 2011 11:30:14AM 3 points [-]

I'm completely confused now. I thought CEV was right by definition? If "80% of humanity turns out to be selfish bastards" then it will extrapolate on that. If we start to cherry pick certain outcomes according to our current perception, why run CEV at all?

Comment author: Vladimir_Nesov 14 February 2011 11:56:39AM 3 points [-]

CEV is not right by definition, it's only well-defined given certain assumptions that can fail. It should be designed so that if it doesn't shut down, then it's probably right.

Comment author: Tyrrell_McAllister 14 February 2011 05:58:35PM 4 points [-]

Sincere question: Why would "80% of humanity turns out to be selfish bastards" violate one of those assumptions? Is the problem the "selfish bastard" part? Or is it that the "80%" part implies less homogeneity among humans than CEV assumes?

Comment author: wedrifid 15 February 2011 02:34:17AM 1 point [-]

Why would "80% of humanity turns out to be selfish bastards" violate one of those assumptions?

It would certainly seem that 80% of humanity turning out to be selfish bastards is compatible with CEV<humanity> being well defined, but not with being 'right'. This does not technically contradict anything in the grandparent (which is why I didn't reply with the same question myself). It does, perhaps, go against the theme of Nesov's comments.

Basically, and as you suggest, either it must be acknowledged that 'not well defined' and 'possibly evil' are two entirely different problems or something that amounts to 'humans do not want things that suck' must be one of the assumptions.

Comment author: wedrifid 14 February 2011 12:15:16PM *  4 points [-]

I'm completely confused now. I thought CEV was right by definition? If "80% of humanity turns out to be selfish bastards" then it will extrapolate on that.

No, CEV<wedrifid> is right by definition. When CEV is used as shorthand for "the coherent extrapolated volitions of all of humanity" as is the case there then it is quite probably not right at all. Because many humans, to put it extremely politely, have preferences that are distinctly different to what I would call 'right'.

If we start to cherry pick certain outcomes according to our current perception, why run CEV at all?

Yes, that would be pointless, it would be far better to compare the outcomes to CEV<group_I_identify_with_sufficiently> (then just use the latter!) The purpose of doing CEV<humanity> at all is for signalling and cooperation.

Comment author: steven0461 14 February 2011 07:44:23PM 2 points [-]

Because many humans, to put it extremely politely, have preferences that are distinctly different to what I would call 'right'.

Before or after extrapolation? If the former then why does that matter, if the latter then how do you know?

Comment author: wedrifid 15 February 2011 02:22:09AM *  3 points [-]

Before or after extrapolation? If the former then why does that matter, if the latter then how do you know?

Former in as much as it allows inferences about the latter. I don't need to know with any particular confidence for the purposes of the point. The point was to illustrate possible (and overwhelmingly obvious) failure modes.

Hoping that CEV<humanity> is desirable rather than outright unfriendly isn't a particularly good reason to consider it. It is going to result in outcomes that are worse from the perspective of whoever is running the GAI than CEV<that person> and CEV<group more closely identified with>.

The purpose of doing CEV<humanity> at all is for signalling and cooperation (or, possibly, outright confusion).

Comment author: XiXiDu 14 February 2011 02:17:13PM 1 point [-]

The purpose of doing CEV<humanity> at all is for signalling and cooperation.

Do you mean it is simply an SIAI marketing strategy and that it is not what they are actually going to do?

Comment author: wedrifid 14 February 2011 02:44:17PM 4 points [-]

Do you mean it is simply an SIAI marketing strategy and that it is not what they are actually going to do?

Signalling and cooperation can include actual behavior.

Comment author: Vladimir_Nesov 14 February 2011 11:16:41AM 3 points [-]

"But we will not run it if it is bad" seems like it could be used to reply to just about anything. Sure, it is good to have safety measures no matter what you are doing, but not running it doesn't make CEV<humanity> desirable.

In the case where the assumptions fail and CEV ceases to be predictably good, the safety measures shut it down, so nothing happens. In the case where the assumptions hold, it works. As a result, CEV has good expected utility, and gives us a chance to try again with a different design if it fails.

Comment author: wedrifid 14 February 2011 11:56:05AM 3 points [-]

This does not seem to weaken the position you quoted in any way.

Failsafe measures are a great idea. They just don't do anything to privilege CEV<humanity> + failsafe over anything_else + failsafe.

Comment author: Vladimir_Nesov 14 February 2011 12:09:40PM 1 point [-]

Failsafe measures are a great idea. They just don't do anything to privilege CEV<humanity> + failsafe over anything_else + failsafe.

Yes. They make sure that [CEV<humanity> + failsafe] is not worse than not running any AIs. Uncertainty about whether CEV<humanity> works makes expected [CEV<humanity> + failsafe] significantly better than doing nothing. Presence of potential controlled shutdown scenarios doesn't argue for worthlessness of the attempt, even where detailed awareness of these scenarios could be used to improve the plan.

Comment author: wedrifid 14 February 2011 12:21:19PM *  0 points [-]

I'm actually not even sure whether you are trying to disagree with me or not but once again, in case you are, nothing here weakens my position.

Comment author: wedrifid 14 February 2011 11:05:12AM 2 points [-]

Roko's original proposed basilisk is not and never was the problem in Roko's post.

Of course, Roko did not originally propose a basilisk at all. Just a novel solution to an obscure game theory problem.

Comment deleted 14 February 2011 11:13:28AM *  [-]
Comment author: Vladimir_Nesov 14 February 2011 11:59:13AM 1 point [-]

If CEV has a serious bug, it won't correctly implement anyone's volition, and so someone's volition saying that CEV shouldn't have that bug won't help.

Comment author: lukeprog 13 February 2011 09:02:58PM 2 points [-]

Not until I get to that part of the writing and research, no.

Comment author: lukeprog 14 February 2011 04:56:31AM 5 points [-]
Comment author: Adele_L 13 November 2013 05:18:16AM 0 points [-]

Has this been published anywhere yet?

Comment author: lukeprog 13 November 2013 04:58:30PM 1 point [-]

A related thing that has since been published is Ideal Advisor Theories and Personal CEV.

I have no plans to write the book; see instead Bostrom's far superior Superintelligence, forthcoming.

Comment author: Dorikka 14 February 2011 06:09:46AM *  1 point [-]

Extrapolated humanity decides that the best possible outcome is to become the Affront. Now, if the FAI put everyone in a separate VR and tricked him into believing that he was acting all Affront-like, then everything would be great -- everyone would be content. However, people don't just want the experience of being the Affront -- everyone agrees that they want to be truly interacting with other sentiences which will often feel the brunt of each other's coercive action.

Comment author: Eliezer_Yudkowsky 14 February 2011 06:40:23AM 3 points [-]

Original version of grandparent contained, before I deleted it, "Besides the usual 'Eating babies is wrong, what if CEV outputs eating babies, therefore a better solution is CEV plus code that outlaws eating babies.'"

Comment author: lukeprog 14 February 2011 06:20:09AM *  2 points [-]

Dorikka,

I don't understand this. If the singleton's utility function was written such that its highest value was for humans to become the Affront, then making it the case that humans believed they were the Affront while not being the Affront would not satisfy the utility function. So why would the singleton do such a thing?

Comment author: Dorikka 15 February 2011 02:45:39AM 2 points [-]

I don't think that my brain was working optimally at 1am last night.

My first point was that our CEV might decide to go Baby-Eater, and so the FAI should treat the caring-about-the-real-world-state part of its utility function as a mere preference (like chocolate ice cream), and pop humanity into a nicely designed VR (though I didn't have the precision of thought necessary to put it into such language). However, it's pretty absurd for us to be telling our CEV what to do, considering that it will have much more information than we do and much more refined thinking processes. I actually don't think that our Last Judge should do anything more than watch for coding errors (as in, we forgot to remove known psychological biases when creating the CEV).

My second point was that the FAI should also slip us into a VR if we desire a world-state in which we defect from each other (with similar results as in the prisoner's dilemma). However, the counterargument from point 1 also applies to this point.
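The prisoner's-dilemma dynamic invoked here can be made concrete with a minimal sketch. The payoff numbers below are hypothetical (any values with temptation > reward > punishment > sucker would do); the point is only that defection is each player's best response no matter what the other does, yet mutual defection leaves both worse off than mutual cooperation:

```python
# Illustrative prisoner's dilemma payoffs: (row player, column player).
# Hypothetical numbers; any payoffs with T > R > P > S give the same result.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # sucker payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def best_response(opponent_move):
    """Return the move maximizing the row player's payoff
    against a fixed opponent move."""
    return max(["C", "D"], key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is a best response to either move, so rational players defect...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...yet mutual defection (1, 1) is worse for both than cooperation (3, 3).
```

This is the sense in which a world-state where everyone defects can be individually rational but collectively worse, which is what motivates the suggestion above.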

Comment author: nazgulnarsil 16 February 2011 01:21:39AM 2 points [-]

I have never understood what is wrong with the amnesia-holodecking scenario. (Is there a proper name for this?)

Comment author: Dorikka 16 February 2011 02:20:56AM 3 points [-]

If you want to, say, stop people from starving to death, would you be satisfied with being plopped on a holodeck with images of non-starving people? If so, then your stop-people-from-starving-to-death desire is not a desire to optimize reality into a smaller set of possible world-states, but simply a desire to have a set of sensations so that you believe starvation does not exist. The two are really different.

If you don't understand what I'm saying, the first two paragraphs of this comment might explain it better.

Comment author: nazgulnarsil 16 February 2011 02:25:44AM *  0 points [-]

thanks for clarifying. I guess I'm evil. It's a good thing to know about oneself.

Comment author: Dorikka 16 February 2011 02:30:23AM 0 points [-]

Uh, that was a joke, right?

Comment author: nazgulnarsil 16 February 2011 06:19:09AM 0 points [-]

no.

Comment author: Dorikka 16 February 2011 11:53:37PM 0 points [-]

What definition of evil are you using? I'm having trouble understanding why (how?) you would declare yourself evil, especially evil_nazgulnarsil.

Comment author: Sniffnoy 16 February 2011 09:03:04AM 0 points [-]

Well, it's essentially equivalent to wireheading.

Comment author: nazgulnarsil 16 February 2011 10:16:45AM 0 points [-]

which I also plan to do if everything goes tits-up.

Comment author: XiXiDu 13 February 2011 08:02:41PM 0 points [-]

However, I'm not convinced at all that CEV is a failed project, and that an alternative is needed.

Maybe you should rephrase it then to say that you'll present some possible failure modes of CEV that will have to be taken care of rather than "objections".

Comment author: lukeprog 13 February 2011 08:18:34PM 2 points [-]

No, I'm definitely presenting objections in that chapter.

Comment author: mwaser 16 February 2011 01:03:28PM 1 point [-]

MY "objection" to CEV is exactly the opposite of what you're expecting and asking for. CEV as described is not descriptive enough to allow the hypothesis "CEV is an acceptably good solution" to be falsified. Since it is "our wish if we knew more", etc., any failure scenario that we could possibly put forth can immediately be answered by altering the potential "CEV space" to answer the objection.

I have radically different ideas about where CEV is going to converge to than most people here. Yet, the lack of distinctions in the description of CEV causes my ideas to be included under any argument for CEV because CEV potentially is . . . ANYTHING! There are no concrete distinctions that clearly state that something is NOT part of the ultimate CEV.

Arguing against CEV is like arguing against science. Can you argue a concrete failure scenario of science? Now -- keeping Hume in mind, what does science tell the AI to do? It's precisely the same argument, except that CEV as a "computational procedure" is much less well-defined than the scientific method.

Don't get me wrong. I love the concept of CEV. It's a brilliant goal statement. But it's brilliant because it doesn't clearly exclude anything that we want -- and human biases lead us to believe that it will include everything we truly want and exclude everything we truly don't want.

My concept of CEV disallows AI slavery. Your answer to that is "If that is truly what a grown-up humanity wants/needs, then that is what CEV will be". CEV is the ultimate desire -- ever-changing and never real enough to be pinned down.

Comment author: XiXiDu 13 February 2011 11:37:15AM 2 points [-]

But even if we can solve those problems, the question of which goal system to give the superintelligence remains. It is a question of philosophy; it is a question of ethics.

Isn't it an interdisciplinary question, also involving decision theory, game theory and evolutionary psychology etc.? Maybe it is mainly a question about philosophy of ethics, but not solely?

Comment author: XiXiDu 13 February 2011 11:33:57AM *  2 points [-]

...and a solar system in which all available matter is converted into parts for a planet-sized computer built to solve difficult mathematical problems.

This sentence isn't very clear. People who don't know about the topic will think, "to create a utopia you also have to solve difficult mathematical problems."

This project could mean the difference between a utopian solar system of unprecedented harmony and happiness, and a solar system void of human values in which all available matter is being used to pursue a set of narrow goals.

Comment author: CharlesR 13 February 2011 04:32:45PM *  3 points [-]

"This extinction would result from a “technological singularity” in which an artificial intelligence (AI) . . . "

By this point, you've talked about airplanes, Apollo, science good, philosophy bad. Then you introduce the concepts of existential risk, claim we are at the cusp of an extinction level event, and the end of the world is going to come from . . . Skynet.

And we're only to paragraph four.

These are complex ideas. Your readers need time to digest them. Slow down.

You may also want to think about coming at this from another direction. If the goal is to convince your readers AI is dangerous, maybe you should introduce the concept of AI first. Then explain why they're dangerous. Use an example that everyone knows about and build on that. You need to establish rapport with your readers before you try to get them to accept strange ideas. (For example, it is common knowledge that computers are better at chess than humans.)

Finally, is your goal to get published? Nonfiction is usually written on spec. Some (many, all?) publishers are wary of buying anything that has already appeared on the internet. Just a few things to keep in mind.

Comment author: lukeprog 13 February 2011 07:24:47PM 4 points [-]

This is a difference between popular writing and academic writing. Academic writing begins with an abstract - a summary of your position and what you argue, without any explanation of the concepts involved or arguments for your conclusions. Only then do you proceed to explanation and argument.

As for publishing, that is less important than getting it written, and getting it written well. That said, the final copy will be quite a bit different than the draft sections posted here. My copy of this opening is already quite a bit different than what you see above.

Comment author: CharlesR 14 February 2011 03:11:20AM -2 points [-]

Clearly, I and others thought you were writing a popular book. No need to "school" us on the difference.

Comment author: lukeprog 14 February 2011 03:50:22AM *  0 points [-]

Okay.

It wasn't clear to me that you thought I was writing a popular book, since I denied that in my second paragraph (before the quoted passage from the book).

Comment author: CharlesR 14 February 2011 03:50:22PM *  0 points [-]

Your clarification wasn't in the original version of the preamble that I read. Or are you claiming that you haven't edited it? Because I clearly remember a different sentence structure.

However, I am willing to admit my memory is faulty on this.

Comment author: lukeprog 14 February 2011 08:36:45PM 1 point [-]

CharlesR,

My original clarification said that it was a cross between academic writing and mainstream writing, the result being something like 'Epistemology and the Psychology of Human Judgment.' That apparently wasn't clear enough, so I did indeed change my preamble recently to be clearer in its denial of popular style. Sorry if that didn't come through in the first round.

Comment author: CharlesR 14 February 2011 10:36:09PM 1 point [-]

And people wonder how wars get started . . .

Comment author: lukeprog 14 February 2011 11:31:34PM 1 point [-]

Heh. Sorry; I didn't mean to offend. I thought it was clear from my original preamble that this wasn't a popular-level work, but apparently not!

Comment author: XiXiDu 13 February 2011 11:12:40AM 3 points [-]

I haven't read all of the recent comments. Have you made progress yet on understanding Yudkowsky's meta-ethics sequence? I hope you let us know if you do (via a top-level post). It seems a bit weird to write a book on it if you neither understand it yet nor have set aside understanding it for the purposes of your book.

Anyway, I appreciate your efforts very much and think that the book will be highly valuable either way.

Comment author: lukeprog 13 February 2011 04:26:58PM 1 point [-]

For now, see here, though my presentation of Yudkowsky's views in the book will be longer and clearer.

Comment author: XiXiDu 13 February 2011 11:22:38AM *  2 points [-]

The Wright Brothers flew their spruce-wood plane for 200 feet in 1903. Only 66 years later, Neil Armstrong walked on the moon, more than 240,000 miles from Earth.

I'm not sure if there is a real connection here? Has any research on "flying machines" converged with rocket science? They seem not to be correlated very much, or the correlation is not obvious. Do you think it might be good to expand on that point, or rephrase it to show that there has been some kind of intellectual or economic speedup that caused the quick development of various technologies?

Comment author: timtyler 13 February 2011 12:04:48PM 1 point [-]

The connection is - presumably - powered flight.

Comment author: XiXiDu 13 February 2011 11:41:18AM 1 point [-]

In this chapter I argue that the technological singularity is likely to occur within the next 200 years...

If it takes 200 years it could just as well take 2000. I'm skeptical that if it doesn't occur this century it will occur next century for sure. If it doesn't occur this century, that might just as well mean it won't occur any time soon afterwards either.

Comment author: Normal_Anomaly 13 February 2011 04:44:00PM 3 points [-]

I have a similar feeling. If it hasn't happened within a century, I'll probably think (assume for the sake of argument I'm still around) that it will be in millennia or never.

Comment author: lukeprog 13 February 2011 07:26:17PM 0 points [-]

200 years is my 'outer bound.' It may very well happen much sooner, for example in 45 years.

Comment author: Daniel_Burfoot 13 February 2011 05:00:26PM 1 point [-]

I'll offer you a trade: an extensive and in-depth analysis of your book in return for an equivalent analysis of my book.

Quick note: I think explicit metadiscourse like "In Chapter 7 I argue that..." is ugly. Instead, try to fold those kinds of organizational notes into the flow of the text or argument. So write something like "But C.E.V. has some potential problems, as noted in Chapter 7, such as..." Or just throw away metadiscourse altogether.

Comment author: lukeprog 13 February 2011 05:07:42PM 0 points [-]

What is your book?

Comment author: Daniel_Burfoot 13 February 2011 06:01:53PM 0 points [-]

It's about the philosophy of science, machine learning, computer vision, computational linguistics, and (indirectly) artificial intelligence. It should be interesting/relevant to you, even if you don't buy the argument.

Comment author: lukeprog 13 February 2011 07:04:21PM 0 points [-]

Sorry, outside my expertise. In this book I'm staying away from technical implementation problems and sticking close to meta-ethics.

Comment author: lukeprog 13 February 2011 04:32:49PM 1 point [-]

Thanks, everyone. I agree with almost every point here and have updated my own copy accordingly. I especially look forward to your comments when I have something meaty to say.