I'm researching and writing a book on meta-ethics and the technological singularity. I plan to post the first draft of the book, in tiny parts, to the Less Wrong discussion area. Your comments and constructive criticisms are much appreciated.

This is not a book for a mainstream audience. Its style is that of contemporary Anglophone philosophy. Compare to, for example, Chalmers' survey article on the singularity.

Bibliographic references are provided here.

Part 1 is below...

Chapter 1: The technological singularity is coming soon.

The Wright Brothers flew their spruce-wood plane for 200 feet in 1903. Only 66 years later, Neil Armstrong walked on the moon, more than 240,000 miles from Earth.

The rapid pace of progress in the physical sciences drives many philosophers to science envy. Philosophers have been researching the core problems of metaphysics, epistemology, and ethics for millennia without reaching the kind of consensus that scientists have achieved on so many core problems in physics, chemistry, and biology.

I won’t argue about why this is so. Instead, I will argue that if philosophy maintains its slow pace and fails to solve certain philosophical problems within the next two centuries, the result may be the extinction of the human species.

This extinction would result from a “technological singularity” in which an artificial intelligence (AI) of human-level general intelligence uses its intelligence to improve its own intelligence, which would enable it to improve its intelligence still further, leading to an “intelligence explosion” feedback loop that would give this AI inestimable power to accomplish its goals. If such an intelligence explosion occurs, then it is critically important that we program the AI’s goal system wisely. This project could mean the difference between a utopian solar system of unprecedented harmony and happiness, and a solar system in which all available matter is converted into parts for a planet-sized computer built to solve difficult mathematical problems.

The technical challenges of designing the goal system of such a superintelligence are daunting.[1] But even if we can solve those problems, the question of which goal system to give the superintelligence remains. It is a question of philosophy; it is a question of ethics.

Philosophy has shaped the lives of billions of humans through religion, culture, and government. But now the stakes are even higher. When the technological singularity occurs, the philosophy behind the goal system of a superintelligent machine will determine the fate of our species, our solar system, and perhaps our galaxy.

***

Now that I have laid my positions on the table, I must argue for them. In this chapter I argue that the technological singularity is likely to occur within the next 200 years unless a worldwide catastrophe drastically impedes scientific progress. In chapter two I survey the philosophical problems involved in designing the goal system of a singular superintelligence, which I call the “singleton.”

In chapter three I show how the singleton will produce very different future worlds depending on which normative theory is used to design its goal system. In chapter four I describe what is perhaps the most developed plan for the design of the singleton’s goal system: Eliezer Yudkowsky’s “Coherent Extrapolated Volition.” In chapter five, I present some objections to Coherent Extrapolated Volition.

In chapter six I argue that we cannot decide how to design the singleton’s goal system without considering meta-ethics, because normative theory depends on meta-ethics. In chapter seven I argue that we should invest little effort in meta-ethical theories that do not fit well with our emerging reductionist picture of the world, just as we quickly abandon scientific theories that don’t fit the available scientific data. I also specify several meta-ethical positions that I think are good candidates for abandonment.

But the looming problem of the technological singularity requires us to have a positive theory, too. In chapter eight I propose some meta-ethical claims about which I think naturalists should come to agree. In chapter nine I consider the implications of these plausible meta-ethical claims for the design of the singleton’s goal system.

***

[1] These technical challenges are discussed in the literature on artificial agents in general and Artificial General Intelligence (AGI) in particular. Russell and Norvig (2009) provide a good overview of the challenges involved in the design of artificial agents. Goertzel and Pennachin (2010) provide a collection of recent papers on the challenges of AGI. Yudkowsky (2010) proposes a new extension of causal decision theory to suit the needs of a self-modifying AI. Yudkowsky (2001) discusses other technical (and philosophical) problems related to designing the goal system of a superintelligence.

 


In chapter two I survey the philosophical problems involved in designing the goal system of a singular superintelligence, which I call the “singleton.”

That sounds a bit like you invented the term "singleton". I suggest clarifying that with a footnote.

9Perplexed13y
You (Luke) also need to provide reasons for focusing on the 'singleton' case. To the typical person first thinking about AI singularities, the notion of AIs building better AIs will seem natural, but the idea of an AI enhancing itself will seem weird and even paradoxical.
2lukeprog13y
Indeed I shall.
4gwern13y
It's also worth noting that more than one person thinks a singleton won't arise and that alternative models are more likely. For example, Robin Hanson's em model ("The Crack of a Future Dawn") is fairly likely given that we have a decent Whole Brain Emulation Roadmap, but nothing of the sort for a synthetic AI, and people like Nick Szabo emphatically disagree that a single agent could outperform a market of agents.
1Perplexed13y
Of course, people can be crushed by impersonal markets as easily as they can by singletons. The case might be made that we would prefer a singleton because the task of controlling it would be less complex and error-prone.
1gwern13y
A reasonable point, but I took Luke to be discussing the problems of designing a good singleton because a singleton seemed like the most likely outcome, not because he likes the singleton aesthetically or because a singleton would be easier to control.
2Perplexed13y
In the context of CEV, Eliezer apparently thinks that a singleton is desirable, not just likely. I'm not convinced, but since Luke is going to critique CEV in any case, this aspect should be addressed. ETA: I have been corrected - the quotation was not from Eliezer. Also, the quote doesn't directly say that a singleton is a desirable outcome; it says that the assumption that we will be dealing with a singleton is a desirable feature of an FAI strategy.
0Nick_Tarleton13y
I don't know how much you meant to suggest otherwise, but just for context, the linked paper was written by Roko and me, not Eliezer, and doesn't try to perfectly represent his opinions.
0Perplexed13y
No, I didn't realize that. Thx for the correction, and sorry for the misattribution.
0lukeprog13y
I have different justifications in mind, and yes I will be explaining them in the book.
0lukeprog13y
Yup, thanks.
0lukeprog13y
Done. It's from Bostrom (2006).

You say you'll present some objections to CEV. Can you describe a concrete failure scenario of CEV, and state a computational procedure that does better?

4lukeprog13y
As for concrete failure scenarios, yes - that will be the point of that chapter. As for a computational procedure that does better, probably not. That is beyond the scope of this book. The book will be too long merely covering the ground that it does. Detailed alternative proposals will have to come after I have laid this groundwork - for myself as much as for others. However, I'm not at all convinced that CEV is a failed project or that an alternative is needed.
4Eliezer Yudkowsky13y
Can you give me one quick sentence on a concrete failure mode of CEV?
9cousin_it13y
I'm confused by your asking such questions. Roko's basilisk is a failure mode of CEV. I'm not aware of any work by you or other SIAI people that addresses it, never mind work that would prove the absence of other, yet undiscovered "creative" flaws.
5Eliezer Yudkowsky13y
Roko's original proposed basilisk is not and never was the problem in Roko's post. I don't expect it to be part of CEV, and it would be caught by generic procedures meant to prevent CEV from running if 80% of humanity turns out to be selfish bastards, like the Last Jury procedure (as renamed by Bostrom) or extrapolating a weighted donor CEV with a binary veto over the whole procedure. EDIT: I affirm all of Nesov's answers (that I've seen so far) in the threads below.

wedrifid is right: if you're now counting on failsafes to stop CEV from doing the wrong thing, that means you could apply the same procedures to any other proposed AI, so the real value of your life's work is in the failsafe, not in CEV. What happened to all your clever arguments saying you can't put external chains on an AI? I just don't understand this at all.

8Vladimir_Nesov13y
Any given FAI design can turn out to be unable to do the right thing, which corresponds to tripping failsafes, but to be an FAI it must also be potentially capable (for all we know) of doing the right thing. An adequate failsafe should just turn off an ordinary AGI immediately, so it won't work as an AI-in-chains FAI solution. You can't make an AI do the right thing just by adding failsafes; you also need to have a chance of winning.
0Eliezer Yudkowsky13y
Affirmed.
5wedrifid13y
Since my name was mentioned, I had better confirm that I generally agree with your point but would have left out this sentence: I don't disagree with the principle of having a failsafe - and don't think it is incompatible with the aforementioned clever arguments. But I do agree that "but there is a failsafe" is an utterly abysmal argument in favour of preferring CEV over an alternative AI goal system. Tell me about it. With most people, if they keep asking the same question when the answer is staring them in the face, and then act oblivious as it is told to them repeatedly, I dismiss them as either disingenuous or (possibly selectively) stupid in short order. But, to borrow wisdom from HP:MoR:
9Paul Crowley13y
Is the Last Jury written up anywhere? It's not in the draft manuscript I have.
4gwern13y
I assume Last Jury is just the Last Judge from CEV but with majority voting among n Last Judges.
6wedrifid13y
I too am confused by your asking of such questions. Your own "80% of humanity turns out to be selfish bastards" gives a pretty good general answer to the question already. "But we will not run it if it is bad" seems like it could be used to reply to just about anything. Sure, it is good to have safety measures no matter what you are doing, but not running it doesn't make CEV desirable.
4XiXiDu13y
I'm completely confused now. I thought CEV was right by definition? If "80% of humanity turns out to be selfish bastards" then it will extrapolate on that. If we start to cherry pick certain outcomes according to our current perception, why run CEV at all?
5wedrifid13y
No, CEV<wedrifid> is right by definition. When CEV is used as shorthand for "the coherent extrapolated volitions of all of humanity", as is the case there, then it is quite probably not right at all, because many humans, to put it extremely politely, have preferences that are distinctly different from what I would call 'right'. Yes, that would be pointless; it would be far better to compare the outcomes to CEV<group_I_identify_with_sufficiently> (then just use the latter!). The purpose of doing CEV at all is for signalling and cooperation.
3steven046113y
Before or after extrapolation? If the former then why does that matter, if the latter then how do you know?
3wedrifid13y
The former, inasmuch as it allows inferences about the latter. I don't need to know with any particular confidence for the purposes of the point. The point was to illustrate possible (and overwhelmingly obvious) failure modes. Hoping that CEV is desirable rather than outright unfriendly isn't a particularly good reason to consider it. It is going to result in outcomes that are worse, from the perspective of whoever is running the GAI, than CEV<that_person> and CEV<group_that_person_identifies_with>. The purpose of doing CEV at all is for signalling and cooperation (or, possibly, outright confusion).
1XiXiDu13y
Do you mean it is simply an SIAI marketing strategy and that it is not what they are actually going to do?
4wedrifid13y
Signalling and cooperation can include actual behavior.
4Vladimir_Nesov13y
CEV is not right by definition; it's only well-defined given certain assumptions that can fail. It should be designed so that if it doesn't shut down, then it's probably right.
7Tyrrell_McAllister13y
Sincere question: Why would "80% of humanity turns out to be selfish bastards" violate one of those assumptions? Is the problem the "selfish bastard" part? Or is it that the "80%" part implies less homogeneity among humans than CEV assumes?
0wedrifid13y
It would certainly seem that 80% of humanity turning out to be selfish bastards is compatible with CEV being well defined, but not with being 'right'. This does not technically contradict anything in the grandparent (which is why I didn't reply with the same question myself). It does, perhaps, go against the theme of Nesov's comments. Basically, and as you suggest, either it must be acknowledged that 'not well defined' and 'possibly evil' are two entirely different problems or something that amounts to 'humans do not want things that suck' must be one of the assumptions.
0XiXiDu13y
I suppose you have to comprehend Yudkowsky's metaethics to understand that sentence. I still don't get what kind of 'right' people are talking about.
8wedrifid13y
Very similar to your right, for all practical purposes. A slight difference in how it is described though. You describe (if I recall), 'right' as being "in accordance with XiXiDu's preferences". Using Eliezer's style of terminology you would instead describe 'right' as more like a photograph of what XiXiDu's preferences are, without them necessarily including any explicit reference to XiXiDu. In most cases it doesn't really matter. It starts to matter once people start saying things like "But what if XiXiDu could take a pill that made him prefer that he eat babies? Would that mean it became right? Should XiXiDu take the pill?" By the way, 'right' would also mean what the photo looks like after it has been airbrushed a bit in photoshop by an agent better at understanding what we actually want than we are at introspection and communication. So it's an abstract representation of what you would want if you were smarter and more rational but still had your preferences. Also note that Eliezer sometimes blurs the line between 'right' meaning what he would want and what some abstract "all of humanity" would want.
3Vladimir_Nesov13y
In the case where the assumptions fail and CEV ceases to be predictably good, safety measures shut it down, so nothing happens. In the case where the assumptions hold, it works. As a result, CEV has good expected utility, and gives us a chance to try again with a different design if it fails.
5wedrifid13y
This does not seem to weaken the position you quoted in any way. Failsafe measures are a great idea. They just don't do anything to privilege CEV + failsafe over anything_else + failsafe.
0Vladimir_Nesov13y
Yes. They make sure that [CEV + failsafe] is not worse than not running any AIs. Uncertainty about whether CEV works makes expected [CEV + failsafe] significantly better than doing nothing. Presence of potential controlled shutdown scenarios doesn't argue for worthlessness of the attempt, even where detailed awareness of these scenarios could be used to improve the plan.
-1wedrifid13y
I'm actually not even sure whether you are trying to disagree with me or not but once again, in case you are, nothing here weakens my position.
0Vladimir_Nesov13y
"Not running it" does make [CEV + failsafe] desirable, as compared to doing nothing, even in the face of problems with [CEV], and nobody is going to run just [CEV]. So most arguments for presence of problems in CEV, if they are met with adequate failsafe specifications (which is far from a template to reply to anything, failsafes are not easy), do indeed lose a lot of traction. Besides, what are they arguments for? One needs a suggestion for improvement, and failsafes are intended to make it so that doing nothing is not an improvement, even though improvements over any given state of the plan would be dandy.
-1wedrifid13y
Yes, this is trivially true and not currently disputed by anyone here. Nobody is suggesting doing nothing. Doing nothing is crazy.
2wedrifid13y
Of course, Roko did not originally propose a basilisk at all. Just a novel solution to an obscure game theory problem.
-4[anonymous]13y
From your current perspective. But also given your extrapolated volition? If it is, then it won't happen. ETA: The above was confusing and unclear. I don't believe that one person can change the course of CEV. I rather meant to ask if he believes that it would be a failure mode even if it were the correct extrapolated volition of humanity.
2Vladimir_Nesov13y
If CEV has a serious bug, it won't correctly implement anyone's volition, and so someone's volition saying that CEV shouldn't have that bug won't help.
0[anonymous]13y
Never mind, upvoted your comment. I wrote "then it won't happen". That was wrong, I don't actually believe that. I meant to ask something different. Edited the comment to add a clarification.
0[anonymous]13y
Obviously. A bug would be the inability to extrapolate volition correctly, not a certain outcome that is based on the correct extrapolated volition. So what did cousin_it mean by saying that outcome X is a failure mode? Does he mean that from his current perspective he doesn't like outcome X or that outcome X would imply a bug in the process of extrapolating volition? (ETA I'm talking about CEV-humanity and not CEV-cousin-it. There would be no difference in the latter case.)
5lukeprog13y
Not until I get to that part of the writing and research, no.
8lukeprog13y
That is, I'm applying your advice to hold off on proposing solutions until the problem has been discussed as thoroughly as possible without suggesting any.
0Adele_L10y
Has this been published anywhere yet?
2lukeprog10y
A related thing that has since been published is Ideal Advisor Theories and Personal CEV. I have no plans to write the book; see instead Bostrom's far superior Superintelligence, forthcoming.
2Dorikka13y
Extrapolated humanity decides that the best possible outcome is to become the Affront. Now, if the FAI put everyone in a separate VR and tricked each person into believing that they were acting all Affront-like, then everything would be great -- everyone would be content. However, people don't just want the experience of being the Affront -- everyone agrees that they want to be truly interacting with other sentiences which will often feel the brunt of each other's coercive action.
5Eliezer Yudkowsky13y
Original version of grandparent contained, before I deleted it, "Besides the usual 'Eating babies is wrong, what if CEV outputs eating babies, therefore a better solution is CEV plus code that outlaws eating babies.'"
3nazgulnarsil13y
I have never understood what is wrong with the amnesia-holodecking scenario. (is there a proper name for this?)
4Dorikka13y
If you want to, say, stop people from starving to death, would you be satisfied with being plopped on a holodeck with images of non-starving people? If so, then your stop-people-from-starving-to-death desire is not a desire to optimize reality into a smaller set of possible world-states, but simply a desire to have a set of sensations so that you believe starvation does not exist. The two are really different. If you don't understand what I'm saying, the first two paragraphs of this comment might explain it better.
0nazgulnarsil13y
thanks for clarifying. I guess I'm evil. It's a good thing to know about oneself.
0Dorikka13y
Uh, that was a joke, right?
0nazgulnarsil13y
no.
0Dorikka13y
What definition of evil are you using? I'm having trouble understanding why (how?) you would declare yourself evil, especially evil_nazgulnarsil.
5nazgulnarsil13y
i don't care about suffering independent of my sensory perception of it causing me distress.
0Dorikka13y
Oh. In that case, it might be more precise to say that your utility function does not assign positive or negative utility to the suffering of others (if I'm interpreting your statement correctly). However, I'm curious about whether this statement holds true for you at extremes, so here's a hypothetical. I'm going to assume that you like ice cream. If you don't like any sort of ice cream, substitute in a certain quantity of your favorite cookie. If you could get a scoop of ice cream (or a cookie) for free at the cost of a million babies' thumbs being cut off, would you take the ice cream/cookie? If not, then you assign a non-zero utility to others' suffering, so it might be true that you care very little, but it's not true that you don't care at all.
6nazgulnarsil13y
I think you misunderstand slightly. Sensory experience includes having the idea communicated to me that my action is causing suffering. I assign negative utility to others' suffering in real life because the thought of such suffering is unpleasant.
0Dorikka13y
Alright. Would you take the offer if Omega promised that he would remove your memories of the agreement of having a million babies' thumbs cut off for a scoop of ice cream right after you made the agreement, so you could enjoy your ice-cream without guilt?
2nazgulnarsil13y
No, at the time of the decision I have sensory experience of having been the cause of suffering. I don't feel responsibility to those who suffer, in that I would choose to holodeck myself rather than stay in reality and try to fix problems. This does not mean that I will cause suffering on purpose. A better hypothetical dilemma might be if I could ONLY get access to the holodeck if I cause others to suffer (Cypher from The Matrix).
0Dorikka13y
Okay, so you would feel worse if you had caused people the same amount of suffering than you would if someone else had done so?
1nazgulnarsil13y
yes
0Dorikka13y
Mmkay. I would say that our utility functions are pretty different, in that case, since, with regard to suffering, I value world-states according to how much suffering they contain, not according to who causes the suffering.
0Sniffnoy13y
Well, it's essentially equivalent to wireheading.
1nazgulnarsil13y
which I also plan to do if everything goes tits-up.
2lukeprog13y
Dorikka, I don't understand this. If the singleton's utility function was written such that its highest value was for humans to become the Affront, then making it the case that humans believed they were the Affront while not being the Affront would not satisfy the utility function. So why would the singleton do such a thing?
3Dorikka13y
I don't think that my brain was working optimally at 1am last night. My first point was that our CEV might decide to go Baby-Eater, and so the FAI should treat the caring-about-the-real-world-state part of its utility function as a mere preference (like chocolate ice cream), and pop humanity into a nicely designed VR (though I didn't have the precision of thought necessary to put it into such language). However, it's pretty absurd for us to be telling our CEV what to do, considering that they'll have much more information than we do and much more refined thinking processes. I actually don't think that our Last Judge should do anything more than watch for coding errors (as in, we forgot to remove known psychological biases when creating the CEV). My second point was that the FAI should also slip us into a VR if we desire a world-state in which we defect from each other (with similar results as in the prisoner's dilemma). However, the counterargument from point 1 also applies to this point.
0XiXiDu13y
Maybe you should rephrase it then to say that you'll present some possible failure modes of CEV that will have to be taken care of rather than "objections".
3lukeprog13y
No, I'm definitely presenting objections in that chapter.
0mwaser13y
MY "objection" to CEV is exactly the opposite of what you're expecting and asking for. CEV as described is not descriptive enough to allow the hypothesis "CEV is an acceptably good solution" to be falsified. Since it is "our wish if we knew more", etc., any failure scenario that we could possibly put forth can immediately be answered by altering the potential "CEV space" to answer the objection. I have radically different ideas about where CEV is going to converge than most people here. Yet, the lack of distinctions in the description of CEV causes my ideas to be included under any argument for CEV because CEV potentially is . . . ANYTHING! There are no concrete distinctions that clearly state that something is NOT part of the ultimate CEV. Arguing against CEV is like arguing against science. Can you argue a concrete failure scenario of science? Now -- keeping Hume in mind, what does science tell the AI to do? It's precisely the same argument, except that CEV as a "computational procedure" is much less well-defined than the scientific method. Don't get me wrong. I love the concept of CEV. It's a brilliant goal statement. But it's brilliant because it doesn't clearly exclude anything that we want -- and human biases lead us to believe that it will include everything we truly want and exclude everything we truly don't want. My concept of CEV disallows AI slavery. Your answer to that is "If that is truly what a grown-up humanity wants/needs, then that is what CEV will be". CEV is the ultimate desire -- ever-changing and never real enough to be pinned down.
0[anonymous]13y
What source would you recommend to someone who wants to understand CEV as a computational procedure?

Luke, as an intermediate step before writing a book you should write a book chapter for Springer's upcoming edited volume on the Singularity Hypothesis. http://singularityhypothesis.blogspot.com/p/about-singularity-hypothesis.html I'm not sure how biased they are against non-academics... probably depends on how many submissions they get.

Maybe email Louie and me and we can brainstorm about topics; meta-ethics might not be the best thing compared to something like making an argument about how we need to solve all of philosophy in order to safely build AI.

3mwaser13y
I know the individuals involved. They are not biased against non-academics and would welcome a well-thought-out contribution from anyone. You could easily have a suitable abstract ready by March 1st (two weeks early) if you believed that it was important enough -- and I would strongly urge you to do so.
4lukeprog13y
Thanks for this input. I'm currently devoting all my spare time to research on a paper for this volume so that I can hopefully have an extended abstract ready by March 15th.
2lukeprog13y
I will probably write papers and articles in the course of developing the book. Whether or not I could have an abstract ready by March 15th is unknown; at the moment, I still work a full-time job. Thanks for bringing this to my attention.

The first sentence is the most important of any book because if a reader doesn't like it he will stop. Your first sentence contains four numbers, none of which are relevant to your core thesis. Forgive me for being cruel but a publisher reading this sentence would conclude that you lack the ability to write a book people would want to read.

Look at successful non-fiction books to see how they get started.

6lukeprog13y
This is not a book for a popular audience. Also, it's a first draft. That said, you needn't apologize for saying anything "cruel." But, based on your comments, I've now revised my opening to the following... It's still not like the opening of a Richard Dawkins book, but it's not supposed to be like a Richard Dawkins book.
7James_Miller13y
Better, but how about this: "Philosophy's pathetic pace could kill us."
0[anonymous]13y
If his target audience is academia, then drastic claims (whether substantiated or not) are going to be an active turnoff, and should only be employed when absolutely necessary.

Bibliographic references are provided here.

I notice some of the references you suggest are available as online resources. It would be a courtesy if you provided links.

0lukeprog13y
Done.

"This extinction would result from a “technological singularity” in which an artificial intelligence (AI) . . . "

By this point, you've talked about airplanes, Apollo, science good, philosophy bad. Then you introduce the concepts of existential risk, claim we are at the cusp of an extinction level event, and the end of the world is going to come from . . . Skynet.

And we're only to paragraph four.

These are complex ideas. Your readers need time to digest them. Slow down.

You may also want to think about coming at this from another direction. If the...

4lukeprog13y
This is a difference between popular writing and academic writing. Academic writing begins with an abstract - a summary of your position and what you argue, without any explanation of the concepts involved or arguments for your conclusions. Only then do you proceed to explanation and argument. As for publishing, that is less important than getting it written, and getting it written well. That said, the final copy will be quite a bit different than the draft sections posted here. My copy of this opening is already quite a bit different than what you see above.
-3CharlesR13y
Clearly, I and others thought you were writing a popular book. No need to "school" us on the difference.
0lukeprog13y
Okay. It wasn't clear to me that you thought I was writing a popular book, since I denied that in my second paragraph (before the quoted passage from the book).
0CharlesR13y
Your clarification wasn't in the original version of the preamble that I read. Or are you claiming that you haven't edited it? Because I clearly remember a different sentence structure. However, I am willing to admit my memory is faulty on this.
1lukeprog13y
CharlesR, My original clarification said that it was a cross between academic writing and mainstream writing, the result being something like 'Epistemology and the Psychology of Human Judgment.' That apparently wasn't clear enough, so I did indeed change my preamble recently to be clearer in its denial of popular style. Sorry if that didn't come through in the first round.
1CharlesR13y
And people wonder how wars get started . . .
1lukeprog13y
Heh. Sorry; I didn't mean to offend. I thought it was clear from my original preamble that this wasn't a popular-level work, but apparently not!
[anonymous]13y50

I'm glad you're writing a book!

I tried reading this through the eyes of someone who wasn't familiar with the singularity & LW ideas, and you lost me with the fourth paragraph ("This extinction..."). Paragraph 3 makes the extremely bold claim that humanity could face its extinction soon unless we solve some longstanding philosophical problems. When someone says something outrageous-sounding like that, they have a short window to get me to see how their claim could be plausible and is worth at least considering as a hypothesis, otherwise it gets classified as ridiculous no...

1lukeprog13y
This is a difference between popular writing and academic writing. The opening is my abstract. See here.
4Unnamed13y
The problem that I described in my first paragraph is there regardless of how popular or academic a style you're aiming for. The bold, attention-grabbing claims about extinction/utopia/the fate of the world are a turnoff, and they actually seem more out of place for academic writing than for popular writing. If you don't want to spend more time elaborating on your argument in order to make the bold claims sound plausible, you could just get rid of those bold claims. Maybe you could include one mention of the high stakes in your abstract, as part of the teaser of the argument to come, rather than vividly describing the high stakes before and after the abstract as a way to shout out "hey this is really important!"
9lukeprog13y
Thanks for your comment, but I'm going with a different style. This kind of opening is actually quite common in Anglophone philosophy, as the quickest route to tenure is to make really bold claims and then come up with ingenious ways of defending them. I know that Less Wrong can be somewhat averse to the style of contemporary Anglophone philosophy, but that will not dissuade me from using it. To drive home the point that my style here is common in Anglophone philosophy (I'm avoiding calling it analytic philosophy), here are a few examples... The opening paragraphs of David Lewis' On the Plurality of Worlds, in which he defends modal realism, the radical view that all possible worlds actually exist: Opening paragraph (abstract) of Neil Sinhababu's "Possible Girls" for the Pacific Philosophical Quarterly: Opening paragraph of Peter Klein's "Human Knowledge and the Infinite Regress of Reasons" for Philosophical Perspectives: And, the opening paragraph of Steven Maitzen's paper arguing that a classical theistic argument actually proves atheism: And those are just the first four works that came to mind. This kind of abrupt opening is the style of Anglophone philosophy, and that's the style I'm using. Anyone who keeps up with Anglophone philosophy lives and breathes this style of writing every week. Anglophone philosophy is not written for people who are casually browsing for interesting things to read. It is written for academics who have hundreds and hundreds of papers and books we might need to read, and we need to know right away in the opening lines whether or not a particular book or paper addresses the problems we are researching.

There's not much to critically engage with yet, but...

I find it odd that you claim to have "laid [your] positions on the table" in the first half of this piece. As far as I can make out, the first half only describes a set of problems and possibilities arising from the "intelligence explosion". It doesn't say anything about your response or proposed solution to those problems.

I haven't read all of the recent comments. Have you made progress yet on understanding Yudkowsky's meta-ethics sequence? I hope you let us know if you do (via a top-level post). It seems a bit weird to write a book on it if you don't yet understand it and haven't deliberately set aside understanding it for the purposes of your book.

Anyway, I appreciate your efforts very much and think that the book will be highly valuable either way.

1lukeprog13y
For now, see here, though my presentation of Yudkowsky's views in the book will be longer and clearer.

But even if we can solve those problems, the question of which goal system to give the superintelligence remains. It is a question of philosophy; it is a question of ethics.

Isn't it an interdisciplinary question, also involving decision theory, game theory and evolutionary psychology etc.? Maybe it is mainly a question about philosophy of ethics, but not solely?

...and a solar system in which all available matter is converted into parts for a planet-sized computer built to solve difficult mathematical problems.

This sentence isn't very clear. People who don't know about the topic will think, "to create a utopia you also have to solve difficult mathematical problems."

This project could mean the difference between a utopian solar system of unprecedented harmony and happiness, and a solar system void of human values in which all available matter is being used to pursue a set of narrow goals.

The Wright Brothers flew their spruce-wood plane for 200 feet in 1903. Only 66 years later, Neil Armstrong walked on the moon, more than 240,000 miles from Earth.

I'm not sure if there is a real connection here? Has any research on "flight machines" converged with rocket science? They seem not to be correlated very much, or the correlation is not obvious. Do you think it might be good to expand on that point, or rephrase it to show that there has been some kind of intellectual or economic speedup that caused the quick development of various technologies?

1timtyler13y
The connection is - presumably - powered flight.

I'll offer you a trade: an extensive and in-depth analysis of your book in return for an equivalent analysis of my book.

Quick note: I think explicit metadiscourse like "In Chapter 7 I argue that..." is ugly. Instead, try to fold those kinds of organizational notes into the flow of the text or argument. So write something like "But C.E.V. has some potential problems, as noted in Chapter 7, such as..." Or just throw away metadiscourse altogether.

0lukeprog13y
What is your book?
0Daniel_Burfoot13y
It's about the philosophy of science, machine learning, computer vision, computational linguistics, and (indirectly) artificial intelligence. It should be interesting/relevant to you, even if you don't buy the argument.
0lukeprog13y
Sorry, outside my expertise. In this book I'm staying away from technical implementation problems and sticking close to meta-ethics.

Thanks, everyone. I agree with almost every point here and have updated my own copy accordingly. I especially look forward to your comments when I have something meaty to say.

In this chapter I argue that the technological singularity is likely to occur within the next 200 years...

If it takes 200 years, it could just as well take 2000. I'm skeptical that if it doesn't occur this century it will occur next century for sure. If it doesn't occur this century, then that might as well mean that it won't occur any time soon afterwards either.

4Normal_Anomaly13y
I have a similar feeling. If it hasn't happened within a century, I'll probably think (assume for the sake of argument I'm still around) that it will be in millennia or never.
0lukeprog13y
200 years is my 'outer bound.' It may very well happen much sooner, for example in 45 years.