Eliezer_Yudkowsky comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM

Comment author: multifoliaterose 12 December 2010 08:23:25AM 3 points [-]

I'm inclined to think that Eliezer's clear confidence in his own very high intelligence and his apparent high estimation of his expected importance (not the dictionary-definition "expected", but rather, measured as an expected quantity the usual way) are not actually unwarranted, and only violate the social taboo against admitting to thinking highly of one's own intelligence and potential impact on the world.

Leaving aside the question of whether such apparently strong estimation is warranted in the case at hand, I would suggest that there's a serious possibility that the social taboo you allude to is adaptive: that having a very high opinion of oneself (even if justified) is, on account of the affect heuristic, conducive to seeing a halo around oneself, developing overconfidence bias, rejecting criticisms prematurely, etc., leading to undesirable epistemological skewing.

Meanwhile, I'm glad Eliezer says "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me", and I hope he takes that seriously.

Same here.

it seems that any piece of writing with the implication "This project is very important, and this guy happens, through no fault of his own, to be one of very few people in the world working on it" will always be read by some people as "This guy thinks he's one of the most important people in the world".

It's easy to blunt this signal.

Suppose that any of the following held:

  1. A billionaire decided to devote most of his or her wealth to funding Friendly AI research.

  2. A dozen brilliant academics became interested in and started doing Friendly AI research.

  3. The probability of Friendly AI research leading to a Friendly AI turned out to be sufficiently low that another existential risk reduction effort (e.g. pursuit of stable whole brain emulation) is many orders of magnitude more cost-effective than Friendly AI research.

Then Eliezer would not (by most estimations) be the human with the highest utilitarian expected value in the world. If he were to mention such possibilities explicitly, this would greatly mute the undesired connotations.

Comment author: Eliezer_Yudkowsky 12 December 2010 08:48:46AM 5 points [-]

If I thought whole-brain emulation were far more effective I would be pushing whole-brain emulation, FOR THE LOVE OF SQUIRRELS!

Comment author: multifoliaterose 12 December 2010 09:26:23AM *  2 points [-]

Good to hear from you :-)

  1. My understanding is that at present there's a great deal of uncertainty concerning how future advanced technologies are going to develop (I've gotten an impression that e.g. Nick Bostrom and Josh Tenenbaum hold this view). In view of such uncertainty, it's easy to imagine new data emerging over the next decades that makes it clear that pursuit of whole-brain emulation (or some currently unimagined strategy) is a far more effective strategy for existential risk reduction than Friendly AI research.

  2. At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

  3. Various people have suggested to me that initially pursuing Friendly AI might have higher expected value on the chance that it turns out to be easy (a toy version of this comparison is sketched after this list). So I could imagine that it's rational for you personally to focus your efforts on Friendly AI research (EDIT: even if I'm correct in my estimation in the above point). My remarks in the grandparent above were not intended as a criticism of your strategy.

  4. I would be interested in hearing more about your own thinking about the relative feasibility of Friendly AI vs. stable whole-brain emulation and current arbitrage opportunities for existential risk reduction, whether on or off the record.
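A minimal sketch of the kind of expected-value comparison point 3 gestures at, in Python, with every number an invented assumption for illustration (none of these figures come from this thread or from SIAI):

    # Toy expected-value sketch: all probabilities, values, and costs below are
    # invented illustrative assumptions, not anyone's actual estimates.
    p_fai_easy = 0.10               # assumed chance FAI turns out to be tractable soon
    p_wbe_route_works = 0.30        # assumed chance the WBE-first route yields a positive singularity
    value_of_positive_singularity = 1.0

    ev_initial_fai_push = p_fai_easy * value_of_positive_singularity       # 0.10
    ev_wbe_push = p_wbe_route_works * value_of_positive_singularity        # 0.30

    # Naively the WBE push wins, but an *initial* FAI push is cheap: if it fails
    # quickly, most resources can still be redirected to WBE afterwards.
    cost_of_initial_fai_push = 0.02     # assumed small fraction of total resources
    ev_fai_then_wbe = (ev_initial_fai_push
                       + (1 - p_fai_easy) * ev_wbe_push
                       - cost_of_initial_fai_push)                         # 0.35
    print(ev_initial_fai_push, ev_wbe_push, ev_fai_then_wbe)

Under these made-up numbers the sequential strategy beats either pure strategy, which is the sense in which trying FAI first "on the chance that it turns out to be easy" can be rational even if WBE looks more promising overall.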

Comment author: ata 12 December 2010 10:45:53AM *  2 points [-]

At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

That's an interesting claim, and you should post your analysis of it (e.g. the evidence and reasoning that you use to form the estimate that a positive singularity is "substantially more likely" given WBE).

Comment author: multifoliaterose 12 December 2010 06:09:40PM 1 point [-]

There's a thread with some relevant points (both for and against) titled Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future. I hadn't looked at the comments until just now and still have to read them all; but see in particular a comment by Carl Shulman.

After reading all of the comments I'll think about whether I have something to add beyond them and get back to you.

Comment author: CarlShulman 14 December 2010 03:07:15PM 3 points [-]

You may want to read this paper I presented at FHI. Note that there's a big difference between the probability of risk conditional on WBE coming first or AI coming first, and the marginal impact of effort. In particular, some of our uncertainty is about logical facts about the space of algorithms and the technology landscape, and some of it is about the extent and effectiveness of activism/intervention.
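To illustrate the distinction being drawn here, a minimal toy model in Python; all probabilities are invented for illustration and are not taken from the paper:

    # Toy model: the risk conditional on which technology comes first is a
    # different quantity from the marginal impact of extra effort.
    def total_risk(p_wbe_first, risk_if_wbe_first, risk_if_ai_first):
        """Overall existential risk, mixing over which technology arrives first."""
        return p_wbe_first * risk_if_wbe_first + (1 - p_wbe_first) * risk_if_ai_first

    baseline = total_risk(0.3, 0.20, 0.40)            # 0.34

    # Intervention A: effort shifts which technology comes first,
    # leaving the conditional risks unchanged.
    shift_ordering = total_risk(0.4, 0.20, 0.40)      # 0.32

    # Intervention B: the same effort instead lowers the risk conditional on AI
    # coming first (e.g. safety research), without changing the ordering at all.
    lower_conditional = total_risk(0.3, 0.20, 0.35)   # 0.305

    print(baseline, shift_ordering, lower_conditional)

Even though the risk conditional on AI coming first exceeds the risk conditional on WBE coming first throughout, intervention B has the larger marginal impact in this toy example; the conditional risks alone don't settle which effort is more valuable.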

Comment author: multifoliaterose 14 December 2010 08:42:30PM 2 points [-]

Thanks for the very interesting reference! Is it linked on the SIAI research papers page? I didn't see it there.

Note that there's a big difference between the probability of risk conditional on WBE coming first or AI coming first, and the marginal impact of effort.

I appreciate this point which you've made to me previously (and which appears in your comment that I linked above!).

Comment author: Vladimir_Nesov 13 December 2010 09:28:54AM *  1 point [-]

At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

Do you mean that the role of ems would be to develop FAI faster (as opposed to FAI built by biological humans), or are you thinking of something else? If ems merely speed time up, they don't change the shape of the FAI challenge much, unless (and to the extent that) we can leverage them, in a way we can't leverage human society, to reduce existential risk before FAI is complete (but this could turn out worse as well: ems could well launch the first arbitrary-goal AGI).

Comment author: ata 13 December 2010 10:32:25PM *  4 points [-]

but this could turn out worse as well: ems could well launch the first arbitrary-goal AGI

That's the main thing that's worried me about the possibility of ems coming first. But it depends on who is able to upload and who wants to, I suppose. If an average FAI researcher is more likely to upload, increase their speed, and possibly make copies of themselves than an average non-FAI AGI researcher, then it seems like that would be a reduction in risk.

I'm not sure whether that would be the case — a person working on FAI is likely to consider their work to be a matter of life and death, and would want all the speed increases they could get, but an AGI researcher may feel the same way about the threat to their career and status posed by the possibility of someone else getting to AGI first. And if uploading is very expensive at first, it'll only be the most well-funded AGI researchers (i.e. not SIAI and friends) who will have access to it early on and will be likely to attempt it (if it provides enough of a speed increase that they'd consider it to be worth it).

(I originally thought that uploading would be of little to no help in increasing one's own intelligence (in ways aside from thinking the same way but faster), since an emulation of a brain isn't automatically any more comprehensible than an actual brain, but now I can see a few ways it could help — the equivalent of any kind of brain surgery could be attempted quickly, freely, and reversibly, and the same could be said for experimenting with nootropic-type effects within the emulation. So it's possible that uploaded people would get somewhat smarter and not just faster. Of course, that's only soft self-improvement, nowhere near the ability to systematically change one's cognition at the algorithmic level, so I'm not worried about an upload bootstrapping itself to superintelligence (as some people apparently are). Which is good, since humans are not Friendly.)

Comment author: multifoliaterose 14 December 2010 03:55:15AM 3 points [-]

There's a lot to respond to here. Some quick points:

  1. It should be borne in mind that greatly increased speed and memory may by themselves strongly affect a thinking entity. I imagine that if I could think a million times as fast I would think a lot more carefully about my interactions with the outside world than I do now.

  2. I don't see any reason to think that SIAI will continue to be the only group thinking about safety considerations. If nothing else, SIAI or FHI can raise awareness of the dangers of AI within the community of AI researchers.

  3. Assuming that brain uploads precede superhuman artificial intelligence, it would obviously be very desirable to have the right sort of human uploaded first.

  4. I presently have a very dim view of the prospects of modern-day humans developing Friendly AI. This skepticism is the main reason why I think that pursuing whole-brain emulations first is more promising. See the comment by Carl that I mentioned in response to Vladimir Nesov's question. Of course, my attitude on this point is subject to change with incoming evidence.

Comment author: CarlShulman 14 December 2010 03:01:14PM 2 points [-]

Sped-up ems have slower computers relative to their thinking speed. If Moore's Law of Mad Science means that increasing computing power allows researchers to build AI with less understanding (and thus more risk of UFAI), then a speedup of researchers relative to computing speed makes it more likely that the first non-WBE AIs will be the result of a theory-intensive approach with high understanding. Anders Sandberg of FHI and I are working on a paper exploring some of these issues.
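A minimal numerical sketch of this effect in Python, assuming an illustrative hardware doubling time and em speedup factors (these are not figures from the forthcoming paper):

    # Toy sketch: sped-up ems see less hardware improvement per subjective
    # research-year, because Moore's law runs on wall-clock time.
    HARDWARE_DOUBLING_YEARS = 2.0   # assumed wall-clock doubling time for compute

    def hardware_growth_per_subjective_year(em_speedup):
        """Factor by which hardware improves during one subjective research-year."""
        wall_clock_years = 1.0 / em_speedup
        return 2 ** (wall_clock_years / HARDWARE_DOUBLING_YEARS)

    for speedup in (1, 10, 1000):
        print(speedup, round(hardware_growth_per_subjective_year(speedup), 4))
    # 1    -> ~1.41x per subjective year (biological-speed researchers)
    # 10   -> ~1.035x
    # 1000 -> ~1.0003x: hardware is nearly frozen from the ems' perspective,
    #         which favors theory-intensive approaches over compute-heavy ones.

The faster the ems run, the less they can lean on hardware growth within a given stretch of subjective research time, which is the sense in which a researcher speedup relative to computing speed favors theory-intensive approaches.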

Comment author: Vladimir_Nesov 14 December 2010 09:20:47PM 2 points [-]

This argument lowers the estimate of danger, but AIs developed on relatively slow computers are not necessarily theory-intensive; they could also be coding-intensive, which leads to UFAI. And a theory-intensive approach doesn't necessarily imply adequate concern about the AI's preferences.

Comment author: multifoliaterose 14 December 2010 03:08:41AM 1 point [-]

My idea here is the same as the one that Carl Shulman mentioned in a response to one of your comments from nine months ago.