Comment author: Brian_Tomasik 22 March 2015 12:03:33AM *  9 points [-]

Short answer:

Donate to MIRI, or split between MIRI and GiveWell charities if you want some fuzzies for short-term helping.

Long answer:

I'm a negative utilitarian (NU) and have been thinking since 2007 about the sign of MIRI for NUs. (Here's some relevant discussion.) I give ~70% chance that MIRI's impact is net good by NU lights and ~30% that it's net bad, but given MIRI's high impact, the expected value of MIRI is still very positive.

As for your question: I'd put the probability of uncontrolled AI creating hells at higher than 1 in 10,000 and the probability that MIRI as a whole prevents that from happening at higher than 1 in 10,000,000. Say such hells used 10^-15 of the AI's total computing resources. Assuming computing power to create ~10^30 humans for ~10^10 years, MIRI would prevent in expectation ~10^14 hell-years. Assuming MIRI's total budget ever is $1 billion (too high), that's ~10^5 hell-years prevented per dollar. Now apply rigorous discounts to account for priors against astronomical impacts and various other far-future-dampening effects. MIRI still seems very promising at the end of the calculation.
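For concreteness, here is that back-of-the-envelope multiplication as a minimal Python sketch. It simply treats the lower bounds above as point estimates; every input is an assumption already stated in the paragraph rather than anything new.

```python
# Fermi estimate from the paragraph above, using the quoted lower bounds
# as rough point estimates. All figures are order-of-magnitude assumptions.
p_hells         = 1e-4   # P(uncontrolled AI creates hells)
p_miri_prevents = 1e-7   # P(MIRI as a whole prevents that)
hell_fraction   = 1e-15  # fraction of the AI's computing spent on hells
future_minds    = 1e30   # humans the future could support at once
future_years    = 1e10   # how long that future lasts
budget_dollars  = 1e9    # assumed total MIRI budget ever (stated above as too high)

hell_years_prevented = (p_hells * p_miri_prevents * hell_fraction
                        * future_minds * future_years)
print(f"{hell_years_prevented:.0e} hell-years prevented in expectation")     # ~1e14
print(f"{hell_years_prevented / budget_dollars:.0e} hell-years per dollar")  # ~1e5
```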

Comment author: drnickbone 14 March 2015 08:24:49PM 0 points [-]

I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.

I'm not convinced about the "George Washington" objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program "u" (modelling the universe) wouldn't be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed any other animal meeting some crude definition of an observer.

Searching for features of human interest (like "leader of a nation") is likely to be pretty complicated and require a long program. To reduce the program size as much as possible, it ought to just scan for physical quantities which are easy to specify but very diagnostic of an observer. For example, scan for a physical mass with persistent low entropy compared to its surroundings, persistent matter and energy throughput (low entropy in, high entropy out, maintaining its own low-entropy state), a large number of internally structured electrical discharges, and high correlation between said discharges and events surrounding said mass. The program then builds a long list of such "observers" encountered while stepping through u, and simply picks out the nth entry on the list, giving the nth observer a complexity of about K(n). Unless George Washington happened to be a very special n (why would he be?), he would be no simpler to find than anyone else.
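To make the shape of that argument concrete, here is a toy sketch of the extraction program being described. The physical "observer" test is stubbed out with placeholder fields (none of this is from the original comment); the point is only that the program's length is dominated by a fixed test plus roughly K(n) bits for n, with nothing singling out any particular person.

```python
def looks_like_observer(region):
    """Placeholder for the crude physical test sketched above: persistent low
    entropy, matter/energy throughput, and structured electrical discharges
    correlated with events around the mass."""
    return (region["low_entropy"] and region["throughput"]
            and region["structured_discharges"])

def nth_observer(universe_history, n):
    """Step through a universe history (an iterable of time steps, each a list
    of candidate regions), collect everything passing the crude test, and
    return the nth entry."""
    observers = []
    for step in universe_history:
        for region in step:
            if looks_like_observer(region):
                observers.append(region)
    return observers[n]

# Made-up example data: one time step with two regions, one passing the test.
history = [[
    {"low_entropy": True, "throughput": True, "structured_discharges": True},
    {"low_entropy": False, "throughput": False, "structured_discharges": False},
]]
print(nth_observer(history, 0))
```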

Comment author: Brian_Tomasik 17 March 2015 01:47:57AM 0 points [-]

Nice point. :)

That said, your example suggests a different difficulty: people who happen to be special numbers n get higher weight for apparently no reason. Maybe one way to address this is to note that which number n someone gets is relative to (1) how the list is enumerated and (2) what universal Turing machine is being used for KC in the first place, and perhaps averaging over these arbitrary details would blur the specialness of, say, the 1-billionth observer according to any particular coding scheme. Still, I doubt the KCs of different people would be exactly equal even after such adjustments.

Comment author: Synaptic 21 February 2015 11:22:51PM 5 points [-]

I think I did not explain my proposal clearly enough. What I'm claiming is that if you could see intermediate steps suggesting that a worst-type future is imminent, or merely that it has crossed your probability threshold of "too likely", then you could enumerate those steps and request to be removed from biostasis at that point, before those who would resuscitate you had a chance to do so.

Comment author: Brian_Tomasik 21 February 2015 11:29:15PM 3 points [-]

Ah, got it. Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

Comment author: Brian_Tomasik 21 February 2015 11:16:57PM 7 points [-]

A "do not resuscitate" kind of request would probably help with some futures that are mildly bad in virtue of some disconnect between your old self and the future (e.g., extreme future shock). But in those cases, you could always just kill yourself.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

Comment author: imuli 13 January 2015 05:09:42PM *  4 points [-]

Birth Year vs Foom:

A bit less striking than for the "famous enough to have Google pop up their birth year" subset (green).

Comment author: Brian_Tomasik 15 January 2015 03:12:51AM 1 point [-]

This is awesome! Thank you. :) I'd be glad to copy it into my piece if I have your permission. For now I've just linked to it.

Comment author: imuli 08 January 2015 08:31:53PM 2 points [-]

The subset for which you can get birth years off the first page of a Google search of their name (n=9) shows a pretty clear correlation, with younger people believing in harder takeoff. (I'll update if I get time to dig out others' birth years.)

Comment author: Brian_Tomasik 09 January 2015 02:30:00AM 1 point [-]

Cool. Another interesting question would be how the views of a single person change over time. This would help tease out whether it's a generational trend or a general trend that comes with getting older.

In my own case, I only switched to finding a soft takeoff pretty likely within the last year. The change happened as I read more sources outside LessWrong that made some compelling points. (Note that I still agree that work on AI risks may have somewhat more impact in hard-takeoff scenarios, so that hard takeoffs deserve more than their probability's fraction of attention.)

Comment author: imuli 07 January 2015 10:48:33PM 2 points [-]

How different would this be with age as the x-axis?

Comment author: Brian_Tomasik 08 January 2015 03:19:12AM 0 points [-]

Good question. :) I don't want to look up exact ages for everyone, but I would guess that this graph would look more like a teepee, since Yudkowsky, Musk, Bostrom, etc. would be shifted to the right somewhat but are still younger than the long-time software veterans.

Comment author: Emile 05 January 2015 12:52:36PM 4 points [-]

> Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because they're made of software. (Like how humans are automatically experts in advanced genetic engineering just because we're made of DNA.)

Not "just because they're made of software" - but because there are many useful things that a computer is already better than a human at (notably, vastly greater "working memory"), so a human-level AI can be expected to have those and whatever humans can do now. And a programmer who could easily do things like "check all lines of code to see if they seem like they can be used", or systematically checking from where a function could be called, or "annotating" each variable, function or class by why it exists ... all things that a human programmer could do, but that either require a lot of working memory, or are mind-numblingly boring.

Comment author: Brian_Tomasik 07 January 2015 09:06:17PM 0 points [-]

Good points. However, keep in mind that humans can also use software to do boring jobs that require less-than-human intelligence. If we were near human-level AI, there might by then be narrow-AI programs that help with the items you describe.

Comment author: DanArmak 04 January 2015 03:29:01PM 14 points [-]

This has a very low n=16, and so presumably some strong selection biases. (Surely these are not the only people who have published thought-out opinions on the likelihood of fooming.) Without an analysis of the reasons these people give for their views, I don't think this correlation is very interesting.

Comment author: Brian_Tomasik 07 January 2015 08:51:52PM 3 points [-]

Thanks for the comment. There is some "multiple hypothesis testing" effect at play in the sense that I constructed the graph because of a hunch that I'd see a correlation of this type, based on a few salient examples that I knew about. I wouldn't have made a graph of some other comparison where I didn't expect much insight.

However, when it came to adding people, I did so purely based on whether I could clearly identify their views on the hard/soft question and years worked in industry. I'm happy to add anyone else to the graph if I can figure out the requisite data points. For instance, I wanted to add Vinge but couldn't clearly tell what x-axis value to use for him. For Kurzweil, I didn't really know what y-axis value to use.

Comment author: [deleted] 06 January 2015 06:43:59PM 2 points [-]

Something feels very, very wrong that Elon Musk is on the left-hand side of the chart and Ben Goertzel on the right. I'd reckon that Elon Musk is a more reliable source about the timelines of engineering projects in general (with no offense meant to Goertzel). Maybe this axis isn't measuring the right thing?

Comment author: Brian_Tomasik 07 January 2015 08:44:44PM 1 point [-]

This is a good point, and I added it to the penultimate paragraph of the "Caveats" section of the piece.
