Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human asks it to pursue a goal, they would not want that goal pursued in a way that destroys the world.

Worse, the argument can then be made that this idea that an AI will interpret goals so literally, without modelling a human mind, constitutes an "autistic AI", and that only autistic people would assume that AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW style ideas as autistic (via David Pearce)

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies without subjective experience, but that does seem implied.

Comment author: John_Maxwell_IV 10 September 2016 01:46:48PM 4 points [-]

Thanks for the analysis!

The median amount donated to bugs rights charities is listed as $157.50. That implies that half of survey respondents donated more than $157 to bugs rights charities. Obviously this is kind of implausible. I assume the real number who donated to bugs rights charities is 4 people, since the donations sum to $1,083.00 and the average amount donated is $270.75. This also goes for the other donation-related questions--just something to keep in mind.
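The inference above can be sanity-checked directly: given a reported sum and mean, the donor count follows by division, and a true median of $157.50 over all respondents would be inconsistent with only four donors. This is just a sketch of the arithmetic, using the figures quoted in the comment:

```python
# Figures quoted above from the survey writeup.
total_donated = 1083.00   # reported sum of donations ($)
mean_donation = 270.75    # reported average donation ($)

# Count of donors implied by sum / mean.
donor_count = total_donated / mean_donation
print(donor_count)  # 4.0

# If the $157.50 "median" were truly a median over *all* respondents,
# at least half of them would have donated; with only 4 donors out of
# many respondents, it must instead be the median among donors.
```

The likely explanation is that the survey analysis computed the median over donors only (or over non-blank responses), not over the whole respondent pool.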

Comment author: entirelyuseless 28 July 2016 03:02:13AM 0 points [-]

While I don't think Eugene is the only one doing this anymore (I doubt he was the one who did it to me, for example), I do think Eugene may have realized that very thing, and it may have become his only real motivation: a way to get his revenge.

Comment author: John_Maxwell_IV 28 July 2016 11:59:35AM -2 points [-]

Disable downvoting then?

Comment author: John_Maxwell_IV 23 July 2016 02:30:27AM 0 points [-]

Indeed.com has an interesting salary tool that lets you do things like figure out what topics to study as a software developer.

Comment author: rmoehn 19 July 2016 04:52:45AM 0 points [-]

Yeah, that would be great indeed. Unfortunately my Japanese is so rudimentary that I can't even explain to my landlord that I need a big piece of cloth to hang in front of my window (just to name an example). :-( I'm making progress, but getting a handle on Japanese is about as time-consuming as getting a handle on ML, although more mechanical.

Comment author: John_Maxwell_IV 21 July 2016 11:13:53AM 0 points [-]

Do you get the impression that Japan has numerous benevolent and talented researchers who could and would contribute meaningfully to AI safety work? If so, it seems possible to me that your comparative advantage is in evangelism rather than research (subject to the constraint that you're staying in Japan indefinitely). If you're able to send multiple qualified Japanese researchers west, that's potentially more than you'd be able to do as an individual.

You'd still want to have thorough knowledge of the issues yourself, if only to convince Japanese researchers that the problems were interesting.

Comment author: Daniel_Burfoot 18 July 2016 07:45:48PM *  1 point [-]

Coincidentally, I've recently been toying with the idea of setting up a consulting company which would allow people who want to work on "indy" research like AI safety to make money by working on programming projects part-time.

The key would be to 1) find fun/interesting consulting projects in areas like ML, AI, data science and 2) use the indy research as a marketing tool to promote the consulting business.

It should be pretty easy for good programmers with no family obligations to support themselves comfortably by working half-time on consulting projects.

Comment author: John_Maxwell_IV 19 July 2016 02:35:31AM 0 points [-]

I recently had an idea for an app that would make use of natural language processing and provide a service to businesses doing online marketing. I think there's a pretty good chance businesses would be willing to pay for this service, and after a quick Google search, I don't think there are companies doing anything similar yet. If you or anyone else interested in AI safety wants to hear more, feel free to send me a PM.

Comment author: John_Maxwell_IV 18 July 2016 05:53:09AM *  5 points [-]

That's awesome that you are looking to work on AI safety. Here are some options that I don't see you mentioning:

  • If you're able to get a job working on AI or machine learning, you'll be getting paid to improve your skills in that area. So you might choose to direct your study and independent projects towards building a resume for AI work (e.g. by participating in Kaggle competitions).

  • If you get into the right graduate program, you'll be able to take classes and do research into AI and ML topics.

  • Probably quite difficult, but if you're able to create an app that uses AI or machine learning to make money, you'd also fulfill the goal of both making money and studying AI at the same time. For example, you could earn money through this stock market prediction competition.

  • 80,000 Hours has a guide on using your career to work on AI risk.

  • MIRI has set up a research guide for getting the background necessary to do AI safety work. (Note that if MIRI is correct, your understanding of math may matter much more than your understanding of AI for doing AI safety research. So the previous plans I suggested might look less attractive. The best path might be to aim for a job doing AI work, and then once you have that, start studying math relevant to AI safety part time.)

BTW, the x risk career network is also a good place to ask questions like this. (Folks on that mailing list are probably more qualified than me to answer this question but they don't browse LW that often.)

Comment author: GraceFu 04 July 2016 09:08:20AM 6 points [-]

Is there a version of the Sequences geared towards Instrumental Rationality? I can find (really) small pieces such as the 5 Second Level LW post and intelligence.org's Rationality Checklist, but can't find any overarching course or detailed guide to actually improving instrumental rationality.

Comment author: John_Maxwell_IV 05 July 2016 07:23:59AM 0 points [-]

Looks like a pretty good blog.
