KatjaGrace comments on Superintelligence Reading Group 2: Forecasting AI - Less Wrong

10 Post author: KatjaGrace 23 September 2014 01:00AM




Comment author: KatjaGrace 23 September 2014 01:02:48AM 1 point [-]

I think an important fact for understanding the landscape of opinions on AI, is that AI is often taken as a frivolous topic, much like aliens or mind control.

Two questions:

1) Why is this?

2) How should we take it as evidence? For instance, if a certain topic doesn't feel serious, how likely is it to really be low value? Under what circumstances should I ignore the feeling that something is silly?

Comment author: KatjaGrace 23 September 2014 01:03:42AM 3 points [-]

Relatedly, Scott Alexander criticizes the forms that popular reporting on dangers from AI tends to take. Why does reporting take these forms?

Comment author: gallabytes 23 September 2014 11:58:43PM 3 points [-]

AGI takeoff is an event we as a culture have never seen before, except in popular culture. So, with that in mind, reporters draw on the only good reference points the population has: sci-fi.

What would sane AI reporting look like? Is there a way to talk about AI to people who have only been exposed to the cultural background (if even that) in a way that doesn't either bore them or look at least as bad as this?

Comment author: KatjaGrace 25 September 2014 09:23:58PM 3 points [-]

A reasonable analog I can think of is concern about corporations. They are seen as constructed to seek profit alone and thereby destroy social value; they are smarter and more powerful than individual humans, and the humans interacting with them (or even within them) can't very well control them or predict them. We construct them in some sense, but their ultimate properties are often unintentional.

Comment author: KatjaGrace 25 September 2014 09:18:17PM 2 points [-]

The industrial revolution is some precedent, at least with respect to automation of labor. But it was long ago, and indeed, the possibility of everyone losing their jobs seems to be reported on more seriously than the other possible consequences of artificial intelligence.

Why does reporting need a historical precedent to be done in a sane-looking way?

Comment author: Liso 25 September 2014 03:58:03AM -1 points [-]

What we have in history is hackable minds, which were misused to bring about the Holocaust. Studying this could be one way to improve writing about AI danger.

But to answer question 1): it is too wide a topic! (Social hackability is only one possible path for an AI superpower takeoff.)

For example, some things are still missing (and probably will remain missing) from the book:

a) How to prepare psychological training for human-AI communication (or for reading this book :P)

b) AI's impact on religion

etc.