I've given responses before where I go into detail about how I disagree with some public presentation on AI; the primary example is this one from January 2017, which Yvain also responded to. Generally this is done after messaging the draft to the person in question, to give them a chance to clarify or correct misunderstandings (and to be cooperative instead of blindsiding them).
I generally think it's counterproductive to 'partially engage' or to be dismissive; for example, one consequence of XiXiDu's interviews with AI experts was that some of them (who received mostly dismissive remarks in the LW comments) came away with the impression that people interested in AI risk were jerks who aren't really worth engaging with. Similarly, I might think someone is confused if they think climate change is more important than AI safety, but I don't think it's useful to just tell them that they're confused, or to off-handedly remark that "of course AI safety is more important," since the underlying considerations (like the difference between catastrophic risks and existential risks) are actually non-obvious.
I've read the slides of the underlying talk, but not listened to it. I currently don't expect to write a long response to this. My thoughts about points the talk touches on:
Hi rmoehn,
I haven't looked at the contents of that talk yet, but I felt uncomfortable about a specific speaker/talk being named and singled out as the target of rather hard-to-respond-to criticism (consider how you might take it if you came across a forum discussion calling your talk misleading and not well-reasoned, without going into any specifics), so I edited out those details for now.
I feel that the AI risk community should do its best to build friendly rather than hostile relationships with mainstream computer science researchers. In particular, there have been cases before where researchers looked at how their work was being discussed on LW, picked up a condescending tone, and decided that LW/AI risk people were not worth engaging with. Writing a response outlining one's disagreement with the talk (in the style of e.g. "Response to Cegłowski on superintelligence") wouldn't be a problem, as it communicates engagement with the talk. But if we reference people's work in a manner that communicates a curt dismissal, I think we should be careful about naming specific people.
The question in general is fine, though. :)
I've added specifics. I hope this improves things. If not, feel free to edit it out.
Thanks for pointing out the problems with my question. I see now that I was wrong to combine strong language with no specifics and a concrete target. I would amend it, but then the context for the discussion would be gone.
We briefly discussed this internally. I reverted Kaj's edit, since I think we should basically never touch other users' content unless it deals with real information hazards, threatens violence, or doxxes a specific individual (plus probably some weird edge cases that are rare and that I can't easily enumerate, but "broad PR concerns" are definitely not an instance of those).
(We also sometimes edit users' content if there is broken formatting or something in that reference class, though that feels like a different kind of thing.)
Probably useful to clarify so people can understand how moderation works:
The semi-official norms around moderation tend to be "moderators have discretion to take actions without waiting for consensus, but should then report the actions they took to other moderators for sanity checking." (I don't think this is formal policy, but I'd personally endorse it being policy. Waiting for consensus often makes it impossible to act in time-sensitive situations, but checking in after the fact gets you most of the benefit.)
(Note: posted after the parent was retracted.)
consider how you might take it if you came across a forum discussion calling your talk misleading and not well-reasoned, without going into any specifics
I would be grateful for the free marketing! (And entertainment—internet randos' distorted impressions of you are fascinating to read.) Certainly, it would be better for people to discuss the specifics of your work, but it's a competitive market for attention out there: vague discussion is better than none at all!
there have been cases before where researchers looked at how their work was being discussed on LW, picked up a condescending tone, and decided that LW/AI risk people were not worth engaging with
If I'm interpreting this correctly, this doesn't seem very consistent with the first paragraph? First, you seem to be saying that it's unfair to Sussman to make him the target of vague criticism ("consider how you might take it"). But then you seem to be saying that it looks bad for "us" (you know, the "AI risk community", Yudkowsky's robot cult, whatever you want to call it) to be making vague criticisms that will get us written off as cranks ("not worth engaging with"). But I mostly wouldn't expect both concerns to be operative in the same world—in the possible world where Sussman feels bad about being named and singled out, that means he's taking "us" seriously enough for our curt dismissal to hurt, but in the possible world where we're written off as cranks, then being named and singled out doesn't hurt.
(I'm not very confident in this analysis, but it seems important to practice trying to combat rationalization in social/political thinking??)
But I mostly wouldn't expect both concerns to be operative in the same world—in the possible world where Sussman feels bad about being named and singled out, that means he's taking "us" seriously enough for our curt dismissal to hurt, but in the possible world where we're written off as cranks, then being named and singled out doesn't hurt.
The world can change as a result of one of the concerns. At first you're taking someone seriously (or might at least be open to taking them seriously), then they say something hurtful, then you write them off to make it hurt less. Sour grapes.
Also, the reactions of people who are not being directly criticized, but who respect the person being criticized, matter. Even if the target of the criticism never saw it, other people in the target's peer group may also feel disrespected and react in a similar way. (This is not speculation: I've seen various computer scientists have this reaction to writings on LW, many times.)
Does it make sense to give a public response? Who would be able to do it?
The conference organizer, who had asked me to evaluate the talk, offered to interview me to set things straight. However, I don't know if that is sensible, and given my level of experience, I'm afraid I would misrepresent AI risk myself.
To be concrete: the talk was Should We Fear Intelligent Machines? by Gerald Sussman of SICP fame. He touched on important research questions and presented some interesting ideas. But much of what he said was misleading and not well-reasoned.
In response to the comments, I'm adding specifics. This is the same evaluation I sent to the conference organizer, who had asked me for one. Note that this evaluation is separate from the interview mentioned above: the evaluation was private, whereas the interview would be public.
Because of the low sound quality, I might have misunderstood some statements.
Mr. Sussman touched on important research questions.
His solution approaches might be useful.
He touched on some of the concerns about (strong) AI, especially the shorter-term ones.
He acknowledged AI as a threat, which is good. But he wrongly dismissed some concerns about strong AI.
Many of his arguments made big jumps.
It was hard to understand, but I think he made fun of Max Tegmark and Eliezer Yudkowsky, who are very active in the field. At least Tegmark would laugh along with anyone joking about him. [This is my expectation given his public appearances; I don't know him personally.] But those remarks do give the audience a wrong impression and are therefore not helpful.
Having such a talk at an engineering conference might be good, because it raises awareness, and there was a call to action. But there is also the downside of things being misrepresented and misunderstood.