I could write a lot more about this elsewhere, but I'm wondering if anyone else felt that this paper was somewhat shallow. This comment is too brief to really do that feeling justice, but I'll decompose it into two things I found disappointing:

  1. "Intelligence" is defined in such a way that leaves a lot to be desired. It doesn't really define it in a way that makes it qualitatively different than technology in general ("tasks thought to require intelligence" is probably much less useful than "narrowing the set of possible futures into one that match an agent's preference ordering."). For this reason, the paper imagines a lot of scenarios that amount to basically one party being able to do one narrow task much better than another party. This is not specific enough to really narrow us down to any approaches that deal with AI more generally.
  2. As a consequence of the authors' choice to leave their framework fuzzy, their suggestions for how to respond to the problem take on the same fuzziness. For example, their first suggestion is that policy leaders should consult with AI researchers. This reads a bit like an applause light: it offers little about how to make such consultation more likely, or about how to ensure that policy leaders are well informed enough to pick the right advisors and to take their advice seriously.

Overall I'm happy that these kinds of things can be discussed by a large group of various organizations. But I think any public effort to mitigate AI risk needs to be very careful that it isn't losing something extremely important by trying to accommodate too many uncertainties and disagreements at once.

My cynical take is that the point of writing papers like this is for them to be cited, not read.

Meta: It feels weird to give people karma for linkposts to things they didn't write. StackExchange has a "community wiki" flag that makes a post award no reputation (and lets anyone edit it); should we have something like that?

Something like "giving you half karma, capped at a certain number" seems more reasonable to me, since the poster is still providing a valuable service to the community.

So far, I've never seen this sort of link post get that much karma, so I'm not too worried about it.

Well, that's part of the problem, though. I think it's very good that this paper exists, but I didn't upvote this post because it felt weird to me. So the low karma total doesn't reflect the goodness of the link.

I'm obviously a bit biased here, but I generally think worrying too much about who gets karma is an antipattern. I posted this link because I came across the paper, it had been out for a few days, and none of the authors had shared it here, so it seemed worth sharing with the LW community. And since karma has a logarithmic impact on what you can do on the site, it would take a lot of link gaming (which I assume is the main adversarial, free-riding behavior we don't want to encourage) for someone to accumulate much useful karma just by being first to post links to things that will get upvotes.

Or, put another way: I think there's enough noise in karma that you don't need to worry about it; it would take a lot of karma "misattribution" to have a serious impact on the quality of the site.

Also, just out of curiosity, would you have felt differently about voting if I had, say, provided an executive summary rather than just giving the link?

That's fair. And yes, I would have been happy to vote if you'd provided a one-paragraph summary or something.