Thanks, I think there is a lot of instrumental value in collecting high-quality answers to these questions. I look forward to reading more, and to pointing people to this site.
Looking at the very bottom of the AI Impacts home page, the disclaimer looks rather unfriendly. I'd suggest petitioning to change it to the LessWrong variety.
Here is the text: "To the extent possible under law, the person who associated CC0 with AI Impacts has waived all copyright and related or neighboring rights to AI Impacts. This work is published from: United States."
My comment is intended as helpful feedback. If it is not helpful I'd be happy to delete it.
Your original feedback seems helpful but your follow-up doesn't. You could have said "I don't know" or "I have nothing further to add on that point".
I mean unfriendly in the ordinary sense of the word. Maybe uninviting would be as good.
Perhaps a careful reading of that disclaimer would be friendly or neutral - I don't know. My quick reading of it was: by interacting with AI Impacts you could be waiving some sort of right. To be honest, I don't know what CC0 is.
I have nothing further to add to this.
Ah, I see. Thanks. We just meant that Paul and I are waiving our own rights to the content - it's like Wikipedia in the sense that other people are welcome to use the content. We should perhaps make that clearer.
I don't think so - the copyright to AI Impacts is waived, in the sense that we don't hold it.
The text I was questioning (see above) would have the contributor waive copyright without assigning it, which ends up placing the contributed work in the public domain. If that is the intention, I find it a little surprising.
I've been working on a thing with Paul Christiano that might interest some of you: the AI Impacts project.
The basic idea is to apply the evidence and arguments that are kicking around in the world and in various disconnected discussions to the big questions regarding a future with AI. For instance, these questions:
In the medium run we'd like to provide a good reference on issues relating to the consequences of AI, as well as to improve the state of understanding of these topics. At present, the site addresses only a small fraction of the questions one might be interested in, so it is only suitable for particularly risk-tolerant or topic-neutral reference consumers. However, if you are interested in hearing about (and discussing) such research as it unfolds, you may enjoy our blog.
If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form.
Crossposted from my blog.