Interesting. I have not looked at things like this before. I am not sure that I am smart enough or knowledgeable enough to understand the MIRI stuff or your own paper, at least not on a first reading.
Would an AI believe itself to have free will? Without free will, it is - imo - difficult to accept that moral agents exist as currently conceived. (This is my contention.) It might, of course, construct the idea of a moral agent a bit differently, or agree with those who see free will as irrelevant to the idea of moral agents. It is also possible that it might see itself as a moral agent but not see humans as such (rather as we do with animals). It might still see us as worthy of moral consideration, however.
I wonder what is meant here by 'moral agents'? It is clear that SimplexAI-m believes that both it and humans are moral agents. This seems to be a potential place for criticism of SimplexAI-m's moral reasoning. (note that I am biased here as I do not think that moral agents as they seem to be commonly understood exist)
However, having said that, this is a very interesting discussion. And there would seem to be a risk here that even if there are no moral facts to uncover about the world, an entity - no matter how intelligent - could believe itself to have disc...
I am happy to have a conversation with you. On this point:
'— The real problem of AI is <something else, usually something already happening>. You’re distracting people with your farfetched speculation.'
I believe that AI indeed poses huge problems, so maybe this is where I sit.
Re timelines for climate change: in the 1970s, serious people in the field of climate studies started suggesting that a major problem was looming. A very short time later, the entire field was convinced by the evidence and argument for that risk - to the point that the IPCC was established in 1988 by the UN.
When did some serious AI researchers start to suggest that there was a serious problem looming? I think in the 2000s. There is no IPAIX-risk.
And, yes: I can detect silly arguments in a reasonable number of cases. But I have not been able...
True. Unless there were very good arguments/very good evidence for one side or the other. My expectation is that for any random hypothesis there will be lots of disagreement about it among experts. For a random hypothesis with lots of good arguments/good evidence, I would expect much, much less disagreement among experts in the field.
If we look at climate change, for example, the vast majority of experts agreed about it quite early on - within 15 years of the Charney report.
If all I am left with, however, is 'smart person believes silly thing for silly rea...
I am someone who is at present unsure how to think about AI risk. As a complete layperson with a strong interest in science, technology, futurism and so on, there are - seemingly - some very smart people in the field who appear to be saying that the risk is basically zero (e.g. Andrew Ng, Yann LeCun). Then there are others who are very worried indeed - as represented by this post I am responding to.
This is confusing.
To get people at my level to support a shut down of the type described above, there needs to be some kind of explanation as to why there is s...
I am not sure that this is the best way to evaluate which candidate is best in this regard. Your goal is to get action taken. Surely how persuadable and how rational a candidate is would be a better metric. A politician who says, 'AI is an existential threat to humanity and action needs to be taken,' may not be serious about the issue - they might just be saying things that they think will sound cool/interesting to their audience.
In any case, regardless of my particular ideas of how to evaluate this, I think that you need better metrics.