David Gould

Comments

I am not sure that this is the best way to evaluate which candidate is best in this regard. Your goal is to get action taken, so surely persuadability and rationality would be better metrics. A politician who says, 'AI is an existential threat to humanity and action needs to be taken,' may not be serious about the issue - they might just be saying things that they think will sound cool or interesting to their audience.

In any case, regardless of my particular ideas of how to evaluate this, I think that you need better metrics.

Interesting. I have not looked at things like this before. I am not sure that I am smart enough or knowledgeable enough to understand the MIRI stuff or your own paper, at least not on a first reading.

Would an AI believe itself to have free will? Without free will, it is - imo - difficult to accept that moral agents exist as currently thought of. (This is my contention.) It might, of course, construct the idea of a moral agent a bit differently, or agree with those who see free will as irrelevant to the idea of moral agents. It is also possible that it might see itself as a moral agent but not see humans as such (rather as we do with animals). It might still see us as worthy of moral consideration, however.

I wonder what is meant here by 'moral agents'? It is clear that SimplexAI-m believes that both it and humans are moral agents. This seems to be a potential place for criticism of SimplexAI-m's moral reasoning. (Note that I am biased here, as I do not think that moral agents as they seem to be commonly understood exist.)

Having said that, however, this is a very interesting discussion. And there would seem to be a risk here that even if there are no moral facts about the world to uncover, an entity - no matter how intelligent - could believe itself to have discovered such facts. And then we could be in the same trouble outlined above.

The reason I mention this is that I am not clear how an AI could ever have unbiased reasoning. Humans, as outlined on LessWrong, are bundles of biases and wrong thinking, and intelligence is not really the factor that overcomes this - very smart people have very different views on religion, morality, AI x-risk, and so on. A superintelligence may well have similar issues. And, if it believes itself to be superintelligent, it may be even less able to break out of them.

So while my views on AI x-risk are ... well, sceptical/uncertain ... this is a very interesting contribution to my thinking. Thanks for writing it. :)

I am happy to have a conversation with you. On this point:

'— The real problem of AI is <something else, usually something already happening>.  You’re distracting people with your farfetched speculation.'

I believe that AI indeed poses huge problems, so maybe this is where I sit.

 

Re timelines for climate change: in the 1970s, serious people in the field of climate studies started suggesting that a major problem was looming. A very short time later, the entire field was convinced by the evidence and arguments for that risk - to the point that the UN established the IPCC in 1988.

When did serious AI researchers start to suggest that there was a serious problem looming? I think in the 2000s. And there is no equivalent of the IPCC for AI x-risk.

And, yes: I can detect silly arguments in a reasonable number of cases. But I have not been able to do so in this case as yet (in the aggregate). It seems that there are possibly good arguments on both sides.
 

It is indeed tricky - I also mentioned that it could get into a regress-like situation. But I think that if people like me are to be convinced, it might be worth the attempt. As you say, there may be a domain in there somewhere that is more accessible to me.


Re the numbers: Eliezer seems to claim that the majority of AI researchers believe AI poses an x-risk, but that few are speaking out, for a variety of reasons. This boils down to me trusting Eliezer's word about the majority belief, because that majority is not speaking out. He may be motivated to lie in this case - note that I am not saying that he is, but 'lying for Jesus' (for example) is a relatively common thing. It is also possible that he is not lying but is wrong - he may have talked to a sample that was biased in some way.
 

True. Unless there were very good arguments/very good evidence for one side or the other. My expectation is that for any random hypothesis there will be lots of disagreement about it among experts. For a random hypothesis with lots of good arguments/good evidence, I would expect much, much less disagreement among experts in the field.

If we look at climate change, for example, the vast majority of experts agreed about it quite early on - within 15 years of the Charney report.

If all I am left with, however, is 'smart person believes silly thing for silly reasons', then as a layperson I cannot reasonably determine which is the silly thing. Is 'AI poses no (or extremely low) x-risk' the silly thing, or is 'AI poses unacceptable x-risk' the silly thing?

If AI does indeed pose unacceptable x-risk and there are good arguments/good evidence for this, then there also has to be a good reason, or set of reasons, why many experts are not convinced. (Yann claims, for example, that the AI experts arguing for AI x-risk are a very small minority, and Eliezer Yudkowsky seems to agree with this.)
 

I am someone who is at present unsure how to think about AI risk. As a complete layperson with a strong interest in science, technology, futurism and so on, there are - seemingly - some very smart people in the field who appear to be saying that the risk is basically zero (e.g. Andrew Ng, Yann LeCun). Then there are others who are very worried indeed - as represented by the post I am responding to.

This is confusing.

To get people at my level to support a shutdown of the type described above, there needs to be some kind of explanation of why there is such a difference of opinion among experts, because any argument made to me for accepting that AI poses a risk requiring such a shutdown has been accepted by some people who know more than I do about AI and rejected by others.

Note that this may not be rationality as it is understood on this forum - after all, why can't I just weigh the arguments without looking at who supports them? I understand that. But if I am sceptical about my own reasoning capabilities in this area - given that I am a layperson - then I have to suspect that any argument (either for or against AI risk) that has not convinced people with reasoning capabilities superior to mine may contain flaws.

That is, unless I understand why there might be such disagreement.

And I understand that this might get into recursion - people disagree about the reasons for disagreement and ...

However, at the very least it gives me another lens to look through. Also, someone with a lot of knowledge of AI might not have a lot of knowledge about why arguments fail.