shminux comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

20 Post author: Stuart_Armstrong 15 May 2012 10:23AM

Comments (156)

Comment author: Stuart_Armstrong 16 May 2012 12:13:00PM 6 points

Can you pretend to be the actual person you are trying to convince and do your absolute best to demolish the arguments presented in this paper?

No, I cannot. I've read the various papers, and they all orbit around an implicit and often unstated moral realism. I've also debated philosophers on this, and the same issue rears its head - I can counter their arguments, but their opinions don't shift. There is an implicit moral realism that does not make any sense to me, and the more I analyse it, the less sense it makes, and the less convincing it becomes. Every time a philosopher has encouraged me to read a particular work, it's made me find their moral realism less likely, because the arguments are always weak.

I can't really put myself in their shoes to successfully argue their position (which I could do with theism, incidentally). I've tried and failed.

If someone can help me with this, I'd be most grateful. Why does "for reasons we don't know, any being will come to share and follow specific moral principles (but we don't know what they are)" ever come to seem plausible?

Comment author: shminux 16 May 2012 03:05:20PM 0 points

There is an implicit moral realism that does not make any sense to me.

You have made a number of posts on paraconsistent logic. Now it's time to walk the walk. For the purpose of this referee report, accept moral realism and use it explicitly to argue with your paper.

Comment author: Stuart_Armstrong 16 May 2012 03:29:27PM 5 points

It's not that simple. I can't figure out what the proposition being defended is, exactly. It shifts in ways I can't predict over the course of arguments and discussions. If I tried to defend it, my defence would end up being too much of a caricature, or too weak.

Comment author: shminux 16 May 2012 03:55:21PM 2 points

Is your goal to affect their point of view? Or is it something else? For example, maybe your true target audience is those who donate to your organization and you just want to have a paper published to show them that they are not wasting their money. In any case, the paper should target your real audience, whatever it may be.

Comment author: Stuart_Armstrong 16 May 2012 04:00:07PM 4 points

I want a paper to point to when someone makes the thoughtless "the AI will be smart, so it'll be nice" argument. I want a paper that forces the moral realists (using the term very broadly) to make specific counterarguments. I want to convince some of these people that AI is a risk, even if it's not conscious or rational according to their definitions. I want something to build on, to move towards convincing the AGI researchers. And I want a publication.