Gunnar_Zarncke comments on Natural selection defeats the orthogonality thesis - Less Wrong

-13 Post author: aberglas 29 September 2014 08:52AM


Comment author: Gunnar_Zarncke 29 September 2014 09:27:04PM *  4 points [-]

I don't understand why this post is so clearly down-voted. I think its main point

Instead this post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world. Our human sense of other purposes is just an illusion created by our evolutionary origins.

is quite valid if steelmanned by

1) not assuming that every AGI automatically prevents value drift and mutation, and

2) "goal" is not taken literally but in the same sense in which genes have the main function of reproduction (the continued existence of gene copies).

My understanding of AGI mechanics is that, in general, AGIs are subject to evolution. It could be that drift-preventing AGIs win in the long run. But maybe there are just too few of them.

Note: I think one should consider only the main part of the post and not the lengthy extracts it provides; those would have been better linked to.

Comment author: drethelin 30 September 2014 03:35:59PM 2 points [-]

Really long and full of unsupported statements that seem to miss the point of what it's arguing against.

Comment author: TheAncientGeek 29 September 2014 11:08:38PM 0 points [-]

I don't understand why this post is so clearly down-voted.

It's vaguely anti-MIRI?

Comment author: aberglas 30 September 2014 07:52:40AM *  1 point [-]

The post was not meant to be anti-anything, but it is a different point of view from those posted by several others in this space. I hope many of the downvoters take the time to comment here.

One thing I would say is that while it may not be the best post ever posted to Less Wrong, it is certainly not a troll. Yet one has to go back over 100 posts to find another article voted down so strongly!

Comment author: ChristianKl 01 October 2014 01:12:18PM 0 points [-]

The most upvoted post on LW is anti-MIRI. You don't get downvoted on LW just because you are contrarian.

Comment author: TheAncientGeek 01 October 2014 03:14:32PM 1 point [-]

Indeed not; pre-existing status matters as well.

Comment author: RobbBB 30 November 2014 02:04:47AM 1 point [-]

Suppose Holden Karnofsky had written the exact post above ("Natural selection defeats the orthogonality thesis") and some unknown had written the substantive points from Holden's critique (minus the GiveWell-specific stuff). What karma values would you expect for those two posts?

Comment author: ChristianKl 01 October 2014 01:26:59PM 1 point [-]

1) not assuming that every AGI automatically prevents value drift and mutation and

The question is not whether every AGI automatically prevents value drift but whether the AGIs that keep humanity alive are that way. We want to build FAIs.

Comment author: mwengler 06 October 2014 11:32:15PM 0 points [-]

The question is not whether every AGI automatically prevents value drift but whether the AGIs that keep humanity alive are that way. We want to build FAIs.

It seems the more important question is whether AGIs that prevent value drift have a survival advantage or disadvantage over AGIs that allow it.

To me it seems almost self-evident that AGIs that allow value drift will have a survival advantage. They would do this as biological organisms do: make multiple copies of themselves with variation in their values, and the copies that were fitter would tend to propagate their new, drifted values more than the less fit copies would. Further, value drift is likely essential when competing against other non-static (i.e., evolving) AIs: my values must adapt to the new values and capabilities of up-and-coming AIs for me to survive, thrive, and spin off (flawed) copies.
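The copy-with-variation dynamic described above can be illustrated with a toy replicator simulation (my own illustrative sketch, not anything from the thread: the `fitness` function, the drift rate, and the shifting environment are all assumptions). "Drifting" replicators mutate their value on each copy while "fixed" replicators do not; when the competitive landscape shifts, selection favors the lineages that can drift:

```python
import random

random.seed(0)

def fitness(value, env):
    # Higher fitness when the replicator's value matches the environment.
    return 1.0 / (1.0 + abs(value - env))

def step(pop, env):
    # Weighted reproduction: fitter replicators leave more copies.
    weights = [fitness(value, env) for value, drifts in pop]
    parents = random.choices(pop, weights=weights, k=len(pop))
    children = []
    for value, drifts in parents:
        if drifts:
            value += random.gauss(0, 0.1)  # value drift on copying
        children.append((value, drifts))
    return children

# Half the population drifts on copying, half holds its value fixed at 0.
pop = [(0.0, True)] * 50 + [(0.0, False)] * 50
env = 0.0
for generation in range(200):
    env += 0.02  # the competitive landscape slowly shifts
    pop = step(pop, env)

drifters = sum(1 for _, drifts in pop if drifts)
print(f"drifting lineages after 200 generations: {drifters}/100")
```

Under these (assumed) parameters the drifting lineages come to dominate, since the fixed replicators' fitness decays as the environment moves away from their frozen value. The point the comment makes would then hinge on whether a drift-preventing AGI can avoid being placed in such a shifting landscape at all.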

Comment author: Gunnar_Zarncke 01 October 2014 05:02:41PM 0 points [-]

Agreed. FAIs must prevent any kind of drift. That comes at a cost which penalizes FAIs against other AGIs.

Comment author: ChristianKl 01 October 2014 01:17:01PM -1 points [-]

Changes in human DNA also aren't 100% natural selection; effects like genetic drift account for a lot as well. Saying that reproduction is the ultimate goal of biological organisms isn't quite true for conventional definitions of "goal". In pop evolutionary psychology the two often get conflated.

But even if it were true for naturally evolved humans, an AGI can be created via intelligent design. That means it can have real goals programmed into it. An AGI can be created with the goal of doing a task and then shutting down. Deep Blue doesn't have a goal to exist in the conventional meaning of the word "goal".

is quite valid if steelmanned by

Crappy reasoning gets downvoted on LW even if it's possible to argue for the same position with good arguments.