RolfAndreassen comments on Holden's Objection 1: Friendliness is dangerous - Less Wrong

11 points | Post author: PhilGoetz 18 May 2012 12:48AM


Comment author: RolfAndreassen 18 May 2012 03:58:18AM 13 points [-]

The human problem: This argues that the qualia and values we have now are only the beginning of those that could evolve in the universe, and that ensuring that we maximize human values - or any existing value set - from now on, will stop this process in its tracks, and prevent anything better from ever evolving. This is the most-important objection of all.

Better by which set of, ahem, values? And anyway, if evolution of values is a value, then maximising overall value will by construction take that into account.

Comment author: PhilGoetz 18 May 2012 04:16:06AM *  0 points [-]

Yes, I object less to CEV if you go one or two levels meta. But if evolution of values is your core value, you find that it's pretty hard to do better than to just not interfere except to keep the ecosystem from collapsing. See John Holland's book and its theorems showing that an evolutionary algorithm as described does optimal search.

Comment author: Wei_Dai 18 May 2012 10:34:45AM *  10 points [-]

Presumably, values will evolve differently depending on future contingencies. For example, a future with a world government that imposes universal birth control to limit population growth would probably evolve different values compared to a future that has no such global Singleton. Do you agree, and if so do you think the values evolved in different possible futures are all equivalent as far as you are concerned? If not, what criteria are you using to judge between them?

ETA: Can you explain John Holland's theorems, or at least link to the book you're talking about (Wikipedia says he wrote three). If you think allowing values to evolve is the right thing to do, I'm surprised you haven't put more effort into making a case for it, as opposed to just criticizing SI's plan.

Comment author: timtyler 18 May 2012 11:58:25PM *  1 point [-]

Probably Adaptation in Natural and Artificial Systems. Here's Holland's most famous theorem in the area. It doesn't suggest genetic algorithms make for some kind of optimal search - indeed, classical genetic algorithms are a pretty stupid sort of search.

Comment author: PhilGoetz 02 July 2012 01:17:27AM *  0 points [-]

That is the book. I'm referring to the entire contents of chapters 5-7. The schema theorem is used in chapter 7, but it's only part of the entire argument, which does show that genetic algorithms approach an optimal distribution of trials among the different possibilities, for a specific definition of optimal that is not easy to parse out of Holland's book, due to his failure to give an overview or decent summary of what he is doing. It doesn't say anything about other forms of search; it applies only to search that proceeds by taking a big set of possible answers, which give stochastic results when tested, and allocating trials among them.
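The kind of search Holland analyzes can be made concrete with a minimal sketch: a population of candidate answers, a noisy fitness test, and fitness-proportional allocation of trials via selection, crossover, and mutation. This is an illustrative toy (the OneMax problem and all parameters are my own assumptions, not Holland's setup), not a statement of his theorems.

```python
import random

def fitness(bits):
    """Toy objective (OneMax): count of 1-bits in the candidate."""
    return sum(bits)

def evolve(n_bits=20, pop_size=40, generations=60, p_mut=0.01, seed=0):
    """Classical genetic algorithm: fitness-proportional selection,
    one-point crossover, per-bit mutation. All parameters illustrative."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Allocate reproductive trials in proportion to observed fitness.
        weights = [fitness(ind) + 1e-9 for ind in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, n_bits)  # one-point crossover
            nxt.append(a[:cut] + b[cut:])
            nxt.append(b[:cut] + a[cut:])
        # Per-bit mutation keeps exploring possibilities selection would drop.
        pop = [[1 - g if rng.random() < p_mut else g for g in ind]
               for ind in nxt]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The "allocation of trials" framing is visible in the `weights` line: candidates that have tested well so far get more offspring (trials), while mutation preserves a trickle of trials for everything else.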

Comment author: RolfAndreassen 18 May 2012 06:06:44PM 0 points [-]

CEV is not any old set of evolved values. It is the optimal set of evolved values; the set you get when everything goes exactly right. Of your two proposed futures, one of them is a better approximation to this than the other; I just can't say which one, at this time, because of lack of computational power. That's what we want a FAI for. :)

Comment author: Wei_Dai 18 May 2012 06:31:28PM *  2 points [-]

Instead of pushing Phil to accept the entirety of your position at once, it seems better to introduce some doubt first: Is it really very hard to do better than to just not interfere? If I have other values besides evolution, should I give them up so quickly?

Also, if Phil has already thought a lot about these questions and thinks he is justified in being pretty certain about his answers, then I'd be genuinely curious what his reasons are.

Comment author: RolfAndreassen 18 May 2012 08:06:29PM 3 points [-]

I misread the nesting, and responded as though your comment were a critique of CEV, rather than Phil's objection to CEV. So I talked a bit past you.

Comment author: TheOtherDave 18 May 2012 06:15:06PM 1 point [-]

But you're evading Wei_Dai's question here.

What criteria does the CEV-calculator use to choose among those options? I agree that significant computational power is also required, but it's not sufficient.

Comment author: RolfAndreassen 18 May 2012 08:09:17PM 1 point [-]

If we were able to formally specify the algorithm by which a CEV calculator should extrapolate our values, we would already have solved the Friendliness problem; your query is FAI-complete. But informally, we can say that the CEV evaluates by whatever values it has at a given step in its algorithm, and that the initial values are the ones held by the programmers.

Comment author: DanArmak 19 May 2012 03:45:09PM 1 point [-]

The problem with this kind of reasoning (as the OP makes plain) is that there's no good reason to think such CEV maximization is even logically possible. Not only do we not have a solution, we don't have a well-defined problem.

Comment author: TheOtherDave 18 May 2012 09:10:41PM 0 points [-]

(nods) Fair enough. I don't especially endorse that, but at least it's cogent.

Comment author: RolfAndreassen 18 May 2012 06:04:47PM 6 points [-]

The whole point of CEV is that it goes as many levels meta as necessary! And the other whole point of CEV is that it is better at coming up with strategies than you are.

Comment author: PhilGoetz 02 July 2012 01:23:07AM -1 points [-]

Please explain either one of your claims. For the first, show me where something Eliezer has written indicates CEV has some notion of how meta it is going, or how meta it "should" go, or anything at all relating to your claim. The second appears to merely be a claim that CEV is effective, so its use in any argument can only be presuming your conclusion.

Comment author: RolfAndreassen 02 July 2012 04:41:57AM -1 points [-]

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; *extrapolated as we wish that extrapolated, interpreted as we wish that interpreted*.

My emphasis. Or to paraphrase, "as meta as we require."

Comment author: PhilGoetz 26 August 2012 10:07:33PM 0 points [-]

Writing "I define my algorithm for problem X to be that algorithm which solves problem X" is unhelpful. Quoting said definition, doubly so.

In any case, the passage you quote says nothing about how meta to go. There's nothing meta in that entire passage.

Comment author: Armok_GoB 20 May 2012 12:07:53AM 0 points [-]

CEV goes infinite levels meta, that's what the "extrapolated" part means.

Comment author: CronoDAS 21 May 2012 05:33:49AM 1 point [-]

Countably infinite levels or uncountably infinite levels? ;)

Comment author: Armok_GoB 21 May 2012 08:29:25PM 1 point [-]

Countably, I think; since computing power is presumably finite, the infinity argument relies on the series being convergent.

Comment author: PhilGoetz 02 July 2012 01:21:12AM 0 points [-]

No, that isn't what the "extrapolated" part means. The "extrapolated" part means closure and consistency over inference. This says nothing at all about the level of abstraction used for setting goals.

Comment author: gRR 18 May 2012 01:07:37PM -1 points [-]

it's pretty hard to do better than to just not interfere except to keep the ecosystem from collapsing

Isn't this exactly what we wish FAI to do - interfere the least while keeping everything alive?

Comment author: thomblake 18 May 2012 01:49:03PM 1 point [-]

Isn't this exactly what we wish FAI to do - interfere the least while keeping everything alive?

Almost certainly not. We'd have massive overpopulation in no time. I remember someone did this analysis; I think it showed insects would cover the Earth in days.
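The overpopulation claim is just exponential growth. A back-of-envelope check, with every number an illustrative assumption (starting population, footprint per insect, and doubling rate are my guesses, not from the analysis thomblake recalls):

```python
# How many days until an exponentially growing insect population
# carpets the Earth's surface? All constants are rough assumptions.

EARTH_SURFACE_M2 = 5.1e14    # Earth's total surface area in square meters
INSECT_FOOTPRINT_M2 = 1e-4   # assume ~1 cm^2 of ground per insect
DOUBLINGS_PER_DAY = 1        # assume the population doubles daily

population = 1e18            # assumed starting order of magnitude
days = 0
while population * INSECT_FOOTPRINT_M2 < EARTH_SURFACE_M2:
    population *= 2 ** DOUBLINGS_PER_DAY
    days += 1

print(days)
```

Under these assumptions the loop terminates in single-digit days; the exact figure is sensitive to the constants, but the qualitative point (days, not centuries) survives any reasonable choice.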