
cousin_it comments on r/HPMOR on heroic responsibility - Less Wrong Discussion

9 Post author: Eliezer_Yudkowsky 21 August 2012 11:08AM




Comment author: cousin_it 21 August 2012 12:18:21PM *  16 points [-]

It seems to me that if many people adopt heroic responsibility to their own values, then a handful of people with destructive values might screw up everyone else, because destroying is easier than helping people.

Comment author: RolfAndreassen 21 August 2012 07:03:22PM 3 points [-]

Well yes, but what exactly are you going to substitute for individual judgement of what actions to take? Suppose you decide that the best course of action is to follow code X, and to enforce that others do so as well, even when it seems like breaking the code this one time would give better results (perhaps for reasons of precommitment or average-case utility or what-have-you)... then oops, you seem to have made a decision in accordance with your best judgement, there.

Comment author: V_V 21 August 2012 10:04:55PM -1 points [-]

That's the Übermensch morality of dictators and totalitarian regimes. The problem is that every dictator thinks of themselves as a benevolent dictator, but it turns out that they are often mistaken.

Even when there are multiple aspiring dictators, all with essentially benevolent values, conflict on who should rule can be very much destructive.

Of course this presupposes consequentialism. In deontology, morality is typically viewed as a social contract where each party has well-defined responsibilities.

I don't think it's an accident that violent totalitarian ideologies tend to be consequentialist: these innocent heretics/Jews/bourgeois stand between us and our utopia. Kill'em all!

Comment author: amcknight 21 August 2012 10:41:20PM 10 points [-]

citations please! I doubt that most dictators think they are benevolent and are consequentialists.

Comment author: Decius 22 August 2012 01:51:10PM 2 points [-]

I think that most dictators who make it into history books think about benevolence differently than most people.

Comment author: wedrifid 17 September 2012 06:03:12AM *  0 points [-]

citations please! I doubt that most dictators think they are benevolent and are consequentialists.

Thank you! I get tired of the whole "everybody thinks they are good" nonsense I hear all the time. I call it mind projection. Some people just don't care.

Comment author: shminux 21 August 2012 04:13:27PM *  0 points [-]

There is an obvious parallel between HJPEV and AGI: he can do (and does) stuff no (other) human can even conceive of doing.

How do you know if your values and goals are constructive or destructive? It all comes down to the same hard question of FAI (well, FHHI, friendly human hero intelligence, in this case): are your values CEV-aligned? So, the first thing Harry should do is stop running around saving people and derive the CEV :)

One can, of course, argue that, as a human hero, HJPEV has the right CEV built-in and should just run around implementing it. However, given that even his friends and allies disagree (Dumbledore is a deathist, McGonagall is a disciplinarian, Hermione thinks they are too young, Weasley twins just want to have fun), and he gets the most help from the anti-hero Quirrell, this point of view is hard to defend.

The situation is much worse outside of fiction, where buggy or limited wetware constantly leads would-be heroes (Lenin, Castro, Lincoln, or even Hitler) to inflict suffering or destruction on the very people they aspired to help.

So the first question a would-be hero should ask herself is whether she is prepared to live with the consequences of her actions if they backfire. (And if the answer is yes, she is clearly a villain.)

Comment author: Eliezer_Yudkowsky 21 August 2012 08:37:49PM 12 points [-]

CEV is a construct for AI purposes that actual human beings can't eval - I don't think I've ever seen a human discussion that was helped by invoking it. It's not like Solomonoff Induction where sometimes you really can be helped by thinking formally about Occam's Razor. In practice, human beings arguing about ethics are either already approximating their part of the 'good' as best they can, or they're confused about something much simpler than CEV, like consequentialism. If you should never use the word 'truth' when you can just talk about the object level, and never say 'because it's not optimal!' when that just means 'I don't think you should do that', then there's basically never a good time to talk about CEV - it always deflates out of the sentence unless you're talking directly about FAI.

Comment author: shminux 22 August 2012 12:11:51AM *  5 points [-]

I suppose my point is that, if you adopt "heroic responsibility", you ought to put in the correspondingly heroic amount of effort into figuring out what a hero ought to do. And given that your Harry plans to take over the world and then radically change it, he ought to do an awful lot of figuring out first. Probably of the same order of magnitude an FAI would.

Comment author: V_V 21 August 2012 10:15:41PM 2 points [-]

It's not like Solomonoff Induction where sometimes you really can be helped by thinking formally about Occam's Razor.

That's curious, because Solomonoff Induction is something not even an enormously powerful (but computable) AI can evaluate.

Comment author: Eliezer_Yudkowsky 22 August 2012 01:49:36AM 5 points [-]

Yes, but my point is that thinking about SI or MML in the abstract helps because people sometimes gain insight from asking "How complex is that computer program?" I haven't seen appeal-to-CEV produce much insight in practice, and any insight it could produce can probably be better produced by appealing to the relevant component principle of CEV instead. (Nor yet is this a critique of CEV, because it's meant as an AI design, not as a moral intuition pump.)

Comment author: V_V 22 August 2012 10:04:26AM 2 points [-]

Can you provide an example where Solomonoff Induction can be used to gain insight that Occam's razor alone doesn't help to gain?

Comment author: Kawoomba 22 August 2012 01:38:40PM 0 points [-]

How else can you impartially wield Occam's Razor than with a formal model, and what convincing formalization is there other than Kolmogorov Complexity (and assorted variants), which SI in a way extends?

Comment author: V_V 22 August 2012 04:04:13PM 1 point [-]

Setting aside the theoretical objections to Solomonoff induction (a priori assumption of computability of the hypotheses, disregard of logical depth, dependence on the details of the computational model, normalization issues), even if you accept it as a proper formalization of Occam's Razor, in order to apply it in a formal argument you would have to perform an uncomputable calculation.

Since you can't do that, what's left of it?
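[Editorial illustration: to make the flavor of the universal prior concrete without pretending to compute it, one can evaluate the 2^(-description length) weighting directly over a tiny, hand-enumerated hypothesis space. The hypotheses and their bit counts below are invented for the toy; this is not Solomonoff induction itself, which is uncomputable.]

```python
# Toy sketch: Solomonoff-style 2^(-length) weighting over a small,
# hand-picked hypothesis space with stipulated description lengths.
# The shape of the universal prior, minus the universality.

hypotheses = {
    # hypothesis -> description length in bits (stipulated for the toy)
    "all heads": 3,
    "alternating": 5,
    "arbitrary 8-flip sequence": 8,
}

# Weight each hypothesis by 2^(-bits), then renormalize.
weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

for h, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

Shorter descriptions dominate the prior, which is the formal content behind "prefer the simpler hypothesis"; the uncomputability only enters when the hypothesis space is all programs rather than three stipulated entries.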

Comment author: Kawoomba 22 August 2012 05:10:13PM 0 points [-]

Besides noting that there are computable versions of Kolmogorov Complexity (such as MML), in your parent comment you contrasted the use of SI with using Occam's Razor itself.

That's what I was asking about, and it doesn't seem like you answered it:

How do you use Occam's Razor, what formalizations do you perceive as "proper", or if you're just intuiting the heuristic, guesstimating the complexity, what is the formal principle that your intuition derives from / approximates and how does it differ from e.g. Kolmogorov Complexity?

Comment author: V_V 22 August 2012 07:13:29PM 0 points [-]

Besides noting that there are computable versions of Kolmogorov Complexity (such as MML)

If by MML you mean Minimum message length, then I don't think that's correct. This paper compares Minimum message length with Kolmogorov Complexity but it doesn't seem to make that claim.

How do you use Occam's Razor, what formalizations do you perceive as "proper", or if you're just intuiting the heuristic, guesstimating the complexity, what is the formal principle that your intuition derives from / approximates and how does it differ from e.g. Kolmogorov Complexity?

My point is that Kolmogorov complexity, Solomonoff induction, etc., are mathematical constructions with a formal semantics. Talking about "informal" Kolmogorov complexity is pseudo-mathematics, which is usually an attempt to make your arguments sound more compelling than they are by dressing them in mathematical language.

If there is a disagreement about which hypothesis is simpler, trying to introduce concepts such as ill-defined program lengths that can't be computed can only obscure the terms of the debate rather than clarify them.
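[Editorial illustration: the closest thing to a usable "informal Kolmogorov complexity" in practice is a compressor-relative proxy, which makes the model-dependence point above concrete rather than refuting it: the number you get depends entirely on which compressor you pick. A minimal sketch, invented for illustration:]

```python
import random
import zlib

def compressed_len(s: str) -> int:
    """Bytes of s after zlib compression: a crude, compressor-dependent
    stand-in for description length -- NOT Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

regular = "ab" * 500  # highly patterned, 1000 characters
random.seed(0)
irregular = "".join(random.choice("ab") for _ in range(1000))  # noisy, 1000 characters

# The patterned string compresses far better than the noisy one.
print(compressed_len(regular), compressed_len(irregular))
```

Any such proxy inherits the compressor's biases, so two parties using different compressors can rank the same pair of hypotheses differently, which is one concrete form of the "dependence on the computational model" objection.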

Comment author: Manfred 22 August 2012 01:14:50PM 0 points [-]

William of Ockham originally used his principle to argue for the existence of God (God is the only necessary entity, therefore the simplest explanation).

Comment author: V_V 22 August 2012 03:45:18PM 2 points [-]

That's a truly epic fail, since Occam's razor is the strongest argument against the existence of God.

It's worth noting that the current formulation "entities must not be multiplied beyond necessity" is much more recent than Ockham's original formulation "For nothing ought to be posited without a reason given, unless it is self-evident (literally, known through itself) or known by experience or proved by the authority of Sacred Scripture."

I suppose that he included the reference to the Sacred Scripture specifically because he realized that without it, God would be the first thing to fly out of the window.

Comment author: CarlShulman 17 September 2012 04:09:54AM *  0 points [-]

I sometimes wish I knew which philosophers of the time were sincere in their religious disclaimers.

Comment author: Jayson_Virissimo 17 September 2012 05:26:31AM 0 points [-]

That should go in a quotes thread.

Consider it done.

Comment author: Will_Newsome 22 August 2012 02:33:14AM *  0 points [-]

It's not like Solomonoff Induction where sometimes you really can be helped by thinking formally about Occam's Razor. In practice, human beings arguing about ethics are either already approximating their part of the 'good' as best they can, or they're confused about something much simpler than CEV, like consequentialism.

It's exactly like Solomonoff Induction where most of the time you really can't be helped by thinking formally about Occam's Razor. In practice, human beings arguing about probabilities are either already approximating their part of the 'simple' as best they can, or they're confused about something much simpler (haha) than Solomonoff Induction, like Bayesianism.

Comment author: kilobug 21 August 2012 07:05:00PM 0 points [-]

Nitpicking note: I don't think the Weasley twins just want to have fun. They are in Gryffindor, in the Order of the Phoenix, they fight the Death Eaters to the end... they want to have fun, but they also want others to have fun.

Comment author: shminux 21 August 2012 07:11:45PM 1 point [-]

In the canon, sure. Their HPMOR characters are not nearly as nuanced.

Comment author: Desrtopa 22 August 2012 02:02:09AM 2 points [-]

Well, it's not as if they've been given nearly as much opportunity to characterize themselves by anything else.

Comment author: shminux 22 August 2012 02:55:56AM 1 point [-]

Oh, I did not mean this in a negative way.