Bugmaster comments on Welcome to Less Wrong! (July 2012) - Less Wrong

20 Post author: ciphergoth 18 July 2012 05:24PM


Comment author: RichardKennaway 28 February 2013 05:52:19PM 3 points [-]

However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time.

This is proving the conclusion by assuming it.

Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.

The words make a perfectly logical pattern, but I find that the picture they make is absurd. The ontology has gone wrong.

Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to network with someone over food. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by a memetic immune disorder.

For what it's worth, I do not watch TV. And now I am imagining a chapter of that book entitled "Never Sleep Alone".

Comment author: Bugmaster 28 February 2013 07:58:45PM 0 points [-]

This is proving the conclusion by assuming it.

How so? Imagine that you have two identical paperclip maximizers; for simplicity's sake, let's assume that they are not capable of radical self-modification (though the results would be similar if they were). Each agent is capable of converting raw titanium to paperclips at the same rate. Agent A spends 100% of its time on making paperclips. Agent B spends 80% of its time on paperclips, and 20% of its time on watching TV. If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first?
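The race described above is simple rate arithmetic; here is a minimal sketch, where the conversion rate and block size are assumed illustrative numbers (neither appears in the comment):

```python
# A minimal sketch of the two-agent race above. The conversion rate and
# block size are assumed illustrative numbers, not taken from the comment.
RATE = 1.0     # titanium converted per hour of actual paperclipping work
BLOCK = 100.0  # size of the identical titanium block each agent receives

time_a = BLOCK / (RATE * 1.0)  # Agent A: 100% of time on paperclips
time_b = BLOCK / (RATE * 0.8)  # Agent B: 80% paperclips, 20% TV

# A finishes strictly earlier for any positive rate and block size.
print(time_a, time_b)  # 100.0 125.0
```

Any nonzero entertainment share makes B finish later, which is all the comment's arithmetic claims.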

That is what the saying "he who would be Pope must think of nothing else" looks like in practice.

FeepingCreature addressed this better than I could in this comment. I understand that you find the idea of making paperclips (or political movements, or software, or whatever) all day every day with no breaks abhorrent, and so do I. But then, some people find polyamory abhorrent as well, and then they "polyhack" themselves and grow to enjoy it. Is entertainment your terminal value, or a mental bias? And if it is a terminal value, is it the best terminal value that you could possibly have?

Comment author: RichardKennaway 01 March 2013 12:00:10AM *  1 point [-]

WARNING: This comment contains explicit discussion of an information hazard.

Imagine that you have two identical paperclip maximizers

I decline to do so. What imaginary creatures would choose whose choice has been written into their definition is of no significance. (This is also a reply to the comment of FeepingCreature you referenced.) I'm more interested in the practical question of how actual human beings, which this discussion began with, can avoid the pitfall of being taken over by a utility monster they've created in their own heads.

This is a basilisk problem. Unlike Roko's, which depends on exotic decision theory, this one involves nothing more than plain utilitarianism. Unlike the standard Utility Monster scenario, this one involves no imaginary entities or hypothetical situations. You just have to look at the actual world around you through the eyes of utilitarianism. It's a very short road from the innocent-sounding "the greatest good for the greatest number" to this: There are seven billion people on this planet. How can the good you could do them possibly be outweighed by any amount of your own happiness? Just by sitting there reading LessWrong you're killing babies! Having a beer? You're drinking dead babies. Own a car? You're driving on a carpet of dead babies! Murderer! Murderer! Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours.

But even Peter Singer doesn't go that far, continuing to be an academic professor and paying his utilitarian obligations by preaching utilitarianism and donating twenty percent of his salary to charity.

This is such an obvious failure mode for utilitarianism, a philosophy at least two centuries old, that surely philosophers must have addressed it. But I don't know what their responses are.

Christianity has the same problem, and handles it in practice by testing the vocation of those who come to it seeking to devote their whole life to the service of God, to determine whether they are truly called by God. For it is written that many are called, yet few are chosen. In non-supernatural terms, that means determining whether the applicant is psychologically fitted for the life they feel called to, and if not, deflecting their mania into some more productive route.

Comment author: TheOtherDave 01 March 2013 03:30:12AM 3 points [-]

Consider two humans, H1 and H2, both utilitarians.

H1 looks at the world the way you describe Peter Singer here.
H2 looks at the world "through the eyes of utilitarianism" as you describe it here.

My expectation is that H1 will do more good in their lifetime than H2.
What's your expectation?

Comment author: [deleted] 09 March 2013 11:54:47AM 0 points [-]

And then you have people like H0, who notices that H2 is crazy, decides that this means they shouldn't even try to be altruistic, and accuses H1 of hypocrisy because she's not like H2. (Exhibit A)

Comment author: RichardKennaway 01 March 2013 09:57:06AM 0 points [-]

That is my expectation also. However, persuading H2 of that ("but dead babies!") is likely to be a work of counselling or spiritual guidance rather than reason.

Comment author: TheOtherDave 01 March 2013 10:11:52PM 2 points [-]

Well... so, if we both expect H1 to do more good than H2, it seems that if we were to look at them through the eyes of utilitarianism, we would endorse being H1 over being H2.
But you seem to be saying that H2, looking through the eyes of utilitarianism, endorses being H2 over being H1.
I am therefore deeply confused by your model of what's going on here.

Comment author: RichardKennaway 08 March 2013 11:23:51PM 0 points [-]

Oh yes, H1 is more effective, healthier, saner, more rational, etc. than H2. H2 is experiencing existential panic and cannot relinquish his death-grip on the idea.

Comment author: TheOtherDave 08 March 2013 11:42:39PM 2 points [-]

You confuse me further with every post.

Do you think being a utilitarian makes someone less effective, healthy, sane, rational, etc.?
Or do you think H2 has these various traits independently of being a utilitarian?

Comment author: whowhowho 09 March 2013 12:48:43AM 1 point [-]

There are many different kinds of utilitarian.

Comment author: RichardKennaway 08 March 2013 11:50:05PM 0 points [-]

WARNING: More discussion of a basilisk, with a link to a real-world example.

It's a possible failure mode of utilitarianism. Some people succumb to it (see George Price for an actual example of a similar failure) and some don't.

I don't understand your confusion and this pair of questions just seems misconceived.

Comment author: TheOtherDave 09 March 2013 12:59:41AM 1 point [-]

(shrug) OK.
I certainly agree with you that some utilitarians suffer from the existential panic and inability to relinquish their death-grips on unhealthy ideas, while others don't.
I'm tapping out here.

Comment author: whowhowho 09 March 2013 12:47:11AM 1 point [-]

One could reason that one is better placed to do good effectively when focusing on oneself, one's family, one's community, etc., simply because one understands them better.

Comment author: Eliezer_Yudkowsky 01 March 2013 06:52:18PM 0 points [-]

Infohazard reference with no warning sign. Edit and reply to this so I can restore.

Comment author: RichardKennaway 08 March 2013 11:18:33PM 1 point [-]

Done. Sorry this took so long, I've been taken mostly offline by a biohazard for the last week.

Comment author: [deleted] 09 March 2013 11:39:26AM *  0 points [-]

(Warning: replying to discussion of a potential information hazard.)

Whfg ol fvggvat gurer ernqvat YrffJebat lbh'er xvyyvat onovrf! Univat n orre? Lbh'er qevaxvat qrnq onovrf.

Gung'f na rknttrengvba (tvira gung ng gung cbvag lbh unqa'g nqqrq zragvbarq genafuhznavfz lrg) -- nf bs abj, vg'f rfgvzngrq gb gnxr zber guna gjb gubhfnaq qbyynef gb fnir bar puvyq'f yvsr jvgu Tvirjryy'f gbc-engrq punevgl. (Be vf ryrpgevpvgl naq orre zhpu zber rkcrafvir jurer lbh'er sebz?)

Comment author: Bugmaster 01 March 2013 01:04:25AM 0 points [-]

What imaginary creatures would choose whose choice has been written into their definition is of no significance.

Are you saying that human choices are not "written into their definition" in some measure?

Also, keep in mind that a goal like "make more paperclips" does leave a lot of room for other choices. The agent could spend its time studying metallurgy, or buying existing paperclip factories, or experimenting with alloys, or attempting to invent nanotechnology, or some combination of these and many more activities. It's not constrained to just a single path.

Just by sitting there reading LessWrong you're killing babies! ... Add a dash of transhumanism and you can up the stakes to an obligation to bringing about billions of billions of future humans throughout the universe living lives billions of times better than ours.

On the one hand, I do agree with you, and I can't wait to see your proposed solution. On the other hand, I'm not sure what this has to do with the topic. I wasn't talking about billions of future humans or anything of the sort, merely about a single (semi-hypothetical) human and his goals; whether entertainment is a terminal or instrumental goal; and whether it is a good goal to have.

Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it? People with extremely low preferences for passive entertainment do exist, after all, so this scenario isn't entirely fantastical (other than the magic pill part, of course).

Comment author: whowhowho 09 March 2013 04:21:41PM 0 points [-]

Are you saying that human choices are not "written into their definition" in some measure?

What is written into humans by evolution is hardly relevant. The point is that you can't prove anything about humans by drawing a comparison with imaginary creatures that have had something potentially quite different written into them by their creator.

Comment author: RichardKennaway 08 March 2013 11:43:56PM *  0 points [-]

Are you saying that human choices are not "written into their definition" in some measure?

I have no idea what that even means.

On the one hand, I do agree with you, and I can't wait to see your proposed solution.

My only solution is "don't do that then". It's a broken thought process, and my interest in it ends with that recognition. Am I a soul doctor? I am not. I seem to be naturally resistant to that failure, but I don't know how to fix anyone who isn't.

Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it ?

What desire for passive entertainment? For that matter, what is this "passive entertainment"? I am not getting a clear idea of what we are talking about. At any rate, I can't imagine "entertainment" in the ordinary meaning of that word being a terminal goal.

FWIW, I do not watch television, and have never attended spectator sports.

People with extremely low preferences for passive entertainment do exist, after all

Quite.

Comment author: Bugmaster 09 March 2013 02:48:00AM *  0 points [-]

Are you saying that human choices are not "written into their definition" in some measure?

I have no idea what that even means.

To rephrase: do you believe that all choices made by humans are completely under the humans' conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?

You objected to my using Clippy as an analogy to human behaviour, on the grounds that Clippy's choices are "written into its definition". My point is that (a) Clippy is free to make whatever choices it wants, as long as it believes (correctly or erroneously) such choices would lead to more paperclips; (b) we humans operate in a similar way, only we care about things other than paperclips; and therefore (c) Clippy is a valid analogy.

My only solution is "don't do that then".

Don't do what? Do you have a moral theory which works better than utilitarianism/consequentialism?

What desire for passive entertainment? For that matter, what is this "passive entertainment"?

You don't watch TV or attend sports, but do you read any fiction books? Listen to music? Look at paintings or sculptures (on your own initiative, that is, and not as part of a job)? Enjoy listening to some small subclass of jokes? Watch any movies? Play video games? Stare at a fire at night? I'm just trying to pinpoint your general level of interest in entertainment.

At any rate, I can't imagine "entertainment" in the ordinary meaning of that word being a terminal goal.

Just because you personally can't imagine something, doesn't mean it's not true. For example, art and music -- both of which are forms of passive entertainment -- have been a part of human history ever since the caveman days, and continue to flourish today. There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music. On the other hand, there are lots of things hardcoded in our genes that we'd be better off without...

Comment author: RichardKennaway 09 March 2013 03:08:31PM *  0 points [-]

To rephrase: do you believe that all choices made by humans are completely under the humans' conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?

The whole language is wrong here.

What does it mean to talk about a choice being "completely under the humans' conscious control"? Obviously, the causal connections wind through and through all manner of things that are outside consciousness as well as inside. When could you ever say that a decision is "completely under conscious control"?

Then you talk as if a decision not "completely under conscious control" must be "written into the genes". Where does that come from?

do you read any fiction books?

Why do you specify fiction? Is fiction "passive entertainment" but non-fiction something else?

There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music.

What is this "us" that is separate from and acted upon by our genes? Mentalistic dualism?

My only solution is "don't do that then".

Don't do what? Do you have a moral theory which works better than utilitarianism/consequentialism?

Don't crash and burn. I have no moral theory and am not impressed by anything on offer from the philosophers.

To sum up, there's a large and complex set of assumptions behind everything you're saying here that I don't think I share, but I can only guess at from glimpsing the shadowy outlines. I doubt further discussion will get anywhere useful.

Comment author: whowhowho 09 March 2013 12:53:10AM 0 points [-]

Are you saying that human choices are not "written into their definition" in some measure?

I think Bugmaster is equating being "written in" in the sense of a stipulation in a thought experiment with being "written in" in the sense of being the outcome of an evolutionary process.

Comment author: RichardKennaway 09 March 2013 03:14:17PM 0 points [-]

If he is, he shouldn't. These are completely different concepts.

Comment author: whowhowho 09 March 2013 12:55:24AM 0 points [-]

If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first?

That has no relevance to morality. Morality is not winning; it is not efficiently fulfilling an arbitrary utility function.