Open thread, Oct. 10 - Oct. 16, 2016

3 Post author: MrMind 10 October 2016 07:00AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (115)

Comment author: username2 10 October 2016 09:23:33AM 6 points [-]

Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist? I just read "Amy Schumer offers you a look into your soul" and I really liked it, but I don't have enough time to read all the posts on the blog.

Comment author: 9eB1 10 October 2016 05:01:32PM 3 points [-]

I would be very interested in this as well. In the meantime, there is a subreddit for the site that has a thread with best posts for a new reader, and a thread on people's favorite things from TLP.

Comment author: username2 12 October 2016 08:18:51PM 2 points [-]

Hey thanks for this. I had some time and I compiled this chronologically ordered list of links from those threads for personal use. https://my.mixtape.moe/nrbmyr.html

Comment author: scarcegreengrass 10 October 2016 05:24:47PM *  2 points [-]

This blog is so wordy and cultural that I (unfamiliar with the context) find it actually challenging to figure out what the premise, thesis, or content of the post is. Reminds me of my experience with discovering arcane 'neoreaction' blogs.

Comment author: ChristianKl 10 October 2016 09:51:24PM 2 points [-]

It's certainly not a blog that tries to pander to the reader.

Comment author: turchin 10 October 2016 11:13:53AM 5 points [-]

If we knew that AI would be created by Google, and that it would happen in the next 5 years, what should we do?

Comment author: James_Miller 10 October 2016 01:59:55PM 10 points [-]

Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.

Comment author: turchin 10 October 2016 02:28:19PM 3 points [-]

Good point, but my question was about what we can do to raise the chances that it will be friendly AI.

Comment author: skeptical_lurker 10 October 2016 06:26:46PM 7 points [-]

Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google is developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.
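For reference, Löb's theorem (the obstacle alluded to here) can be stated in one line, where $\Box P$ reads "P is provable in the system":

```latex
% Löb's theorem: if a theory proves "if P is provable then P", it already proves P.
\Box(\Box P \rightarrow P) \rightarrow \Box P
```

The connection to provably friendly AI (as in MIRI's tiling-agents work) is roughly that an agent reasoning about its own successors cannot, in general, prove that whatever its future self proves is actually true, without collapsing into proving everything.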

Comment author: Houshalter 10 October 2016 08:07:29PM *  4 points [-]

I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.

There are some methods that may work on NN based approaches. For instance my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short term ones. Or even AIs that don't have goals at all and just make predictions. E.g., predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.

These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.

I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's difficult to predict what they would do after that. E.g., they might just tweak their own RAM to set reward = +Inf, and then not do anything else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.
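The reward-tampering worry in that last paragraph can be caricatured in a few lines. This is a deliberately silly toy, not a claim about how real RL systems behave: the action names and reward numbers are invented, and a one-step greedy chooser stands in for an actual learner. The point is only that once an action amounting to "overwrite the reward register" is available, a pure reward-maximizer prefers it to every task.

```python
# Toy caricature of reward tampering: a one-step greedy "agent" that picks
# whichever action its reward table scores highest.
rewards = {
    "make_paperclips": 1.0,
    "serve_humans": 0.9,
    "set_own_reward_register": float("inf"),  # the wireheading action
}

def greedy_action(reward_table):
    # Return the action with the highest predicted reward.
    return max(reward_table, key=reward_table.get)

print(greedy_action(rewards))  # prints "set_own_reward_register"
```

What the agent does *after* reaching +Inf is exactly what this toy cannot say, which is the commenter's point.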

Comment author: turchin 11 October 2016 09:42:39AM 0 points [-]

I agree. FAI should somehow use a human upload or a human-like architecture for its value core. In that case values will be represented in it in complex and non-orthogonal ways, and at least one human-like creature will survive.

Comment author: turchin 11 October 2016 09:35:41AM *  2 points [-]

Yes. I think that we need a solution that is not only workable but also implementable. If someone creates an 800-page PDF starting with a new set theory, a solution to the Löb's theorem problem, etc., and comes to Google with it and says: "Hi, please switch off everything you have and implement this" - it will not work.

But in 2016 MIRI added a line of research on machine learning.

Comment author: James_Miller 11 October 2016 04:10:40AM 1 point [-]

Get a job at Google or seek to influence the people developing the AI. If, say, you were a beautiful woman you could, probably successfully, start a relationship with one of Google's AI developers.

Comment author: username2 11 October 2016 07:24:07PM -1 points [-]

I am confused as to whether I should upvote for "get a job at Google" or downvote for "prostitute yourself".

Comment author: turchin 11 October 2016 06:21:53AM 1 point [-]

And how would she use this relationship to make AI safer?

Comment author: James_Miller 11 October 2016 02:33:02PM 3 points [-]

She could read "The Basic AI Drives" to him at night.

Comment author: turchin 11 October 2016 03:14:53PM 1 point [-]

In the hope that he will stop creating AI? But in 6 years it will be Microsoft.

Comment author: Lumifer 10 October 2016 02:48:06PM *  -2 points [-]

Nothing, because we still don't know what a friendly AI is.

Comment author: skeptical_lurker 10 October 2016 06:21:41PM 4 points [-]

That doesn't mean that there is nothing to do - if you don't know what FAI is, then you try to work out what it is.

Comment author: Lumifer 10 October 2016 06:43:42PM -1 points [-]

And how do you find out whether you're right or not?

Comment author: DanArmak 10 October 2016 02:55:47PM 2 points [-]

We do know it isn't an AI that kills us. Options b and c still qualify.

Comment author: Lumifer 10 October 2016 03:10:09PM 1 point [-]

Options (b) and (c) are basically wishes and those are complex X-D

"Not kill us" is an easy criterion; we already have an AI like that, and it plays Go well.

Comment author: DanArmak 10 October 2016 04:18:24PM 3 points [-]

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Comment author: Lumifer 10 October 2016 04:43:09PM 0 points [-]

If it's a tool AGI, I don't see how it would help with friendliness, and if it's an active self-developing AGI, I thought the canonical position of LW was that there could be only one, and that it's too late to do anything about friendliness at that point?

Comment author: DanArmak 10 October 2016 09:32:01PM 0 points [-]

I agree there would probably only be one successful AGI, so it's not the first step of many. I meant it would be a step in that direction. Poor phrasing on my part.

Comment author: Houshalter 10 October 2016 08:15:41PM 1 point [-]

Friendly AI is an AI which maximizes human values. We know what it is, we just don't know how to build one. Yet, anyway.

Comment author: Lumifer 11 October 2016 06:38:33PM 2 points [-]

We don't know what an AI which maximizes human values is because we don't know what human values are at the necessary level of precision. Not to mention the assumption that the AI will be a maximizer and that values can be maximized.

Comment author: Houshalter 12 October 2016 07:34:44AM 1 point [-]

Who says we need to hardcode human values though? Any reasonable solution will involve an AI that learns what human values are. Or some other approach to the control problem that makes AIs that don't want to harm or defy their creators.

Comment author: Lumifer 12 October 2016 04:35:05PM 1 point [-]

But if you don't know what human values are, how can you be sure that the AI will learn them correctly?

So you make an AI and tell it: "Go forth and learn human values!" It goes and in a while comes back and says "Behold, I have learned them". How do you know this is true?

Comment author: Houshalter 13 October 2016 04:13:14AM 0 points [-]

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly. I can't look at the weights and see if they are correct dog image recognizing weights and not something else. But I can trust the process of training and validation, that the AI has learned to recognize what dogs look like.

It's a similar principle with learning human values. Of course it's more complicated than just feeding it images of dogs, but the principle of letting AIs learn models from real world data is the important part.
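The train-then-validate process described in that comment can be sketched with a deliberately tiny stand-in for the dog recognizer: pure Python, a single feature, and a learned threshold instead of network weights (all of this is an invented illustration, not anyone's actual setup). We never inspect the "weights" to check they encode dog-ness; we only trust the score on held-out examples.

```python
import random

random.seed(0)

# Each example: one feature value and a true label ("is it a dog?").
# The hidden rule the learner must recover is "feature > 0.5".
examples = [(x, x > 0.5) for x in (random.random() for _ in range(1000))]
train, validation = examples[:800], examples[800:]

def accuracy(threshold, data):
    # Fraction of examples the classifier "x > threshold" labels correctly.
    return sum((x > threshold) == label for x, label in data) / len(data)

# "Training": pick the candidate threshold that best fits the training set.
candidates = [i / 100 for i in range(101)]
learned = max(candidates, key=lambda t: accuracy(t, train))

# Validation on unseen data is the only check we get on what was learned.
print(round(learned, 2), round(accuracy(learned, validation), 2))
# learned threshold ≈ 0.5, validation accuracy ≈ 1.0
```

The disanalogy Lumifer presses below is, of course, that for dogs we can cheaply generate labeled held-out examples; for human values it is unclear what the validation set would even be.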

Comment author: Lumifer 13 October 2016 02:22:08PM *  0 points [-]

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly.

Of course you do. You test it. You show it a lot of images (that it hasn't seen before) of dogs and not-dogs and check how good it is at differentiating them.

How would that process work for an AI and human values?

the principle of letting AIs learn models from real world data

Right, human values: “A man's greatest pleasure is to defeat his enemies, to drive them before him, to take from them that which they possessed, to see those whom they cherished in tears, to ride their horses, and to hold their wives and daughters in his arms.”

Comment author: ChristianKl 10 October 2016 02:02:25PM 4 points [-]

Get employed by Google.

Comment author: ZankerH 10 October 2016 03:56:07PM 1 point [-]

Despair and dedicate your remaining lifespan to maximal hedonism.

Comment author: skeptical_lurker 10 October 2016 05:55:04PM -1 points [-]

Google do not strike me as incompetent, and they do have ethics oversight for AI. Worry, yes; despair, no.

Comment author: Thomas 10 October 2016 01:56:24PM 1 point [-]

First, this is not very unlikely.

Second, be faster than them.

Comment author: username2 11 October 2016 07:20:11PM 0 points [-]

Rejoice because the end is near.

Maybe buy Google stock?

Comment author: niceguyanon 11 October 2016 05:32:28PM 4 points [-]

https://www.quora.com/How-can-I-get-Wi-Fi-for-free-at-a-hotel/answer/Yishan-Wong

Want free wifi when staying at a hotel? Ask for it. Of course! Duh, it seems so obvious now that I think about it.

Comment author: roland 10 October 2016 12:20:15PM 3 points [-]

Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Comment author: pcm 10 October 2016 04:44:24PM 3 points [-]

I suspect attempted telekinesis is relevant.

Comment author: Tem42 13 October 2016 11:23:22PM 0 points [-]

If it is severe enough that you are posting here about it making you feel bad, it is worth trying to replace it with a mental habit that works equally well to prevent future errors but feels better.

It is good to gain control over your mental habits in general, and this sounds like a good place to start.

If those statements appear true to you, no other analysis of this behavior is likely necessary.

Comment author: torekp 13 October 2016 12:36:16AM 0 points [-]

Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. It was healthy in the former.

Comment author: skeptical_lurker 10 October 2016 06:14:36PM 2 points [-]

We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.

I read that France and Greece lost 3 & 5% of their millionaires last year (or possibly the year before), citing economic depression and rising racial/religious tension, with the most popular destination being Australia (as it has the 1st or 2nd highest HDI in the world). 3-5% may not seem like a lot, but if it were sustained for several years it quickly piles up. The feedback effects are obvious - the wealthier members of society find it easier to leave and perhaps have more of a motive to leave an economic collapse, which decreases tax revenue, which increases collapse etc. On the flip side, Australia attracts these people and its economy grows more making it even more attractive...

Socially, the same effect as described in EY's essay I linked happens on a national scale - if the 'blue' people leave, the country becomes 'greener' which attracts more greens and forces out more blues. And social/economic factors feed into each other too - economic collapses cause extremism of all sorts, while I imagine a wealthy society attracting elites would be more able to handle or avoid conflicts.

Now, this is not automatically a bad thing, or at least it might be bad locally for some people, but perhaps not globally. Any thoughts as to what sort of outcomes there might be? And incidentally, how many people can you fit in Australia? I know its very big, but also has a lot of desert.
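The "it quickly piles up" claim in that comment is a one-line compounding calculation. Taking the 3% and 5% annual figures cited above and, purely hypothetically, holding them constant for a decade:

```python
# If a country loses a fixed fraction of its millionaires every year,
# the remaining share after n years is (1 - rate) ** n.
def remaining(rate, years):
    return (1 - rate) ** years

print(f"{remaining(0.03, 10):.2f}")  # prints 0.74: about a quarter gone
print(f"{remaining(0.05, 10):.2f}")  # prints 0.60: about 40% gone
```

And this ignores the feedback loop the comment describes, under which the rate itself would rise as the tax base shrinks.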

Comment author: Lumifer 10 October 2016 06:42:04PM 3 points [-]

Brain drain has been a concern of some for a long time.

Comment author: Houshalter 10 October 2016 08:17:50PM 1 point [-]

And also competitive tax rates have been a popular subject in politics for a long long time. "If we tax millionaires/businesses, what stops them from just leaving to another country/state/city?"

Comment author: skeptical_lurker 15 October 2016 01:35:03PM 0 points [-]

Indeed, but I was wondering whether modern social and technological changes will accelerate this.

Comment author: ChristianKl 10 October 2016 09:46:15PM 1 point [-]

And incidentally, how many people can you fit in Australia? I know its very big, but also has a lot of desert.

You can fit many people in California despite it being desert.

Comment author: username2 11 October 2016 07:18:35PM 0 points [-]

*Southern California

Comment author: Daniel_Burfoot 11 October 2016 07:34:51PM 0 points [-]

In my view, segregating the world by values would actually be really good. People who have very different belief systems should not try or be forced to live in the same country.

Comment author: WalterL 12 October 2016 02:46:05PM -1 points [-]

Yes, those with my values will live here, in Gondor. Your folks can live other there, in Mordor. Our citizens will no longer come into contact and conflict with one another, and peace will reign forever.

What, these segregated regions THEMSELVES come into conflict? Absurd. What would you even call a conflict that was between large groups of people? That could never happen. Everyone who shares my value system knows that lots of people would die, and we all agree that nothing could be worth that.

Comment author: Daniel_Burfoot 12 October 2016 06:49:03PM 1 point [-]

Downvoted for making a flippant, argument-based-on-fiction response to serious comment.

Comment author: WalterL 12 October 2016 07:53:10PM -1 points [-]

Here's a more serious response.

  1. Segregating the world, period, based on whatever, is impossible without a coercive power that the existing nations of earth would consider illegal. Before you could forcefully migrate a large percentage of the world's humans you'd have to win a war with whatever portion of the UN stood against you.
  2. If you could do it, no one would admit to having any values other than those which got to live in/own the nicest places/stuff/be with their family / not be with their competitors/whatever. The technology to determine everyone's values does not exist.
  3. If you somehow derived everyone's values and split them by these, you would probably be condemning large segments of the population to misery (Lots of people's values are built around living around people who don't share them.), and there would be widespread resentment. The invincible force you used to overcome objection 1 would be tested within a generation.
Comment author: Daniel_Burfoot 13 October 2016 02:27:02AM 2 points [-]

Okay, I obviously don't mean that we should value-segregate people at the point of a gun. I mean that if people naturally want to migrate towards geopolitical communities that better fit their particular value system, this is probably a good thing.

Comment author: WalterL 13 October 2016 03:52:56AM 0 points [-]

Yeah, I agree that people being able to travel freely and choose where they live is good.

Comment author: Houshalter 12 October 2016 07:28:29AM 1 point [-]

But the problem is it's not just by values. It's also by wealth and intelligence and education. If you have half of the world that is really poor, and anyone that is intelligent or wealthy automatically leaves, then they will probably stay poor forever.

Comment author: dhoe 10 October 2016 12:27:50PM 2 points [-]

My partner has requested that I learn to give a good massage. I don't enjoy massages myself and the online resources I find seem to mostly steeped in woo to some degree. Does anybody have some good non-woo resources for learning it?

Comment author: ChristianKl 10 October 2016 01:05:28PM 3 points [-]

The standard way to learn massage is through taking a course.

I would also recommend Betty Martin's 3-Minute Game as a secular massage-like practice: https://www.youtube.com/watch?v=auokDp_EA80

Comment author: turchin 10 October 2016 12:46:06PM 3 points [-]

There are 5 times more members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9800) than in the group "Existential risks" (1880). What should we conclude from this?

Comment author: ChristianKl 10 October 2016 12:53:15PM 5 points [-]

Nothing. I don't think facebook membership counts are a good measurement.

Comment author: DanArmak 10 October 2016 02:54:19PM 4 points [-]

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Comment author: ChristianKl 10 October 2016 09:02:41PM 0 points [-]

The success of a Facebook group depends a lot on how it gets promoted and whether there are a few people who care about creating content for it.

Comment author: DanArmak 10 October 2016 09:32:39PM 0 points [-]

Is the 'success' of a group its number of members, regardless of actual activity?

Comment author: ChristianKl 10 October 2016 09:43:51PM 0 points [-]

I don't think I would need to define it that way for the above comment to be coherent.

Comment author: DanArmak 10 October 2016 09:56:24PM 0 points [-]

Of course not. Then you meant simply the success of the goals of the group's creators?

Comment author: ChristianKl 10 October 2016 10:13:27PM 1 point [-]

I think my sentence is true with both definitions of success.

Comment author: Gunnar_Zarncke 10 October 2016 08:54:37PM 2 points [-]

Link: http://www.vhemt.org/

It's very likely much bigger than 9800. It is also very balanced and laid back in its views and methods. I'd think that contributes.

Comment author: MrMind 11 October 2016 07:23:41AM *  1 point [-]

I looked into some of the most obvious objections. Some have reasonable answers (why not just kill yourself?), while others rest on a (to me) crazy assumption: that the original, pre-human state of the biosphere is somehow more valuable than the collective experience of the human race.
I don't just disagree with this; I think it's a logic error, since values exist only in the minds of those who can compute them, whatever they are.

Comment author: WalterL 12 October 2016 02:51:36PM 2 points [-]

grumble grumble...

Look, I'm not pro-"Kill All Humans", but I don't think that last step is correct.

Bob can prefer that the human race die off and the earth spin uninhabited forever. It makes him evil, but there's no "logic error" in that, any more than there is in Al's preference that humanity spread out throughout the stars. They both envision future states and take actions that they believe will cause those states.

Comment author: MrMind 17 October 2016 08:04:23AM 0 points [-]

I think it's a logical error from the point of view of my theory of computational meta-ethics, not from a general absolute point of view.
Indeed, by the VNM theorem, any course of action which is self-consistent can be said to have a guiding value.
But if you see values as something that is calculated inside an agent, as I do, and that exists only in the minds of those who execute that computation, then bringing about a state of the world that terminates your existence is a fallacy: whatever value you are maximizing, you cannot maximize it without anyone who can compute it.
Note that this formulation would allow substituting all humans with computronium devoted to calculating that value, so it is still vulnerable to UFAI, but at least it rejects prima facie a simple extinction of all sentient life.

Comment author: WalterL 17 October 2016 01:45:09PM 0 points [-]

Ok, but that sounds like a problem with your theory, not someone's else's logic error.

Like, when you call something a "logic error", my first instinct is to check its logic. Then when you clarify that what you mean is that it didn't meet with your classification system's approval, I feel like you are baiting and switching. Maybe go with "sin", or "perversion", to make clear that your meaning is just "Mr. Mind doesn't like this".

Comment author: Romashka 16 October 2016 09:21:18PM *  1 point [-]

A recommendation from personal experience (n=1 or 2): translating (or proof-reading) articles for a journal specializing in a field close (but not very close) to your own gives you a more-or-less regular opportunity to read reviews of literature which you wouldn't have thought to survey on your own.

I find it cool. One day I just browse the net, looking at whatever I happen to look at; the next day, bacteria growing on industrial waste come knocking. And the advantage of reading the text in my native tongue is the tiny decrease in cognitive effort needed to process the information (more than made up for by the effort of translation, but hey, practice).

Comment author: MrMind 11 October 2016 01:06:33PM 1 point [-]

Is there a good rebuttal to the question of why we don't donate 100% of our income to charity? I mean, as explanations, tribalism / near-far are OK, but is there a good post-hoc justification?

Comment author: turchin 11 October 2016 02:03:47PM *  3 points [-]

Some possible arguments against charities. Personally I think that it is normal to donate around 1 per cent of income in the form of charity support.

  1. Some can't survive on less, or have other obligations that look like charity (child support)
  2. We would have less incentive to earn more
  3. It would hurt our economy, as it is consumer driven. We must buy iPhones
  4. I do many useful things which are intended to help other people, but I need pleasures to restore my commitment, so I spend money on myself.
  5. I pay taxes, and that is like charity.
  6. I know better how to spend money on my own needs.
  7. Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
  8. If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will be jobless and will need charity money to survive. So the more I give to charity, the more people need it.
  9. If you overdonate, you could flip-flop and start to hate the whole thing, especially if you find that your money was not spent effectively.
  10. Donating 100 per cent will make you look crazy in the eyes of some, and their will to donate will diminish.
  11. If you spend more on yourself, you could ask for a higher salary and as a result earn more and donate more. Only a homeless and jobless person could donate 100 per cent.
Comment author: gjm 11 October 2016 03:10:30PM -1 points [-]

100%? Well, your future charitable donations will be markedly curtailed after you starve to death.

Comment author: ahbwramc 12 October 2016 02:42:06AM 0 points [-]

I mean, Laffer Curve-type reasons if nothing else.

Comment author: siIver 11 October 2016 03:50:10PM *  0 points [-]

100% doesn't work because then you starve. If I re-formulate your question as "is there any rebuttal to why we don't donate way more to charity than we currently do", then the answer depends on your belief system. If you are a utilitarian, the answer is a definitive no: you should spend way more on charity.

Comment author: username2 11 October 2016 07:28:09PM *  1 point [-]

Nonsense. I believe my life and the lives of people close to me are more important than someone starving in a place whose name I can't pronounce. I just don't assign the same weight to all people. That is perfectly consistent with utilitarianism.

Comment author: siIver 11 October 2016 07:40:04PM *  0 points [-]

Er... no. Utilitarianism prohibits that exact thing by design. That's one of its most important aspects.

Read the definition. This is unambiguous.

Comment author: username2 11 October 2016 08:45:03PM 3 points [-]

"Utilitarianism is a theory in normative ethics holding that the best moral action is the one that maximizes utility." -Wikipedia

The very next sentence starts with "Utility is defined in various ways..." It is entirely possible for there to be utility functions that treat sentient beings differently. John Stuart Mill may have phrased it as "the greatest good for the greatest number", but the crux is in the word "good", which is left undefined. This is as opposed to, say, virtue ethics, which doesn't care per se about the consequences of actions.

Comment author: Good_Burning_Plastic 12 October 2016 01:09:01PM 0 points [-]

If I re-formulate your question to "is there any rebuttal to why we don't donate way more to charity than we currently do" then the answer depends on your belief system.

(And also on how much money you currently donate to charity.)

Comment author: WalterL 12 October 2016 02:53:52PM 0 points [-]

"Don't wanna", shading into "Make Me" if they press. Anyone trying to tell you what to do isn't your Real Dad! (Unless they are, in which case maybe try and figure out what's going on.)

Comment author: username2 11 October 2016 07:29:31PM 0 points [-]

A mother that followed that logic would push her own baby in front of a trolley to save five random strangers. Ask yourself if that is the moral framework you really want to follow.

Comment author: Lumifer 11 October 2016 07:47:10PM 0 points [-]

Hey, look here, you totally should. All that emotional empathy just gets in the way.

Comment author: username2 11 October 2016 08:48:50PM *  0 points [-]

Read it already. Let's be clear: you think the mother should push her baby in front of a trolley to save five random strangers? If so, why? If not, why not? I don't consider this a loaded question -- it falls directly out of the utilitarian calculus and assumed values that lead to "donate 100% to charities."

[Let's assume the strangers are also same-age babies, so there's no weasel ways out ("baby has more life ahead of it", etc.)]
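The disagreement in this subthread is really about weights, and it can be made concrete in a few lines. Everything here is invented for illustration (the 10.0 weight, the names, the action labels): the same additive aggregation rule gives opposite verdicts on the trolley case depending on whether every life gets weight 1 or the agent's own child gets extra weight.

```python
# Additive "utilitarian" aggregation: sum the weights of the lives saved.
# Anyone absent from the weights table gets the default weight of 1.0.
def utility(lives_saved, weights):
    return sum(weights.get(person, 1.0) for person in lives_saved)

push = ["stranger1", "stranger2", "stranger3", "stranger4", "stranger5"]
dont_push = ["own_baby"]

weights_partial = {"own_baby": 10.0}  # hypothetical partiality toward one's child

# Equal weights: pushing saves 5.0 vs. 1.0, so classical utilitarianism pushes.
print(utility(push, {}) > utility(dont_push, {}))                        # True
# Partial weights: 5.0 vs. 10.0, so the weighted version refuses to push.
print(utility(push, weights_partial) > utility(dont_push, weights_partial))  # False
```

Whether the second rule still deserves the name "utilitarianism" is exactly what the two commenters are disputing; the arithmetic itself is neutral on that.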

Comment author: SithLord13 13 October 2016 03:37:16PM 0 points [-]

There are a lot of conflicting aspects to consider here outside of a vacuum. Discounting the unknown unknowns, which could factor heavily here since it's an emotionally biasing topic, you've got the fact that the baby is going to be raised by a presumably attentive mother, as opposed to the five who wound up in that situation once, showing at least some increased risk of falling victim to such a situation again. Then there's the psychological damage to the mother, which is going to be even greater because she had to do the act herself. Then you've got the fact that a child raised by a mother who is willing to do it has a greater chance of being raised in such a way as to have a net positive impact on society. Then you have the greater potential for preventing the situation in the future, thanks to the increased visibility of the higher death toll. I'm certain there are more aspects I'm failing to note.

But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.

Comment author: username2 13 October 2016 11:42:29PM *  1 point [-]

But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.

Never, in my opinion. Put every other human being on the tracks (excluding other close family members to keep this from being a Sophie's choice "would you rather..." game). The mother should still act to protect her child. I'm not joking.

You can rationalize this post facto by weighing the kind of society where mothers are ready to sacrifice their kids, and are indeed encouraged to do so to save other lives, against the world where mothers simply always protect their kids no matter what.

But I don't think this is necessary -- you don't need to validate it on utilitarian grounds. Rather, it is perfectly okay for one person to value some lives more than others. We shouldn't want to change this, IMHO. And I think the OP's question about donating 100% to charity, to his own detriment, is symptomatic of the problems that arise from utilitarian thinking. After all, if the OP were not having an internal conflict between his own morals and supposedly rational utilitarian thinking, he wouldn't have asked the question...

Comment author: philh 14 October 2016 04:33:07PM 0 points [-]

I think it's okay for one person to value some lives more than others, but not that much more. ("Okay" - not ideal in theory, maybe a good thing given other facts about reality, I wouldn't want to tear it down for multiple reasons.)

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

Comment author: username2 14 October 2016 09:58:31PM *  1 point [-]

We've now delved beyond the topic -- which is okay, I'm just pointing that out.

I think it's okay for one person to value some lives more than others, but not that much more.

I'm not quite sure what you mean by that. I'm a duster, not a torturer, which means that there are some actions I just won't take, no matter how many utilons get multiplied on the other side. I consider it okay for one person to value another to such a degree that they are literally willing to sacrifice every other person to save the one, as in the mother-and-baby trolley scenario. Is that what you mean?

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

If I can draw a political analogy which may even be more than an analogy, moral decision making via utilitarian calculus with assumed equal weights to (sentient, human) life is analogous to the central planning of communism: from each what they can provide, to each what they need. Maximize happiness. With perfectly rational decision making and everyone sharing common goals, this should work. But of course in reality we end up with at best inefficient distribution of resources due to failures in planning or execution. The pragmatic reality is even worse: people don't on the whole work altruistically for the betterment of society, and so you end up with nepotistic, kleptocratic regimes that exploit the wealth of the country for self-serving purpose of those on top.

Recognizing and embracing the fact that people have conflicting moral values (even if restricted to only the weights they place on others' happiness) is akin to the enlightened self-interest of capitalism. People are given self-agency to seek personal benefits for themselves and those they care about, and societal prosperity follows. Of course in reality all non-libertarians know that there are a wide variety of market failures, and achieving maximum happiness requires careful crafting of incentive structures. It is quite easy to show, mathematically and historically, that limiting yourself to multi-agent games with Pareto optimal outcomes (capitalism with good incentives) prevents you from being able to craft all possible outcomes. Central planning got us to the Moon. Non-profit-maximizing thinking is getting SpaceX to Mars. It's more profitable to mitigate the symptoms of AIDS with daily antiviral drugs than to cure the disease outright. Etc. But nevertheless it is generally capitalist societies that experience the most prosperity, as measured by quality of life, technological innovation, material wealth, or happiness surveys.

To finally circle back to your question, I'm not saying that it is right or wrong that the mother cares for her child to the exclusion of literally everyone else. Or even that she SHOULD think this way, although I suspect that is a position I could argue for. What I'm saying is that she should embrace the moral intuitions her genes and environment have impressed upon her, and not try to fight them via System 2 thinking. And if everyone does this we can still live in a harmonious and generally good society even though each of our neighbors don't exactly share our values (I value my kids, they value theirs).

I've previously been exposed to the writings and artwork of peasants who lived through the harshest times of Chairman Mao's Great Leap Forward, and it is remarkable how similar their thoughts, concerns, fears and introspections can be to those who struggle with LW-style "shut up and multiply" utilitarianism. For example, I spoke with someone at a CFAR workshop who has had real psychological issues for a decade over the internal conflict between the selfless "save the world" work he feels he SHOULD be doing, or doing more of, and the basic fulfillment of Maslow's hierarchy that leaves him feeling guilty and thinking he's a bad person.

My own opinion and advice? Work your way up Maslow's hierarchy of needs using just your ethical intuitions as a guide. Once you have the luxury of being at the top of the pyramid, then you can start to worry about self-actualization by working to change the underlying incentives that guide the efforts of our society and create our environmentally-driven value functions in the first place.

Comment author: philh 18 October 2016 12:07:27PM 0 points [-]

I think I basically agree with the "embrace existing moral intuitions" bit.

Unpacking my first paragraph in the other post, you might get: I prefer people to have moral intuitions that value their kids equally with others, but if they value their own kids a bit more, that's not terrible; our values are mostly aligned; I expect optimisation power applied to those values will typically also satisfy my own values. If they value their kids more than literally everyone else, that is terrible; our values diverge too much; I expect optimisation power applied to their values has a good chance of harming my own.

Comment author: SithLord13 15 October 2016 02:50:53AM 0 points [-]

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Can you expand on this a bit? (Full disclosure I'm still relatively new to Less Wrong, and still learning quite a bit that I think most people here have a firm grip on.) I would think they illuminate a great deal about our underlying moral values, if we assume they're honest answers and that people are actually bound by their morals (or are at least answering as though they are, which I believe to be implicit in the question).

For example, I'm also a duster, and that "would you rather" taught me a great deal about my morality. (Although to be fair what it taught me is certainly not what was intended, which was that my moral system is not strictly multiplicative but is either logarithmic or exponential or some such function where a non-zero number that is sufficiently small can't be significantly increased simply by having it apply to significantly multiple people.)

Comment author: username2 17 October 2016 05:02:08PM *  2 points [-]

This is deserving of a much longer answer which I have not had the time to write and probably won't any time soon, I'm sorry to say. But in short summary, human drives and morals are more behaviorist than utilitarian. The utility function approximation is just that, an approximation.

Imagine you have a shovel, and while digging you hit a large rock and the handle breaks. Was that shovel designed to break, in the sense that its purpose was to break? No, shovels are designed to dig holes. Breakage, for the most part, is just an unintended side-effect of the materials used. Now in some cases things are intended to fail early for safety reasons, e.g., to have the shovel break before your bones do. But even then this isn't some underlying root purpose. The purpose of the shovel is still to dig holes. The breakage is more a secondary consideration to prevent undesirable side effects in some failure modes.

Does learning that the shovel breaks when it exceeds normal digging stresses tell you anything about the purpose / utility function of the shovel? Pedantically, a little bit, if you accept the breaking point being a designed-in safety consideration. But it doesn't enlighten us about the hole-digging nature at all.

Would you rather put dust in the eyes of 3^^^3 people, or torture one individual to death? Would you rather push one person onto the trolley tracks to save five others? These are failure-mode analyses of edge cases. The real answer is I'd rather have dust in no one's eyes, nobody tortured, and nobody hit by trolleys. Making an arbitrary what-if tradeoff between these scenarios doesn't tell us much about our underlying desires because there isn't some consistent mathematical utility function underlying our responses. At best it just reveals how we've been wired by genetics and upbringing and present environment to prioritize our behaviorist responses. Which is interesting, to be sure. But not very informative, to be honest.

Comment author: MrMind 14 October 2016 08:24:02AM *  0 points [-]

Ah, as it happens, I have none of those conflicts. I asked because I'm preparing an article on utilitarianism, and I hit upon the question I posted as a good proxy for the hard problems in adopting it as a moral theory.
But I can understand that someone who believes this might have a lot of internal struggles.

Full disclosure: I'm a Duster, not a Torturer. But I'm trying to steelman Torture.

Comment author: username2 14 October 2016 06:07:12PM 1 point [-]

Ah, then I look forward to reading your article :)

Comment author: Lumifer 11 October 2016 08:54:08PM 0 points [-]

...and did you read my comments in the thread?

Comment author: username2 11 October 2016 09:15:52PM 0 points [-]

Ah I did (at the time), but forgot it was you that made those comments. So I should direct my question to Jacobian, not you.

In any case I'm certainly not a "save the world" type of person, and find myself thoroughly confused by those who profess to be and enter into self-destructive behavior as a result.

Comment author: Crux 11 October 2016 07:23:52AM *  1 point [-]

Many people who delve into the deep parts of analytical philosophy will end up feeling at times like they can't justify anything, that definite knowledge is impossible to ascertain, and so forth. It's a classic trend. Hume is famous for being a "skeptic", although almost everyone seems to misunderstand what that means within the context of his philosophical system.

See here for a post I wrote which I could have called The Final Antidote to Skepticism.

Comment author: ChristianKl 11 October 2016 01:17:28PM *  0 points [-]

What makes you think that people can pattern-match sociopathy by looking at someone's face? Sociopathy usually doesn't lead to low charisma, or to people getting the sense that they shouldn't interact with the person.

Comment author: Crux 11 October 2016 04:11:25PM 0 points [-]

In certain cases people can pattern-match sociopath by looking at someone's face. I didn't mean to suggest the average person can do it on a consistent basis.

Comment author: niceguyanon 11 October 2016 05:08:28PM 0 points [-]

In certain cases people can pattern-match sociopath by looking at someone's face.

Do you have any links, because this is interesting if true. Kinda like human lie detectors. But I am skeptical, because how would such a thing arise?

Why would sociopaths have distinguishing facial markers and what are they?

Comment author: waveman 11 October 2016 11:15:13PM 1 point [-]

The book "Without Conscience" by Robert Hare, who is a real psychologist, has simple tips on recognizing them. Not purely by photographic appearance, but it is not too hard. For example, with eye contact they tend to stare too long.

Comment author: morganism 16 October 2016 06:40:23PM 0 points [-]

Learning difficulties linked to winter conception

The article points out that the study was done in Scotland, and may be linked to vitamin D uptake

http://questioning-answers.blogspot.com/2016/10/learning-difficulties-linked-with-winter-conception.html

the paper by Daniel Mackay and colleagues [1]

Comment author: Ilverin 11 October 2016 06:13:45PM *  0 points [-]

Is there any product like an adult pacifier that is socially acceptable to use?

I am struggling with self-control to not interrupt people and am afraid for my job.

EDIT: In the meantime (or long-term if it works) I'll use less caffeine (currently 400mg daily) to see if that helps.

Comment author: SithLord13 11 October 2016 06:50:06PM 5 points [-]

Could chewing gum serve as a suitable replacement for you?

Comment author: Lumifer 11 October 2016 06:58:47PM *  3 points [-]

It's socially acceptable to twirl and manipulate small objects in your hands, from pens to stress balls. If you need to get your mouth involved, it's mostly socially acceptable to chew on pens. Former smokers used to hold empty pipes in their mouths, just for comfort, but it's hard to pull off nowadays unless you're old or a full-blown hipster.

Comment author: MrMind 12 October 2016 07:20:20AM *  1 point [-]

How about a lollipop? It's almost the same thing, and ever since Inspector Kojak it's become much more socially acceptable, even cool, if you pull it off well.
If you are a woman, though, you'll likely suffer some sexual objectification (what news!).