Comment author: username2 14 October 2016 09:58:31PM *  2 points [-]

We've now delved beyond the topic -- which is okay, I'm just pointing that out.

I think it's okay for one person to value some lives more than others, but not that much more.

I'm not quite sure what you mean by that. I'm a duster, not a torturer, which means that there are some actions I just won't do, no matter how many utilons get multiplied on the other side. I consider it okay for one person to value another to such a degree that they are literally willing to sacrifice every other person to save the one, as in the mother-and-baby trolley scenario. Is that what you mean?

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

If I can draw a political analogy which may even be more than an analogy, moral decision making via utilitarian calculus with assumed equal weights on (sentient, human) life is analogous to the central planning of communism: from each what they can provide, to each what they need. Maximize happiness. With perfectly rational decision making and everyone sharing common goals, this should work. But of course in reality we end up at best with inefficient distribution of resources due to failures in planning or execution. The pragmatic reality is even worse: people don't on the whole work altruistically for the betterment of society, and so you end up with nepotistic, kleptocratic regimes that exploit the wealth of the country for the self-serving purposes of those on top.

Recognizing and embracing the fact that people have conflicting moral values (even if restricted to only the weights they place on others' happiness) is akin to the enlightened self-interest of capitalism. People are given self-agency to seek personal benefits for themselves and those they care about, and societal prosperity follows. Of course in reality all non-libertarians know that there are a wide variety of market failures, and achieving maximum happiness requires careful crafting of incentive structures. It is quite easy to show mathematically and historically that restricting yourself to multi-agent games with Pareto-optimal outcomes (capitalism with good incentives) cuts you off from some achievable outcomes. Central planning got us to the Moon. Non-profit-maximizing thinking is getting SpaceX to Mars. It's more profitable to mitigate the symptoms of AIDS with daily antiviral drugs than to cure the disease outright. Etc. But nevertheless it is generally capitalist societies that experience the most prosperity, whether measured by quality of life, technological innovation, material wealth, or happiness surveys.

To finally circle back to your question, I'm not saying that it is right or wrong that the mother cares for her child to the exclusion of literally everyone else. Or even that she SHOULD think this way, although I suspect that is a position I could argue for. What I'm saying is that she should embrace the moral intuitions her genes and environment have impressed upon her, and not try to fight them via System 2 thinking. And if everyone does this we can still live in a harmonious and generally good society even though our neighbors don't exactly share our values (I value my kids, they value theirs).

I've previously been exposed to the writings and artwork of peasants who lived through the harshest time of Chairman Mao's Great Leap Forward, and it is remarkable how similar their thoughts, concerns, fears and introspections can be to those of people who struggle with LW-style "shut up and multiply" utilitarianism. For example, I spoke with someone at a CFAR workshop who has had real psychological issues for a decade over the internal conflict between the selfless "save the world" work he feels he SHOULD be doing, or doing more of, and basic fulfillment of Maslow's hierarchy, a conflict that leaves him feeling guilty and thinking he's a bad person.

My own opinion and advice? Work your way up Maslow's hierarchy of needs using just your ethical intuitions as a guide. Once you have the luxury of being at the top of the pyramid, then you can start to worry about self-actualization by working to change the underlying incentives that guide the efforts of our society and create our environmentally-driven value functions in the first place.

Comment author: philh 18 October 2016 12:07:27PM 0 points [-]

I think I basically agree with the "embrace existing moral intuitions" bit.

Unpacking my first paragraph in the other post, you might get: I prefer people to have moral intuitions that value their kids equally with others, but if they value their own kids a bit more, that's not terrible; our values are mostly aligned; I expect optimisation power applied to those values will typically also satisfy my own values. If they value their kids more than literally everyone else, that is terrible; our values diverge too much; I expect optimisation power applied to their values has a good chance of harming my own.

Comment author: username2 13 October 2016 11:42:29PM *  1 point [-]

But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.

Never, in my opinion. Put every other human being on the tracks (excluding other close family members to keep this from being a Sophie's choice "would you rather..." game). The mother should still act to protect her child. I'm not joking.

You can post-facto rationalize this by valuing the kind of society where mothers are ready to sacrifice their kids, and are indeed encouraged to do so to save another life, over the world where mothers simply always protect their kids no matter what.

But I don't think this is necessary -- you don't need to validate it on utilitarian grounds. Rather, it is perfectly okay for one person to value some lives more than others. We shouldn't want to change this, IMHO. And I think the OP's question about donating 100% to charity, to their own detriment, is symptomatic of the problems that arise from utilitarian thinking. After all, if the OP were not having an internal conflict between internal morals and supposedly rational utilitarian thinking, he wouldn't have asked the question...

Comment author: philh 14 October 2016 04:33:07PM 0 points [-]

I think it's okay for one person to value some lives more than others, but not that much more. ("Okay" - not ideal in theory, maybe a good thing given other facts about reality, I wouldn't want to tear it down for multiple reasons.)

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

[Link] GiveWell: A case study in effective altruism, part 1

0 philh 14 October 2016 10:46AM
Comment author: DanArmak 08 October 2016 09:44:11PM *  4 points [-]

These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.

With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.

And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.

Comment author: philh 09 October 2016 07:56:56PM 2 points [-]

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents.

I think the article is trying to help groups set up discourse norms that help people find the truth. (The update uses the phrase "socioepistemic virtue".) It's not so much about helping individuals defend against other individuals, as about helping groups defend their members against bad agents.

[Link] Six principles of a truth-friendly discourse

4 philh 08 October 2016 04:56PM
Comment author: CellBioGuy 04 October 2016 10:00:49PM *  11 points [-]

Advice solicited. Topics of interest I have lined up for upcoming posts include:

  • The history of life on Earth and its important developments
  • The nature of the last universal common ancestor (REALLY good new research on this just came out)
  • The origin of life and the different schools of thought on it
  • Another exploration of time, going over a paper that came out this summer that did essentially what I did a few months earlier in my "Space and Time Part II" calculations of our point in star and planet order (showing we are not early, and are right around when you would expect to find the average biosphere), but extended it to types of stars and their lifetimes, in a way I think I can improve upon.
  • My thoughts on how and why SETI has been sidetracked away from activities that are more likely to be productive towards activities that are all but doomed to fail, with a few theoretical case studies
  • My thoughts on how the Fermi paradox / 'great filter' is an ill-posed concept
  • Interesting recent research on the apparent evolutionary prerequisites for primate intelligence

Any thoughts on which of these are of particular interest, or other ideas to delve into?

Comment author: philh 05 October 2016 10:40:48AM 2 points [-]

I'd find all of these interesting, particularly the first three and the last.

I'm glad you're back.

Comment author: username2 04 October 2016 08:33:17PM 0 points [-]

Maybe? But consider that the opposite of what you just claimed sounds just as plausible to an outside observer. "Do what I mean" doesn't sound all that complicated -- even to someone with a background in computer science or AI specifically. "Do what I mean" translates as "accurately determine the principles which constrain my own actions and use those to constrain the AI's, or otherwise build a model of my thinking which the AI can use to evaluate options." Sub-goals such as verifying that the model matches reality fall easily out of this definition.

It's not at all clear, even to a practitioner within the field, that this expansion doesn't work, if in fact it does not.

Comment author: philh 05 October 2016 09:25:15AM 0 points [-]

It's not necessarily that the AI would have difficulty understanding what "do what humans mean" means, even before being told to do what humans mean.

It just has no reason to obey "do what humans mean" unless we program it to do what humans mean.

"Do what humans mean" is telling the AI to do something that we can currently only specify vaguely. "Figure out what we intend by "do what humans mean", and then do that" is also vaguely specified. It doesn't solve the problem.

Comment author: philh 13 September 2016 07:29:28PM 3 points [-]

As I understand it, if I buy a chicken in a supermarket, this causes approximately one chicken to be killed; similarly for beef etc., adjusting for the amount of meat versus the size of the animal.

Does anyone know how this number changes with discounts? I'm specifically thinking of the thing where my local supermarket reduces the price of things when they're approaching their expiry date.
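For concreteness, the "adjusting for the amount of meat versus the size of the animal" step can be sketched as a back-of-the-envelope calculation. The edible-yield figures below are my own rough assumptions for illustration, not numbers from the comment:

```python
# Rough expected animals killed per purchase, treating a purchase of
# X kg of meat as accounting for X / (edible kg per animal) animals.
# The yield figures are assumed round numbers, not established data.

EDIBLE_KG_PER_ANIMAL = {
    "chicken": 1.5,   # assumed edible yield of one chicken
    "cow": 220.0,     # assumed edible yield of one cow
}

def animals_per_purchase(animal, kg_bought):
    """Fraction of one animal's death attributable to this purchase."""
    return kg_bought / EDIBLE_KG_PER_ANIMAL[animal]

print(animals_per_purchase("chicken", 1.5))  # buying a whole bird: 1.0
print(animals_per_purchase("cow", 0.5))      # a 500 g steak: ~1/440 of a cow
```

This says nothing about the discount question, which is really about how the supermarket's future orders respond to whether discounted stock sells or gets thrown away.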

Comment author: DataPacRat 10 September 2016 02:30:07AM 3 points [-]

Matrix multiplication

Could somebody explain to me, in a way I'd actually understand, how to (remember how to) go about multiplying a pair of matrices? I've looked at Wikipedia, I've read linear algebra books up to where they supposedly explain matrices, and I keep bouncing up against a mental wall where I can't seem to remember how to figure out how to get the answer.

Comment author: philh 12 September 2016 10:44:49AM *  3 points [-]

Low confidence that this will help, but my approach: I mentally move the right-hand matrix up, so that the space "in between" them (right of the first, below the second) is the right shape for the result. Each value of the result is the dot product of the vectors to the left and above it. (I don't have a trick for dot products, I just know how to calculate them.)

. . . . g h i
a b c * j k l
d e f . m n o

"becomes"

. . . g h i
. . . j k l
. . . m n o
. . . -----
a b c|S T W
d e f|X Y Z

and e.g. S is (a b c) dot (g j m), Y is (d e f) dot (h k n).
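The same row-by-column dot-product rule can be written as a few lines of code; this is just an illustrative sketch in plain Python, with example matrices of my own choosing:

```python
# Matrix multiplication via the picture above: each entry of the
# result is the dot product of a row of the left matrix with a
# column of the right matrix.

def dot(u, v):
    # Dot product: multiply matching entries and sum.
    return sum(x * y for x, y in zip(u, v))

def matmul(A, B):
    # zip(*B) transposes B, giving us its columns as tuples.
    cols_of_B = list(zip(*B))
    return [[dot(row, col) for col in cols_of_B] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]        # 2x3, like (a b c / d e f)
B = [[7, 8, 9],
     [10, 11, 12],
     [13, 14, 15]]     # 3x3, like (g h i / j k l / m n o)

# The top-left entry ("S" in the diagram) is row (1 2 3) dot
# column (7 10 13) = 7 + 20 + 39 = 66.
print(matmul(A, B))  # [[66, 72, 78], [156, 171, 186]]
```

Note the shapes match the mental picture: a 2x3 times a 3x3 leaves a 2x3 gap "in between", which is the shape of the result.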

Comment author: James_Miller 06 September 2016 11:45:26PM 0 points [-]

I'm having trouble sending a text message with an iPad and iPhone. The recipient isn't receiving the text. What are the sources of error other than a wrong number? Also, are you supposed to input the 1 before the 10-digit number? It's U.S. to U.S.

Comment author: philh 08 September 2016 10:31:23AM 2 points [-]

What's the recipient using, and did they previously use something else? If they switched from iPhone to not-iPhone, there's a bug with Apple's iMessage that could prevent them from receiving texts. https://support.apple.com/en-gb/HT204270
