
Comment author: jkaufman 13 December 2014 12:09:43PM 11 points [-]

its keynote would be delivered by Ray Kurzweil, Google’s director of engineering

Standard correction: Kurzweil is one of many directors of engineering at Google. It's unfortunate that the name of his title makes it sound like he's the only one.

Comment author: Dagon 09 December 2014 07:52:46PM 4 points [-]

Huh? So your view of a moral theory is that it ranks your options, but there's no implication that a moral agent should pick the best known option?

What purpose does such a theory serve? Why would you classify it as a "moral theory" rather than "an interesting numeric exercise"?

Comment author: jkaufman 09 December 2014 08:51:48PM *  1 point [-]

An agent should pick the best options they can get themselves to pick. In practice these won't be the ones that maximize utility as they understand it, but they will be ones with higher utility than if they just did whatever they felt like. And, more strongly, this gives higher utility than if they tried to do as many good things as possible without prioritizing the really important ones.

Comment author: Lukas_Gloor 09 December 2014 02:52:12PM -1 points [-]

This sounds like preference utilitarianism, the view that what matters for a person is the extent to which her utility function ("preferences") is fulfilled. In academic ethics outside of Less Wrong, "utilitarianism" refers to a family of ethical views, of which the most commonly associated is Bentham's "classical utilitarianism", where "utility" is very specifically defined as the happiness minus the suffering that a person experiences over time.

Comment author: jkaufman 09 December 2014 08:47:47PM 4 points [-]

I'm not seeing where in Dagon's comment they indicate preference utilitarianism as opposed to, e.g., hedonic utilitarianism?

Comment author: James_Miller 09 December 2014 04:53:30PM 2 points [-]

For me utilitarianism means maximizing a weighted sum of everyone's utility, but the weights don't have to be equal. If you give yourself a high enough weight, no extreme self-sacrifice is necessary. The reason to be a utilitarian is that if some outcome is not consistent with it, it should be possible to make some people better off without making anyone worse off.
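A minimal sketch of the weighted-sum idea, with made-up numbers (the people, outcomes, utilities, and the self-weight of 10 are all hypothetical):

    # Weighted-sum aggregation where my own weight is larger (all numbers made up).
    weights = {"me": 10.0, "alice": 1.0, "bob": 1.0}

    outcomes = {
        "extreme_self_sacrifice": {"me": -50, "alice": 40, "bob": 40},
        "moderate_giving":        {"me": -5,  "alice": 10, "bob": 10},
    }

    def weighted_total(outcome):
        return sum(weights[person] * utility for person, utility in outcome.items())

    # With equal weights extreme_self_sacrifice wins (30 vs 15); with a self-weight
    # of 10 it loses (-420 vs -30), so no extreme self-sacrifice is called for.
    best = max(outcomes, key=lambda name: weighted_total(outcomes[name]))
    print(best)  # -> moderate_giving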

Comment author: jkaufman 09 December 2014 08:42:53PM 4 points [-]

This is not a standard usage of the term "utilitarianism". You can have a weighting, for example based on capacity for suffering, but you can't weight yourself more just because you're you and call it utilitarianism.

Comment author: Bugmaster 09 December 2014 08:58:10AM -3 points [-]

The word "utilitarianism" technically means something like, "an algorithm for determining whether any given action should or should not be undertaken, given some predetermined utility function". However, when most people think of utilitarianism, they usually have a very specific utility function in mind. Taken together, the algorithm and the function do indeed imply certain "ethical obligations", which are somewhat tautologically defined as "doing whatever maximizes this utility function".

In general, the word "utilitarian" has been effectively re-defined in common speech as something like, "ruthlessly efficient to the point of extreme ugliness", so utilitarianism gets the horns effect from that.

Comment author: jkaufman 09 December 2014 01:52:27PM 7 points [-]

an algorithm for determining whether any given action should or should not be undertaken, given some predetermined utility function

That's not how the term "utilitarianism" is used in philosophy. The utility function has to be agent neutral. So a utility function where your welfare counts 10x as much as everyone else's wouldn't be utilitarian.

Comment author: Gondolinian 08 December 2014 02:22:48PM 13 points [-]

I move that we make the Stupid Questions threads monthly.

Comment author: jkaufman 08 December 2014 02:58:19PM *  18 points [-]

Start posting it monthly then.

Comment author: SodaPopinski 01 December 2014 10:25:59PM 5 points [-]

Elon Musk often advocates looking at problems from first principles rather than by analogy. My question is: what does this kind of thinking imply for cryonics? Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?

Ralph Merkle put out a plan (though short on details) for cryopreservation at around $4k. It doesn't seem to account for paying the staff or for transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.

Comment author: jkaufman 01 December 2014 11:18:44PM *  4 points [-]

The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat.

Currently the main cost in cryonics is getting you frozen, not keeping you frozen. For example, Alcor gives these costs for neuropreservation:

  • $25k -- Comprehensive Member Standby (CMS) Fund
  • $30k -- Cryopreservation
  • $25k -- Patient Care Trust (PCT)
  • $80k -- Total

The CMS fund covers having the Alcor team ready to stabilize you as soon as you die and transport you to their facility. Then your cryopreservation fee covers filling you with cryoprotectants and slowly cooling you. Then the PCT covers your long-term care. So 69% of your money goes to getting you frozen, and 31% goes to keeping you that way.
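Spelling out that arithmetic as a quick check on the figures above (amounts in $k):

    # Cost split implied by the Alcor neuro figures above (amounts in $k).
    cms, cryopreservation, patient_care_trust = 25, 30, 25
    total = cms + cryopreservation + patient_care_trust  # 80
    getting_frozen = cms + cryopreservation              # 55
    print(round(100 * getting_frozen / total))           # 69 -> % spent getting you frozen
    print(round(100 * patient_care_trust / total))       # 31 -> % spent keeping you frozen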

(Additionally, I don't think current freezing procedures are likely to be sufficient to preserve what makes you you, and I expect better procedures would be more expensive, once we knew what they were.)

EDIT: To be fair, CMS would be much cheaper if it were something every hospital offered, because you wouldn't be paying for people to be on deathbed standby.

Comment author: ZankerH 01 December 2014 10:28:46PM 6 points [-]

Why would an effective altruist (or anyone wanting their donations to have a genuine beneficial effect) consider donating to animal charities? Isn't the whole premise of EA that everyone should donate to the highest utilon/$ charities, all of which happen to be directed at helping humans?

Just curiosity from someone uninterested in altruism. Why even bring this up here?

Comment author: jkaufman 01 December 2014 11:05:00PM *  12 points [-]

We don't all agree on what a utilon is. I think a year of human suffering is very bad, while a year of animal suffering is nearly irrelevant by comparison, so I think charities aimed at helping humans are where we get the most utility for our money. Other people's sense of the relative weight of humans and animals is different, however, and some value animals about the same as humans or only somewhat below.

To take a toy example, imagine there are two charities: one that averts a year of human suffering for $200 and one that averts a year of chicken suffering for $2. If I think human suffering is 1000x as bad as chicken suffering and you think human suffering is only 10x as bad, then even though we both agree on the facts of what will happen in response to our donations, we'll give to different charities because of our disagreement over values.
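Worked out explicitly (the $200, $2, 1000x, and 10x figures are the hypothetical ones from the example; linear scaling of suffering averted with donations is assumed):

    # Which charity averts more human-equivalent suffering per dollar, under two
    # different views of how bad chicken suffering is relative to human suffering?
    human_cost = 200    # $ to avert one human-year of suffering
    chicken_cost = 2    # $ to avert one chicken-year of suffering

    for human_to_chicken_ratio in (1000, 10):
        human_per_dollar = 1 / human_cost
        chicken_per_dollar = (1 / human_to_chicken_ratio) / chicken_cost
        better = "human charity" if human_per_dollar >= chicken_per_dollar else "chicken charity"
        print(f"{human_to_chicken_ratio}x: {better}")
    # -> 1000x: human charity
    # -> 10x: chicken charity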

In reality, however, it's more complicated. The facts of what will happen in response to a donation are uncertain even in the best of times, but because a lot of people care about humans, the various ways of helping them are much better researched. GiveWell's recommendations are all human-helping charities because of a combination of "they think humans matter more" and "the research on helping humans is better". Figuring out how to effectively help animals is hard, and while ACE has good people working on it, they're a small organization with limited funding and their recommendations are still much less robust than GiveWell's.

Comment author: Torgo 01 December 2014 02:55:21PM 0 points [-]

Thanks.

At this point, I'm leaning towards CSER. Do you happen to know how it compares to other X-risk organizations in terms of room for more funding?

Comment author: jkaufman 01 December 2014 04:33:52PM 1 point [-]

I don't know, sorry! Without someone like GiveWell looking into these groups, individuals need to do a lot of research on their own. Write to them and ask? And then share back what you learn?

(Lack of vetting and the general difficulty of evaluating X-risk charities is part of why I'm currently not giving to any.)

Comment author: Torgo 24 November 2014 11:19:18AM 9 points [-]

I've long been convinced that donating all the income I can is the morally right thing to do. However, so far this has only taken the form of reduced consumption to save for donations down the road. Now that I have a level of savings I feel comfortable with and expect to start making more money next year, I no longer feel I have any excuse; I aim to start donating by the end of this year.

I’m increasingly convinced that existential risk reduction carries the largest expected value; however, I don’t feel like I have a good sense of where my donations would have the greatest impact. From what I have read, I am leaning towards movement building as the best instrumental goal, but I am far from sure. I’ll also mention that at this point I’m a bit skeptical that human ethics can be solved and then programmed into an FAI, but I also may be misunderstanding MIRI’s approach. I would hope that by increasing the focus on the existential risks of AI in elite/academic circles, more researchers could eventually begin pursuing a variety of possibilities for reducing AI risk.

At this point, I am primarily considering donating to FHI, CSER, MIRI, or FLI, since they are X-risk focused. However, I am open to alternatives. What are others' thoughts? Thanks a lot for the advice.

Comment author: jkaufman 01 December 2014 12:45:45PM 1 point [-]

If you think general EA movement building is what makes the most sense currently, then funding the Centre for Effective Altruism (the people who run GWWC and 80k) is probably best.

If you think X-risk specific movement building is better, then CSER and FLI seem like they make the most sense to me: they're both very new, and spreading the ideas into new communities is very valuable.

(And congratulations on getting to where you're ready to start donating!)
