Recommendations for donating to an anti-death cause

20 fowlertm 09 April 2014 02:56AM

I've recently had the bad luck of having numerous people close to me die. Though I've wanted to contribute to anti-aging and anti-death research for a while, I'm only now in the position of being stable and materially well-off enough to throw around semi-serious cash.

Who should I donate to? I don't want to do anything with cryonics yet; I haven't given cryonics enough thought to be convinced it'd be worth the money. But I was considering the Methuselah foundation.

Suggestions?

What are some science mistakes you made in college?

5 aarongertler 23 March 2014 05:28AM

Hello, Less Wrong!

This seems like a community with a relatively high density of people who have worked in labs, so I'm posting here.

I recently finished the first draft of something I'm calling "The Hapless Undergraduate's Guide to Research" (HUGR). (Yes, "HUGS" would be a good acronym, but "science" isn't specific enough.) Not sure if it will ever be released, or what the final format will be, but I'll need more things to put in it whatever happens.

Basically, this is meant to be an ever-growing collection of mistakes that new researchers (grad or undergrad) have made while working in labs. Hundreds of thousands of students around the English-speaking world do lab work, and based on my own experiences in a neuroscience lab, it seems like things can easily go wrong, especially when rookie researchers are involved. There's nothing wrong with making mistakes, but it would be nice to have a source of information around that people (especially students) might read, and which might help them watch out for some of the problems with the biggest pain-to-ease-of-avoidance ratios.

Since my experience is specifically in neuroscience, and even more specifically in "phone screening and research and data entry", I'd like to draw from a broad collection of perspectives. And, come to think of it, there's no reason to limit this to research assistants--all scientists, from CS to anthropology, are welcome!

So--what are some science mistakes you have made? What should you have done to prevent them, in terms of "simple habits/heuristics other people can apply"? Feel free to mention mistakes from other people that you've seen, as long as you're not naming names in a damaging way. Thanks for any help you can provide!

 

And here are a couple of examples of mistakes I've gathered so far:

--Research done with elderly subjects. On a snowy day, the sidewalk froze, so subjects couldn't be screened for a day, because no one thought to salt the sidewalks in advance. Lots of scheduling chaos.

--Data entry being done for papers with certain characteristics. Research assistants and principal investigator were not on the same page regarding which data was worth collecting. Each paper had to be read 7 or 8 times by the time all was said and done, and constructing the database took six extra weeks.

--A research assistant clamped a special glass tube too tightly, broke it, and found that replacements would take weeks to arrive... well, there may not be much of a lesson in that, but maybe knowing equipment is hard to replace could subconsciously induce more care.

Lifestyle interventions to increase longevity

120 RomeoStevens 28 February 2014 06:28AM

There is a lot of bad science and controversy in the realm of how to have a healthy lifestyle. Every week we are bombarded with new studies conflicting older studies telling us X is good or Y is bad. Eventually we reach our psychological limit, throw up our hands, and give up. I used to do this a lot. I knew exercise was good, I knew flossing was good, and I wanted to eat better. But I never acted on any of that knowledge. I would feel guilty when I thought about this stuff and go back to what I was doing. Unsurprisingly, this didn't really cause me to make any positive lifestyle changes.

Instead of vaguely guilt-tripping you with potentially unreliable science news, this post aims to provide an overview of lifestyle interventions that have very strong evidence behind them and concrete ways to implement them.

continue reading »

Proportional Giving

10 gjm 02 March 2014 09:09PM

Executive summary: The practice of giving a fixed fraction of one's income to charity is near-universal but possibly indefensible. I describe one approach that certainly doesn't defend it, speculate vaguely about a possible way of fixing it up, and invite better ideas from others.


Many of us give a certain fraction of our income to charitable causes. This sort of practice has a long history:

Deuteronomy 14:22 Thou shalt truly tithe all the increase of thy seed, that the field bringeth forth year by year.

(note that "tithe" here means "give one-tenth of") and is widely practised today:

GWWC Pledge: I recognise that I can use part of my income to do a significant amount of good in the developing world. Since I can live well enough on a smaller income, I pledge that from today until the day I retire, I shall give at least ten percent of what I earn to whichever organizations can most effectively use it to help people in developing countries. I make this pledge freely, openly, and without regret.

And of course it's roughly how typical taxation systems (which are kinda-sorta like charitable donation, if you squint) operate. But does it make sense? Is there some underlying principle from which a policy of giving away a certain fraction of one's income (not necessarily the traditional 10%, of course) follows?

The most obvious candidate for such a principle would be what we might call

Weighted Utilitarianism: Act so as to maximize a weighted sum of utility, where (e.g.) one's own utility may be weighted much higher than that of random far-away people.

But this can't produce anything remotely like a policy of proportional giving. Assuming you aren't giving away many millions per year (which is a fair assumption if you're thinking in terms of a fraction of your salary) then the level of utility-per-unit-money achievable by your giving is basically independent of what you give, and so is the weight you attach to the utility of the beneficiaries.

So suppose that when your income, after taking out donations, is $X, your utility (all else equal) is u(X), so that your utility per marginal dollar is u'(X); and suppose you attach weight 1 to your own utility and weight w to that of the people who'd benefit from your donations; and suppose their gain in utility per marginal dollar given is t. Then when your income is S you will choose your giving g to maximize u(S-g) + wtg, whose first-order condition is u'(S-g) = wt.

What this says is that a weighted-utilitarian should keep a fixed absolute amount S-g of his or her income, and give all the rest away. The fixed absolute amount will depend on the weight w (hence, on exactly which people are benefited by the donations) and on the utility per dollar given t (hence, on exactly what charities are serving them and how severe their need is), but not on the person's pre-donation income S.

(Here's a quick oversimplified example. Suppose that utility is proportional to log(income), that the people your donations will help have an income equivalent to $1k/year, that you care 100x more about your utility than about theirs, and that your donations are the equivalent of direct cash transfers to those people. Then u' = 1/income, so you should keep everything up to $100k/year and give the rest away. The generalization to other weighting factors and beneficiary incomes should be obvious.)

This argument seems reasonably watertight given its premises, but proportional giving is so well-established a phenomenon that we might reasonably trust our predisposition in its favour more than our arguments against. Can we salvage it somehow?

Here's one possibility. One effect of income is (supposedly) to incentivize work, and maybe (mumble near mode mumble) this effect is governed entirely by anticipated personal utility and not by any benefit conferred on others. Then the policy derived above, which above the threshold makes personal utility independent of effort, would lead to minimum effort and hence maybe less net weighted utility than could be attained with a different policy. Does this lead to anything like proportional giving, at least for some semi-plausible assumptions about the relationship between effort and income?

At the moment, I don't know. I have a page full of scribbled attempts to derive something of the kind, but they didn't work out. And of course there might be some better way to get proportional giving out of plausible ethical principles. Anyone want to do better?

Polling Thread

5 Gunnar_Zarncke 01 March 2014 11:57PM

This is the second installment of the Polling Thread.

This is your chance to ask that multiple choice question you always wanted to throw in. Get quantified feedback on your comments. Post fun polls.

There are some rules:

  1. Each poll goes into its own top level comment and may be commented there.
  2. You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
  3. Your poll should include a 'don't know' option (to avoid conflict with 2). I don't know whether we need to add a troll catch option here but we will see.

If you don't know how to make a poll in a comment look at the Poll Markup Help.


This is not (yet?) a regular thread. If it is successful I may post again. Or you may. In that case, do the following:

  • Use "Polling Thread" in the title.
  • Copy the rules.
  • Add the tag "poll".
  • Link to this Thread or a previous Thread.
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
  • Add a second top-level comment with an initial poll to start participation.

Self-Congratulatory Rationalism

51 ChrisHallquist 01 March 2014 08:52AM

Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.

continue reading »

Is love a good idea?

1 adamzerner 22 February 2014 06:59AM

I've searched around on LW for this question, and haven't seen it brought up. Which surprises me, because I think it's an important question.

I'm honestly not sure what I think. On one hand, love clearly leads to an element of happiness when done properly. This seems to be inescapable, probably because it's encoded in our DNA or something. But on the other hand, there are two things that really make me question whether or not love is a good idea.

1) I have a very reductionist viewpoint, on everything. So I always ask myself, "What am I really trying to optimize here, and what is the best way to optimize it?". When I think about it, I come to the conclusion that I'm always trying to optimize my happiness. The answer to the question of, "why does this matter?" is always, "because it makes me happy". So then, the idea of love bothers me, because you sort of throw rational thinking out the window, stop asking why something actually matters, and just decide that this significant other intrinsically matters to you. I question whether this type of thinking is optimal, and personally, whether or not I'm even capable of it.

2) It seems so obsessive, and I question whether or not it makes sense to obsess so much over one thing. This article actually explores the brain chemicals involved in love, and suggests that the chemicals are similar to those that appear in OCD.

Finally, there's the issue of permanence. Not all love is intended to be permanent, but a lot of the time it is. How can you commit to something so permanently? This makes me think of the mind projection fallacy. Perhaps people commit it with love. They think that the object of their desire is intrinsically desirable, when in fact it is the properties of this object that make it desirable. These properties are far from permanent (I'd go as far as to say that they're volatile, at least if you take the long view). So how does it make sense to commit to something so permanently?

So my take is that there is probably a form of love that is rational to take. Something along the lines of enjoying each other's company, and caring for one another and stuff, but not being blindly committed to one another, and being honest about the fact that you wouldn't do anything for one another, and will in fact probably grow apart at some point.

What do you guys think? 

Managing your time spent learning

11 JonahSinick 19 February 2014 06:48PM

This article is written for people who are looking for advice on prioritizing activities, in particular, what to spend time learning.

In thinking about how to budget your time, it's helpful to explicitly prioritize the activities that you engage in in terms of their relative importance, and to distinguish between what's important and what you find interesting. Sometimes we exaggerate the usefulness of interesting but only slightly useful activities in our minds, on account of wanting to believe that time spent on them is productive. If you think about how useful an activity is and how interesting it is separately, you're less likely to do this. It's helpful to consider the following four categories of activities:

  • Important and interesting: Do, and take your time. Get it right!
  • Important and not interesting: Do as much as necessary, and maybe a bit more; look into ways of overcoming procrastination. Also consider ways to make them more interesting.
  • Not important and interesting: Do only if you feel like it, don't try to press yourself, and consider substituting with activities that are interesting and important.
  • Not important and not interesting: Avoid.

More below

continue reading »

Dr. Jubjub predicts a crisis

50 Apprentice 10 January 2014 03:52PM

Dr. Jubjub: Sir, I have been running some calculations and I’m worried about the way our slithy toves are heading.

Prof. Bandersnatch: Huh? Why? The toves seem fine to me. Just look at them, gyring and gimbling in the wabe over there.

Dr. Jubjub: Yes, but there is a distinct negative trend in my data. The toves are gradually losing their slithiness.

Prof. Bandersnatch: Hmm, okay. That does sound serious. How long until it becomes a problem?

Dr. Jubjub: Well, I’d argue that it’s already having negative effects but I’d say we will reach a real crisis in around 120 years.

Prof. Bandersnatch: Phew, okay, you had me worried there for a moment. But it sounds like this is actually a non-problem. We can carry on working on the important stuff – technology will bail us out here in time.

Dr. Jubjub: Sir! We already have the technology to fix the toves. The most straightforward way would be to whiffle their tulgey wood but we could also...

Prof. Bandersnatch: What?? Whiffle their tulgey wood? Do you have any idea what that would cost? And besides, people won’t stand for it – slithy toves with unwhiffled tulgey wood are a part of our way of life.

Dr. Jubjub: So, when you say technology will bail us out you mean you expect a solution that will be cheap, socially acceptable and developed soon?

Prof. Bandersnatch: Of course! Prof. Jabberwock assures me the singularity will be here around tea-time on Tuesday. That is, if we roll up our sleeves and don’t waste time with trivialities like your tove issue.

Maybe it’s just me but I feel like I run into a lot of conversations like this around here. On any problem that won’t become an absolute crisis in the next few decades, someone will take the Bandersnatch view that it will be more easily solved later (with cheaper or more socially acceptable technology) so we shouldn’t work directly on it now. The way out is forward - let’s step on the gas and get to the finish line before any annoying problems catch up with us.

For all I know, Bandersnatch is absolutely right. But my natural inclination is to take the Jubjub view. I think the chances of a basically business-as-usual future for the next 200 or 300 years are not epsilon. They may not be very high but they seem like they need to be seriously taken into account. Problems may prove harder than they look. Apparently promising technology may not become practical. Maybe we'll have the capacity for AI in 50 years - but need another 500 years to make it friendly. I'd prefer humanity to plan in such a way that things will gradually improve rather than gradually deteriorate, even in a slow-technology scenario.

New (proposal for) monthly thread: Meetup Reports

17 bartimaeus 12 January 2014 03:50PM

If you had an interesting Less Wrong meetup recently, but don't have the time to write up a big report to post to Discussion, feel free to write a comment here.  Even if it's just a couple lines about what you did and how people felt about it, it might encourage some people to attend meetups or start meetups in their area.

If you have the time, you can also describe what types of exercises you did, what worked and what didn't.  This could help inspire meetups to try new things and improve themselves in various ways.

If you're inspired by what's posted below and want to organize a meetup, check out this page for some resources to get started!  You can also check FrankAdamek's weekly post on meetups for the week.

Tell us about your meetup!

View more: Prev | Next