All of bokov's Comments + Replies

I agree. My reason for posting the link here is as a reality check -- LW seems to be full of people firmly convinced that brain-uploading is the only viable path to preserving consciousness, as if the implementation "details" were an almost-solved problem.

3Baughn
Ah, no. I do agree that uploading is probably the best path, but B doesn't follow from A. Just because I think it's the best option, doesn't mean I think it'll be easy.

Finally, someone with a clue about biology tells it like it is about brain uploading

http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/

In reading this, I suggest being on guard against your own impulse to find excuses to dismiss the arguments presented, because they call into question some beliefs that seem to be deeply held by many in this community.

3username2
If they never studied those things, they would never figure out the answers to those objections. If they already knew about all these things, new studies wouldn't be needed. What else is there to study if not things we don't understand?

It depends. Writing a paper is not a realtime activity. Answering a free-response question can be. Proving a complex theorem is not a realtime activity; solving a basic math problem can be. It's a matter of calibrating the question difficulty so that it can be answered within the (soft) time limits of an interview. Part of that calibration is letting the applicant "choose their weapon". Another part of it is letting them use the internet to look up anything they need to.

Our lead dev has passed this test, as has my summer grad student. There are t... (read more)

Correct, this is a staff programmer posting. Not faculty or post-doc (though when/if we do open a post-doc position, we'll be doing coding tests for that also, due to recent experiences).

Having a track record of contributions on github/bitbucket/sourceforge/rforge would be a very strong qualification. However, few applicants have this. It's a less stringent requirement that they at least show that they can... you know... program.

0IlyaShpitser
Programming is not a real time activity. Almost anything would be better than a real time test, maybe a provisional hire, or a take home, or asking people to code something in a few hours.

it's not strictly an AI problem-- any sufficiently rapid optimization process bears the risk of irretrievably converging on an optimum nobody likes before anybody can intervene with an updated optimization target.

individual and property rights are not rigorously specified enough to be a sufficient safeguard against bad outcomes even in an economy moving at human speeds

in other words the science of getting what we ask for advances faster than the science of figuring out what to ask for

(Note that transforming a sufficiently well specified statistical model into a lossless data compressor is a solved problem, and the solution is called arithmetic encoding - I can give you my implementation, or you can find one on the web.)

The unsolved problems are the ones hiding behind the token "sufficiently well specified statistical model".

That said, thanks for the pointer to arithmetic encoding, that may be useful in the future.
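For my own notes, here is a minimal sketch of the idea (my assumptions: a fixed three-symbol model, and Python floats rather than the integer renormalization a real coder would use, so it only handles short messages):

```python
# Minimal sketch of arithmetic encoding against a fixed symbol model.
# Python floats limit this to short messages; production coders use
# integer arithmetic with renormalization.
from collections import OrderedDict

def intervals_from(probs):
    """Assign each symbol its [low, high) slice of [0, 1)."""
    out, low = {}, 0.0
    for sym, p in probs.items():
        out[sym] = (low, low + p)
        low += p
    return out

def encode(message, probs):
    """Narrow [0, 1) once per symbol; any number in the final interval encodes the message."""
    low, high = 0.0, 1.0
    iv = intervals_from(probs)
    for sym in message:
        s_low, s_high = iv[sym]
        width = high - low
        low, high = low + width * s_low, low + width * s_high
    return (low + high) / 2

def decode(code, length, probs):
    """Invert the narrowing: at each step, find the sub-interval the code falls in."""
    iv = intervals_from(probs)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in iv.items():
            if s_low <= code < s_high:
                out.append(sym)
                code = (code - s_low) / (s_high - s_low)
                break
    return "".join(out)

model = OrderedDict([("a", 0.5), ("b", 0.25), ("c", 0.25)])  # the "statistical model"
assert decode(encode("abca", model), 4, model) == "abca"
```

The better the model's probabilities match the data, the shorter the interval description gets -- which is exactly why the hard part is the model, not the coder.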

The point isn't understanding Bayes theorem. The point is methods that use Bayes theorem. My own statistics prof said that a lot of medical people don't use Bayes because it usually leads to more complicated math.

To me, the biggest problem with Bayes theorem or any other fundamental statistical concept, frequentist or not, is adapting it to specific, complex, real-life problems and finding ways to test its validity under real-world constraints. This tends to require a thorough understanding of both statistics and the problem domain.
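To make that concrete with a toy example (all numbers made up for a hypothetical screening test), even the most basic clinical application of Bayes' theorem already runs against intuition:

```python
# Toy Bayes' theorem calculation for a hypothetical screening test
# (all numbers made up): P(disease | positive test).
prevalence = 0.01    # P(disease)
sensitivity = 0.95   # P(positive | disease)
specificity = 0.90   # P(negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.088 despite the "95% accurate" test
```

And that is the easy part; real problems rarely hand you clean sensitivities and prevalences to plug in.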

That's not the ski

... (read more)

Also, I'm not sure if this is your intention, but it seems to me that the goal of spending 20 years to slow or prevent aging is a recipe for wasting time. It's such an ambitious goal that so many people are already working on, any one researcher is unlikely to put a measurable dent in it.

In the last five years the NIH (National Institutes of Health) has never spent more than 2% of its budget on aging research. To a first approximation, the availability of grant support is proportional to the number of academic researchers, or at least to the amount of ... (read more)

Secondly, you probably shouldn't worry about pursuing a project in which your already-collected data is useless, especially if that data or similar is also available to most other researchers in your field (if not, it would be very useful for you to try to make that data available to others who could do something with it). You're probably more likely to make progress with interesting new data than interesting old data.

This is 'new' data in the sense that it is only now becoming available for research purposes, and if I have my way, it is going to be in ... (read more)

1BigT
I wasn't arguing whether aging research should receive more attention, just that it receives enough to make a single researcher a drop in the bucket, but you might not be an average researcher. I'm interested in knowing: how likely do you think it is that the life expectancy of some people will be measurably lower if you work as a used-car salesman for the next 20 years rather than as a researcher?

I'm not suggesting that aging isn't a worthwhile area of research, just that it may be counterproductive for you to be trying to make all the work you do for the next 20 years have some direct bearing on aging. When I say a project is ambitious, I mean that it is very unlikely to return good results, but that the impact of those good results would be enormous. Developing a large number of drugs to increase the life expectancies of terminally ill cancer patients is less ambitious than trying to cure their cancer.

You seem to be thinking that we have made so little progress on aging because it hasn't received enough attention. What if it's the other way around, and so few researchers tackle aging head-on because it's hard to make meaningful progress on? I think that for any researcher who wants to provide mechanistic insights into aging, or figure out how the brain works, or create a machine with human-like general intelligence, there's a large incentive for success, but almost inevitably such researchers need shorter-term results to keep themselves going. If there simply aren't any shorter-term opportunities to make meaningful progress, they run the risk of working on something that seems related to the problem they set out to solve, but in reality contributes only shallowly to their understanding of it. This is how you end up with so many attempts to better understand the brain through brain scans or make progress in machine intelligence by studying an absurdly specific situation. There were probably more meaningful things those researchers could have been doing that did

Great idea! Here's how I can convert your prospective experiment into retrospective ones:

Comparing hazard functions for individuals with diagnoses of infertility versus individuals who originally enter the clinic record system due to a routine checkup.
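A rough sketch of what that comparison might look like with the open-source lifelines package (the file and column names below are placeholders for whatever the actual clinic extract provides):

```python
# Hypothetical sketch using the lifelines package: a Cox model comparing
# hazards for patients whose index visit was an infertility diagnosis
# vs. a routine checkup. File and column names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ehr_cohort.csv")  # assumed columns: time_to_event, died, index_visit
df["infertility_index"] = (df["index_visit"] == "infertility").astype(int)

cph = CoxPHFitter()
cph.fit(df[["time_to_event", "died", "infertility_index"]],
        duration_col="time_to_event", event_col="died")
cph.print_summary()  # hazard ratio for the infertility-index cohort vs. checkup cohort
```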

0Andy_McKenzie
This is interesting, but a clear confound is that people who enter for infertility are likely to be more conscientious, which correlates with lifespan.

Thanks for reminding me about SENS and de Grey, I should email him. I should reach out to all the smart people in the research community I know well enough to randomly pester and collect their opinions on this.

4Gunnar_Zarncke
Please do. And tell us the results. Regarding their replies: I wonder whether you should take the recommendations that appear most frequently, or whether it is a better idea to take the most highly rated single recommendation.

People gain skills by working on hard problems, so it doesn't seem necessary for you to take additional time to explicitly hone your skill set before starting on any project(s) that you want to work on.

The embarrassing truth is I spent so much time cramming stuff into my brain while trying to survive in academia that until now I haven't really had time to think about the big picture. I just vectored toward what at any given point seemed like the direction that would give me the most options for tackling the aging problem. Now I'm finally as close to an optimal starting point as I can reasonably expect and the time has come to confront the question: "now what"?

3Fluttershy
I completely understand and sympathize with that feeling. I am about to graduate with an undergraduate degree in chemistry, and it was not until earlier this semester that I began to realize that I still don't know what type of career path I want to pursue after doing graduate work in operations research, given that I am somewhat more inclined to go to graduate school than I am to go directly into industry.

So, for a retrospective approach with existing data, I could try to find a constellation of proxy variables in the ICD9 V-codes and maybe some lab values suggestive of basically healthy patients who consume a lower-than-typical amount of calories. Not in a very health-conscious part of the country though, so unlikely that a large number of patients would do this on purpose, let alone one specific fasting strategy.
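As a rough illustration of what such a proxy search might look like (the specific V-codes and the lab cutoff below are placeholders, not a validated phenotype):

```python
# Rough sketch of flagging "healthy, possibly calorie-restricted" patients
# from coded EHR data. The V-codes and lab threshold are placeholders,
# not a validated phenotype.
import pandas as pd

dx = pd.read_csv("diagnoses.csv")   # assumed columns: patient_id, icd9
labs = pd.read_csv("labs.csv")      # assumed columns: patient_id, test, value

healthy_vcodes = {"V70.0", "V65.3"}  # routine exam; dietary counseling (placeholders)
healthy_ids = set(dx.loc[dx["icd9"].isin(healthy_vcodes), "patient_id"])

# IGF-1 tends to run low under caloric restriction; the cutoff here is a guess.
low_igf1_ids = set(labs.query("test == 'IGF1' and value < 100")["patient_id"])

candidate_cohort = healthy_ids & low_igf1_ids
print(f"{len(candidate_cohort)} candidate patients")
```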

Now, something I could do is team up with a local dietician or endocrinologist and recruit patients to try calorie restriction.

0[anonymous]
Here is what seems like a pretty good overview of intermittent fasting: http://easacademy.org/trainer-resources/article/intermittent-fasting
0Lumifer
Um, calorie restriction in the necessary amounts is quite unpleasant and are you willing to commit to a multi-decade trial anyway..?
5gwern
Why not run a pilot on yourself first? The nice thing about IF is that in many forms, it's dead easy: you eat nothing one day, twice as much the next. Record some data on yourself for a few months (weight? blood glucose*? a full blood panel?), and you'll have solid data about your own reactions to IF and a better idea what to look for.

Personally, I would be surprised if you could do worthwhile research on IF by mining research records: 'eating food every day' is nigh-universal, and most datasets are concerned entirely with what people eat, not when. You might have to get creative and do something like look for natural experiments involving fasting, such as Ramadan.

* And don't write off blood glucose as too painful or messy for non-diabetics to measure! Blood glucose strip testing turns out to be easier than I thought. I used up a package recently: while I nearly fainted the first time as my heart rate plunged into the mid-50s because of my blood phobia, over the course of 10 strips I progressed to not minding and my heart rate hardly budging.
2James_Miller
No. With intermittent fasting your total calorie consumption isn't necessarily below average; rather, you have periods of time in which you either don't eat or eat only fats. I do what's called bulletproof intermittent fasting. One unexpected result is that I don't get colds anymore, I think because of autophagy. I used to get about four a year, and I have been doing the fasting for a little over two years, so this result is significant.

I should clarify something: the types of problems I can most efficiently tackle are retrospective analysis of already-collected data.

Prospective clinical and animal studies are not out of the question, but given the investment in infrastructure and regulatory compliance they would need, these would have to be collaborations with researchers already pursuing such studies. This is on the table, but does not leverage the clinical data I already have (unless, in the case of clinical researchers, they are already at my institution or an affiliated one).

My id... (read more)

1BigT
I have some philosophical objections to your approach. I'm not sure it's such a good idea to focus exclusively on research questions that are explicitly aging-related, just because you'll be limiting yourself to a subset of the promising ideas out there.

Secondly, you probably shouldn't worry about pursuing a project in which your already-collected data is useless, especially if that data or similar is also available to most other researchers in your field (if not, it would be very useful for you to try to make that data available to others who could do something with it). You're probably more likely to make progress with interesting new data than interesting old data.

Also, I'm not sure if this is your intention, but it seems to me that the goal of spending 20 years to slow or prevent aging is a recipe for wasting time. It's such an ambitious goal that so many people are already working on, any one researcher is unlikely to put a measurable dent in it. It's like getting a math PhD and saying "Ok, now I'm going to spend the rest of my life trying to help solve the Riemann Hypothesis." Especially when you're just starting out, you may be better served working on the most promising projects you can find in your general area of interest, even if their goals are less ambitious.

P.s. Sorry if a lot of what I've said is naive; I've never worked in academia.
1Fluttershy
Okay, neat! I have an idea, and it might be kind of farfetched, or not amenable to the types of analyses you are best at doing, but I'll share it anyways. Here goes.

Given that there is a tradeoff between health and reproduction, I wonder if you could increase the expected lifespan of a healthy human male by having him take anti-androgens on a regular basis. We already know that human eunuchs live longer than intact men. I suspect that most guys wouldn't be willing to become eunuchs even if they valued having a long lifespan very highly, but being able to increase one's expected lifespan by decreasing one's testosterone levels while still remaining intact might be something that a few males would consider, if such a therapy were proven to be effective.

Anyways, after taking 10 minutes to look around on Google Scholar, I wasn't able to find any papers suggesting that taking anti-androgens would be an effective anti-aging measure, so maybe this would be a viable project for someone to work on. As an aside, I don't know which mechanisms cause castrated men to live longer, but this seems relevant to the question of why/how castrated men live longer.

If we are in an environment of open conversation and I say something that brings up an emotional trauma in another person and that person doesn't have the self-awareness to know why he's feeling unwell, that's not a good time to leave him alone.

?! Depends. If you don't understand that person intimately or aren't experienced at helping less self-aware (aka neurotypical) people process emotional trauma, it's probably a very good time to leave him alone. Politely.

3ChristianKl
You don't need to understand another person to help them. Even if you do understand another person well enough to know what triggered them, telling them can be invasive and therefore needs some amount of implicit or explicit permission.

Being there and being a stable anchor is often better than trying to interfere with their state. That means, if you are mentally flexible about changing your state, opening up on your side and allowing the emotions to rise in you to a level that's similar to the other person's, but calmer. If you are not that flexible but can meditate, that's usually a good state to go to.

For me the only reason to leave is if I'm myself not in a stable emotional place. But I can certainly understand if other people generally don't see themselves in a position to help.
2TheOtherDave
Interesting. My default move would be to sit quietly in their presence and pay attention, rather than leave. Why would leaving be better?

I was tempted to vote "makes no sense at all". I did not because I've had far too many experiences where I dismiss a colleague's idea as being the product of muddled thinking only to later realize that a) the idea makes sense, they just didn't know how to express it clearly or b) the idea makes practical sense but my profession chooses to sweep it under the rug because it's too inconvenient. On Stackoverflow and LW I see the same tendency to mistake hard/tedious problems for meaningless problems and "solve" the problem by prematurely cl... (read more)

Come to think of it, "Red/Blue makes no sense at all" is not even a valid answer to the question. The question did not ask whether it made sense.

"Red/Blue makes no sense at all" means "I reject the framework within which you are asking this question".

Come to think of it, "Red/Blue makes no sense at all" is not even a valid answer to the question. The question did not ask whether it made sense.

There's such a thing as a question that rests on invalid assumptions — the classic example being "Have you stopped beating your wife?" when addressed to someone who never did (or never married a woman). As in that case, questions can be used to sneak in connotations — the classic example is asked by a politician to his rival in a public debate, for the purpose of planting suspicion. The sage... (read more)

This has taught me that I find it more intuitive to think in terms of conditional probabilities than marginal probabilities.
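A toy example of the relationship, with made-up numbers:

```python
# Toy example (made-up numbers): a marginal probability is just the
# conditionals averaged over their conditions (law of total probability).
p_age = {"young": 0.6, "middle": 0.3, "old": 0.1}             # P(age group)
p_dx_given_age = {"young": 0.01, "middle": 0.05, "old": 0.2}  # P(diagnosis | age group)

p_dx = sum(p_age[g] * p_dx_given_age[g] for g in p_age)
print(p_dx)  # 0.041
```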

The tough part will be guarding against Goodhart's Law. I suspect that the current system of publications and grant money as an indicator of ability started out as an attempt to improve the efficiency of scientific progress and has by now been thoroughly Goodharted.

As Lumifer points out, tenure was intended to give productive scientists some protected time so they could think. However, the number of hoops you have to jump through on the way to getting there puts you through the opposite of protected time, so by the time you get tenure you've become jaded and cynical and acquired some habits useful for academic survival but harmful to academic excellence.

1Stefan_Schubert
I think you implicitly paint too rosy a picture of researchers' psychology. In many cases, systems that lack sufficiently strong incentives decrease researchers' productivity because they no longer have the same need to publish. Most researchers are tempted by things other than science as well, and if they are allowed to get away with it they might prioritize those other things. So the current system is not quite as flawed as many of its detractors claim. We do need incentives, but they need to be tailored in a better way. Specifically, they need to be tailored so that we get as much as possible out of our most able researchers.

I can offer advice on statistical analysis of data (frequentist, alas, still learning Bayesian methods myself so not ready to advise on that). Unfortunately, right now I have too little spare time to actually analyze it for you, but I can explain to you how you can tackle it using open source tools and try to point you toward further reading focused on the specific problem you're trying to solve. In the medium-future I hope to have my online data analysis app stable enough to post here, but this is not looking like the month when it will happen.

I can proba... (read more)

6gwern
Might want to look at http://lesswrong.com/r/discussion/lw/kez/r_support_group_and_the_benefits_of_applied/ then.

The current 500-year window needs to be VERY typical if it's the main evidence in support of the statement that "even with no singularity technological advance is a normal part of our society".

This is like someone in the 1990s saying that constantly increasing share price "is a normal part of Microsoft".

I think technological progress is desirable and hope that it will continue for a long time. All I'm saying is that being overconfident about future rates of technological progress is one of this community's most glaring weaknesses.

3[anonymous]
The sheer number of ways the last 500 years are atypical in ways that will never be repeated does boggle the imagination.
0Alsadius
Microsoft quit growing because of market saturation and legal challenges. The former seems unlikely with regards to technology, and the latter nearly impossible. It is possible for tech to stop growing, yes, but the cause of it would need to be either a massive cultural shift across most of the world, or a civilization-collapsing event. It took a very long time to develop a technological mindset, even with its obvious superiority, so I would expect it to take even longer to eliminate it.

Take any 500-year window that contains the year 2014. How typical would you say it is of all 500-year intervals during which tool-using humans existed?

0Alsadius
How typical does it need to be? We generally discount data more the further away from the present it is, for exactly this reason.

even with no singularity technological advance is a normal part of our society

Depends what time scale you're talking about.

1[anonymous]
And on what you mean by 'advance'.
0ChristianKl
The time frames mentioned in the post were 50, 120, 200, 300 and 500 years. Over all of those scales I would expect significant technological advance.

It would look like a failure to adequately discount for inferential chain length.
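A toy illustration of why that matters (the per-step reliability is a made-up number):

```python
# Toy illustration: confidence in a conclusion decays geometrically
# with the length of the inferential chain.
p_step = 0.9  # assumed probability that each inferential step is sound
for n in (1, 5, 10, 20):
    print(f"{n:>2} steps: P(all sound) = {p_step ** n:.3f}")  # 0.900, 0.590, 0.349, 0.122
```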

having people around who give a damn about you

Yes, exactly. I'd add:

...because the best cryopreservation arrangements won't do you much good if nobody notices you died until the neighbors complain about the smell.

Somewhere between an extended family and a kindergarten; like a small private kindergarten where the parents are close friends with the caretakers.

That, right there, is one of my fondest dreams. To get my tiny scientists out of the conformity-factory and someplace where they can flourish (even more). Man, if this was happening in my town, in a heartbeat I'd rearrange my work schedule to spend part of the week being a homeschooler.

dealing with resource scarcity -- and you keep on bringing up how markets don't solve violence and pollution...

Well, they should provide a constructive alternative to the former, and the latter is isomorphic to a scarcity of non-polluted air/water/land.

Here's what I expect someone who seriously believed that markets will handle it would sound like:

"Wow, overpopulation is a threat? Clearly there are inefficiencies the rest of the market is too stupid to exploit. Let's see if I can get rich by figuring out where these inefficiencies are and how to exploit them."

Whereas "the markets will handle it, period, full stop" is not a belief, it's an excuse.

0Lumifer
Here is what I sound like: "Wow, overpopulation is a threat? I don't believe it. Show me."

it's only a question of how many planets we consume before that happens.

Hopefully more than one. There are a lot of underutilized planets out there, even within our own solar system.

The choke point in our Fritz Haber/Norman Borlaug/Edward Jenner pipeline is not the amount of science education out there. It's a combination of the low-hanging fruit being picked, insufficient investment in novel approaches and not enough geniuses.

Very true. Each year we produce thousands of new Ph.D.s and import thousands more, while slowly choking off funding for basic research, so they languish in a post-doc holding pattern until many of them give up and go do something less innovative but safer.

Alternatively, tutoring is free, and with a similar level of time cost to raising your own children you could tutor a lot of others.

Yes! The school system in my state spends far more on remedial education than on gifted-and-talented (GT) programs. Education is seen as a status symbol instead of a costly investment that should be allocated in a manner that gives the highest returns (in terms of innovation, prosperity, and sane policy decisions).

All of these "what you should do if you are a utilitarian" articles should start with "Assuming you are a being for whom utility matters roughly equally regardless of who experiences it..."

Yes! Thank you for articulating in one sentence what I haven't been able to in a dozen posts.

6juliawise
Isn't that what "utilitarian" means?
0buybuydandavis
Yeah, this is the winner. "Well being" is nebulous enough, but without specifying the relative weighting, it means very little, particularly with the "weight everyone equally" variant finding such strong support, and being so at odds with what people actually do.

You should repeat this at the top level. This changes things quite a bit.

1jefftk
Done.

We should be careful to make the distinction between jkaufman's own opinions and those of the paper they posted a link to.

By the way, it's refreshing to see people be honest with themselves and others about what they value instead of the posturing/kool-aid one often sees around this topic.

0Cyan
Suppose your brain has ceased functioning, been recoverably preserved and scanned, and then revived and copied. The two resulting brains are indistinguishable in the sense that for all possible inputs, they give identical outputs. (Posit that this is a known fact about the processes that generated them in their current states.) What exactly is it that makes the revived brain you and the copied brain not-you?

A witty quote from a great book by a brilliant author is awesome, but does not have the status of any sort of law.

What do we mean by "normality"? What you observe around you every day? If you are wrong about the unobserved causal mechanisms underlying your observations, you will make wrong decisions. If you walk on hot coals because you believe God will not let you burn, the normality that quantum mechanics adds up to diverges enough from your normality that there will be tangible consequences. Are goals part of normality? If not, they certainl... (read more)

0Cyan
My vague notion is that if your goals don't have ramifications in the realm of the normal, you're doing it wrong. If they do, and some aspect of your map upon which goals depend gets altered in a way that invalidates some of your goals, you can still look at the normal-realm ramifications and try to figure out if they are still things you want, and if so, what your goals are now in the new part of your map.

Keep in mind that your "map" here is not one fixed notion about the way the world works. It's a probability distribution over all the ways the world could work that are consistent with your knowledge and experience. In particular, if you're not sure whether "patternists" (whatever those are) are correct or not, this is a fact about your map that you can start coping with right now. It might be that the Dark Lords of the Matrix are just messing with you, but really, the unknown unknowns would have to be quite extreme to totally upend your goal system.

So, looking at shminux' post above, you would suggest mandatory insemination of only some fertile females and reducing subsistence to slightly above the minimum acceptable caloric levels..?

I believe that deliberately increasing population growth is specifically the opposite direction of the one we should be aiming toward if we are to maximize any utility function that penalizes die-offs, at least as long as we are strictly confined to one planet. I was just more interested in the more general point shminux raised about repugnant conclusions and wanted t... (read more)

I don't understand the response. Are you saying that the reason you don't have an egocentric world view and I do is in some way because of kin selection?

0Cyan
You said, And why do people generally care more about their families than about other people's families? Kin selection.

How about this as a rule of thumb, pending something more formal:

If a particular reallocation of resources/priority/etc. seems optimal, look for a point in the solution space between there and the status quo that is more optimal than the status quo, go for that point, and re-evaluate from there.
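A toy sketch of what I mean (the utility function and step size are invented purely for illustration):

```python
# Toy sketch of the rule of thumb: step from the status quo toward a
# proposed optimum, keep each step only if it actually improves utility,
# and re-evaluate from the new point.
import numpy as np

def step_toward(status_quo, proposed_optimum, utility,
                step_fraction=0.1, max_steps=50):
    current = np.asarray(status_quo, dtype=float)
    target = np.asarray(proposed_optimum, dtype=float)
    for _ in range(max_steps):
        candidate = current + step_fraction * (target - current)
        if utility(candidate) <= utility(current):
            break  # moving toward the "optimum" stopped helping; stay put
        current = candidate  # re-evaluate from here
    return current

# A utility that looks like it peaks at x = 10 but collapses past x = 6
# (a stand-in for a repugnant region the naive optimum ignores).
u = lambda x: -(x[0] - 10) ** 2 if x[0] <= 6 else -1e9
print(step_toward([0.0], [10.0], u))  # stops short of the collapse, near x = 5.7
```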

0Lumifer
So, looking at shminux' post above, you would suggest mandatory insemination of only some fertile females and reducing subsistence to slightly above the minimum acceptable caloric levels..? If your seemingly-optimal point is repugnant, why would you want to go in that direction anyway?

You might be right. I hope not, though, because that means it will take even longer to escape from the planetary cycle of overshoot and collapse.

Then again, it's good to be ready for the worst and be pleasantly surprised if things turn out better than expected.

Once we've dealt with the mass starvation, vast numbers of deaths from malaria, horrendous poverty, etc., then we can start paying a lot more attention to awesomeness.

What if, for practical purposes, there is an inexhaustible supply of suck? What if we can't deal with it once and for all and then turn our attention to the fun stuff?

So, judging from the reception of my post about the Malthusian Crunch certain Wrongians sense this and have gone into denial (perhaps, if they're honest with themselves, privately admitting the hope that if they ignore the ... (read more)

4gjm
Well, that would be very bad, and it might mean that an altruist of the sort I describe would in fact think the best course of action would be relentless suck-mitigation, for ever. A world of relentless suck-mitigation wouldn't be a lot of fun, but if you're faced with an inexhaustible supply of suck it might be the best you could do. [EDITED to add: I see you've been downvoted. For what it's worth, that wasn't me.]

But, in any case, I would expect this to lead to the Malthusian scenario we should be trying to avoid, not an overall maximization of all humans who have ever lived.

What if the reason repugnant conclusions come up is that we only have estimates of our real utility functions which are an adequate fit over most of the parameter space but diverge from true utility under certain boundary conditions?

If so, either don't feel shame about having to patch your utility function with one that does fit better in that region of the parameter space... or aim for a non-maximal but near-maximum utility that is far enough from the boundary that it can be relied on.
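A toy illustration of the kind of divergence I mean (the log "true utility" and the quadratic fit are arbitrary stand-ins):

```python
# Toy illustration: a proxy utility fit on "normal" conditions matches the
# true utility in the interior but diverges near the boundary.
import numpy as np

true_u = lambda x: np.log(x)                      # pretend this is the real utility
x_interior = np.linspace(0.5, 5, 100)             # the region we usually observe
proxy = np.poly1d(np.polyfit(x_interior, true_u(x_interior), 2))

for x in (1.0, 3.0, 0.05):                        # two interior points, one near the boundary
    print(f"x={x}: true={true_u(x):+.2f}, proxy={proxy(x):+.2f}")
```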

2Shmi
Right, this is the issue Eliezer discussed at length in various posts; The Hidden Complexity of Wishes is one of the most illuminating. A utility-maximizing genie will always find a cheap way to maximize a given utility by picking one of the "boundaries" the wisher never thought of. And given that there are no obvious boundaries to begin with to stay "far enough from", it would simply pick a convenient point in the parameter space.

So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder?

Nothing so melodramatic, but I wouldn't use them. UNLESS they were in fact manipulating my wave function directly somehow causing my amplitude to increase in one place and decrease in another. Probably not what the screenplay writers had in mind, though.

But you are OK with a gradual replacement of your brain, just not with a complete one?

Maybe even a complete one eventually. If the vast majority of my cognition has migrated to the synthet... (read more)

1Shmi
Ah, ok: So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder? But you are OK with a gradual replacement of your brain, just not with a complete one? How fast would the parts need to be replaced to preserve this "experience of continuity"? Do drugs which knock you unconscious break continuity enough to be counted as making you into not-you? Basically, what I am unclear on is whether your issue is continuity of experience or cloning.

That should be a new discussion.

You claimed that people ignore or outright oppose trying to accelerate the rate of technological advancement. Could it be instead that nobody has any idea how to do it?

Very, very possible.

An independent settlement seems quite beyond the possibilities of present and foreseeable technology.

I'm not saying it's easy. I guess I calibrate my concept of foreseeable technology as: sleeker, faster mobile devices being trivially predictable, fusion as possible, and general-purpose nanofactories as speculative.

On that scale,... (read more)

0V_V
If the permanent Martian settlements are to do their own manufacturing, it seems that they would need both fusion power and nanofactories, or something equivalent. The type of energy sources and resource ores we use on Earth for manufacturing would probably not be available in any sufficient amount.

Well, that for starters.

Then there is the drive to ensure the survival and happiness of your children. I have found that this increases with age. If you don't have that drive yet, simply wait. There's a good chance you will be surprised to find that you develop one, as I have. I imagine this drive undergoes another burst when one's children have children.

Then there is foreclosing on the possibility of the human race reaching the stars. If that doesn't excite you, what does? Sports? Video games? I'm sure those will also spread through the galaxy if we d... (read more)

0Creutzer
I assume you mean people's drive to have children, since nobody talked about extinction through killing anyone off. But given what you said (that maximising living people's happiness doesn't matter if this is followed by extinction), one would expect a more principled objection to the latter event. No, there isn't, but that is not the point here. I'm not committing the typical mind fallacy in denying that many people have one. I don't care about spreading anything through the galaxy, actually. I wonder how much the average person does. (I immensely admire certain works of art, for example, and yet the thought that nobody should be there to enjoy them, or create any more like them, does not bug me in the slightest.)

For the most part, my emphasis is not on limiting population directly. I do believe that charitable efforts have the responsibility to mitigate the risk of a demographic trap in the areas they serve. But I think getting anybody who matters to listen is a lost cause.

My emphasis is on being conscious of the fact that the reason we're still alive and prospering is that we are continuously buying ourselves more time with technology and use this insight to motivate greater investment in research and development. This seems like an easier sell.

0ChristianKl
The Bill and Melinda Gates Foundation accounts for a good share of charity spending. They started out very focused on the issue of reducing population growth. They spent millions on the issue and have seen the empirical effects of their projects. To the extent that they no longer listen to the kind of arguments you are making, it is because they updated in the face of empirical evidence.

Here is a thought experiment that might not be a thought experiment in the foreseeable future:

Grow some neurons in vitro and implant them in a patient. Over time, will that patient's brain recruit those neurons?

If so, the more far-out experiment I earlier proposed becomes a matter of scaling up this experiment. I'd rather be on a more resilient substrate than neurons, but I'll take what I can get.

I'm betting that the answer to this will be "yes", following a similar line of reasoning that Drexler used to defend the plausibility of nanotech: the ... (read more)

1TheOtherDave
Yes, I agree with all of this.