Less Wrong is a community blog devoted to refining the art of human rationality.
I think this is a nice introduction to Transhumanism, inspired by the style of many well-known YouTube educators. Given how much LessWrong likes these ideas, I thought it was worth sharing.
The group also has a Kickstarter here to fund an entire series of videos of this kind. I think they deserve to be backed, and LW can probably influence the video creators in a useful/helpful way.
The researchers showed monkeys specific images and then trained them to select those images out of a larger set after a time delay. They recorded the monkeys' brain function to determine which signals were important. The experiment tests the monkey's performance on this task in different cases, as described by io9:
Once they were satisfied that the correct mapping had been done, they administered cocaine to the monkeys to impair their performance on the match-to-sample task (seems like a rather severe drug to administer, but there you have it). Immediately, the monkeys' performance fell by 20%.
It was at this point that the researchers engaged the neural device. Specifically, they deployed a "multi-input multi-output nonlinear" (MIMO) model to stimulate the neurons that the monkeys needed to complete the task. The inputs of this device monitored such things as blood flow, temperature, and the electrical activity of other neurons, while the outputs triggered the individual neurons required for decision making. Taken together, the i/o model was able to predict the output of the cortical neurons — and in turn deliver electrical stimulation to the right neurons at the right time.
And incredibly, it worked. The researchers successfully restored the monkeys' decision-making skills even though they were still dealing with the effects of the cocaine. Moreover, when they repeated the experiment under normal conditions, the monkeys' performance improved beyond the 75% proficiency level shown earlier. In other words, a kind of cognitive enhancement had occurred.
This research is a remarkable follow-up to research that was done in rodents last year.
I might need a better title (It has now been updated), but here goes, anyway:
I've been considering this for a while now. Suppose we reach a point where we can live for centuries, maybe even millennia; how do we keep things in balance? Even assuming we're as efficient as possible, there's a limit to the resources we can have, which means an artificial limit on the number of people who could exist at any given moment, even if we explore what we can of the galaxy and use every available resource. A stable population would require roughly equal rates of births and deaths.
How would this be achieved? By somehow limiting lifespans, or births, assuming the technology is available to a majority? Or would it lead to a gene-spliced, technologically augmented, and essentially immortal elite that the poor, unaugmented ones would have no chance of measuring up to? I'm sorry if this has already been considered; I'm very uneducated on the topic. If it has, could someone link an analysis of lifespans and the like?
It was Yudkowsky's Fun Theory sequence that inspired me to undertake the work of writing a novel on a singularitarian society... however, there are gaps I need to fill, and I need all the help I can get. It's mostly book recommendations that I'm asking for.
One of the things I'd like to tackle in it would be the interactions between the modern, geeky Singularitarianisms and Marxism, which I hold to be somewhat prototypical in that sense, as well as other utopianisms; and contrasting them with more down-to-earth ideologies and attitudes, by examining the seriously dangerous bumps of the technological transition between "baseline" and "singularity". But I need to do a lot of research before I'm able to write anything good: if I'm not going to have any original ideas, at least I'd like to serve my readers with a collection of well-researched, solid ones.
So I'd like to have everything that is worth reading about the Singularity, specifically the Revolution it entails (in one way or another) and the social aftermath. I'm particularly interested in the consequences of the lag in the spread of the technology from the wealthy to the baselines, and the potential for baseline oppression and other continuations of current social imbalances, as well as suboptimal distribution of wealth. After all, according to many authors, we've had the means to end war, poverty, famine, and most infectious diseases since the sixties, and it's just our irrational methods of wealth distribution that stand in the way. That is, supposing the commonly alleged ideal of total lifespan and material welfare maximization for all humanity is what actually drives the way things are done. But even under other premises and axioms, there's much that can be improved and isn't, thanks to basic human irrationality, which is what we combat here.
Also, yes, this post makes my political leanings fairly clear, but I'm open to alternative viewpoints and actively seek them. I also don't intend to write any propaganda as such, just to examine ideas and scenarios for the sake of writing a compelling story with wide audience appeal. The idea is to raise awareness of the Singularity as something rather imminent ("Summer's Coming"), and to prompt (or at least prepare) ordinary people to question its wonders and dangers, rationally.
It's a frighteningly ambitious, long-term challenge, I am terribly aware of that. And the first thing I'll need to read is a style book, to correct my horrendous grasp of standard acceptable writing (and not seem arrogant by doing anything else), so please feel free to recommend as many books, blog articles, and other material as you like. I'll take my time going through it all.
I've seen an interesting variety of utopian hopes expressed recently. Raemon's "Ritual" sequence of posts is working to affirm the viability of LW's rationalist-immortalist utopianism, not just in the midst of an indifferent universe, but in the midst of an indifferent society. Leverage Research turns out to be a group of social-psychology utopians, who plan to achieve their world of optimality by unleashing the best in human nature. And Russian life-extension activist Maria Konovalenko just blogged about the difficulty of getting people to adopt anti-aging research as the top priority in life, even though it's so obvious to her that it should be.
This phenomenon of utopian hope - its nature, its causes, its consequences, whether it's ever realistic, whether it ever does any good - certainly deserves attention and analysis, because it affects, and even afflicts, a lot of people, on this site and far beyond. It's a vast topic, with many dimensions. All my examples above have a futurist tinge to them - an AI singularity, and a biotech society where rejuvenation is possible, are clearly futurist concepts; and even the idea of human culture being transformed for the better by new ideas about the mind, belongs within the same broad scientific-technological current of Utopia Achieved Through Progress. But if we look at all the manifestations of utopian hope in history, and not just at those which resemble our favorites, other major categories of utopia can be observed - utopia achieved by reaching back to the conditions of a Golden Age; utopia achieved in some other reality, like an afterlife.
The most familiar form of utopia these days is the ideological social utopia, to be achieved once the world is run properly, according to the principles of some political "-ism". This type of utopia can cut across the categories I have mentioned so far; utopian communism, for example, has both futurist and golden-age elements to its thinking. The new society is to be created via new political forms and new philosophies, but the result is a restoration of the human solidarity and community that existed before hierarchy and property... The student of utopian thought must also take note of religion, which, until the rise of technology, has been the main avenue through which humans have pursued their most transcendental hopes, like not having to die.
But I'm not setting out to study utopian thought and utopian psychology out of a neutral scholarly interest. I have been a utopian myself and I still am, if utopianism includes belief in the possibility (though not the inevitability) of something much better. And of course, the utopias that I have taken seriously are futurist utopias, like the utopia where we do away with death, and thereby also do away with a lot of other social and psychological pathologies, which are presumed to arise from the crippling futility of the universal death sentence.
However, by now, I have also lived long enough to know that my own hopes were mistaken many times over; long enough to know that sometimes the mistake was in the ideas themselves, and not just the expectation that everyone else would adopt them; and long enough to understand something of ordinary non-utopian psychology, whose main features I would nominate as reconciliation with work and with death. Everyone experiences the frustration of having to work for a living and the quiet horror of physiological decline, but hardly anyone imagines that there might be an alternative, or rejects such a life cycle as more bad than good.
What is the relationship between ordinary psychology and utopian psychology? First, the serious utopians should recognize that they are an extreme minority. Not only has the whole of human history gone by without utopia ever managing to happen, but the majority of people who ever lived were not utopians in the existentially revolutionary sense of thinking that the intolerable yet perennial features of the human condition might be overthrown. The confrontation with the evil aspects of life must usually have proceeded more at an emotional level - for example, terror that something might be true, and horror at the realization that it is true; a growing sense that it is impossible to escape; resignation and defeat; and thereafter a permanently diminished vitality, often compensated by achievement in the spheres of work and family.
The utopian response is typically made possible only because one imagines that there is a specific alternative to this process; and so, as ideas about alternatives are invented and circulated, it becomes easier for people to end up on the track of utopian struggle with life, rather than the track of resignation, which is why we can have enough people to form social movements and fundamentalist religions, and not just isolated weirdos. There is a continuum between full radical utopianism and very watered-down psychological phenomena which hardly deserve that name, but still have something in common - for example, a person who lives an ordinary life but draws some sustenance from the possibility of an afterlife of unspecified nature, where things might be different, and where old wrongs might be righted - but nonetheless, I would claim that the historically dominant temperament in adult human experience has been resignation to hopelessness and helplessness in ultimate matters, and an absorption in affairs where some limited achievement is possible, but which in themselves can never satisfy the utopian impulse.
The new factor in our current situation is science and technology. Our modern history offers evidence that the world really can change fundamentally, and such further explosive possibilities as artificial intelligence and rejuvenation biotechnology are considered possible for good, tough-minded, empirical reasons, not just because they offer a convenient vehicle for our hopes.
Technological utopians often exhibit frustration that their pet technologies and their favorite dreams of existential emancipation aren't being massively prioritized by society, and they don't understand why other people don't just immediately embrace the dream when they first hear about it. (Or they develop painful psychological theories of why the human race is ignoring the great hope.) So let's ask, what are the attitudes towards alleged technological emancipation that a person might adopt?
One is the utopian attitude: the belief that here, finally, one of the perennial dreams of the human race can come true. Another is denial, sometimes founded on the bitter experience of disappointment, which teaches that the wise thing to do is not to fool yourself when another new hope comes up to you and cheerfully asserts that this time really is different. Another is to accept the possibility but deny the utopian hope. I think this is the most important interpretation to understand.
It is the one that precedent supports. History is full of new things coming to pass, but they have never yet led to utopia. So we might want to scrutinize our technological projections more closely, and see whether the utopian expectation is based on overlooking the downside. For example, let us contrast the idea of rejuvenation and the idea of immortality - not dying, ever. Taking someone who is 80 and making them biologically 20 is not the same as making them immortal. It just means that they won't die of aging, and that when they do die, it will be in a way befitting a 20-year-old: an accident, a suicide, a crime. Incidentally, we should also note an element of psychological unrealism in the idea of never wanting to die. Forever is a long time; the whole history of the human race is about 10,000 years long. Just 10,000 years is enough to encompass all the difficulties and disappointments and permutations of outlook that have ever happened. Imagine taking the whole history of the human race into yourself; living through it personally. It's a lot to have endured.
It would be unfair to say that transhumanists as a rule are dominated by utopian thinking. Perhaps just as common is a sort of futurological bipolar disorder, in which the future looks like it will bring "utopia or oblivion", something really good or something really bad. The conservative wisdom of historical experience says that both these expectations are wrong; bad things can happen, even catastrophes, but life keeps going for someone - that is the precedent - and the expectation of total devastating extinction is just a plunge into depression as unrealistic as the utopian hope for a personal eternity; both extremes exhibiting an inflated sense of historical or cosmic self-importance. The end of you is not the end of the world, says this historical wisdom; imagining the end of the whole world is your overdramatic response to imagining the end of you - or the end of your particular civilization.
However, I think we do have some reason to suppose that this time around, the extremes are really possible. I won't go so far as to endorse the idea that (for example) intelligent life in the universe typically turns its home galaxy into one giant mass of computers; that really does look like a case of taking the concept and technology with which our current society is obsessed, and projecting it onto the cosmic unknown. But consider just the humbler ideas of transhumanity, posthumanity, and a genuine end to the human-dominated era on Earth, whether in extinction or in transformation. The real and verifiable developments of science and technology, and the further scientific and technological developments which they portend, are enough to justify such a radical, if somewhat nebulous, concept of the possible future. And again, while I won't simply endorse the view that of course we shall get to be as gods, and shall get to feel as good as gods might feel, it seems reasonable to suppose that there are possible futures which are genuinely and comprehensively better than anything that history has to offer - as well as futures that are just bizarrely altered, and futures which are empty and dead.
So that is my limited endorsement of utopianism: In principle, there might be a utopianism which is justified. But in practice, what we have are people getting high on hope, emerging fanaticisms, personal dysfunctionality in the present, all the things that come as no surprise to a cynical student of history. The one outcome that would be most surprising to a cynic is for a genuine utopia to arrive. I'm willing to say that this is possible, but I'll also say that almost any existing reference to a better world to come, and any psychological state or social movement which draws sublime happiness from the contemplation of an expected future, has something unrealistic about it.
In this regard, utopian hope is almost always an indicator of something wrong. It can just be naivete, especially in a young person. As I have mentioned, even non-utopian psychology inevitably has those terrible moments when it learns for the first time about the limits of life as we know it. If in your own life you start to enter that territory for the first time, without having been told from an early age that real life is fundamentally limited and frustrating, and perhaps with a few vague promises of hope, absorbed from diverse sources, to sustain you, then it's easy to see your hopes as, not utopian hopes, but simply a hope that life can be worth living. I think this is the experience of many young idealists in "environmental" and "social justice" movements; their culture has always implied to them that life should be a certain way, without also conveying to them that it has never once been that way in reality. The suffering of transhumanist idealists and other radical-futurist idealists, when they begin to run aground on the disjunction between their private subcultural expectations and those of the culture at large, has a lot in common with the suffering of young people whose ideals are more conventionally recognizable; and it is entirely conceivable that for some generation now coming up, rebellion against biological human limitations will be what rebellion against social limitations has been for preceding generations.
I should also mention, in passing, the option of a non-utopian transhumanism, something that is far more common than my discussion so far would suggest. This is the choice of people who expect, not utopia, but simply an open future. Many cryonicists would be like this. Sure, they expect the world of tomorrow to be a great place, good enough that they want to get there; but they don't think of it as an eternal paradise of wish-fulfilment that may or may not be achieved, depending on heroic actions in the present. This is simply the familiar non-utopian view that life is overall worth living, combined with the belief that life can now be lived for much longer periods; the future not as utopia, but as more history, history that hasn't happened yet, and which one might get to personally experience. If I wanted to start a movement in favor of rejuvenation and longevity, this is the outlook I would be promoting, not the idea that abolishing death will cure all evils (and not even the idea that death as such can be abolished; rejuvenation is not immortality, it's just more good life). In the spectrum of future possibilities, it's only the issue of artificial intelligence which lends some plausibility to extreme bipolar futurism, the idea that the future can be very good (by human standards) or very bad (by human standards), depending on what sort of utility functions govern the decision-making of transhuman intelligence.
That's all I have to say for now. It would be unrealistic to think we can completely avoid the pathologies associated with utopian hope, but perhaps we can moderate them, if we pay attention to the psychology involved.
Objections to uploading may be parsed into substrate issues, dealing with the computer platform of the upload, and personal identity issues. This paper argues that the personal identity issues of uploading are no more or less challenging than those of bodily transfer often discussed in the philosophical literature. It is argued that what is important in personal identity involves both token and type identity. While uploading does not preserve token identity, it does preserve type identity; and even qua token, one may have good reason to think that the preservation of the type is worth the cost.
“Misbehaving Machines: The Emulated Brains of Transhumanist Dreams”, by Corry Shores (grad student; Twitter, blog) is another recent JET paper. Abstract:
Enhancement technologies may someday grant us capacities far beyond what we now consider humanly possible. Nick Bostrom and Anders Sandberg suggest that we might survive the deaths of our physical bodies by living as computer emulations. In 2008, they issued a report, or “roadmap,” from a conference where experts in all relevant fields collaborated to determine the path to “whole brain emulation.” Advancing this technology could also aid philosophical research. Their “roadmap” defends certain philosophical assumptions required for this technology’s success, so by determining the reasons why it succeeds or fails, we can obtain empirical data for philosophical debates regarding our mind and selfhood. The scope ranges widely, so I merely survey some possibilities, namely, I argue that this technology could help us determine
- if the mind is an emergent phenomenon,
- if analog technology is necessary for brain emulation, and
- if neural randomness is so wild that a complete emulation is impossible.
Personal Note: I would like to thank Normal Anomaly for beta-ing this for me and providing counter-arguments. I am asking him/her to comment below, so that everyone can give him/her karma for volunteering and helping me out. Even if you dislike the article, I think it's awesome that they were willing to take time out of their day to help someone they've never met.
Imagine that you live in a world where everyone says "AI is a good idea. We need to pursue it."
But what if no one really thought that there was any reason to make sure the AI was friendly? That would be bad, right? You would probably think: "Hey, AI is a great goal and all, but before we start pursuing it and actually developing the technology, we need to make sure that it's not going to blow up in our faces!"
That seems to me to be a rational response.
Yet it seems like most people are not applying the same thought processes to life-extending technology. This website in particular has a habit of using some variant of this argument: "Death is bad. Not dying is good. Therefore life-extending technologies are also good." However, this misses the level of contemplation that has been given to AI. Like AI, there are considerations that must be addressed to ensure this technology is "friendly".
Most transhumanists have heard many of these issues before, normally sandwiched inside of a "Death is Bad" conversation. However these important considerations are often hand-waved away, as the conversation tends to stick to the low-hanging fruit. Here, I present them all in one place, so we can tackle them together, and perhaps come up with some solutions:
- Over-population: For example, doubling the life-span of humans would at the very least double the number of people on this planet. If we could double life-spans today, we would go from 7 billion to 14 billion people on Earth in 80 years, not counting regular population growth.
Although currently birthrates are falling, all birthrate information we have is for women being fertile for approximately 25 years. This has not changed much throughout history, so we cannot necessarily extrapolate the current birthrate to what it would be if women were fertile for 50 years instead.
In other words, not only will there be a population explosion due to people living longer, but I'd be willing to bet that if life extension were available today, birth rates would also go up. Right now, people who like to have kids only have enough money and fertile years to raise on average 2-3 children. If you doubled the time they had to reproduce, you would likely double the number of children that child-rearing families have.
For example, in modern society, by the time a woman's children are out of the house and done with college, the woman is no longer young and/or fertile. Say you had a child when you were 25. By the time your children were 20 you would be 45, and therefore not at a comfortable age to have children. However, if 45 becomes a young, fertile age for women, families might well decide to reproduce again.
It's one thing to say: "Well, we will develop technology to increase food yields and decrease fossil fuel consumption", but are you positive we will have those technologies ready to go in time to save us?
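As a rough sanity check on the arithmetic above, here is a toy steady-state model. All the numbers are illustrative assumptions, not real demography: it supposes a stable population of 7 billion with an 80-year lifespan, so roughly pop/80 births and deaths per year, and then doubles the lifespan while holding the number of births per year fixed.

```python
# Toy demographic sketch -- hypothetical numbers, not a real projection.
# Assumes a stable population where annual deaths ~ population / lifespan,
# and that births per year stay fixed at their pre-change level.

def project(pop, years, lifespan, births_per_year):
    """Step a crude one-compartment population model forward year by year."""
    for _ in range(years):
        deaths = pop / lifespan          # crude death rate under the given lifespan
        pop += births_per_year - deaths
    return pop

baseline = 7e9                           # 7 billion people
births = baseline / 80                   # replacement-level births at an 80-year lifespan

# Doubling lifespan to 160 years: deaths halve overnight, births stay the same.
after_80_years = project(baseline, 80, 160, births)
equilibrium = project(baseline, 1000, 160, births)

print(f"after 80 years: {after_80_years / 1e9:.1f} billion")     # ~9.8 billion
print(f"long-run equilibrium: {equilibrium / 1e9:.1f} billion")  # ~14.0 billion
```

Under these crude assumptions the population does eventually double to 14 billion, though it takes well over 80 years to get there; the headline figure is better read as the new steady state than as an exact 80-year projection, and it excludes any increase in the birth rate.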
- Social Stagnation: Have you ever tried having a long conversation with an elderly person, only to realize that they are bigots/homophobes/racists, etc.? We all love Grandpa John and Grammy Sue, but they have to die for society to move forward. If there were 180-year-olds alive today, chances are pretty strong that a good number of them would think that being anti-slavery is pretty progressive. They would have been about 90 years old when women got the right to vote.
We don't so much change our minds as grow new people while the old ones die.
- Life sucks, but at least you die: The world is populated with people suffering with mental disorders like depression, social issues like unemployment, and physical deprivations like poverty and hunger.
It doesn't make sense to extend life until we have made our lives worth extending.
- Unknown Implications: How will this change the way society works? How will it change how people live their lives? We can have some educated guesses, but we won't know for sure what far-spread effects this would have.
I have a friend who is a professional magician and "psychic", and about a month ago I convinced him to read HPMoR. After cursing me for ruining his sleep schedule for two days, we ended up having a discussion about some of the philosophies in there that we agreed and disagreed with. I was brand-new to LW. He had no prior knowledge of "rationality", but like most of his profession was very analytically minded. I would like to share something he wrote:
We have a lot of ancient wisdom telling us that wishes are bad because we aren't wise, and you're saying... that if we could make ourselves wise, then we can have wishes and not have it blow up in our faces.
See the shortest version of Aladdin's Tale:
Wish One: "I wish to be wise."
Since... I am NOT mature, fully rational, and wise,
I really think I shouldn't have wishes,
Of which, immortality is an obvious specific example.
Because I'm just not convinced
That I can predict the fallout.
I call this "The CEV of Immortality", although at the time, neither of us had heard of the concept of CEV in the first place. The basic idea being that we are not currently prepared enough to even be experimenting with life-extending technologies. We don't know where it will lead and how we will cope.
However, scientists are working on these technologies right now, discovering genes whose protein products can be blocked to greatly increase the life spans of worms, mice, and flies. Should a breakthrough discovery be made, who knows what will happen? Once the technology is developed there's no going back: people will stop at nothing to use it, and you won't be able to control it.
Just like AI, life-extending technologies are not inherently "bad". But supporting the development of life-extending technologies without already answering the above questions is like supporting the development of AI without knowing how to make it friendly. Once it's out of the box, it's too late.
(Provided by Normal Anomaly)
Overpopulation Counter-argument: Birth rates are currently going down, and have fallen below replacement in much of the developed world (including the US). According to an article in The Economist last year, population will peak at about 10-11 billion around 2050. This UN infographic appears to predict that fewer people will be born in 2020-2050 than were born in 1980-2010. I am skeptical that birth rate will increase with life extension. Space colonization is another way of coping with more people (again on a longer timescale than 40 years). Finally, life extension will probably become available slowly, at first only a few extra years and only for the wealthy. This last also applies to "unknown implications."
Social Stagnation Counter-argument: This leads to a slippery-slope argument for killing elderly people; it's very unlikely that our current lifespans sit at exactly the right tradeoff between social progress and life. Banning elderly people from voting or holding office would be more humane, with the same results.

"Life sucks" Counter-argument: This is only an argument for working on making life worth extending, or possibly an argument for life extension not having the best marginal return in world-improvement. Also, nobody who doesn't want to live longer would have to, so life extension technology wouldn't result in immortal depressed people.
These counter-arguments are very good points, but I do not think it is enough to guarantee a 100% "Friendly" transhumanism. I would love to see some discussions on them.
Like last time I posted, I am making some "root" comments. They are: General comments, Over-population, Social stagnation, Life sucks, Unknown consequences. Please put your comment under the root it belongs to, in order to help keep the threads organized. Thank you!
Transhumanist visions appear to aim at invulnerability. We are invited to fight the dragon of death and disease, to shed our old, human bodies, and to live on as invulnerable minds or cyborgs. This paper argues that even if we managed to enhance humans in one of these ways, we would remain highly vulnerable entities given the fundamentally relational and dependent nature of posthuman existence. After discussing the need for minds to be embodied, the issue of disease and death in the infosphere, and problems of psychological, social and axiological vulnerability, I conclude that transhumanist human enhancement would not erase our current vulnerabilities, but instead transform them. Although the struggle against vulnerability is typically human and would probably continue to mark posthumans, we had better recognize that we can never win that fight and that the many dragons that threaten us are part of us. As vulnerable humans and posthumans, we are at once the hero and the dragon.
Bostrom has written a tale about a dragon that terrorizes a kingdom and people who submit to the dragon rather than fighting it. According to Bostrom, the “moral” of the story is that we should fight the dragon, that is, extend the (healthy) human life span and not accept aging as a fact of life (Bostrom 2005, 277). And in The Singularity is Near (2005) Kurzweil has suggested that following the acceleration of information technology, we will become cyborgs, upload ourselves, have nanobots in our bloodstream, and enjoy nonbiological experience. Although not all transhumanist authors explicitly state it, these ideas seem to aim toward invulnerability and immortality: by means of human enhancement technologies, we can transcend our present limited existence and become strong, invulnerable cyborgs or immortal minds living in an eternal, virtual world.
...However, in this paper, I will ask neither the ethical-normative question (Should we develop human enhancement techniques and should we aim for invulnerability?) nor the hermeneutical question (How can we best interpret and understand transhumanism in the light of cultural, religious, and scientific history?). Instead, I ask the question: If and to the extent that transhumanism aims at invulnerability, can it – in principle – reach that aim? The following discussion offers some obvious and some much less obvious reasons why posthumans would remain vulnerable, and why human vulnerability would be transformed rather than diminished or eliminated...However, to focus only on a defense or rejection of what is valuable in humans would leave out of sight the relation between (in)vulnerability and posthuman possibilities. It would lead us back to the ethical-normative questions (Is human enhancement morally acceptable? Is vulnerability something to be valued? Is the transhumanist project acceptable or desirable?), which is not what I want to do in this paper. Moreover, ethical arguments that present the problem as if we have a choice between “natural” humanity and “artificial” posthumanity are based on essentialist assumptions that make a sharp distinction between “what we are” (the natural) and technology (the artificial), whereas this distinction is at least questionable. Perhaps there is no fixed human nature apart from technology, perhaps we are “artificial by nature” (Plessner 1975). If this is so, then the problem is not whether or not we want to transcend the human but how we want to shape that posthuman existence. Should we aim at invulnerability and if so, can we? As indicated before, here I limit the discussion to the “can” question.
Breaking down the potential improvements:
Not only could human enhancement make us immune to current viruses; it could also offer other “immunities,” broadly understood...However, the project of total invulnerability, or even an overall reduction of vulnerability, is bound to fail. If we consider the history of medical technology, we observe that for every disease new technology helps to prevent or cure, there is at least one new disease that escapes our techno-scientific control. We can win one battle, but we can never win the war. There will always be new diseases, new viruses, and, more generally, new sources of physical vulnerability. Consider also natural disasters caused by floods, earthquakes, volcanic eruptions, and so on.
Moreover, the very means to fight those threats sometimes create new threats themselves. This can happen within the same domain, as is the case with antibiotics that lead to the development of more resistant bacteria, or in another domain, as is the case with new security measures in airports, which are meant as protections against physical harm by terrorism but might pose new (health?) risks. Paradoxically, technologies that are meant to reduce vulnerability often create new ones. This is also true for posthuman technologies. For example, posthumans would also be vulnerable to at least some of the risks Bostrom calls “existential risks” (Bostrom 2002), which could wipe out posthumankind. Nanotechnology or nuclear technology could be misused, a superintelligence could take over and annihilate humankind, or technology could cause (further) resource depletion and ecological destruction. Military technologies are meant to protect us but they can become a threat, making us vulnerable in a new way. We wanted to master nature in order to become less dependent on it, but now we risk destroying the ecology that sustains us. And of course there are many physical threats we cannot foresee – not even in the near future.
Material and immaterial vulnerability
Consider computer viruses. Here the story is similar to the story of biological viruses: there are ongoing cycles of threats, counter-measures, and new threats. We can also consider physical damage to computers, although that is much less common. In any case, if we extend ourselves with software and hardware, this creates additional vulnerabilities. We must cope with “software” vulnerability and “hardware” vulnerability. If humans and posthumans live in an “infosphere” (see for example Floridi 2002), this is not a sphere of immunity. Perhaps our vulnerability becomes less material, but we cannot escape it. For instance, a virtual body in a virtual world may well be shielded from biological viruses, but it is vulnerable to at least three kinds of threats.
- First, there are threats within the virtual world itself (consider for instance virtual rape), which constitutes virtual vulnerability.
- Second, the software programme that provides a platform for the virtual world might be damaged, for example by means of a cyber attack. This can lead to the “death” of the virtual character or entity.
- Third, all these processes depend on (material) hardware. The world wide web and its wired and wireless communications rest on material infrastructures without which the web would be impossible. Therefore, if posthumans uploaded themselves into an infosphere and dispensed with their biological bodies, they would not gain invulnerability and immortality but merely transform their vulnerability.
Minds need bodies. This is in line with contemporary research in cognitive science, which argues that “embodiment” is necessary since minds can develop and function only in interaction with their environment (Lakoff and Johnson 1999 and others). This direction of thought is also taken in contemporary robotics, for example when it recognizes that manipulation plays an important role in the development of cognition (Sandini et al. 2004). In his famous 1988 book on “mind children” Moravec argued that true AI can be achieved only if machines have a body (Moravec 1988)...Thus, uploading and nano-based cyborgization would not dispense with the body but transform it into a virtual body or a nano-body. This would create vulnerabilities that sometimes resemble the vulnerabilities we know today (for instance virtual violence) but also new vulnerabilities.
With this atomism comes the atomist view of death: there is always the possibility of disintegration; neither physical-material objects nor information objects exist forever. Information can disintegrate, and the material conditions for information are vulnerable to disintegration as well. Thus, at a fundamental level everything is vulnerable to disintegration, understood by atomism as a re-organization of elementary particles. This “metaphysical” vulnerability is unavoidable for posthumans, whatever the status of their elementary particles and the organs and systems constituted by these particles (biological or not). According to their own metaphysics, the cyborgs and inforgs that transhumanists and their supporters wish to create would be only temporal orders that have only temporary stability – if any.
Note, however, that recently both Floridi and contemporary physics seem to move toward a more ecological, holistic metaphysics, which suggests a different definition of death. In information ecologies, perhaps death means the absence of relations, disconnection. Or it means: deletion, understood ecologically and holistically as the removal out of the whole. But in the light of this metaphysics, too, there seems no reason why posthumans would be able to escape death in this sense.
Existential and psychological vulnerabilities
This gives rise to what we may call “indirect” or “second-order” vulnerabilities. For instance, we can become aware of the possibility of disintegration, the possibility of death. We can also become aware of less threatening risks, such as disease. There are many first-order vulnerabilities. Awareness of them renders us extra vulnerable, compared to beings that lack the ability to take such distance from themselves. From an existential-phenomenological point of view (which has its roots in work by Heidegger and others), but also from the point of view of common-sense psychology, we must extend the meaning of vulnerability to the sufferings of the mind. Vulnerability awareness itself constitutes a higher-order vulnerability that is typical of humans. In posthumans, we could only erase this vulnerability if we were prepared to abandon the particular higher form of consciousness that we “enjoy.” No transhumanist would seriously consider that solution to the problem.
Social and emotional vulnerability
If I depend on you socially and emotionally, then I am vulnerable to what you say or do. Unless posthumans were to live in complete isolation without any possibility of inter-posthuman communication, they would be as vulnerable as we are to the sufferings created by social life, although the precise relation between their social life and their emotional make-up might differ...For example, in Houellebecq’s novel the posthumans have a reduced capacity to feel sad, but at the cost of a reduced capacity to desire and to feel joy. More generally, the lesson seems to be: emotional enhancement comes at a high price. Are we prepared to pay it? Even if we succeed in diminishing this kind of vulnerability, we might lose something that is of value to us. This brings me to the next kind of vulnerability.
We value not only people and our relationships with them; we are also attached to many other things in life. Caring makes us vulnerable (Nussbaum 1986). We develop ties out of our engagement with humans, animals, objects, buildings, landscapes, and many other things. This renders us vulnerable since it makes us dependent on (what we experience as) “external” things. We sometimes get emotional about things since we care and since we value. We suffer since we depend on external things...Posthumans could be cognitively equipped to follow this strategy, for instance by means of emotional enhancement that allows more self-control and prevents them from forming overly strong ties to things. If we really wanted to become invulnerable in this respect, we should create posthumans who no longer care at all about external things – including other posthumans. That would be “posthumans” who no longer have the ability to care and to value. They would “connect” to others and to things, but they would not really engage with them, since that would render them vulnerable. They would be perfectly rational Stoics, perhaps, but it would be odd to call them “posthumans” at all since the term “human” would lose its meaning. It is even doubtful whether this extreme form of Stoicism would be possible for any entity that possesses the capacity of valuing and that engages with the world.
'Relational vulnerability'/'Conclusion: Heels and dragons'
The only way to make an entity invulnerable, it turns out, would be to create one that exists in absolute isolation and is absolutely independent of anything else. Such a being seems inconceivable – or would be a particularly strange kind of god. (It would have to be a “philosopher’s” god that could hardly stir any religious feelings. Moreover, the god would not even be a “first mover,” let alone a creator, since that would imply a relation to our world. It is also hard to see how we would be aware of its existence or be able to form an idea about it, given the absence of any relation between us and the god.) Of course we could – if ethically acceptable at all – create posthumans that are less vulnerable in some particular areas, as long as we keep in mind that there are other sources of vulnerability, that new sources of vulnerability will emerge, and that our measures to decrease vulnerability in one area may increase it in another.
If transhumanists accept the results of this discussion, they should carefully reflect on, and redefine, the aims of human enhancement and avoid confusion about how these aims relate to vulnerability. If the aim is invulnerability, then I have offered some reasons why this aim is problematic. If their project has nothing to do with trying to reach invulnerability, then why should we transcend the human? Of course one could formulate no “ultimate” goals and choose less ambitious goals, such as more health and less suffering. For instance, one could use a utilitarian argument and say that we should avoid overall suffering and pain. Harris seems to have taken these routes (Harris 2007). And Bostrom frequently mentions “life extension” as a goal rather than “invulnerability” or “immortality.” But even in these “weakened” or at least more modest forms, the transhumanist project can be interpreted as a particularly hostile response to (human) vulnerability that probably has no parallel in human history.
...Furthermore, this paper suggests that if we can and must make an ethical choice at all, then it is not a choice between vulnerable humans and invulnerable posthumans, or even between vulnerability and invulnerability, but a choice between different forms of humanity and vulnerability. If implemented, human enhancement technologies such as mind uploading will not cancel vulnerability but transform it. As far as ethics is concerned, then, what we need to ask is which new forms of the human we want and how (in)vulnerable we wish to be. But this inquiry is possible only if we first fine-tune our ideas of what is possible in terms of enhancement and (in)vulnerability. To do this requires stretching our moral and technological imaginations.
Moreover, if I’m right about the different forms of posthuman vulnerability as discussed above, then we must dispense with the dragon metaphor used by Bostrom: vulnerability is not a matter of “external” dangers that threaten or tyrannize us, but that have nothing to do with what we are; instead, it is bound up with our relational, technological and transient kind of being – human or posthuman. If there are dragons, they are part of us. It is our tragic condition that as relational entities we are at once the heel and the arrow, the hero and the dragon.
Before criticizing it, I'd like to point to the introduction where the author lays out his mission: to discuss what problems cannot "in principle" be avoided, what vulnerabilities are "necessary". In other words, he thinks he is laying out fundamental limits, on some level as inexorable and universal as, say, Turing's Halting Theorem.
But he is manifestly doing no such thing! He lists countless 'vulnerabilities' that could easily be circumvented to arbitrary degrees. For example, the computer viruses he puts such stock in: there is no fundamental reason computer viruses must exist. There are many ways they could be eliminated, starting with formal static proofs of security and functionality; the only fundamental limit relevant here would be Turing's/Rice's theorem, which applies only if we wanted to run all possible programs, which we manifestly cannot and do not. Similar points apply to the rest of his software vulnerabilities.
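To make the decidability point concrete, here is a minimal sketch (mine, not the author's or the paper's, with an illustrative hash): Rice's theorem only forbids deciding a non-trivial semantic property ("is this program malicious?") for *all* programs. Restricting execution to a fixed set of audited programs turns the question into a purely syntactic check, which is trivially decidable.

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of binaries that passed a
# formal audit. Membership testing is decidable; no semantic analysis
# of arbitrary programs is ever needed.
APPROVED = {
    # digest of the illustrative "binary" b"test"
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def may_execute(binary: bytes) -> bool:
    """Decidable check: is this exact binary on the audited allowlist?"""
    return hashlib.sha256(binary).hexdigest() in APPROVED

print(may_execute(b"test"))        # True: its digest is allowlisted
print(may_execute(b"unaudited"))   # False: not on the list
```

The sketch obviously pushes the hard work into the audit itself, but that is exactly the commenter's point: verifying a finite, chosen set of programs is a tractable engineering problem, not a Turing-complete impossibility.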
I would also like to single out his 'Metaphysical vulnerability'; physicists, SF authors, and transhumanists have spent decades outlining a multitude of models and possibilities for true immortality, ranging from Dyson's eternal intelligences to Tipler's Omega Point collapse to baby black-hole universes. To appeal to atomism is to already beg the question (why not run intelligence on waves or more exotic forms of existence, why this particle-chauvinism?).
This applies again and again - the author supplies no solid proofs from any field, and apparently lacks the imagination or background to imagine ways to circumvent or dissolve his suggested limits. They may be exotic methods, but they still exist; were the author to reply that to employ such methods would result in intelligences so alien as to no longer be human, then I should accuse him of begging the question on an even larger scale - of defining the human as desirable and, essentially, as that which is compatible with his chosen limits.
Since that question is at the heart of transhumanism, his paper offers nothing of interest to us.
Upon reading Eliezer's possible gender dystopias ([catgirls](http://lesswrong.com/lw/xt/interpersonal_entanglement/) and [verthandi](http://lesswrong.com/lw/xu/failed_utopia_42/)) and the other LW comments and posts on the subject of future gender relations, I came to a rather different conclusion than the ones I've seen espoused here. After searching around the internet a bit, I discovered that my ideas tend to fall under the general category of "postgenderism", and I am wondering what my fellow LessWrongians think of it.
This can generally be broken down to the following claims:
- A higher level of egalitarianism between the sexes increases utility. For example, if only men are generally allowed to do Job A, then you are halving the talent pool of people who can do Job A, AND women who would otherwise be happy in Job A lose that utility. It is an equal disutility if only women are socially allowed to be emotionally expressive, etc. In other words, Equality = Good.
- The differences between men and women are a mix of environmental factors, such as social conditioning, and biological factors, such as varying levels of hormones.
- Some of these differences are optimal in the current environment, and others are suboptimal. For example: women have better social skills (good!) but are more prone to depression (bad); men are better self-promoters (good!) but are more prone to suicide (bad).
- Should transhumanism occur, it will eliminate the suboptimal differences. We can help people become less suicidal and not depressed.
- This will lead to a spiraling effect: fewer *actual* differences will lead to a lessening of socialized differences, which will lead to fewer actual differences, etc.
EDIT: Due to some really insightful comments:
I replaced men being prone to aggression as a negative, with men being prone to suicide.
I made the verbiage a little more explicit that no one would be *forced* to change, but would seek out the changes that transhumanism would have available.
With apologies to Ed Regis.
Modern science has caused humankind to develop better cures and patches for once-debilitating conditions; people often survive maladies which would have killed them not long ago. In the wake of this and of a recently changing attitude regarding how cognitively disabled people might see the world, a disability rights movement came into swing in the 1970s. Increasingly, the attitude of disabled people was that it wasn't inherently bad to be disabled; a disability could be an intrinsic part of a person's self-image. Some people in wheelchairs, for instance, want badly to be able to walk - but some do not, and the mainstream attitude has historically not validated those people's experiences. This is where disability culture intersects the transhumanist movement. If it is possible to identify so strongly with a physical disability as to not want any cure, how does that mesh with believing that it is desirable to improve one's mind and body? Is it possible to identify as a happily disabled transhumanist?
This is not meant to suggest that transhumanism is a movement of eugenic warriors; it's hard to imagine anyone suggesting that folks who don't sign up for the "Harmless and easy cure for senescence" shot be sterilized. However, despite the fact that hardly anyone would identify emself as a eugenicist (a fine thing to call yourself once upon a time in America, until the Nazis rendered the term unpopular), literally eugenic attitudes prevail in society, e.g. the prevalent belief that people with Huntington's disease or schizophrenia who reproduce are cruel for hazarding the inheritance of their condition.
One wonders what disability culture would look like if people who are today in wheelchairs had access to technology that could repair their legs and allow them to walk. I wonder if people with congenital disabilities which would today require a wheelchair would have a choice about being cured, or whether the cure would be implemented in infancy. In 2007, a girl named Ashley who has an unknown brain disorder and cannot communicate or move herself effectively was given a series of radical procedures - hysterectomy, mastectomy and high estrogen doses - intended to make her easier to take care of. Was the literally non-consensual hysterectomy a eugenicist procedure? An immoral one? Was it in the spirit of transhumanism? In a future where Down syndrome can be prevented with a prenatal vaccine, would such a vaccine be moral? How about vaccines for "low-functioning" autism? At that rate, surely it would be possible to vaccinate for Asperger syndrome, depression, and ADHD, conditions which many people dislike and/or dislike having. (As an aside, with all the medically-repudiated yet widespread fear about vaccines causing autism, one can only imagine the panic an autism vaccine would cause.)
I don't have answers to these questions. I have feelings and impressions, but those are not very useful. The issue cannot be solved unilaterally by saying that only those who enthusiastically consent to certain medical procedures should be given them, because many people are incapable of giving clear consent, as in the Ashley treatment. Nor can it be clearly solved by suggesting only prophylactic measures against disabling conditions, because certainly some parents would forego those measures. In a transhuman future, is the birth of a nonverbal autistic a preventable tragedy? Is it less of a tragedy if the child is a savant? Nor can one say that only conditions without an accompanying culture should be eradicated. Even if the definition of 'culture' were not elusive, HIV/AIDS has a definite culture about it, and few people would suggest that HIV should not be eradicated.
It is not useful to ignore the role of disabled people and disability culture in the transhumanist movement. I believe that the future has a lot to offer many people with disabilities, including those who do not want a 'cure.' Transhumanism can encompass interest in diverse AAC methods, and I believe it should. Simple keyboard technology has made it possible for many otherwise nonverbal people to communicate eloquently, as have DynaVox devices and various iPad apps. It would delight me to see widespread discussion about more powerful AAC devices, which could enable us to perceive and act on the desires of those who cannot now communicate.
Nor has technology reached its limits in helping those with physical disabilities; wheelchairs are generally clumsy, heavy, and expensive - nearly inaccessible to people who live without insurance in impoverished areas of the world (or of the United States). People who, like Stephen Hawking, become paralyzed by motor neuron diseases do not all possess Stephen Hawking's access to high-tech communication devices (for which prices begin at thousands of dollars). And people with disabilities like epilepsy or cerebral palsy are still often abused for their "demonic possession" or inaccurately stereotyped as mentally disabled. The transhumanist movement tends to advocate augmentation sans cure as far as physical disabilities are concerned, but there are people with mixed feelings about transhumanism as it applies to disability.
Disability is a hot button topic surrounded by widely varying spectra of beliefs. It directly affects humankind and is not often discussed rationally because of the subjective experiences people have had with varying disabilities. (The mother of a nonverbal autistic says, "There should be a cure for autism; I want my son to say he loves me." A nonverbal autistic communicating by AAC says "There shouldn't be a cure for autism; I want people to learn how I communicate my affection." Their conflicting beliefs do not predict radically different anticipated experiences.) So a rational, clear dialogue about disability is vital - for disabled people, their friends and families, and the world at large - in order to integrate these identities and experiences into the future and present of humankind.
Summary: if you could show a page of LW to a random student who was interested in science, but couldn't otherwise communicate with them, which page would you choose?
The Oxford University Transhumanist Society is a student society that arranges speakers on transhumanist topics - ranging from cognitive enhancement to AI to longevity to high-impact careers to x-risk to brain-machine interfaces. The audience is a mixture of undergraduate and graduate students, mostly scientists, who are interested in the future of science and technology but by no means self-describe as transhumanists.
This week we're finally getting organised and producing membership cards. We intend to put a URL in a QR code on them, because people expect cool techy stuff from the Transhumanist society. It'd be nice if the link were something slightly more imaginative than just H+ or the Facebook page. Naturally, I thought it should point to LW; but where specifically? The About page, a very good article from the Sequences, something from Eliezer's website, MoR...? A well-chosen page, showcasing what LW has to offer, could well draw someone into LW.
Suggestions welcome. One article (or very similar set of articles) per top-level comment please, so people can upvote suggestions in a targeted manner.
Today this girl I met comes to my place, allegedly to get some books about her new interests: singularity, immortalism, cryonics.
Actually, she wanted to ask me a question, a question about which I could use some rational opinion.
She says: "So, here is the real reason I came here. I'm thinking of making a documentary, a movie, and it would be about, well.... about you."
(I am shocked)
"So, yes, a movie about you, and the fact that you want to live forever. It would have interviews with friends, parents, girlfriend, and a lot with you. What do you think?"
(I sit down on the floor to think about it)
The conversation continues, and I generally sense she wants to do something interesting, somewhat controversial, kind of humorous, but at the same time touching on some topics that are really unheard of around here (Brazil).
Now, I am looking for opinions. From a utilitarian perspective, and given that I am directing the Humanity+ / Transhumanist group of Brazilians, should I go along with it? My concern is basically not about me, but about how a movie about me will influence, positively or negatively, the growing H+ movement in Brazil, given the inferential distances, prejudices, and mysterianism that might surround the whole interaction between the movie's memes and the spectators' memes.
(From here down, the translation is Google Translate's, not mine.)
I have put up a poll in the comment section down here, so that I can know your opinion, please take the time to vote, thank you.
Using an electronic system that duplicates the neural signals associated with memory, they managed to replicate the brain function in rats associated with long-term learned behavior, even when the rats had been drugged to forget.
This series of experiments, as described, sounds very well-constructed and thorough. The scientists first recorded specific activity in the hippocampus, where short-term memory becomes long-term memory. They then used drugs to inhibit that activity, preventing the formation of and access to long-term memory. Using the information they had gathered about the hippocampus activity, they constructed an artificial replacement and implanted it into the rats' brains. This successfully restored the rats' ability to store and use long-term memory. Further, they implanted the device into rats without suppressed hippocampal activity, and demonstrated increased memory abilities in those subjects.
"These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes," says the paper.
It's a truly impressive result.
I want to make a list of companies that are affiliated with transhumanism. I'm looking for companies pursuing transhumanist goals (ex: VR, gene sequencing) or built by transhumanists themselves (ex: arguably Elon Musk).
What places do you know of? Which look like cool places to work? Extra points if you've worked there.
Aside from my personal reasons for seeking those companies (internships, research for future projects), I think a list of transhumanist companies will be a good thing to keep around for other future endeavors.
Also: The Seasteading Institute, SIAI, BioCurious, etc are all incredibly cool, but I'm not looking for non-profits.
My list so far:
- The obvious ones to start with are anything connected with Peter Thiel and the Founders Fund.
- Tesla Motors and SpaceX are projects of Elon Musk. Tesla Motors is making electric cars and SpaceX is chasing private space flight.
- Luke Nosek, who worked on PayPal with Elon Musk, has started a company called Halcyon Molecular and is also connected with Pathway Genomics. The first does gene sequencing, the second offers personal genetics reports.
- Novamente LLC is working on "intelligent virtual agents for virtual worlds, computer games, and simulations." Ben Goertzel leads it.
What am I missing?
It has surprised me that there's been very little discussion of The Long Now here on Less Wrong, as there are many similarities between the two groups, although their approaches and philosophies are quite different. At a minimum, I believe that a general awareness might be beneficial. I'll use the initials LW and LN below. My perspective on LN is simply that of someone who's kept an eye on their website from time to time and read a few of their articles, so I'd also like to admit that my knowledge is a bit shallow (a reason, in fact, I bring the topic up for discussion).
Most critically, long-term thinking appears as a cornerstone of both LW and LN thought, explicitly as the goal of LN, and implicitly here on LW whenever we talk about existential risk or decades-away or longer technology. It's not clear if there's an overlap between the commenters at LW and the membership of LN or not, but there's definitely a large number of people "between" the two groups -- statements by Peter Thiel and Ray Kurzweil have been recent topics on the LN blog, and Hillis, who founded LN, has been involved in AI and philosophy of mind. LN has Long Bets, which stands to PredictionBook roughly as InTrade stands to Foresight Exchange. LN apparently had a presence at some of SIAI's past Singularity Summits.
Signaling: LN embraces signaling like there's no tomorrow (ha!) -- their flagship project, after all, is a monumental clock to last thousands of years, the goal of which is to "lend itself to good storytelling and myth" about long-term thought. Their membership cards are stainless steel. Some of the projects LN are pursuing seem to have been chosen mostly because they sound awesome, and even those that aren't are done with some flair, IMHO. In contrast, the view among LW posts seems to be that signaling is in many cases a necessary evil, in some cases just an evolutionary leftover, and reducing signaling a potential source for efficiency gains. There may be something to be learned here -- we already know FAI would be an easier sell if we described it as project to create robots that are Presidents of the United States by day, crime-fighters by night, and cat-people by late-night.
Structure: While LW is a project of SIAI, they're not the same, so by extension the comparison between LN and LW is just a bit apples-to-kumquats. It'd be a lot easier to compare LW to a LN discussion board, if it existed.
The Future: Here on LW, we want our nuclear-powered flying cars, dammit! Bad future scenarios that are discussed on LW tend to be irrevocably and undeniably bad -- the world is turned into tang or paperclips and no life exists anymore, for example. LN seems more concerned with recovery from, rather than prevention of, "collapse of civilization" scenarios. Many of the projects both undertaken and linked to by LN focus on preserving knowledge in such a scenario. Between the overlap in the LW community and cryonics, SENS, etc, the mental relationship between the median LW poster and the future seems more personal and less abstract.
Politics: The predominant thinking on LW seems to be a (very slightly left-leaning) technolibertarianism, although since it's open to anyone who wanders in from the Internet, there's a lot of variation (if either SIAI or FHI have an especially strong political stance per se, I've not noticed it). There's also a general skepticism here regarding the soundness of most political thought and of many political processes. LN seems further left on average and more comfortable with politics in general (although calling it a political organization would be a bit of a stretch). Keeping with this, LW seems to have more emphasis on individual decision making and improvement than LN.
...according to this front-page Reddit headline I just saw, which links to this Guardian article. I wonder if he's heard of KrioRus, whether he's signed up (Wikipedia says they offer services "to clients from Russia, CIS and EU"), and what his odds would be if he were (would it be possible to emigrate to Russia to be closer to the facility, and if not, what would be the best possible option?). Given his being a head of state, presumably it'd be pretty tough for an advocate to even get close enough to try to make the case.
Searching the Reddit comment thread for "cryo" turned up nothing.
The BBC News recently ran an interesting piece on living forever. They discuss some of the standard arguments against cryonics and transhumanism; overall, the article is pretty critical of both. I suspect most LessWrong readers won't find it convincing, but it's still worth a quick read.
There's a good song by Eminem - If I had a million dollars. So, if I were given the hypothetical task of giving away $30 million to different foundations, without any right to influence their projects, I would distribute it as follows, $3 million to each organization:
1. Nanofactory Collaboration, Robert Freitas, Ralph Merkle – developers of molecular nanotechnology and nanomedicine. Robert Freitas is the author of the monograph Nanomedicine.
2. Singularity Institute, Michael Vassar, Eliezer Yudkowsky – developers and advocates of friendly Artificial Intelligence
3. SENS Foundation, Aubrey de Grey – the most active engineering project in life extension, focused on the most promising underfunded areas
4. Cryonics Institute – one of the biggest cryonics providers in the US; they could use additional funding more effectively than Alcor
5. Advanced Neural Biosciences, Aschwin de Wolf – an independent cryonics research center created by ex-researchers from Suspended Animation
6. Brain Observatory – brain scanning
7. University Hospital Careggi in Florence, Paolo Macchiarini – growing organs (not an American medical school, because this amount of money won’t make any difference to the leading American centers)
8. Immortality Institute – advocacy for immortalism and selected experiments
9. IEET – Institute for Ethics and Emerging Technologies – promotion of transhumanist ideas
10. Small research grants of $50-300 thousand
Now, if the task is instead to invest $30 million as effectively as possible, what projects would be chosen? (By effectiveness here I mean increasing the chances of radical life extension.)
Well, off the top of my head:
1. The project “Creation of technologies to grow a human liver” – $7 million. The full project costs approximately $30-50 million, but $7 million is enough to achieve significant intermediate results and would likely attract more funds from potential investors.
2. Break the world record in sustaining viability of a mammalian head separate from the body - $0.7 million
3. Creation of an information system that characterizes data on changes during human aging, integrates biomarkers of aging, and evaluates the role of pharmacological and other interventions in aging processes – $3 million
4. Research on increasing cryoprotectant efficacy - $3 million
5. Creation and realization of a program “Regulation of epigenome” - $5 million
6. Creation, promotion and lobbying of the program on research and fighting aging - $2 million
7. Educational programs in the fields of biogerontology, neuromodelling, regenerative medicine, engineered organs - $1.5 million
8. “Artificial blood” project - $2 million
9. Grants for authors, script writers, and art representatives for creation of pieces promoting transhumanism - $0.5 million
10. SENS Foundation project of removing senescent cells - $2 million
11. Creation of a US-based non-profit to defend and lobby for the right to live and for scientific research in life extension - $2 million
12. Participation of “H+ managers” in conferences, forums and social events - $1 million
13. Advocacy and creating content in social media - $0.3 million