- Why I Grew Skeptical of Transhumanism
- Why I Grew Skeptical of Immortalism
- Why I Grew Skeptical of Effective Altruism
- Only Game in Town
Wonderland’s rabbit said it best: “The hurrier I go, the behinder I get.”
We approach 2016, and the more I see light, the more I see brilliance popping up everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. But why? you ask.
Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil for the strongest hub of production of things that matter, the Bay Area, via Oxford’s FHI office; I now split my time between UC Berkeley and the CFAR/MIRI office. In the process I have navigated an ocean of information: read hundreds of books and papers, watched thousands of classes, and become proficient in a handful of languages and a handful of intellectual disciplines. I have visited Olympus and met our living demigods in person as well.
Against the overwhelming force of an extremely upbeat personality surfing a very high baseline of happiness, these three forces (approaching the center, learning voraciously, and meeting the so-called heroes) have brought me to my current state of pessimism.
I was a transhumanist, an immortalist, and an effective altruist.
Why I Grew Skeptical of Transhumanism
The transhumanist in me is skeptical that technological development will be fast enough for improving the human condition to be worth betting on now; he sees most technologies as fancy toys that don’t get us there. Our technologies can’t, and won’t for a while, lead our minds to peaks anywhere near the ones we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge about the brain, given our goals for the brain, is at the level of the physics knowledge of someone who has just found out that spraying water on a sunny day produces a rainbow. It isn’t even physics yet.
Believe me, I have read thousands of pages of papers on the most advanced topics in cognitive neuroscience. My advisor spent his entire career, from Harvard to tenure, doing neuroscience, and was the first person to implant non-human neurons that actually healed a brain to the point of recovering functionality. As Marvin Minsky, who invented the multi-agent computational theory of mind, told me: “I don’t recommend entering a field where every four years all knowledge is obsolete; they just don’t know it yet.”
Why I Grew Skeptical of Immortalism
The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and the chief scientists of anti-aging research facilities worldwide. He has met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway. So although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder than the number of man-hours left to be invested in it can surmount, at least during my lifetime, or before the Intelligence Explosion.
Believe me, I was the first cryonicist among the 200 million people of my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death and still share our privilege of being alive, just in case some new insight comes along that changes the tide. But none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.
Why I Grew Skeptical of Effective Altruism
The effective altruist in me is skeptical too, although less so: I’m still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices, and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:
- The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders vs. joiners, researchers vs. executives, us-institutions versus them-institutions, cheap individuals versus expensive institutional salaries; it’s gore all the way up and down.
- Reasoning by Analogy: Few EAs are able to do, and are doing, their intellectual due diligence. I don’t blame them: the space of Crucial Considerations is not only very large but extremely uncomfortable to look at. Who wants to know that our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking all things through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.
- Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they take to be more certainly true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.
- The Size of the Problem: No matter if you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces) or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason to not want to face the problem, or to be in denial, is the problem’s enormity.
- The Complexity of the Solution: Let me spell this out: the nature of the solution is not simple in the least. It’s possible that we luck out and it turns out that the Orthogonality Thesis, the Doomsday Argument and Mind Crime are just philosophical curiosities with no practical bearing on our earthly engineering efforts, that the AGI or Emulation will by default fall into an attractor basin which implements some form of MaxiPok with details it only grasps after CEV or the Crypto, and we will be OK. That is possible, and it is more likely than the scenario in which our efforts end up being the decisive factor. We need to focus our actions on the branches where they matter, though.
- The Nature of the Solution: So let’s sit down side by side and stare at the void together for a bit. The nature of the solution is getting a group of apes who just invented the internet, from everywhere around the world, to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown (this is the goal of Convergence Analysis, by the way), finding every single last one of them until the box is filled. Then, once we have all the Crucial Considerations available, we must develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation in a physically sound and technically robust way (that’s if we don’t find a Crucial Consideration that, say, steers our course towards Mars instead). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, one that can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe’s history, and that will fall into the right attractor basin within mindspace. That’s if superintelligences are even technically possible. Add to that that we, or it, have to guess correctly all the philosophical problems that are (a) relevant and (b) unsolvable within physics (if any) or by computers. And all of this has to happen while the most powerful corporations, states, armies and individuals attempt to seize control of the smart systems themselves, without being curtailed by the counter-incentive of not destroying the world, either because they don’t realize the danger, or because the first-mover advantage seems worth the risk, or because they are about to die anyway so there’s not much to lose.
- How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge from us in more dimensions than brain size and architecture. We’ve barely scratched the surface of a technical understanding of happiness increase, and the philosophical understanding is also taking its first steps.
- Macrostrategy is Hard: A chess grandmaster usually takes many years to acquire the strategic skill that commands the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message, or a change, into the future. We are attempting to beam a complete value lock-in into the right basin, which is proportionally harder.
- Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, both because those other people were cooks, not chefs, and because sometimes you actually need to try a one-in-ten-thousand chance (see the toy expected-value sketch after this list). But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.
- Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
  1. I have concluded that cause X is the most relevant.
  2. Institution A is an EA organization fighting for cause X.
  3. Therefore I donate to institution A to fight for cause X.
To begin with, this is very expensive compared to donating to any of the three P’s: projects, people or prizes. Furthermore, the crucial moments to fund institutions are when they are about to die, when they are just starting, when they are building a kind of momentum with a narrow window of opportunity in which the derivative gains are particularly large, or when you have private information about their current value. That an institution agrees with you about a cause being important is far from sufficient to assess the expected value of your donation.
- Delusional Optimism: Everyone who, like past-me, moves in with delusional optimism will have a blind spot in whatever feature of reality they are in denial about. It is not a problem to have some individuals with blind spots, as long as the rate doesn’t surpass some group sanity threshold; yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.
- Convergence of opinions may strengthen separation within EA: Thus far, the longer someone has been an EA, the more likely they are to transition to an opinion in the subsequent boxes of this flowchart, from whichever box they are in at the time. There are still people in all the opinion boxes, but the trend has been to move with that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, GWWC into poverty, 80k into career selection, etc., they become more congealed. People’s opinions are still changing, and some of the money follows, but institutions are crystallizing around particular opinions, and in the future they might prevent transition between opinion clusters and the free mobility of individuals, as national frontiers already do. Once institutions, which in theory are commanded by people who agree with institutional values, notice that their rate of loss to the EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas and resources that has so far been a hallmark of Effective Altruism, and a reason many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim ‘EA’ for themselves.
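As a toy illustration of the one-in-ten-thousand point above (my own sketch, with entirely hypothetical numbers, not figures from any EA analysis): expected value can favor a long shot over a safe bet whenever the payoff is large enough, which is exactly what base-rate pessimism tends to obscure.

```latex
% Hypothetical numbers only: p = chance of success, V = value if it succeeds
% (both in the same arbitrary units of "value").
% Long shot: a one-in-ten-thousand chance at a very large payoff.
% Safe bet: a ninety percent chance at a modest payoff.
\[
\mathbb{E}[\text{long shot}] = p \cdot V = 10^{-4} \times 10^{9} = 10^{5},
\qquad
\mathbb{E}[\text{safe bet}] = 0.9 \times 10^{4} = 9 \times 10^{3}.
\]
% With these made-up numbers the long shot is worth roughly ten times the safe bet,
% even though its base rate of success, read in isolation, looks hopeless.
```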
Only Game in Town
The reasons above have transformed a pathological optimist into a wary skeptic about our future and the value of our plans to get there. And yet, I don’t see any option other than to continue the battle. I wake up in the morning and consider my alternatives. Hedonism? Well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300,000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that can become extremely important and change the world to have the affordance to wander off indefinitely. I look at my high base happiness and don’t feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards), and the self of which I am made clearly has strong altruistic urges anyway, so at least above a threshold of happiness it has reason to purchase the extremely good deals in expected happiness for others that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice, and I feel the thrownness into this world as much as any Kierkegaard does. Power? Reading Nietzsche gives the fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to spending their power on pathetic signalling games and zero-sum disputes, or on coercing minds to act against their will. Nihilism and moral fictionalism, like existentialism, all collapse into having a choice, and if I have a choice, my choice is always going to be the choice to, most of the time, care, try and do.
Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.
It is the only game in town.