This text has many, many hyperlinks; it is useful to at least glance at the front page of each linked item to get it. It is an expression of me thinking, so it uses a lot of community jargon. Thanks to Oliver Habryka, Daniel Kokotajlo and James Norris for comments. No, really, check the front page of the hyperlinks.
  • Why I Grew Skeptical of Transhumanism
  • Why I Grew Skeptical of Immortalism
  • Why I Grew Skeptical of Effective Altruism
  • Only Game in Town

 

Wonderland’s rabbit said it best: The hurrier I go, the behinder I get.

 

We approach 2016, and the more I see light, the more I see brilliance popping up everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. But why? you say.

Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil for the strongest hub of production of things that matter, the Bay Area, via Oxford’s FHI office; I now split my time between UC Berkeley and the CFAR/MIRI office. In the process I have navigated an ocean of information: read hundreds of books and papers, sat through thousands of classes, and become proficient in a handful of languages and a handful of intellectual disciplines. I’ve visited Olympus and met our living demigods in person as well.

Against the overwhelming forces of an extremely upbeat personality surfing a hyper base-level happiness, these three forces (approaching the center, learning voraciously, and meeting the so-called heroes) have brought me to my current state of pessimism.

I was a transhumanist, an immortalist, and an effective altruist.

 

Why I Grew Skeptical of Transhumanism

The transhumanist in me is skeptical that technology will develop fast enough to make improving the human condition worth betting on now; he sees most current technologies as fancy toys that don’t get us there. Our technologies can’t, and won’t for a while, lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge about the brain, relative to our goals for the brain, is at the level of physics knowledge of someone who has found out that spraying water on a sunny day produces a rainbow. It’s not even physics yet.

Believe me, I have read thousands of pages of papers on the most advanced topics in cognitive neuroscience; my advisor spent his entire career, from Harvard to tenure, doing neuroscience, and was the first person to implant non-human neurons that actually healed a brain to the point of recovering functionality. As Marvin Minsky, who invented the multi-agent computational theory of mind, told me: “I don’t recommend entering a field where every four years all knowledge is obsolete; they just don’t know it yet.”

 

Why I Grew Skeptical of Immortalism

The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and the chief scientists of anti-ageing research facilities worldwide. He has met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway. So although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder than the number of man-hours left to be invested in it can surmount, at least during my lifetime, or before the Intelligence Explosion.

Believe me, I was the first cryonicist among the 200 million people living in my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death and still share in our privilege of living, just in case some new insight comes along that changes the tide. But none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.

 

Why I Grew Skeptical of Effective Altruism

The Effective Altruist in me is skeptical too, although less so: I’m still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices, and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:

  1. The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders versus joiners, researchers versus executives, our institutions versus their institutions, cheap individuals versus expensive institutional salaries; it's gore all the way up and down.

  2. Reasoning by Analogy: Few EAs are able to do their intellectual due diligence, and fewer are actually doing it. I don’t blame them: the space of Crucial Considerations is not only very large but extremely uncomfortable to look at. Who wants to know that our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking everything through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.

  3. Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they deem more certain to be true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.

  4. The Size of the Problem: No matter if you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces) or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason to not want to face the problem, or to be in denial, is the problem’s enormity.

  5. The Complexity of The Solution: Let me spell this out: the nature of the solution is not simple in the least. It’s possible that we luck out and it turns out that the Orthogonality Thesis, the Doomsday Argument and Mind Crime are just philosophical curiosities with no practical bearing on our earthly engineering efforts, that the AGI or Emulation will by default fall into an attractor basin which implements some form of MaxiPok with details that it only grasps after CEV or the Crypto, and we will be okay. It is possible, and it is more likely than the scenario in which our efforts end up being the decisive factor. We need to focus our actions on the branches where they matter, though.

  6. The Nature of the Solution: So let’s sit down side by side and stare at the void together for a bit. The nature of the solution is taking a group of apes from everywhere around the world who just invented the internet, and getting them to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown (this is the goal of Convergence Analysis, by the way): find every single last one of them, to the point where the box is filled. Then, once we have all the Crucial Considerations available, develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation, in a physically sound and technically robust way (that’s if we don’t find a Crucial Consideration that, say, steers our course towards Mars instead). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, one that can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe’s history, and that will fall into the right attractor basin within mindspace. That’s if Superintelligences are even possible technically. Add to that that we, or it, have to guess correctly all the philosophical problems that are A) relevant and B) unsolvable within physics (if any) or by computers. And all of this has to happen while the most powerful corporations, states, armies and individuals attempt to seize control of the smart systems themselves, without being curtailed by the counter-incentive of not destroying the world, either because they don’t realize the danger, or because the first-mover advantage seems worth the risk, or because they are about to die anyway so there’s not much to lose.

  7. How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge in more dimensions than brain size and architecture. We’ve barely scratched the surface of a technical understanding of how to increase happiness, and the philosophical understanding is also in its first steps.

  8. Macrostrategy is Hard: A chess grandmaster usually takes many years to acquire sufficient strategic skill to command the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message or a change into the future. We are attempting to beam a complete value lock-in into the right basin, which is proportionally harder.

  9. Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, both because those other people were cooks, not chefs, and because sometimes you actually need to try a one-in-ten-thousand chance (a toy expected-value sketch of this appears just after this list). But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

  10. Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
    I have concluded that cause X is the most relevant
    Institution A is an EA organization fighting for cause X
    Therefore I donate to institution A to fight for cause X.
    To begin with, this is very expensive compared to donating to any of the three P’s: projects, people or prizes. Furthermore, the crucial moments to fund institutions are when they are about to die, when they are just starting, when they are building a type of momentum within a narrow window of opportunity where the derivative gains are particularly large, or when you have private information about their current value. That an institution agrees with you about a cause being important is far from sufficient to assess the expected value of your donation.

  11. Delusional Optimism: Everyone who, like past-me, moves through the world with delusional optimism will always have a blind spot in the feature of reality about which they are in denial. It is not a problem to have some individuals with a blind spot, as long as the rate doesn’t surpass some group sanity threshold; yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.

  12. Convergence of Opinions May Strengthen Separation Within EA: Thus far, the longer someone has been an EA, the more likely they are to transition to an opinion in the subsequent boxes of this flowchart, from whichever box they currently occupy. There are still people in all the opinion boxes, but the trend has been to move with that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, GWWC into poverty, 80k into career selection, etc., they become more congealed. People’s opinions are still changing, and some of the money follows, but institutions are crystallizing around particular opinions, and in the future they might prevent transition between opinion clusters and the free mobility of individuals, as national frontiers already do. Once institutions, which in theory are commanded by people who agree with institutional values, notice that their rate of loss towards the EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas and resources that has so far been a hallmark of Effective Altruism, and a reason many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim ‘EA’ for themselves.
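
A toy sketch of the expected-value point in item 9, in Python. Every number below is an assumption invented purely for illustration, not an estimate anyone in this post endorses; the only point is that a dismal base rate and a worthwhile attempt can coexist.

    # Minimal sketch of the expected-value arithmetic behind point 9.
    # All numbers are illustrative assumptions, not anyone's real estimates.

    def expected_value(p_success: float, value_if_success: float, cost: float) -> float:
        """Expected value of attempting a project: probability times payoff, minus the cost of trying."""
        return p_success * value_if_success - cost

    # Outside view: a "safe" project with a high base rate of success but a modest payoff.
    safe = expected_value(p_success=0.5, value_if_success=1_000, cost=100)

    # The "one in ten thousand" long shot: tiny probability, enormous payoff.
    long_shot = expected_value(p_success=1 / 10_000, value_if_success=100_000_000, cost=100)

    print(f"safe project EV: {safe:,.0f}")      # 400
    print(f"long-shot EV:    {long_shot:,.0f}")  # 9,900
    # The base rate alone ("almost everyone who tried this failed") says nothing about
    # whether the attempt is worth making; the payoff term matters just as much.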

 

Only Game in Town

 

The reasons above have transformed a pathological optimist into someone wary and skeptical about our future, and about the value of our plans to get there. And yet, I see no option other than to continue the battle. I wake up in the morning and consider my alternatives. Hedonism? Well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300,000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that can become extremely important and change the world to have the affordance to wander off indefinitely. I look at my high base-happiness and don’t feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards), and clearly the self of which I am made has strong altruistic urges anyway, so, at least above a threshold of happiness, it has reason to purchase the extremely good deals in the expected happiness of others that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice, and I feel the thrownness into this world as much as any Kierkegaard does. Power? Reading Nietzsche gives the fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to spending their power on pathetic signalling games and zero-sum disputes, or on coercing minds to act against their will. Nihilism and Moral Fictionalism, like Existentialism, all collapse into having a choice, and if I have a choice, my choice is always going to be the choice to, most of the time, care, try and do.

Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.

It is the only game in town.

74 comments

You sound unhappy. Do you still hold these conclusions when you are very happy?

diegocaleiro
You have correctly identified that I wrote this post while very unhappy. The comments, as you can see by their lighthearted tone, I wrote while pretty happy. Yes, I stand by those words even now (that I am happy).
[anonymous]

We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, because the other people were cooks, not chefs, and also because sometimes you actually need to try a one in ten thousand chance. But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

Are you saying don't think probabilistically here? I'd love a specific post on just your thoughts on this.

diegocaleiro
Yes I am.
Step 1: Learn Bayes.
Step 2: Learn reference classes.
Step 3: Read Zero to One.
Step 4: Read The Cook and the Chef.
Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically.
Step 6: Find the connection between that and reasoning from first principles, or the gears hypothesis, or whichever other term you have for when you use the inside view and actually think technically about a problem, from scratch, without looking at how anyone else did it.
Step 7: Talk to Michael Valentine about it, who has been reasoning about this recently and about how to impart it at CFAR workshops.
Step 8: Find someone who can give you a recording of Geoff Anders' presentation at EAGlobal.
Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world.
Good luck!
[anonymous]
Note that the billionaires disagree on this. Thiel says that people should think more like calculus and less like probability, while Musk (the inspiration for the cook and the chef) says that people think in certainties while they should think in probabilities.
diegocaleiro
Not my reading. My reading is that Musk thinks people should not consider the probability of succeeding as a spacecraft startup (0% historically), but instead should reason from first principles, such as asking what materials a rocket is made of, then building the costs from the ground up.
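
To make the contrast concrete, here is a minimal sketch of that "build the costs from the ground up" move next to the outside-view number. The component names and dollar figures are invented assumptions for illustration only; they are not SpaceX's or anyone's actual figures.

    # Sketch: reference-class number vs. a first-principles, bottom-up cost estimate.
    # All names and prices are made up for illustration.

    # Outside view: take the going market price (and the ~0% historical success rate) as given.
    incumbent_launch_price = 65_000_000

    # First principles: build the cost up from what the rocket is physically made of.
    bill_of_materials = {
        "structural_alloys": 2_000_000,
        "engines_and_turbomachinery": 8_000_000,
        "avionics_and_software": 1_500_000,
        "propellant": 300_000,
        "assembly_and_test_labor": 5_000_000,
    }
    first_principles_cost = sum(bill_of_materials.values())  # 16,800,000

    print(f"incumbent price:         ${incumbent_launch_price:,}")
    print(f"bottom-up cost estimate: ${first_principles_cost:,}")
    # If the bottom-up estimate sits far below the market price, the historical base rate
    # of failed rocket startups is not the decisive consideration.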
[anonymous]
First, I think we should separate two ideas: 1. creating a reference class, and 2. thinking in probabilities. "Thinking in probabilities" is a consistent talking point for Musk: every interview where he's asked how he's able to do what he does, he mentions this. Here's an example I found with a quick Google search: So that covers probability. In terms of reference classes, I think what Thiel and Musk are both saying is that previous startups are a really bad reference class for new startups. I don't know if that means they generally reject the idea of reference classes, but it does give me pause in using them to figure out the chances of my company succeeding based on other similar companies.
endoself
I model probabilistic thinking as something you build on top of all this. First you learn to model the world at all (your steps 3-8), then you learn the mathematical description of part of what your brain is doing when it does all this. There are many aspects of normative cognition that Bayes doesn't have anything to say about, but there are also places where you come to understand what your thinking is aiming at. It's a gears model of cognition rather than the object-level phenomenon. If you don't have gears models at all, then yes, it's just another way to spout nonsense. This isn't because it's useless, it's because people cargo-cult it. Why do people cargo-cult Bayesianism so much? It's not the only thing in the sequences. The first post, The Simple Truth, big parts of Mysterious Answers to Mysterious Questions, and basically all of Reductionism are about the gears-model skill. Even the name rationalism evokes Descartes and Leibniz, who were all about this skill. My own guess is that Eliezer argued more forcefully for Bayesianism than for gears models in the sequences because, of the two, it is the skill that came less naturally to him, and that stuck. What would cargo-cult gears models look like? Presumably, scientism, physics envy, building big complicated models with no grounding in reality. This too is a failure mode visible in our community.
UmamiSalami
So for us to understand what you're even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.
John_Maxwell
Diego points to a variety of resources that all make approximately the same point, which I'll attempt to summarize: If you apply probabilistic "outside view" reasoning to your projects and your career, in practice this means copying approaches that have worked well for other people. But if it's clear that an approach is working well, then others will be copying it too, and you won't outperform. So your only realistic shot at outperforming is to find a useful and underused "inside view" way of looking at things. (FYI, I've found that keeping a notebook has been very useful for generating & recording interesting new ideas. If you do it for long enough you can start to develop your own ontology for understanding areas you're interested in. Don't worry too much about your notebook's structure & organization: embrace that it will grow organically & unpredictably.)
The_Jaded_One
This is wrong. Human beings are not a pool of identical rational agents competing in the same game from the same starting point aiming for the same endpoint.
  • people make mistakes, systematically.
  • most people start with less IQ than you, dear reader. You have an unfair advantage, so go exploit it using perfectly standard methods like getting a technology job.
  • if you have particular tastes, ambitions or goals (you might not even know about them, some self exploration is required) then you may be aiming for a prize that few other people are trying to claim.
John_Maxwell
If someone took the time to analyze lots of historically important inventors, entrepreneurs, and thinkers, I doubt the important common factor would be that they made fewer mistakes than others. Yes, you can "outperform" without much difficulty if you consider getting a nice job to be "outperforming" or you change the goalposts so you're no longer trying to do something hard.
Vaniver
I think this depends on reference class and what one means by 'mistakes'. The richest financier is someone whose strategy is explicitly 'don't make mistakes.' (Really, it's "never lose money" plus the emotional willingness to do the right thing, even if it's boring instead of clever.) I think the heart of the disagreement here is the separation between things that are 'known to someone' and 'known to no one'--the strategies one needs to discover what other people have already found are often different from the strategies one needs to discover what no one knows yet, and both of them are paths to success of varying usefulness for various tasks.
John_Maxwell
Depends on the investment class. Even Charlie Munger (Warren Buffett's partner) says "If you took our top fifteen decisions out, we'd have a pretty average record." Yes, even if success in the domain is basically about avoiding mistakes, I imagine that if there are huge winners in the domain they got there by finding some new innovative way to get their rate of mistakes down.
Lumifer
Nope, finance doesn't work like that. The richest financier is one who (1) has excellent risk management; and (2) got lucky. Notably, risk management is not about avoiding risks (and so, possible mistakes). It's about managing risk -- acknowledging that mistakes will be made and making sure they won't kill you.
Vaniver
So, obviously 'never' is hyperbole on Buffett's part. I'll buy that value investing stopped working as well because of increased investor sophistication and a general increase in asset prices. As a somewhat related example, daily S&P 500 momentum investing worked up until 1980, and now you need to track more sophisticated momentum measurements. But to quote Cliff Asness (talking about momentum investing, not value investing):
The_Jaded_One
Getting a nice job with a stable relationship, raising children well and having a good circle of friends that you like, indulging your particular tastes is outperforming the average person. Perhaps what you're talking about is radical outperformance - "being Steve Jobs", changing the world etc. In my opinion seriously aiming for that kind of life is a massive mistake - there is no recipe for it, those who achieve it do so through extraordinary luck + skill + genetic advantages which cannot be reliably replicated by any method whatsoever.
John_Maxwell
There are lots of bits and pieces--e.g. the notes outlined above that two billionaires have signed on to. Since when is a high probability of failure by itself a good reason not to do anything? If you're a rational expected utility maximizer you do things according to their expected value, which means in some cases it makes sense to do things that initially seem impossible. If you want to wuss out on life and take the path of least resistance, avoid all the biggest and most interesting bosses in the game, and live a life that has little greater challenge or purpose--fine by me. But frankly, if that's the case I'll have to tap out of this conversation, since it's a bad use of my time and I don't really want to absorb the attitudes of people like you, who explicitly state that they're totally uninterested in accomplishing anything meaningful.
Lumifer
You can't reload.
UmamiSalami
Thanks. I will give some of those articles a look when I have the chance. However, it isn't true that every activity is competitive in nature. Many projects are cooperative, in which case it's not necessarily a problem if you and other people are taking similar approaches and doing them well. We also shouldn't overestimate the competition and assume that they are going to be applying probabilistic reasoning, when in reality we can still outperform by applying basic rules of rationality.
diegocaleiro
No, that's if you want to understand why a specific Lesswrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don't care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts, I provided them. But the reference class of Diego's thoughts contains more thoughts that are wrong than that are true. So on priors, you might want to ignore them :p
[anonymous]

although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder to surmount than the number of man-hours left to be invested in the problem, at least during my lifetime, or before the Intelligence Explosion.

So, after all this learning about all the niggling details that keep frustrating all these grand designs, you still think an intelligence explosion is so...

diegocaleiro
Not really. My understanding of AI is far from grandiose; I know less about it than about my fields (Philo, BioAnthro) - I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se; I don't code, and I only have a coarse-grained understanding of it. But in the little research and time I have had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robust evidence that would countenance the hypothesis of a fast takeoff. The fact that we have not fully conceptually disentangled the dimensions of which intelligence is composed is mildly embarrassing though, and it may be that AGI is a deus ex machina because, actually, more as Minsky or Goertzel say and less as MIRI or Lesswrong say, General Intelligence will turn out to be a plethora of abilities that don't have a single denominator, often superimposed in a robust way. But for now, nobody who is publishing seems to know for sure.
V_V
Beware the Dunning–Kruger effect. Looking at the big picture, you could also say that there is no convincing evidence for a cap on the lifespan of a biological organism. Heck, some trees have been alive for over 10,000 years! Yet, once you look at the nitty-gritty details of biomedical research, it becomes clear that even adding just a few decades to the human lifespan is a very hard problem and researchers still largely don't know how to solve it. It's the same for AGI. Maybe truly super-human AGI is physically impossible due to complexity reasons, but even if it is possible, developing it is a very hard problem and researchers still largely don't know how to solve it.
diegocaleiro
I think you mistook my claim for sarcasm. I actually think I don't know much about AI (not nearly enough to make a robust assessment).

Our knowledge about the brain, given our goals about the brain, is at the level of knowledge of physics of someone who found out that spraying water on a sunny day causes the rainbow. It’s not even physics yet.

I'd tend to disagree with this; we have a pretty good idea of how some areas of the brain work (V1 cortex), we are making good progress in understanding how other parts work (cortical microcircuits, etc.) and we haven't seen anything to indicate that other areas of the brain work using extremely far-fetched and alien principles to what we already ... (read more)

diegocaleiro
We have unconfirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: they just have to ignore specific synapses, the multiplicity of synaptic connections, etc.; if you sum those things up and look at the model, I would say it ignores about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane. The only reason we understand V1 is because it is a retinotopic inverted map that has been through very few non-linear transformations - same for the tonotopic auditory areas - and as soon as V4, we are already completely lost (for those who don't know, the brain has between 100 and 500 areas depending on how you count, and we have a medium guess of a simplified model that applies well to two of them, and medium to some 10-25). And even if you could say which functions V4 participates more in, this would not tell you how it does it.
passive_fist
All true points, but consider your V4 example. We have software that is gradually approaching mammalian-level ability for visual information processing (not human-level just yet, but our visual cortex is larger than most animals' entire cortices, so that's not surprising). So, as far as building AI is concerned, so what if we don't understand V4 yet, if we can produce software that is that good at image processing?
diegocaleiro
I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate multimodal aspects of these modules into a coherent being that thinks it has a self, goals, identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (by the way, correctly so; it needed the latest improvements, and clearly - if they actually needed it - AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Shhh, don't tell anyone, here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit

I think one possible key to regaining your motivation would be to apply the counter-objections of point 9 to the overall objections of the entire post.

Our technologies can’t and won’t for a while lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective.

The technology does exist. In hypnosis, we do party tricks including the effects of the weirdly shaped molecules. Think about this redirect. We do lucid dreaming. We do all the cool stuff from eastern meditations and some that pr...

Lumifer
That sounds like a wildly overreaching claim. We can do that now / in the near future? I don't think so. /blinks. What do you expect installing a Death Star power core in the root chakra to do? (will it let you shoot death rays out of your ass?)
ChristianKl
At a local NLP seminar everything was fun while they reinduced drug experiences. They went on building up and increasing the intensity. After a while the intensity reached the point where a person dropped unconscious and stayed that way for 10 minutes, and the fun was over. To me it seems possible to raise the intensity of experiences via hypnosis very far.
Jurily
Getting people drunk/high is one of the classics of stage hypnosis. What steps have you taken to observe reality before reaching that conclusion? Establish and maintain a higher baseline of subjective well-being. People already have concepts like "chi" or "mental energy"; a generator produces more energy; and the "root chakra" is "where energy enters the body". I know that last one because I decided it sounds good. These concepts are "real" in the same sense as a programming language. There is no inheritance in the transistors, but you can pretend as long as the compiler does the right thing with your code. Apparently the human brain is intelligent enough that we can simply make shit up.
Lumifer
Ah, well, good luck with that.
gjm

This is entirely peripheral to any point you're actually making, but: In what possible sense is it true that Marvin Minsky "invented the computer"?

diegocaleiro
Very sorry about that, I thought he held the patent for some aspect of computers that had become widespread, in the same way Wozniak holds the patent for personal computers. This was incorrect. I'll fix it.
gjm
What patent for personal computers does Wozniak hold?
gjm
Possibly answering my own question, I see that Woz is sole inventor on a patent with the impressively broad-sounding title "Microcomputer for use with video display", US4136359, and that this fact is remarked on in various places on the web. But if you look at the patent's actual claims, it's not so general after all -- they're all about details of controlling the timing of the video signals. [EDITED to fix an autocorrect typo.]
diegocaleiro
US Patent No. 4,136,359: "Microcomputer for use with video display"[38], for which he was inducted into the National Inventors Hall of Fame.
US Patent No. 4,210,959: "Controller for magnetic disc, recorder, or the like"[39]
US Patent No. 4,217,604: "Apparatus for digitally controlling PAL color display"[40]
US Patent No. 4,278,972: "Digitally-controlled color signal generation means for use with display"[41]
gjm
Yeah, as I said above US4136359 is doubtless an excellent patent but it really isn't in any useful sense a patent on the personal computer. It's a patent on a way of getting better raster graphics out of a microcomputer connected to a television.

he met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway

Y Combinator now considers it to be a great time to invest in biotech startups. Sam Altman says that the industry changed in a way that makes biotech startups possible.

The situation surely isn't perfect, but it's better than it used to be.

In the section on EA, you include discussion of AGI, existential risk, and the existential risk of an AGI, which seem to me different subjects. Can you clarify what you see as the relation between these things and EA?

My picture of EA is distributing anti-malarial bed nets, or trying to improve clean water supplies. While some in the EA movement may judge existential risk or AGI to be the area they should direct their vocation towards (whether because of their rating of the risk itself or their own comparative advantage), they are not listed among, for example, Givewell's recommended charities.

diegocaleiro
EA is an intensional movement. http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/ I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago to explain why a Philosopher Anthropologist was auditing his course: That's how I see it anyway. Most of the arguments for it are in "Superintelligence"; if you disagree with that, then you probably do disagree with me.
Richard_Kennaway
Not particularly disagreeing, I just found it odd in comparison to other EA writings. Thanks for the clarification.
Raemon
It's actually fairly common in EA circles by now to acknowledge AI as an issue. The disagreements tend to be more about whether there are useful things to be done about it, or whether there are specific nonprofits worth supporting. (Givewell has a blogpost in that direction)

Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.

Ok, so some of the things that you value are hard to work towards, but as you say, working towards those things is still worth your while. When I've been in similar situations, pretending to be a new homunculus has helped, and I'm sure that you've figured out other brilliant coping strategies on your own.

I see that you've become less interested in transhumanism, though, and your post doesn't give me a ...

Raemon
He says at the end he's still a transhumanist. I think the point was that, in practice, it seemed difficult to work directly towards transhumanism/immortalism (and perhaps less likely that such a thing will be achieved in our lifetimes, although I'm less sure about that) (Diego, curious if my model of you is accurate here)
diegocaleiro
I am particularly skeptical of transhumanism when it is described as changing the human condition, and the human condition is taken to be the mental condition of humans as seen from the human point of view. We can make the rainbow, but we can't do physics yet. We can glimpse where minds can go, but we have no idea how to precisely engineer them to get there. We also know that happiness seems tightly connected to this area called the NAcc of the brain, but evolution doesn't want you to hack happiness, so it put the damn NAcc right in the medial, slightly frontal area of the brain, deep inside, where fMRI is really bad and where you can't insert electrodes correctly. Also, evolution made sure that each person's NAcc develops epigenetically into different target areas, making it very, very hard to tamper with it to make you smile. And boy, do I want to make you smile.

You didn't explain anything about the evolution of your thoughts related to cryonics/brain preservation in particular. Why is that?

diegocaleiro
Basically because I never cared much for cryonics, even with the movie about me being made about it. Trailer: https://www.youtube.com/watch?v=w-7KAOOvhAk For me cryonics is like soap bubbles and contact improv. I like it, but you don't need to waste your time knowing about it. But since you asked: I've tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him. And me, for that matter. All my donors stopped earning to give, so I have no donor cashflow now; I might have to "retire" soon - the Brazilian economy collapsed and they may cut my below-living-cost scholarship. EDIT: Yes, my scholarship was just suspended :( So I won't just be losing money, I'll be basically out of it, unfortunately. I also remind people that donating to individuals is way cheaper than donating to institutions - yes, I think so even now that I'm launching another institution. The truth doesn't change, even if it becomes disadvantageous to me.
diegocaleiro
Wow, that's so cool! My message was censored and altered. Lesswrong is growing an intelligentsia of its own. (To be fair to the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting, it just made it more formal. Maybe I should write books here, and they'll sound as formal as the ones I read!) Also fascinating that it was near instantaneous.
gjm
What happened? That sounds very weird.
diegocaleiro
Oh, so boring..... It was actually me myself screwing up a link I think :( Skill: being censored by people who hate censorship. Status: not yet accomplished.

The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and with the chief scientists of anti-aging research facilities worldwide, he met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway, so although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it,

I'm not sure whether "money directly invested in anti-aging"... (read more)

Vaniver
They've had a very bad six weeks.
ChristianKl
There's a lot of money at stake in fighting them. She lobbies for Medicare paying companies less money for tests. There are companies losing billions of dollars, so they hire PR to fight Theranos. That's not an unexpected development. You don't change the status quo without making enemies.
[anonymous]
They're honestly quite possibly just bullshitting about their grand plans. Theranos, that is. I wouldn't be surprised if they had some interesting ideas that they are utterly unprepared to follow through on and that they are massively overselling the importance and quality of.
ChristianKl
Why would a company that doesn't trust its tech to work explicitly lobby the FDA to test its products, to make sure that the marketplace trusts them? I don't think there's a good reason that blood testing roughly didn't change in price in the last 10 years while DNA sequencing got 10,000 times as cheap. We got cheaper DNA sequencing because multiple companies focused on radically improving sequencing technology. Having a direct-to-consumer marketplace where people know the price of testing before they buy it is likely very useful for producing the price competition that leads to the development of cheaper testing. We don't need the price improvement of sequencing for blood tests, but having Moore's law for it, or half of Moore's law, would be a game changer. Do you think there are basic reasons why blood testing shouldn't be able to radically improve in price over time?

Is it reasonable to say that what really matters is whether there's a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, and fast takeoff means transhumanism and immortality are probably conditional on and subsequent to threading the narrow eye of the FAI needle.

diegocaleiro
See the link with a flowchart in point 12.
Pentashagon
I looked at the flowchart and saw the divergence between the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs focusing heavily on how to build FAI on the fast-takeoff path, but then I saw your name in the fast-takeoff bucket for conveying concepts to AI and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?
diegocaleiro
Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if a fast takeoff soon is a low probability. I personally work on inserting concepts and moral concepts into AGI because for almost anything else I could do, there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.

I'd recheck your links to the EA forum; this one was a LW link, for example.

diegocaleiro
The text is also posted at the EA forum here; there, all the links work.