All of bdelloidea's Comments + Replies

bdelloidea

Starting your introduction with

A few million years ago, something very strange happened. 

seems likely to turn away roughly 40% of the US population, along with any leaders who need the goodwill of that 40% to keep their positions.

The point I understand you to be making (tripling the brain size of a chimp gives more than triple the impact) could be easily made without this sentence to introduce it. Given the importance of the US in addressing the existential threat of AI, and assuming one of the goals of this article is to be a general call to action, ...

Haiku
I used to be a creationist, and I have put some thought into this stumbling block. I came to the conclusion that it isn't worth leaving out analogies to evolution, because the style of argument that would work best for most creationists is completely different to begin with. Creationism is correlated with religious conservatism, and most religious conservatives outright deny that human extinction is a possibility. The Compendium isn't meant for that audience, because it explicitly presents a worldview, and religious conservatives tend to strongly resist shifts to their worldviews or the adoption of new worldviews (more so than others already do). I think it is best left to other orgs to make arguments about AI Risk that are specifically friendly to religious conservatism. (This isn't entirely hypothetical. PauseAI US has recently begun to make inroads with religious organizations.)
habryka

I don't think this kind of surface-level naive popularity optimization gives rise to a good comms strategy. Evolution is true, and mostly we should focus on making arguments based on true premises. 

This definitely helps clarify, thank you very much. I suspect it will take me some time to fully understand your ideas, but my current best stab at a (probably overcompressed) summary would be:

Our usual state of mind consists of experiencing a profusion of thoughts and inner sensations. These thoughts interact with each other, and generate further thoughts. We may experience a causal connection between thoughts, leading to the experience of “trains of thought”. This experience of causal connection may or may not accurately reflect the causal process giving

...
Valentine
That seems pretty darn good to me!

Yes, this feels much clearer now, thank you.

Valentine
Quite welcome. Glad that helped. :-)

Really enjoyed this article! Your comment here was also helpful, but left me with a couple questions.

The concept of goals gets pretty slippery as you do this because being takes precedence over doing.

How do you see motivation working once you start abandoning the concept of goals?

What if something you "terminally" desire in the world isn't a fit for reality? Would you rather discover that and grieve, or not look and keep trying?

Could you give a specific example of a terminal value failing to fit reality, and what abandoning it/changing it to fit reality would look like?

Valentine
Reply part 2: I can answer what I think is the spirit of this question. I've been playing along with the "terminal value" frame, but honestly I think it confuses things. Rather than trying to stick to the formal idea of a terminal value in humans, I'll just point at what I'm talking about.

One example: deconversion. If you believe in God and love Him and this brings you tremendous meaning and orientation in your life, dare you take seriously the arguments that He doesn't exist? Dare you even look? This isn't just a matter of flipping a mental "god_exists" Boolean variable from "true" to "false"; for many people this can be on the level of losing God's love and approval, and like the very force of gravity is no longer His will but is instead some kind of dead monstrosity. That's something you risk if you're more interested in truth than in being close to Him. What in you would need to shift so that your inner answer is "Yes, yes, a thousand times yes, let me see the truth"?

Another example: breaking up with a friend. Maybe you've known someone since childhood… but some of this Drama Triangle stuff starts to click and you see that actually everything about your connection is based on (say) them Rescuing you and you playing Victim. When you try to talk to them about this, they brush it off, maybe even playing the Victim card themselves ("I just care about you! Don't you appreciate all that I do for you?"). You could just keep playing along… or you could notice that you're actually a "no" for playing this dynamic with anyone anymore, even your old friend. But maybe there's nothing deeper than the Drama dynamic, and maybe they won't be available for building something more. So what do you do? What resource in you do you call upon in order to choose to prefer truth even to this long-standing friendship? Are you willing to grieve, and have your old friend feel hurt at you (the shift to Persecutor), and practice standing your ground (i.e., deepening your devotion to trut
Valentine
I'm glad to hear it. :-)

It's not really that one abandons the concept of goals. It's that doing serves being, so goals arise and fade within a larger context.

What's your motivation for continuing to live? If presented with two buttons, one of which will let you leave the button situation & continue your life while the other one has you die right on the spot, I imagine you have little difficulty choosing the first one. You might be able to justify your choice afterwards as "survival instinct" or "net positive expected global utility from your remaining life" or whatever… but I'm guessing the clear knowing of the choice comes before all that. Your choice probably wouldn't change whatsoever if you spent a while meditating and calming your reactions, for instance. (Said differently: the clarity arises from the Void.)

The word "motivation" has a common linguistic root with "motor". It's that which causes movement. So the "motivation" of a stone rolling downhill is gravity. The motivation of a high school student attending college is (often) a whole social atmosphere that acts something like a gravitational field (what I've occasionally heard termed "an incentive landscape" in rationalist circles). There's something very mechanical about the whole thing.

But when we talk about "being motivated" or epic feats like "shut up and do the impossible", particularly when there's any hint of "should" attached to them (like "I should shut up & do the impossible"), there's usually an implication of free will. As though beyond all causes is some kind of power of choice. It's obviously a bit batty when said that way, but we mostly agree not to pay attention to that.

…with the result that we have bizarre statements like "We should end racism." What exactly is that as a choice? It's not at all of the same type as "We should turn off the stove." In practice it's an application of a social force meant to shift the incentive landscape (usually via Drama Triangle dynamics, I'll

I'm having difficulty understanding exactly what an answer of "such a probability does not exist" means in this context. Assuming we both were subjected to the same experiment, but I then assigned a 50% probability to being the Original, how would our future behaviour differ? In what concrete scenario (other than answering questions about the probability we were the Original) would you predict us to act differently as a result of this specific difference in belief?

dadadarren
Our behavior should be different in many cases. However, based on my past experience, people who accept self-locating probabilities would often find various explanations so that our decisions would still be the same.

For example, in "Repeating the Experiment" the relative frequency of Me being the Original won't converge on any particular value. If we bet on that, I will say there is no strategy to maximize My personal gain. (There is a strategy to maximize the combined gain of all copies if everyone abides by it, as reflected by the probability of a randomly sampled copy being the Original being 1/2.) On the other hand, you would say that if I repeat the experiment long enough, the relative frequency of me being the Original would converge on 50%, and the best strategy to maximize my personal gain is to bet accordingly.

The problem with this example is that personal gain can only be verified by the first-person perspective of the subject. A verifiable example would be this: change the original experiment slightly. The mad scientist would only perform the cloning if a fair coin toss landed on Tails. Then, after waking up, how should you guess the probability of Heads? What's the probability of Heads if you learn you are the Original? (Essentially the Sleeping Beauty problem.)

If you endorse self-locating probability, then there are two options. First, the thirder: after waking up, the probability of "I am the Original" is 2/3, and the probability of Heads is 1/3. After learning I am the Original, the probability of Heads updates to 1/2. The other option is to say that after waking, the probability of Heads is 1/2 and the probability of "I am the Original" is 3/4. After learning I am the Original, the probability of Heads needs to be updated. (How to do this update is very problematic, but let's skip it for now. The main point is that the probability of Heads would have to be smaller than 1/2. And this is a very weak camp compared to the thirders.) Because I reject self-locating probability, I would say the probabilit
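To make the two sets of numbers above concrete, here is a minimal simulation sketch (my own illustration, not anything proposed in the thread) of the coin-toss variant: on Heads only the Original wakes; on Tails both the Original and a Clone wake. Tallying awakenings uniformly across repeated runs reproduces the thirder figures.

```python
import random

# Sketch of the coin-toss cloning experiment described above.
# Heads: no cloning, only the Original wakes.
# Tails: cloning happens, so both the Original and the Clone wake.
random.seed(0)
awakenings = []  # one (coin_is_heads, is_original) record per awakening
for _ in range(100_000):
    heads = random.random() < 0.5
    awakenings.append((heads, True))       # the Original wakes either way
    if not heads:
        awakenings.append((heads, False))  # on Tails, the Clone also wakes

n = len(awakenings)
p_heads = sum(h for h, _ in awakenings) / n
p_original = sum(o for _, o in awakenings) / n
heads_among_originals = [h for h, o in awakenings if o]
p_heads_given_original = sum(heads_among_originals) / len(heads_among_originals)

print(f"P(Heads | an awakening)       ~ {p_heads:.3f}")                 # ~ 1/3
print(f"P(Original | an awakening)    ~ {p_original:.3f}")              # ~ 2/3
print(f"P(Heads | Original awakening) ~ {p_heads_given_original:.3f}")  # ~ 1/2
```

The frequencies come out at the thirder values (1/3, 2/3, and 1/2) precisely because each awakening is counted as one equally weighted sample. The objection in the comment above is that no such uniform sampling over "me" is available from the first-person perspective, so these long-run frequencies need not answer the first-person question.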