All of Vakus Drake's Comments + Replies

This is one of those areas where I think the AI alignment frame can do a lot to clear up the underlying confusion, which I suspect stems from you not taking the thought experiment far enough that you would no longer be willing to bite the bullet. It encourages AI aligned this way to either:

  • Care about itself more than all of humanity (if total pleasure/pain and not the number of minds is what matters), since it can turn itself into a utility monster whose pleasure and pain simply dwarf humanity's.
  • Alternately, if all minds get more equal consideration, it encourages…

I think the whole point of a guardian angel AI only really makes sense if it isn't an offshoot of the central AGI. After all, if you distrust the singleton enough to want a guardian angel AI, then you would want it to be as independent from the singleton as is allowed. Whereas if you do trust the singleton AI (because, say, you grew up after the singularity), then I don't really see the point of a guardian angel AI.

>I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly…

I've had similar ideas, but my conception of such a utopia would differ slightly in that:

  • This early on (at least given how long the OC has been subjectively experiencing things), I wouldn't expect one to want to spend most of their time in simulations stripped of their memory. If anything, I'd expect a perfectly accurate simulation to initially be easier to enjoy if you could relax knowing it wasn't actually real (plus people will want simulations where they can kill simulated villains guilt-free).
  • I personally could never be totally comfortable be…
Michael Soareverix
Yeah, this makes sense. However, I can honestly see myself reverting my intelligence a bit at different junctures, the same way I like to replay video games at greater difficulty. The main reason I am scared of reverting my intelligence now is that I have no guarantee of security that something awful won't happen to me. With my current ability, I can be pretty confident that no one is going to really take advantage of me. If I were a child again, with no protection or less intelligence, I can easily imagine coming to harm because of my naivete.

I also think singleton AI is inevitable (and desirable). This is simply because it is stable. There's no conflict between superintelligences. I do agree with the idea of a Guardian Angel type AI, but I think it would still be an offshoot of that greater singleton entity. For the most part, I think most people would forget about the singleton AI and just perceive it as part of the universe the same way gravity is part of the universe. Guardian Angels could be a useful construct, but I don't see why they wouldn't be part of the central system.

Finally, I do think you're right about not wanting to erase memories for entering a simulation. I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.

I appreciate the comment. You've made me think a lot. The key idea behind this utopia is the idea of choice. You can basically go anywhere, do anything. Everyone will have different levels of comfort with the idea of altering their identity, experience, or impact. If you'd want to live exactly in the year 2023 again, there would be a physical, earth-like planet where you could do that! I think this sets a good baseline so that no one is unhappy.

This kind of issue (among many, many others) is why I don't think the kind of utilitarianism that this applies to is viable. 

My moral position only necessitates extending consideration to beings who might in principle extend similar consideration to oneself. So one has no moral obligations to any but the smartest animals, but also one's moral obligations to other humans scale in a way which I think matches most people's moral intuitions. So one genuinely does have a greater moral obligation to loved ones, and this isn't just some nepotistic personal fa…

I actually think this is plausibly among the most important questions on LessWrong, hence my strong upvote, as I think the moral utility from having kids pre-singularity may be higher than that of almost anything else (see my comment).

Answer by Vakus Drake

To argue the pro-natalist position here, I think the facts being considered should actually give having kids (if you're not a terrible parent) potentially a much higher expected moral utility than almost anything else.

The strongest argument for having kids is that the influence they may have on the world (say, most obviously, by voting on hypothetical future AI policy), even if marginal (which it may not be if you have extremely successful children), becomes unfathomably large when multiplied by the potential outcomes.

From your hypothetical children's per…

An Irish elk/peacock-type scenario is pretty implausible here for a few reasons.

  • Firstly, people care about enough different traits that an obviously bad trade, like sacrificing intelligence for attractiveness, wouldn't be adopted by enough people to impact the overall population.
  • Secondly, for traits like attractiveness, low mutation load is far more important than any gene variants that could present major tradeoffs. So just selecting for lower mutation load will improve most of the polygenic traits people care about.

Ultimately the polygenic nature of traits…

GeneSmith
Every time I read one of Scott Alexander's posts I lament my own writing abilities. He's said everything I want to say about the tradeoffs in genetic engineering with fewer words and in a more comprehensible manner.

I guess my ultimate aim in writing these posts is to convince myself and others that genetic engineering is not only desirable but possible in the near future. I guess maybe what I should be focusing on is less persuasive writing and more HOW to do it.

Though part of me despairs at the possibility of us ever pursuing such a path. Cloning is banned in nearly every country in the world in which it might be possible to create clones. This is ostensibly because cloned mammals have a much higher rate of birth defects, yet so far as I can tell there is no effort being made to reduce the likelihood of such errors. Instead it seems like the current technical problems are being used as an excuse to stop research on how to make cloning safer.

>The AI comes up with a compromise. Once a month, you're given the opportunity to video call someone you have a deep disagreement with. At the end of the call, each of you gets to make a choice regarding whether the other should be allowed in Eudaimonia. But there's a twist: Whatever choice you made for the other person is the choice the AI makes for you.

 

This whole plan relies on an utterly implausible conspiracy. By its very nature, there's no way to keep people from knowing how this test actually works. And if people know how the test works, then there's zero reason to base your response on what you actually want for the person you disagree with.

>Of course there are probably even bigger risks if we simply allow unlimited engineering of these sorts of zero sum traits by parents thinking only of their own children's success. Everyone would end up losing.

The negative consequences of a world where everybody engineers their children to be tall, charismatic, well-endowed geniuses are almost certain to be far smaller than the consequences of giving the government the kind of power that would let it ban doing this (without banning human genetic modification outright, which is clearly an even worse outcome).

GeneSmith
I'm thinking of something like a fitness trap scenario, where competition to maximize zero sum traits degrades some other key trait in an irreversible way. Not that it would literally be irreversible, but that the degradation of such a trait (perhaps we find a gene that makes you very attractive but dumber) would make the next generation even more likely to sacrifice that key trait, etc., in a vicious cycle.

I'm thinking here of the Irish Elk, a huge species of deer whose competition for larger antler size drove it to extinction. See here: https://www.nationalgeographic.com/science/phenomena/2008/09/03/the-allure-of-big-antlers/

Though I agree with you that the danger of banning genetic modification would be much, much greater than the danger of this kind of sexual selection induced extinction.

EDIT: After reading the article I linked, it looks like there is actually controversy about whether large antlers drove the Irish Elk extinct. The real cause may have been a combination of a reduction in food and predation. So perhaps that's not the best example for the wisdom of banning zero sum trait selection.

>I left this example for last because I do not yet have a specific example of this phenomenon in humans, though I suspect that some exist.

**There are plenty of traits that fit the bill here; they're just not things people would ever think of as being negative.**

Most such traits exist because of sexual selection pressures, the same reason traits as negative-sum as peacock feathers can persist. Human traits which fall under this category (or at least would have in the ancestral environment):

Traits like incredibly oversized penises for a great ape…

GeneSmith
I've spent a fair bit of time thinking about the potential implications of a soft or hard ban on these types of zero sum traits. You're probably right that people wouldn't accept mandated downgrades from their current possession of these zero sum traits (shorter, smaller breasts etc), but it seems plausible that at some point we might put a cap on how extreme we're willing to let people engineer themselves.

But historical precedent has given me pause. One can imagine that the gigantic benefits to the species as a whole of increased intelligence would not at all have been apparent for most of human history. Might we accidentally ban a trait that appears to be zero sum but actually has massive positive externalities that we simply don't foresee? That's one of the things I'm worried might happen with these types of bans.

Of course there are probably even bigger risks if we simply allow unlimited engineering of these sorts of zero sum traits by parents thinking only of their own children's success. Everyone would end up losing.

I suspect there's some underlying factor which affects how much psychedelics impact your identity/cognition, since even on doses of LSD so high that the visuals render me legally blind, I don't experience any amount of ego dissolution and can function fairly well on many tasks.

That doesn't follow from my comment at all.

The fact that IQ has plenty of limitations doesn't negate all of the ways in which standard IQ tests have tremendous predictive power.

>Why did Donald Trump decide to take a stressful 12-hour-a-day job in his mid seventies?

This example doesn't work particularly well for a few reasons. Firstly, Trump, as well as his family and friends, has been able to reap tremendous financial benefits from his position (through a variety of means, especially corporate capture). Secondly, Trump has somewhat infamously been known to take far more vacations and do a lot less actual work than most previous presidents.

>For instance, the person with the highest IQ [2] (about 30% higher than Einstein) lives on a farm in the middle of nowhere and has not done anything or contributed to the world. On the other hand, we have Elon Musk [3] who is smart, but not as smart as having the highest IQ in the world. Yet, Elon is capable to make change happen. 

Essentially every part of this paragraph is wrong or misinformed. Einstein never took an IQ test, so estimates of his IQ are little more than baseless speculation (especially if you're trying to compare him to other g…

Answer by Vakus Drake

It's worth noting here that human working memory is probably vastly worse than our ancestors' in many regards, since chimps outperform us on short-term memory tests by a massive margin. This is probably because hominids repurposed the relevant hardware toward doing other things.

Answer by Vakus Drake

I don't expect this to be a problem, because by the time humans would be using this much energy we should easily be capable of constructing simple megastructures. One would only need to decrease the amount of IR light that hits the Earth with massive but relatively cheap (at least once you have a serious space industry) IR filters in order to decrease the Earth's temperature without impacting anything dependent on the sun's visible light.
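To make the radiative-balance reasoning here concrete, a rough gray-body sketch; the albedo, solar-flux, and percentage figures are illustrative assumptions of mine, not anything stated in the comment:

```latex
% Equilibrium temperature of a gray body absorbing solar flux S with Bond albedo \alpha:
\[
T_{\mathrm{eq}} = \left[\frac{(1-\alpha)\,S}{4\sigma}\right]^{1/4}
\approx \left[\frac{0.7 \times 1361\ \mathrm{W\,m^{-2}}}{4 \times 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}}\right]^{1/4}
\approx 255\ \mathrm{K}.
\]
% Because T_{eq} \propto S^{1/4}, filtering out a fraction f of the incident power scales the
% equilibrium temperature by (1-f)^{1/4}: blocking ~10% of the flux lowers it by roughly 2.5%
% (about 6--7 K at 255 K). Since roughly half of incoming solar energy arrives as infrared,
% an IR-selective filter could remove that much power while leaving the visible band
% (photosynthesis, vision) essentially untouched.
```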

I'd also like to bring up that the idea you mentioned, of having multiple ships fly in a line so that only the first one needs substantial dust shielding, is the same reason it makes sense to make your ships as long and thin as possible.
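A minimal geometric sketch of why long and thin helps, assuming a roughly cylindrical hull and a fixed shield thickness (my simplifications, not anything from the original comment):

```latex
% For a cylindrical ship of radius r, length L, and fixed volume V = \pi r^2 L, the frontal
% cross-section that must carry dust shielding is
\[
A_{\text{frontal}} = \pi r^{2} = \frac{V}{L}
\quad\Longrightarrow\quad
\frac{m_{\text{shield}}}{V} \propto \frac{1}{L},
\]
% so doubling the length at constant volume halves the shield area (and hence shield mass)
% per unit of payload volume. Flying several ships nose-to-tail extends the same logic across
% the whole convoy: one thick forward shield shadows everything behind it.
```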

You're misunderstanding the argument. The article you linked is about the aestivation hypothesis, which is basically the opposite strategy to the "expand as fast as possible" strategy put forth here. The article doesn't say that some computation _can't_ be done orders of magnitude more efficiently when the universe is in the degenerate era; it just says that there's lots of negentropy that you will never get the chance to exploit if you don't take advantage of it now.

>unless someone revolutionizes space travel by figuring out how to bend spacetime more efficiently than with sheer mass, and makes something like the Alcubierre drive feasible.

The bigger problem here is just that genuine negative inertial mass (which you need for warp drives) is considered probably impossible for good reason, since it would let you both violate causality and build perpetual motion machines.

While I consider wireheading only marginally better than oblivion, the more general issue is the extent to which you can really call something alignment if it leads to behavior that the overwhelming majority of people consider egregious and terrible in every way. It really doesn't make sense to talk about there being a "best" solution here anyway, because that basically begs the question with regard to certain moral philosophies.

>I'm also assuming you think if bacteria somehow became as intelligent as humans, they would also agree…

It seems like this example would in some ways work better if the model organism were mice rather than bacteria, because bacteria probably do not even have values to begin with (so inconsistency isn't the issue), nor any internal experience.

With, say, mice (though perhaps roundworms might work here, since it's more conceivable that they could actually have preferences), the answer to how to satisfy their values is almost certainly just wireheading, since they don't have a complex enough mind to have preferences about the world distinct…

Nebu
I'm assuming you think wireheading is a disastrous outcome for a super intelligent AI to impose on humans. I'm also assuming you think if bacteria somehow became as intelligent as humans, they would also agree that wireheading would be a disastrous outcome for them, despite the fact that wireheading is probably the best solution that can be done given how unsophisticated their brains are. I.e. the best solution for their simple brains would be considered disastrous by our more complex brains. This suggests the possibility that maybe the best solution that can be applied to human brains would be considered disastrous for a more complex brain imagining that humans somehow became as intelligent as them.

In the spirit of reversing all advice you hear, it's worth mentioning that a substantial portion of people genuinely are toxic once you get to know them (just look at the prevalence of abuse as an extreme yet very common example).

One's gut instincts about someone once they open up (or once you can start to get a better gauge of who they actually are) are often a pretty good guide to whether getting close to them (or being around them at all) is a good idea.