Artaxerxes


But ultimately, for the parts that really matter here, this is a matter of explaining, not of defeating

Of course, defeating people who are mistakenly doing the wrong thing could also work, no? Even if we grant that people doing the wrong thing are merely making a mistake by their own lights, it might be practically much more feasible to divert them away from doing it, or otherwise prevent them from doing it, than to rely on successfully convincing them not to.

Not all people are going to be equally amenable to explanation. It's not obvious to me at least that we should limit ourselves to that tool in the toolbox as a rule, even under an assumption where everyone chasing bad outcomes is simply mistaken/confused.

But I'm pretty sure nobody in charge is on purpose trying to kill everyone; they're just on accident functionally trying to kill everyone.

I'm less sure about this. I've met plenty of human extinctionists. You could argue that they're just making a mistake, that it's just an accident. But I do think it is meaningful that there are people who are willing to profess that they want humanity to go extinct and who take actions they think nudge us in that direction, and other people who don't do those things. The distinction is a meaningful one, even under a model where such people are fundamentally confused and would pursue better things if they were somehow less confused.

What kinds of reactions to and thoughts about the post did you have that you got a lot out of observing?

On the other hand, the potential resource imbalance could be ridiculously high, particularly if a rogue AI is caught early in its plot, with all the world's militaries combined against them while they still have to rely on humans for electricity and physical computing servers. It’s somewhat hard to outthink a missile headed for your server farm at 800 km/h. ... I hope this little experiment at least explains why I don’t think the victory of brain over brawn is “obvious”. Intelligence counts for a lot, but it ain’t everything.

While this is a true and important thing to realise, I don't think of it as the kind of information that does much to comfort me with regards to AI risk. Yes, if we catch a misaligned AI sufficiently early, while it is still below whatever threshold of combined intelligence and resources is needed to kill us, then there is a good chance we will choose to prevent it from doing so. But this could happen thousands of times and it would still feel rather beside the point, because it only takes one case where the AI isn't below that threshold for it to kill us all.

If we can identify even roughly where various thresholds are, and find some equivalent of leaving the AI with a king and three pawns, where we have a ~100% chance of stopping it, then sure, that information could be useful, and perhaps we could coordinate around ensuring that no AI that would kill us all if it got more material ever gets more than that. But even after clearing the technical challenge of locating such thresholds with much certainty in such a complex world, the coordination challenge would remain: actually getting everyone to stick to them despite the incentives to make more useful AI by giving it more capability and resources.

Still worthwhile research to do of course, even if it ends up being the kind of thing that only buys some time.

So you are effectively a revolutionary.

I'm not sure about this label; how government and societal structures will react to the eventual development of life extension technology remains to be seen, so revolutionary action may not be necessary. But regardless of which label you pick, it's true that I would prefer not to be killed merely so others can reproduce. I'm more indifferent about the specifics of how that should be achieved than you seem to imagine - there is a wide range of possible societies in which I am allowed to survive, not just variations on those you described.

I think that the next best thing you could do with the resources used to run me, if you were to liquidate me, would very likely be of less moral value than running me, at least by my lights, if not by others'.

The decision is between using those resources to support you vs using those resources to support someone else's child.

That's an example of something the resources could go towards, under some value systems, sure. Different value systems would suggest that different entities or purposes would make best moral use of those resources, of course.

To try to make things clear: yes, what I said is perfectly compatible with what you said. Your reply to this point feels like you're trying to tell me something you think I'm not aware of, but the point you're replying to already encompasses the example you gave - "someone else's child" is potentially a candidate for "the next best thing you could do with the resources to run me" under some value systems.

I don't think you have engaged with my core point so I'll just state it again in a different way: continuous economic growth can support some mix of both reproduction and immortality, but at some point in the not distant future ease/speed of reproduction may outstrip economic growth, at which point there is a fundamental inescapable choice that societies must make between rentier immortality and full reproduction rights.

I think you may be confusing me for arguing for reproduction over immortality, or arguing against rentier existence - I am not. Instead I'm arguing simply that you haven't yet acknowledged the fundamental tradeoff and its consequences.

I thought I made myself very clear, but if you want I can try to say it again differently. I simply choose myself and my values over values that aren't mine.

The tradeoff between reproduction and immortality is only relevant if reproduction has some kind of benefit - if it doesn't, then you're trading off a good against something that has no value. For some people, with different values, the tradeoff is real and the choice might be difficult. But for me, not so much.

As for the consequences, sacrificing immortality for reproduction means I die, which is itself the thing I'm trying to avoid. Sacrificing reproduction for immortality, on the other hand, seems to get me the thing I care about. Judged on the consequences, the choice is fairly clear.

Even on a societal level, I simply wish not to be killed, including for the purpose of allowing for the existence of other entities that I value less than my own existence, and whose values are not mine. I merely don't want the choice to be made for me in my own case, and if that can be guaranteed, I am more than fine with others being allowed to make their own choices for themselves too.

Say you asked me anyway what I would prefer for the rest of society. What I might advocate for others would depend heavily on individual factors. Maybe I would care about things like how much a particular existing person shares my values, compared to how much a new person would. Eventually, perhaps, I would be happy with the makeup of the society I'm in and prefer that no more reproduction take place. But really it's only an interesting question insofar as it's instrumentally relevant to much more important concerns, and it doesn't seem likely that I will be in a privileged position to affect such decisions in any case.

Of course I have a moral opportunity cost. However, I believe that this opportunity cost is low, or at least it seems that way to me. I think that the next best thing you could do with the resources used to run me, if you were to liquidate me, would very likely be of less moral value than running me, at least by my lights, if not by others'.

The question of what to do about scarcity of resources then seems like a potentially very scary one, for exactly the reasons you bring up - I don't think, for example, that a political zeitgeist that guarantees my death would do a great job of maximizing what I believe to be valuable.

In the long term the evolution of a civilization does seem to benefit from turnover - ie fresh minds being born - which due to the simple and completely unavoidable physics of energy costs necessarily implies indefinite economic growth or that some other minds must sleep.

I will say that I am skeptical that what "benefit" is capturing here is what I think we should really care about. Perhaps some amount of turnover will help us compete successfully with alien civilisations that we run across - I can understand that, though I hope it isn't necessary. But absent competitive pressures like this, I think it's okay to take a stand for your own life and values over those of newer, different minds with new, different values. Their values are not necessarily mine, and we should be careful not to sacrifice our own values for some nebulous "benefit" that may never come to be.

Of course, if it is your preference, if sleeping or dying so that some new minds can be born is genuinely a truthful pursuit of your own values, then I can understand why you might choose to sacrifice yourself voluntarily. But I think it is a decision people should take very carefully, and I certainly don't wish for the civilisation I live in to make the choice for me and sacrifice me for such reasons.

The "10 years at most" part of the prediction is still open, to be fair.

While this seems to me to be true, as an entity that is myself not maximally competitive by various metrics, I see it more as an issue to overcome or sidestep somehow, in order to enjoy the relative slack that I would prefer. It would seem distastefully Molochian to me if someone were to suggest that I and people like me should be retired/killed in order to use the resources to power some more "efficient" entity, by whatever metrics this efficiency is calculated.

To me it seems likely that pursuing economic efficiencies of this kind could easily wipe out what I personally care about, at the very least. I see Hanson's em worlds, for example, as probably being quite hellish as a future, or, if we're luckier, something closer to a "Disneyland with no Children" style scenario.

I strongly hope that my values and people who share my values aren't outcompeted in this way in the future, as I want to be able to have nice things and enjoy my life. As we may yet succeed in extending the Dream Time, I would urge people to recognize that we still have the power to do so and preserve much of what we care about, and not be too eager to race to the bottom and sacrifice everything we know and love.

You also appeal to just open-ended uncertainty

I think it would be more accurate to say that I'm simply acknowledging the sheer complexity of the world and the massive ramifications that such a large change would have. Hypothesizing about a few possible downstream effects of something like life extension on something as causally distant from it as AI risk is all well and good, but I think you would need to put a lot of time and effort into it to be at all confident about things like the direction of the overall net effect.

I would go as far as to say that the implementation details of how we get life extension could themselves change the sign of the impact on AI risk - there are enough different possible scenarios for how it could go, each amplifying different components of its impact on AI risk, to produce different overall net effects.

What are some additional concrete scenarios where longevity research makes things better or worse? 

So first, you didn't respond to the example I gave regarding preventing the waste of human capital (people with experience/education/knowledge/expertise dying from aging-related disease), and the additional slack from the extra productive capacity in the broader economy that is then able to go into AI capabilities research.

Here's another one. Let's say medicine and healthcare become much smaller fields after the advent of widely available regenerative therapies that prevent the diseases of old age. In this world people only need to see a medical professional when they face injury or the increasingly rare infection by a communicable disease. Demand for medical professionals largely disappears, and the best and brightest (medical programs often have the highest, most competitive entry requirements) who would have gone into medicine are routed elsewhere, including into AI, accelerating capabilities and shortening overall timelines.

An assumption that much of this might hinge on: I expect differential technological development between capabilities and safety to favour capabilities pretty heavily in circumstances where additional resources are made available for both. This isn't necessarily going to be the case, of course - the resources could in theory be routed exclusively towards safety - but I just don't expect most worlds to go that way, or even for the share allocated to safety to be high enough, often enough, that the additional resources have positive expected value. But even something as basic as this is subject to a lot of uncertainty.
