Comments

What do you think about building legal/technological infrastructure to enable a prompt pause, should it seem necessary?

  1. I agree that the benefits could potentially go to everyone. The point is that, as the person pursuing AGI, you are making the choice for everyone else.
  2. The asymmetry is that if you do something that creates risk for everyone else, I believe that does single you out as an aggressor. Conversely, enforcing norms that prevent such risky behavior seems justified. The fact that people are mortal by default is tragic, but doesn't have much bearing here. (You'd still be free to pursue life-extension technology in other ways, perhaps including limited AI tools.)
  3. Ideally, of course, there'd be some sort of democratic process that lets people in aggregate make informed (!) choices. In the real world, it's unclear what a good solution would be. What we have right now is the big labs creating facts on the ground that society has trouble catching up with, which I think many people are reasonably uncomfortable with.

I think the perspective you're missing regarding 2. is that by building AGI one is taking the chance of non-consensually killing vast numbers of people and their children, in exchange for some chance of improving one's own longevity.

Even if one thinks it's a better deal for them, a key point is that you are making the decision for them by unilaterally building AGI. In that sense it is quite reasonable to see working toward that outcome as an "evil" action.

Somewhat of a nitpick, but the relevant number would be p(doom | strong AGI is built) (perhaps contrasted with p(utopia | strong AGI is built)), not overall p(doom).
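
To spell out the nitpick, here is a minimal sketch using the law of total probability (where "AGI" abbreviates "strong AGI is built"):

```latex
% Overall p(doom) mixes the conditional risk with the probability
% that strong AGI is built at all, so the two can differ a lot:
\[
  P(\mathrm{doom}) = P(\mathrm{doom} \mid \mathrm{AGI})\,P(\mathrm{AGI})
                   + P(\mathrm{doom} \mid \lnot\mathrm{AGI})\,P(\lnot\mathrm{AGI})
\]
```

A low overall P(doom) is compatible with a high P(doom | AGI) whenever P(AGI) is small, which is why the conditional number is the relevant one when evaluating the decision to build.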

Amalthea · 11d

I was downvoting this particular post because I perceived it as mostly ideological and making few arguments, only stating strongly that government action will be bad. I found the author's replies in the comments much more nuanced, and I would not have downvoted if I'd perceived the original post to be of the same quality.

Basically, I think whether one believes alignment is hard is much more of a crux than whether one is utilitarian.

Personally, I don't find Pope & Belrose very convincing, although I do commend them for making a reasonable effort. But if I did believe that AI is likely to go well, I'd probably also be all for it. I just don't see how this is related to utilitarianism (except maybe for a very small subset of people in EA).

"One reason that goes overlooked is that most human beings are not utilitarians" I think this point is just straightforwardly wrong. Even from a purely selfish perspective, it's reasonable to want to stop AI.

The main reason humanity is not going to stop seems to be coordination problems, or something close to learned helplessness in these kinds of competitive dynamics.

Amalthea · 13d

"Unambiguously evil" seems unnecessarily strong. Something like "almost certainly misguided" might be more appropriate? (Still strong, but arguably defensible.)

Amalthea · 1mo

Do you have an example for where better conversations are happening?

You're conflating "have important consequences" with "can be used as weapons in discourse".
