There are two kinds of beliefs: those that can be affirmed individually (true independently of what others do) and those that depend on others acting as if they believe the same thing. The latter are, in other words, agreements. One should be careful not to conflate the two.

What you describe as "neutrality" seems to me to be a particular way of framing institutional forbearance and similar terms of cooperation in the face of the possibility of unrestrained competition and mutual destruction. When agreements collapse, it is not because these terms were unworkable (except in the trivial sense that, well, they weren't invulnerable to gaming and so on) but because cooperation between humans can always break down.

@AnthonyC I may be mistaken, but I took @M. Y. Zuo to be offering a reductio ad absurdum in response to your comment about not being indifferent between the two ways of dying. The 'which is a worse way to die' debate doesn't respond to what I wrote. I said:

With respect to the survival prospects for the average human, this [whether or not the dying occurs by AGI] seems to me to be a minor detail.

I did not say that no one should care about the difference. 

But the two risks are not in competition; they are complementary. If your concern about misalignment is based on caring about the continuation of the human species, and you don't actually care how many humans other humans would kill in a successful alignment(-as-defined-here) scenario, a credible humans-kill-most-humans risk is still really helpful to your cause, because you can ally yourself with the many rational humans who don't want to be killed either way, and together prevent both outcomes by killing AI in its cradle.

You have a later response to some clarifying comments from me, so this may be moot, but I want to call out that my emphasis is on the behavior of human agents who are empowered by automation that may fall well short of AGI. A "pivotal act" is a very germane idea, but rather than the first AGI eliminating would-be AGI competitors, here the act is carried out by humans taking out their human rivals.

It is pivotal because once the target population size has been achieved, competition ends, and further development of the AI technology can be halted as unnecessarily risky.

If an unaligned AI by itself can do near-world-ending damage, an identically powerful AI that is instead alignable to a specific person can do the same damage.

If you mean that as the simplified version of my claim, I don't agree that it is equivalent.

Your starting point, with a powerful AI that can do damage by itself, is wrong. My starting point is groups of people whom we would not currently consider sources of risk, who become very dangerous as novel weaponry, along with changes in the relations of economic production, unlocks the means and the motive to kill very large numbers of people.

And (as I've tried to clarify in my other responses) the comparison of this scenario to misaligned-AI cases is not the point; the point is the threat from both sides of the alignment question.

I agree, and I attempted to emphasize the winner-take-all aspect of AI in my original post.

The intended emphasis isn't on which of the two outcomes is preferable, or how to comparatively allocate resources to prevent them. It's on the fact that there is no difference between alignment and misalignment with respect to the survival expectations of the average person.

The title was intended as an ironic allusion to a slogan used by the National Rifle Association in the U.S. to dismiss calls for tighter restrictions on gun ownership. I expected this allusion to be easily recognizable, but I see now that it was probably a mistake.

An argument for danger of human-directed misuse doesn't work as an argument against dangers of AI-directed agentic activity.

I agree. But I was not trying to argue against the dangers of AI-directed agentic activity. The thesis is not that "alignment risk" is overblown, nor is the comparison of the risks the point; it's that those risks accumulate such that the technology is guaranteed to be lethal for the average person. This is significant because the risk of misalignment is typically accepted on the expectation that the rewards will be broadly shared. "You or your children are likely to be killed by this technology, whether it works as designed or not" is a very different story from "there is a chance this will go badly for everyone, but if it doesn't, it will be really great for everyone."

I'm surprised by the lack of follow-up to this post and the accompanying thread, which took place in the immediate aftermath of the October 7th massacre. A lot has happened since then -- new data against which the original thinking could be evaluated. Time has also provided an opportunity to self-educate about the conflict, which a few people admitted to not knowing much about. Given the human misery that has only worsened since the OP started asking questions, I would think a follow-up would be a worthwhile exercise. @Annapurna?

Ever since first hearing the music of the Disney movie "Encanto," I've been sneering at the lyric "stars don't shine they burn / and constellations shift" (because, no, of course constellations don't shift) without ever really stopping to think about it. Caught in my epistemic arrogance again!
