
> Most of that comes from me sharing the same so-called pessimistic (I would say realistic) expectations as some LWers (e.g. Yudkowsky's AGI Ruin: A List of Lethalities) that the default outcome of AI progress is unaligned AGI -> unaligned ASI -> extinction, that we're fully on track for that scenario, and that it's very hard to imagine how we'd get off that track.


Ok, but I don’t see those LWers also saying >99%, so what do you know that they don’t which allows you to justifiably hold that kind of confidence?

> That's a disbelief in superintelligence.

For what it’s worth, after rereading my own comment I can see how you might think that. With that said, I do think superintelligence is overwhelmingly likely to be a thing.

I think your defense of the >99% thing is in your first comment, where you provided a list of things that make doom “overdetermined”, meaning you believe that any one of those things is sufficient on its own to ensure doom (which seems nowhere near obviously true to me?).

Ruby says you make a good case, but considering what you’re trying to prove (i.e. that near-term “technological extinction” is our nigh-inescapable destiny), I don’t think it’s an especially sufficient case, nor is it treading any new ground. Like yeah, the chances don’t look good, and it would be a good case (as Ruby says) if you were just arguing for a saner type of pessimism, but to say it’s overdetermined to the point of a >99% chance that not even an extra 50 years could move just seems crazy to me, whether you feel like defending it or not.

As far as the policy thing goes, I don’t really know what the weakest thing we could do that would still avert an apocalypse would be. Something I’d like to see, though, would be some kind of coordination on setting standards for testing, or minimum amounts of safety research, with compliance reviewed by a board, and both legal and financial penalties administered in case of violations.

Probably underwhelming to you, but then concrete policy isn’t something I think about a ton, and I think we’ve already established that my views are less extreme than yours. And even if none of my ideas were remotely feasible, that still wouldn’t get me up to >99%. Something that would get me there would be actually seeing the cloud of poison-spewing death drones (or whatever) flying towards me. Heck, even if I had a crystal ball right now and saw exactly that, I still wouldn’t see having previously held a >99% credence as justifiable.


Am I just misunderstanding you here?

Do you really think p(everyone dies) is >99%?