Psy-Kosh comments on Advice for AI makers - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
And now for a truly horrible thought:
I wonder to what extent we've been "saved" so far by anthropics. Okay, that's probably not the dominant effect. I mean, yeah, it's quite clear that AI is, as you note, REALLY hard.
But still, I can't help but wonder just how much or how little of that effect is there.
If you think anthropics has saved us from AI many times, you ought to believe we will likely die soon, because anthropics doesn't constrain the future, only the past. Each passing year without catastrophe should weaken your faith in the anthropic explanation.
The first sentence seems obviously true to me, the second probably false.
My reasoning: to make observations and update on them, I must continue to exist. Hence I expect to make the same observations & updates whether or not the anthropic explanation is true (because I won't exist to observe and update on AI extinction if it occurs), so observing a "passing year without catastrophe" actually has a likelihood ratio of one, and is not Bayesian evidence for or against the anthropic explanation.
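The likelihood-ratio argument above can be sketched numerically. In this illustration the hypotheses and probabilities are my own hypothetical choices, not anything from the thread: under `H_safe` the annual chance of AI catastrophe is 1%, under `H_danger` (the anthropic explanation) it is 50%.

```python
from fractions import Fraction

# Hypothetical annual catastrophe probabilities under each hypothesis.
p_cat = {"H_safe": Fraction(1, 100), "H_danger": Fraction(1, 2)}
prior = {"H_safe": Fraction(1, 2), "H_danger": Fraction(1, 2)}
years = 10

def posterior(likelihood):
    """Bayes' rule: normalize prior * likelihood over the hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: unnorm[h] / total for h in unnorm}

# Naive update: treat each catastrophe-free year as ordinary evidence.
naive_lik = {h: (1 - p_cat[h]) ** years for h in p_cat}
print(posterior(naive_lik))      # strongly favors H_safe

# Anthropic update: an observer only updates if alive, so
# P(observe survival | alive) = 1 under both hypotheses and the
# likelihood ratio is one, leaving the posterior equal to the prior.
anthropic_lik = {h: Fraction(1) for h in p_cat}
print(posterior(anthropic_lik))  # identical to the prior
```

The two print statements are the two positions in the thread: treating survival as ordinary evidence pushes hard toward the safe hypothesis, while conditioning on one's own existence yields no update at all.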
Wouldn't the anthropic argument apply just as much in the future as it does now? The world not being destroyed is the only observable result.
The future hasn't happened yet.
Right. My point was that in the future you are still going to say "wow, the world hasn't been destroyed yet" even if in 99% of alternate realities it was. cousin_it said:
Which shouldn't be true at all.
If you cannot observe a catastrophe happen, then not observing a catastrophe is not evidence for any hypothesis.
"Not observing a catastrophe" != "observing a non-catastrophe". If I'm playing Russian roulette and I hear a click and survive, I see good reason to take that as extremely strong evidence that there was no bullet in the chamber.
But doesn't the anthropic argument still apply? Worlds where you survive playing Russian roulette are going to be ones where there wasn't a bullet in the chamber. You should expect to hear a click when you pull the trigger.
As it stands, I expect to die (p = 1/6) if I play Russian roulette. I don't hear a click if I'm dead.
That's the point. You can't observe anything if you are dead, therefore any observations you make are conditional on you being alive.
Those universes where you die still exist, even if you don't observe them. If you carry your logic to its conclusion, there would be no risk to playing Russian roulette, which is absurd.
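The roulette disagreement can be made concrete with a small simulation. This is my own sketch, assuming the standard setup of a six-chamber revolver with one bullet and a single trigger pull:

```python
import random

random.seed(0)
trials = 100_000
clicks = 0  # games where the player hears a click and survives
for _ in range(trials):
    chamber = random.randrange(6)  # bullet position, uniform over 6 chambers
    if chamber != 0:               # chamber 0 fires; empty chamber -> click
        clicks += 1

# Ex-ante (unconditional) view: the risk is real, and roughly
# 1/6 of players die, so the survival frequency is about 5/6.
print(clicks / trials)  # ~0.833

# Conditional-on-survival view: every surviving player heard a click,
# so among observers the "click" frequency is 1 by construction.
# The dead branches still occurred; survivors just can't report them.
```

The gap between the ~5/6 unconditional survival rate and the trivial 100% rate among survivors is exactly the gap between the two commenters: conditioning on being alive hides the risk from the observer, but it does not make the risk zero.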