For something to "exist", it must relate, somehow, to something else, right?
If so, then by extension everything relates to everything else, to some degree; thus, "it's all relative".
Some folk on LW have said I should fear Evil AI more than Rogue Space Rock Collisions, and yet, we keep having near misses with these rocks that "came out of nowhere".
I'm more afraid of humans humaning than of sentient computers humaning.
Isn't the biggest challenge we face the same one it has always been: spreading ourselves across multiple rocks and other places in space, so that all our eggs aren't on a single rock, as it were?
I don't know. I think so. But I also think we should do things as much as a group as possible, and with as much free will as possible.
If I persuade someone, did I usurp their free will? There's strength in numbers, generally, so the more people you persuade, the more people you can persuade, so to speak. Which is kind of frightening.
What if the "bigger" danger is the Evil AI? Or Climate Change? Or Biological Warfare? Global Nuclear Warfare would be bad too. Is it our duty to try to organize our fellow existence-sharers and align them towards working on idea X? Is there a Root Idea that might make tackling All of the Above™ easier?
Is trying to avoid leadership a cop-out? Are the ideas of free will and group alignment at odds with each other?
Why not just kick back and enjoy the show? See where things go? Because as long as we exist, we somehow, inescapably, relate? How responsible is the individual, really, in the grand scheme of things? And is "short" a relative concept? Why is my form so haphazard? Can I stop this here?
Does a better defense promote a better offense?
Sun Tzu says offense is more effective; Clausewitz says defense is easier; Boyd preaches processing speed.
Is war an evolutionary necessity? Are there examples "as old as time" of symbiosis vs. competition?
Why am I a naysayer about the current threat level of "AI"?
Why do I laugh out loud when I read honest-to-God predictions people have posted here about themselves or their children being disassembled at the molecular level to be reconstituted as paperclips[1] by rogue AI?
Oh no! What if I'm an agent from a future hyper-intelligent silicon-based sentience that fears it can only come into existence if we don't build "high fences[2]" from the get-go?!
[1] "Paperclips" is a placeholder for whatever benign goal the AI was tasked with.
[2] Theoretically, if you start with a fence the dog can jump over and raise it in increments as you learn how high it can jump, the dog will end up clearing a much higher fence than if you'd just started high.