I used ChatGPT, where I have the $200 a month subscription, and Gemini, where I have the (I think) $20 a month one. The errors of the two models are surprisingly uncorrelated, so it's beneficial to get advice from both.
Interesting. I'm a PhD economist and I have heard that example many, many times.
Thanks for the compliment. Sorry, but I'm horrible at computer formatting, don't know how to do this, and it would probably take me 10 times longer to figure that out than it would a typical person.
I teach a course at Smith College called the economics of future technology, in which I go over reasons to be pessimistic about AI. Students don't ask me how I stay sane, but why I don't devote myself to just having fun. My best response is that for a guy my age with my level of wealth, giving in to hedonism means going to Thailand for sex and drugs, an outcome my students (who are mostly women) find "icky".
I agree that the probability that any given message is received at the right time by a civilization that can both decode it and benefit from it is extremely low, but the upside is enormous and the cost of broadcasting is tiny, so a simple expected value calculation may still favor sending many such messages. If this is a simulation, the relevant probabilities may shift because the designers may care about game balance rather than our naive astrophysical prior beliefs. The persistent strangeness of the Fermi paradox should also make us cautious about assigning extremely small probabilities to any particular resolution. Anthropic reasoning should push us toward thinking that the situation humanity is in is more common than we might otherwise expect. Finally, if we are going to send any deliberate interstellar signal at all, then there is a strong argument that it should be the kind of warning this post proposes.
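To make the expected value point concrete, here is a toy calculation with made-up numbers chosen purely for illustration (none of these figures are estimates of the real quantities):

```python
# Toy expected-value sketch with illustrative, made-up numbers.
p_useful = 1e-9           # assumed probability one message reaches a civilization that can decode and benefit from it
value_if_useful = 1e13    # assumed payoff of a successful warning, in arbitrary utility units
cost_per_message = 1e3    # assumed cost of one broadcast, in the same units

expected_value = p_useful * value_if_useful - cost_per_message
print(expected_value)     # 9000.0: positive despite the tiny probability, because the payoff dwarfs the cost
```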
The message we send travels at the speed of light. If the AI has to send ships to conquer, those ships probably have to travel slower than the speed of light.
It could be a lot of time. The Andromeda galaxy is 2.5 million light years from Earth. Say an AI takes over next year and sends a virus to a civilization in that galaxy that would successfully take over if humans hadn't first issued a warning. Because of the warning, the Earth paperclip maximizer has to send a ship to the Andromeda civilization to take over, and say the ship travels at 90% of the speed of light. That gives the Andromeda civilization roughly 280,000 years between when they get humanity's warning message and when the paperclip maximizer's ship arrives. During that time the Andromeda civilization will hopefully upgrade its defenses to be strong enough to resist the ship, and then thank humanity by avenging us if the paperclip maximizer has exterminated humanity.
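For anyone checking the arithmetic, here is a quick back-of-the-envelope sketch of where the roughly 280,000-year figure comes from (the only inputs are the 2.5 million light year distance and the 90%-of-light-speed ship):

```python
# Head start the warning gives: ship travel time (at 0.9c) minus message travel time (at c).
distance_ly = 2_500_000                   # Earth-Andromeda distance in light years
ship_speed_fraction_of_c = 0.9            # assumed ship speed as a fraction of light speed

message_years = distance_ly               # a light-speed message takes one year per light year
ship_years = distance_ly / ship_speed_fraction_of_c
head_start_years = ship_years - message_years
print(round(head_start_years))            # 277778, i.e. roughly 280,000 years
```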
In a big enough universe, "you" are being tortured somewhere, so the goal is to reduce the fraction of your copies that are being tortured. Pulping a brain might increase that fraction.
That is a reasonable point about extinction risk motivating some people on climate change. But Republicans detest the Extinction Rebellion movement, and current environmental activism seems to anti-motivate them to act on climate change; given their control of the US government and likely short AI time horizons, influencing them is a top priority.
Agreed, the article would have been stronger if it included successful defenses.