Software engineering, parenting, cognition, meditation, other
LinkedIn, Facebook, Admonymous (anonymous feedback)
That counterargument is unfortunately always available for all scenarios, including non-AI cases: "Just don't do the bad thing." I'm not sure what specifically in this scenario you think makes it more salient. Is it "The Military" as a common adversary? If I think about a scenario where AI is used to optimize or "control" the energy grid or supply chain logistics, would that be different?
Not sure about India, but disagree for many African countries. See my comment above.
My wife is from Kenya (as a single mom and mid-career government employee, she could afford 24/7 household help last year), and even the poor there have much better child care support than even the middle class in e.g. Germany. That support can take the form of communal or familial help, and the quality may be lower, but it is definitely the case that caring for (especially small) children is in some sense easier or more "normal."
Would be interesting to ask a Jeopardy egghead for comparison.
Cheap labor, or rather its absence, may also be part of the reason for declining birthrates: in Kenya, most people can afford cheap child care, and raising kids with full-time house help is easy. Except for school fees, but that is a different aspect.
Here is at least one scenario that should pass the mom-test, even though it is just boring old cold war with AI:
The Automated Cold War
Imagine the world’s great powers (America, China, Russia, and/or Europe), always nervous about each other, always worried about being caught off guard. They used to rely on humans to make the big decisions about war and peace, and sometimes those humans came terrifyingly close to pushing the nuclear button by accident.
Today, governments start automating these decisions with AI. AIs are faster and can sift through oceans of data. AI companies and the military push for adoption and argue that “we can’t fall behind.” So one by one, nations roll out “AI decision-support systems” that track everything and recommend what to do in real time. First, the AIs suggest small things: move some submarines here, increase surveillance there. Over time, leaders start to rely on them more and more, especially when the advice turns out to be tactically smart. Soon, the AIs are recommending military deployments, cyber responses, even levels of nuclear alert.
At first, it works pretty well. Crises that would have taken weeks to analyze are now handled in hours. But these AIs aren’t programmed for caution. They’re programmed to “win.”
So what happens when an American AI notices a Chinese military exercise and interprets it as the prelude to an invasion? It recommends raising the nuclear alert level. The Chinese AI, watching America’s moves, reads this as a sign that the U.S. is preparing to strike. It, too, recommends raising its alert. Each local AI is acting logically, but together they’re creating a spiral of tension that’s invisible to most citizens.
Human leaders still technically have the final say, but the AI’s recommendation lands on their desk stamped “99% confidence.” Imagine being the president at 3 a.m. when your advisors say, “Sir, the AI says China is about to launch. We have seven minutes to respond.” People stop second-guessing the AI because, frankly, they don’t have time. Decisions that used to take months of negotiation now happen in seconds. For the public, life goes on as usual. But under the surface, the world is walking on a hair-trigger.
Then comes one mistake or data error. The AI, following its programming, interprets it as imminent nuclear war and pushes the strongest possible recommendation: fire now, before they fire at you. The president hesitates. But their rival’s AI has already given the same instruction, and missiles are on the move. There’s no chance to undo it, no second thoughts. Cities vanish. Power grids fail. Survivors die in the aftermath, and society collapses.
It’s not that anyone wanted this outcome. It’s not that the AIs were “evil.” It’s just that the world delegated its most dangerous decisions to machines optimized for speed and winning, not for patience or human judgment.
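The core dynamic is a positive feedback loop. Here is a toy simulation (purely illustrative; the numbers and the recommend_alert rule are made up, not part of the scenario) of two systems that each treat the other’s alert level as evidence of hostile intent and never de-escalate:

```python
# Toy model of the two-AI escalation spiral. All numbers are invented
# for illustration; alert levels run from 0 (peacetime) to 1 ("fire now").

def recommend_alert(own_alert: float, observed_rival_alert: float) -> float:
    """An AI 'optimized to win' never de-escalates and reads the rival's
    alert level as evidence of hostile intent (gain above 1)."""
    return min(1.0, max(own_alert, 0.2 + 1.1 * observed_rival_alert))

alert_a, alert_b = 0.1, 0.1  # near-peacetime starting levels
for step in range(10):
    alert_a = recommend_alert(alert_a, alert_b)
    alert_b = recommend_alert(alert_b, alert_a)
    print(f"step {step}: A={alert_a:.2f}, B={alert_b:.2f}")
    if alert_a >= 1.0 and alert_b >= 1.0:
        print("Both AIs recommend maximum alert -- without any real attack.")
        break
```

The point is not the specific numbers: any never-de-escalating update rule with gain above 1 climbs to maximum alert in a handful of rounds, with no real attack anywhere in the loop.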
Paleolithic canoeing records to forecast when humans will reach the moon
Not disagreeing with your main point, but Robin Hanson has tried this.
What is the "I" in your reply "I have the same problem" referring to? What entity is doing the finding in "I can't find anything that..."? The first one can be answered with "the physical human entity currently speaking and called Dawn." But the second one is trickier. At least it is not clear which entity is doing the finding.
Describes me decently well:
I'd agree with a description of being "risk averse," but "anxious" doesn't feel fitting. I have relatively high openness to experience. For example, on the last item, I didn't travel, estimating it to provide relatively little value of information per unit of effort (or per negative stimulus?). Friends pointed out that I might be very wrong in my evaluation if I never traveled even once. I accepted the challenge and visited India (for a friend's wedding; long story).
I guess people can be seen as imperfect Thompson samplers with different priors and weights.
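For readers unfamiliar with the term, here is a minimal sketch of Thompson sampling on Bernoulli bandits (illustrative only; the "different priors and weights" would correspond to each person starting from different Beta priors and updating with different weights):

```python
import random

# Minimal Thompson sampling for Bernoulli bandits.
# Each arm keeps a Beta(successes + 1, failures + 1) posterior;
# the prior (here the uniform Beta(1, 1)) is where individual
# differences -- "different priors and weights" -- would enter.

true_rates = [0.3, 0.5, 0.7]  # unknown to the agent
successes = [0, 0, 0]
failures = [0, 0, 0]

for _ in range(1000):
    # Sample a plausible success rate for each arm from its posterior...
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    arm = samples.index(max(samples))  # ...and act greedily on the samples.
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(successes, failures)  # most pulls should concentrate on the best arm
```

An "imperfect" sampler in this analogy would be one whose posterior updates are biased or whose exploration is mis-calibrated, which still explores, just not optimally.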
OK. That seems to require AI hacking out of a box, which is unbelievable as per rule 4 or 8. Or do more mundane cases like AI doing economic transactions or research count?