I know this post will seem very insensitive, so I understand if it gets downvoted (though I would also say that's the very reason sympathy-exploitation tactics work), but I would like to posit a 3rd fork to the "How to Interpret This" section: That Annie suffers from a combination of narcissistic personality disorder and false memory creation in service of the envy that disorder spawns. If someone attempted to fabricate a story that was both maximally sympathy-inducing and reputation-threatening for the target, I don't think you could do much better than t...
I agree; I'm reminded of the quote about history being the search for better problems. The search for meaning in such a utopian world (from our perspective) thrills me, especially when I think about all the suffering that exists in the world today. The change may be chaotic & uncomfortable, but if I consider my personal emotions about the topic, it would be more frightening for the world to remain the same.
I should have been more precise. I'm talking about the kind of organizational capabilities required to physically ensure that no AI unauthorized by a central authority can be created. Whether aligned AGI exists (and presumably, in this case, is loyal to said authority over other factions of society that may become dissatisfied) doesn't need to factor into the conversation much.
That may well be the price of survival; nonetheless, I felt I needed to point out the very likely price of going down that route. Whether that price is worth paying to reduce x-risk from p(x...
This might sound either flippant or incendiary, but I mean it sincerely: wouldn't creating an enforcement regime powerful enough to permanently and reliably guarantee no AGI development require the society implementing that regime to be far more stable over future history than any state has thus far been, and, more importantly, introduce incredible risk of creating societies that most liberal democracies would find sub-optimal (to put it mildly), which are then locked in even without AGI due to the aforementioned hyper-stability?
This plan seems likely to sacrifice most future value itself, unless the decision-making humans wielding the enforcement regime's power act purely altruistically.
"Normally when Cruise cars get stuck, they ask for help from HQ, and operators there give the vehicles advice or escape routes. Sometimes that fails or they can’t resolve a problem, and they send a human driver to rescue the vehicle. According to data released by Cruise last week, that happens about an average of once/day though they claim it has been getting better."
From the Forbes write-up of GM Cruise's debacle this weekend. I think this should update people somewhat downward on FSD % complete. I think commenters here are being too optimistic about curr...
That is one example, but wouldn't we typically assume there is some worst example of judicial malpractice at any given time, even in a healthy democracy? If we begin to see a wave of openly partisan right- or left-wing judgements, that would be cause for concern, particularly if they overwhelm the Supreme Court's ability to overrule them. The recent dueling rulings over mifepristone were an example of this (both the original ruling and the reactive ruling), but it is again a single example so far.
I actually think the more likely scenario than a fascistic b...
When you say "force demand to spread out more", what policies do you propose, and how confident are you that this is both easier to accomplish than the YIMBY solution and leads to better outcomes?
My default (weak) assumption is that a policy requiring more explicit force is more likely to produce unintended negative consequences as well as greater harm if unpopular. So a ban on A has a higher bar to clear for me to be on board than a subsidy of B over A. My initial reaction to the sentence "force demand to spread out more" is both worry at how heavy-handed...
Feeling unsafe is probably not a free action, though; as far as we can tell, cortisol has a deleterious effect on both physical health & mental ability over time, and the effect becomes more pronounced with continuous exposure. So the cost of feeling unsafe all the time, particularly if one feels less safe (or maintains more readiness) than the situation warrants, is to hurt your prospects in situations where the threat doesn't come to pass (the majority outcome).
The most extreme examples of this are preppers; if society collapses they do well for themselves, but in most worlds they simply have an expensive, presumably unfun hobby and inordinate amounts of stress about an event that doesn't come to pass.
That's pretty well tailored to the community here, but there are still some red flags. How would them sending money to you and you donating it to MIRI "accelerate the value"? Also, why would a legit matcher not simply want confirmation of your donation without them ever touching the money?
Not to mention, is it really this easy to use anti-fraud tools to perpetrate fraud?
I agree with the central point of this, and the anti-humanism is where the e/acc crowd turn entirely repugnant. But in reference to the generative AI portion, the example doesn't really land for me, because I think the issue at its core pits two human groups against each other: the artists who would like to make a stable living off their craft, and the consumers of art who'd like less scarcity of art, particularly the marginally-creative stock variety that nonetheless forms the majority of most artists' paycheck (as opposed to entirely original works ...