:facepalm:
This is Dark Arts-y to the point of self-parody. The outcome seems to me highly dependent on the parity of the number of meta levels the viewer goes to.
I would suggest that this is a useful thing to do on an individual level (to adjust for scope insensitivity and so forth) but a terrible thing to do on a group level (because it's mind-killing). Smells too much like the Yellow Peril for my taste.
The Anthropomorphization Cannon is a powerful weapon, and if it were to fall into the wrong hands...
I feel that this position could be equally argued if the scopes were switched, given the following motivation.
...if we mentally anthropomorphised certain risks, then we'd be more likely to give them the attention they deserved. -- OP
What follows is a harmless :-) play on your comment, keeping the above motivation in mind.
I would suggest that this is a useful thing to do on a group level (because it's mind-killing; take Yellow Peril for example) but a terrible thing to do on an individual level (to adjust for scope insensitivity and so forth).
Many religions do anthropomorphize evil - the devil may not actually exist, but we may all be better off if we talk about him as if he did.
I suspect that there are quite a few things like this, where religion is kinda right, as long as you don't take it too literally. Maybe the best solution isn't to reject religion wholesale, but to reform it so that it's tacitly acknowledged that it isn't really true, a bit like Santa Claus, or professional wrestling. Arguably that may already be the attitude of many Anglicans and Unitarian Universalists.
The extreme is Bokononism.
That reminds me of when I shared an office with a Scottish Catholic atheist and a Scottish Protestant atheist, who still managed to wrangle all the time.
I'm reminded of this early GiveWell post :)
"When I was younger, I loved playing video games. [...] I just liked killing bad guys. Well, more than that, I hated not killing bad guys. When Heat Man killed my guy and stood around smugly, I wanted to throw the TV across the room, and I couldn’t stop until he was dead.
What sucked about this experience was that it was all fake, and in the back of my head I knew that. In the end I felt pretty empty and lame. Enter altruism – where the bad guys are ACTUALLY BAD GUYS. [...] it’s infinitely better because it’s real. I don’t care whether the kids are cute, or whether the organizations are nice to me, or whether my friends like my decisions. As with video games, I probably spend 99% of my time frustrated rather than happy. But … Malaria Man just pisses me off. It’s that simple." http://blog.givewell.org/2007/04/03/charity-the-video-game-thats-real/
I'd play a game where scoring points or the equivalent wired tiny payments to a nonprofit of my choice.
I don't think anthropomorphising al Qaeda in the form of Osama bin Laden, or demonizing Saddam Hussein, was a net good for America. Framing the arguments over drug control as "The War On Drugs" has almost certainly led to the loss of billions of dollars and many lives. Do you really think encouraging this idea in general is good?
Do you really think encouraging this idea in general is good?
I'd certainly prefer if the serious risks were the anthropomorphised ones, rather than the trivial ones.
Well, if you can't stop people from using a superweapon for bad causes, it may be an improvement to see to it that it's also used for good causes.
The original question was:
Do you really think encouraging this idea in general is good?
That is: assuming it is possible to reduce bad uses at the cost of also reducing good uses, should one do so?
Your reply seems to assume that the bad uses can't be reduced, which contradicts the pre-established assumptions. If you want to change the assumptions of a discussion, please include a note that you are doing so and ideally a short explanation of why you think the previous assumptions should be rejected in favor of the new ones.
I don't assume that bad uses can't be reduced, and my answer is somewhat tongue-in-cheek, but I do suspect that getting people to stop using this mode of thought for bad ideas would be very difficult. Getting people to apply it to good causes as well might be worse, outcome-wise, than getting them to stop applying it at all. But trying to get people to apply it to good causes might still have a better return on investment than trying to get them to stop, simply because it's easier.
You may be right, but I don't trust a human to only arrive at that conclusion if it's true. I think we ought to refrain from pressing D, just in case.
Your example political speech makes me want to just run for office and do it.
Hey, I figure it's almost worth a try. If someone could find the right Mass Media people to bribe for help, I think there's a lot of potential here.
What about the mindless roaring four-wheeled blood-thirsty flashy-eyed monsters roaming our streets?
I thought of them too, but they've got their filthy money-laundering hands in too many pockets and they're controlling too many people - it would be a losing battle. The triads would be a more realistic target.
Besides, they literally take our people hostage, wear ablative carbon-composite / high-tech-metal-alloy armor, and lug around gallons of flamethrower fuel. They also tend to hunt in packs¹ and establish war camps on our bridges every morning.
We'll need a lot more than one good politician and a few bribes to the media to win that war.²
¹ Most deaths involve multiple of them, IIRC.
² But please, if you can, I strongly encourage anyone to prove me wrong. The implication here is that a lot of science and engineering and money is needed to fix the dangers and reduce the risks. The kind of science and engineering and money that Google has already been putting in for a while.
Well, there is a movement afoot to tame their wild nature. Some day being trampled or squished into pulp by these creatures will be but a distant memory, as their descendants follow the path of domestication well traveled by other animals, the past perils replayed only in the highly scripted spectacle of the corrida de coches (a bullfight of cars).
What's probably going to be really difficult is not getting automated cars on the market, but getting all the non-automated cars off the road. An entirely automated traffic flow would be much safer than a partly automated traffic flow, but there are going to be lots of holdouts who refuse to trust an automated car over their own driving ability, or who simply can't or won't buy an up-to-date car.
When automated cars are at 90% or so, and statistics keep coming in on how many accidents and deaths are caused by humans versus machines, I think the pressure to go fully automated will be strong. Some municipalities and states will go for it, and then it'll be hard to get anywhere with a human-driven car.
The great enemy of humanity is already anthropomorphised: it is Death Himself we do battle with, the Lord of Entropy.
Nah, he's actually a pretty nice guy once you get to know him. He doesn't cause deaths; he's just the one who cleans up afterward. And he'd probably be grateful for a chance to retire peacefully.
The proper incarnation of entropy is the Frost Giant, not the bony-looking guy in a cape.
You beat me to it. I already tend to narrativise this. Other cases, though, are very risky; an alternative, striving-based narrative might be better.
Isn't this basically what the saner strand of occultists do when they personify archetypes and aspects of humanity into minor deities?
The field of AI is already over-saturated with anthropomorphisation
Actually, on this forum Clippymorphisation is rather prevalent.
Clippy is very anthropomorphic, though - it magically has a real-world goal, it equates the algorithm that it is from the inside with the hardware it sees through its eyes from the outside, and it will 'improve' that hardware, including its paperclip counter's accuracy, rather than just inflating the counter's output. It's easy to imagine - in your mind you have the number of paperclips counted externally, and a paperclip maximizer increases this count - and hard or impossible to actually define, let alone build.
How about a bias demon whereby people who read far too much sci-fi, via the same biases that on a national scale produced the TSA, become overly concerned about things like Clippy and thereby create the concept? Now that's a Clippy-creating bias demon.
I worry that this would bias the kinds of policy responses we want. I obviously don't have a study or anything, but it seems that the framing of the War on Drugs and the War on Terrorism has encouraged too much violence. Which sounds like a better way to fight the War on Terror: negotiating complicated local tribal politics, or going in and killing some terrorists? Which is actually the better policy?
I don't know exactly how this would play out in a case where no violence makes sense (like the Cardiovascular Vampire). Maybe increased research as part of a "war effort" would work. But it seems to me that this framing would encourage simple and immediate solutions, which would be a serious drawback.
Religion being the only social structure known to be able to endure even a fraction of the time required, it has been proposed that religion is the least bad means of warning distant future generations about nuclear waste sites. Not money or architecture or language, but ghost stories.
I posted on Practical Ethics, arguing that if we mentally anthropomorphised certain risks, then we'd be more likely to give them the attention they deserved. Slaying the Cardiovascular Vampire, defeating the Parasitic Diseases Death Cult, and banishing the Demon of Infection... these stories give a mental picture of the actual good we're doing when combating these issues, and the bad we're doing by ignoring them. Imagine a politician proclaiming:
An amusing thing to contemplate - except, of course, if there were a real Cardiovascular Vampire, politicians and pundits would be falling over themselves with those kinds of announcements.
The field of AI is already over-saturated with anthropomorphisation, so we definitely shouldn't be imagining Clippy as some human-like entity that we can heroically combat, with all the rules of narrative applying. Still, it can't hurt to dream up a hideous Bias Demon in its misshapen (though superficially plausible) lair, cackling in glee as someone foolishly attempts to implement an AI design without the proper safety precautions, smiling serenely as prominent futurists dismiss the risk... and dissolving, hit by the holy water of increased rationality and proper AI research. Those images might help us make the right emotional connection to what we're achieving here.