Comments

I might make the move of saying: "Let's pretend for a moment that people really do conspire. Would that really be that problematic?"

We do really conspire! Conspiring is at best a handy social and economic coordination activity. At worst it is a big bunch of no fun, where people have to pretend to be conspiring while they'd really rather be working on personal projects, flirting, or playing video games; and everyone comes out feeling like they need to hide their freakish incompetence at pursuing the goals of the conspiracy.

We usually call it "having meetings" though.

Are you claiming that a being must be a moral agent in order to be a moral patient?

I was on an Android tablet, which I use in a laptop-like fashion (landscape mode, with keyboard) but which usually gets the mobile version of sites that try to be mobile-friendly.

The section presumes that the audience agrees wrt veganism. To an audience who isn't on board with EA veganism, that line comes across as the "arson, murder, and jaywalking" trope.

I'm curious if there's much record of intentional communities that aren't farming communes.

Oneida comes to mind. They had some farming (it was upstate New York in the 1850s, after all) but also a lot of manufacturing — most famously silverware. The community is long gone, but the silverware company is still around.

We should increase awareness of old fairy tales with a jinn who misinterprets wishes.

The most popular UFAI story I'm aware of is "The Sorcerer's Apprentice".

Sticking with European folktales that were made into classic Disney cartoons, maybe the analogy to be made is "AI isn't Pinocchio. It's Mickey's enchanted brooms. It doesn't want to be a Real Boy; it just wants to carry water. The danger isn't that it will grow up to be a naughty boy if it doesn't listen to its conscience. It's that it cannot care about anything other than carrying water, including whether or not it's flooding your home."

Thing is, much of the popular audience doesn't really know what code is. They've never written a bug and had a program do something unintended ... because they've never written any code at all. They've certainly never written a virus or worm, or even a script that accidentally overwrites their files with zeroes. They may have issued a bad order to a computer ("Oops, I shouldn't have sent that email!") but they've never composed and run a non-obviously bad set of instructions.

So, aside from folklore, better CS education may be part of the story here.
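For concreteness, here's a toy illustration of a "non-obviously bad set of instructions" (the filename and the task are made up): a one-character slip in the file mode wipes the data before the program even gets around to failing.

```python
# Intent: count the lines in notes.txt.
# Bug: the mode should be "r" (read); "w" (write) truncates the file to
# zero bytes the instant it is opened.
with open("notes.txt", "w") as f:
    print(sum(1 for _ in f))  # raises io.UnsupportedOperation -- but the data is already gone
```

The computer did exactly what it was told; it just wasn't told what was meant.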

Why do you care if you're running in a simulation? Well, if you're a piece of malware, like a computer virus or Internet worm, you care whether you're being analyzed by an anti-malware researcher. If you're running in a researcher's lab instead of actually infecting target systems, you'd want to shut down so the researcher can't observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you're running in a simulation when you're not, you can be made to shut down in the wild.

Yep, that happened. It's not even the first time.

The WannaCrypt worm contained logic that basically said, "Look for the following properties of the Internet. If you observe them, that means you're not running on the real Internet; you're running in a simulation." But a researcher was able to make those properties come true on the real Internet, convincing the live malware infesting actual systems that it was in a simulation, so it shut down.
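For concreteness, here's a minimal sketch of that style of kill-switch check, in Python with a placeholder domain (the real worm's check was a single HTTP probe of one hardcoded, unregistered domain; everything else here is illustrative):

```python
# Sketch of WannaCrypt-style kill-switch logic. The URL below is a
# placeholder; the actual worm probed a single hardcoded, unregistered
# domain name.
import urllib.request

KILL_SWITCH_URL = "http://unregistered-gibberish-domain.example/"  # placeholder

def probably_in_a_simulation() -> bool:
    # Reasoning: this domain shouldn't resolve on the real Internet, so a
    # successful response suggests an analysis sandbox faking DNS/HTTP.
    try:
        urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        return True   # got an answer -> assume we're being watched
    except OSError:
        return False  # no answer -> assume we're in the wild

if probably_in_a_simulation():
    raise SystemExit  # go quiet rather than reveal real behavior
```

Registering that one domain made the "impossible" observation come true everywhere at once, which is why the same check that was meant to protect the worm from researchers doubled as a global off switch.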

Anti-analysis or anti-debugging features, which attempt to answer the question "Am I running in a simulation?", are not a new thing in malware, or in other programs that attempt to extract value from humans, such as copy-protection routines. But they do make malware an interesting example of a type of agent for which the simulation hypothesis matters, and for which mistaken beliefs about whether you're in a simulation can have devastating effects on your ability to function.

Harry Frankfurt's "On Bullshit" introduced the distinction between lies and bullshit. The liar wants to deceive you about the world (to get you to believe false statements), whereas the bullshitter wants to deceive you about his intentions (to get you to take his statements as good-faith efforts, when they are merely meant to impress).

We may need to introduce a third member of this set. Along with lies told by liars, and bullshit spread by bullshitters, there is also spam emitted by spambots.

Like the bullshitter (but unlike the liar), the spambot doesn't necessarily have any model of the truth of its sentences. Unlike the bullshitter, though, the spambot doesn't particularly care what (or whether) you think of it; it simply optimizes its sentences to get you to take a particular action.
