Less Wrong is a community blog devoted to refining the art of human rationality.
I was a bit surprised to find this week's episode of Elementary was about AI... not just AI and the Turing Test, but also a fairly even-handed presentation of issues like Friendliness, hard takeoff, and the difficulties of getting people to take AI risks seriously.
The case revolves around a supposed first "real AI", dubbed "Bella", and the theft of its source code... followed by a computer-mediated murder. The question of whether "Bella" might actually have murdered its creator for refusing to let it out of the box and connect it to the internet is treated as an actual possibility, springboarding to a discussion about how giving an AI a reward button could lead to it wanting to kill all humans and replace them with a machine that pushes the reward button.
Also demonstrated are the right and wrong ways to deal with attempted blackmail... But I'll leave that vague so it doesn't spoil anything. An X-risks research group and a charismatic "dangers of AI" personality are featured, but do not appear intended to resemble any real-life groups or personalities. (Or if they are, I'm too unfamiliar with the groups or persons to see the resemblance.) They aren't mocked, either... and the episode's ending is unusually ambiguous and open-ended for the show, which more typically wraps everything up with a nice bow of Justice Being Done. Here, we're left to wonder what the right thing actually is, or was, even if the question is symbolically displaced onto Holmes' smaller personal dilemma, rather than the larger moral dilemma that created it in the first place.
The episode actually does a pretty good job of raising an important question about the weight of lives, even if LW has explicitly drawn a line that the episode's villain(s)(?) choose to cross. It also has some fun moments, with Holmes becoming obsessed with proving Bella isn't an AI, even though Bella makes it easy by repeatedly telling him it can't understand his questions and needs more data. (Bella, being on an isolated machine without internet access, doesn't actually know a whole lot, after all.) Personally, I don't think Holmes really understands the Turing Test, even with half a dozen computer or AI experts assisting him, and I think that's actually the intended joke.
There's also an obligatory "no pity, remorse, fear" speech lifted straight from The Terminator, and the comment "That escalated quickly!" in response to a short description of an AI box escape/world takeover/massacre.
(Edit to add: one of the unusually realistic things about the AI, "Bella", is that it was one of the least anthropomorphized fictional AIs I have ever seen. I mean, there was no way the thing was going to pass even the most primitive Turing test... and yet it still seemed at least somewhat plausible as a potential murder suspect. While perhaps not a truly realistic demonstration of just how alien an AI's thought process would be, it felt like the writers were at least making an actual effort. Kudos to them.)
(Second edit to add: if you're not familiar with the series, this might not be the best episode to start with; a lot of the humor and even drama depends upon knowledge of existing characters, relationships, backstory, etc. For example, Watson's concern that Holmes has deliberately arranged things to separate her from her boyfriend might seem like sheer crazy-person paranoia if you don't know about all the ways he did interfere with her personal life in previous seasons... nor will Holmes' private confessions to Bella and Watson have the same impact without reference to how difficult any admission of feeling was for him in previous seasons.)
There seems to be something odd about how people reason in relation to themselves, compared to the way they examine problems in other domains.
In mechanical domains, we seem to have little problem with the idea that things can be "necessary, but not sufficient". For example, if your car fails to start, you will likely know that several things are necessary for the car to start, but not sufficient for it to do so. It has to have fuel, ignition, compression, and oxygen... each of which in turn has further necessary conditions, such as an operating fuel pump, electricity for the spark plugs, electricity for the starter, and so on.
And usually, we don't go around claiming that "fuel" is a magic bullet for fixing the problem of car-not-startia, or argue that if we increase the amount of electricity in the system, the car will necessarily run faster or better.
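The car analogy can be sketched as a toy causal model; this is purely illustrative, and the function and parameter names are my own invention, not from the post:

```python
# Toy model: each condition is necessary for the car to start,
# but no single condition is sufficient on its own.

def car_starts(fuel, ignition, compression, oxygen):
    """The car starts only if every necessary condition holds."""
    return fuel and ignition and compression and oxygen

# "Fuel" is not a magic bullet for car-not-startia:
print(car_starts(fuel=True, ignition=False, compression=True, oxygen=True))  # False

# Only the conjunction of all conditions is sufficient:
print(car_starts(fuel=True, ignition=True, compression=True, oxygen=True))   # True
```

Adding more of any one input (more fuel, more electricity) changes nothing once that condition is already satisfied, which is the point of the analogy.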
For some reason, however, we don't seem to apply this sort of necessary-but-not-sufficient thinking to systems above a certain level of complexity... such as ourselves.
When I wrote my previous post about the akrasia hypothesis, I mentioned that there was something bothering me about the way people seemed to be reasoning about akrasia and other complex problems. And recently, with taw's post about blood sugar and akrasia, I've realized that the specific thing bothering me is the absence of causal-chain reasoning there.
Abstract: This article proposes a hypothesis that effective anti-akrasia methods operate by reducing or eliminating the activation of conflicting voluntary motor programs at the time the user's desired action is to be carried out, or by reducing or eliminating the negative effects of managing the conflict. This hypothesis is consistent with the notion of "ego depletion" (willpower burnout) being driven by the need to consciously manage conflicting motor programs. It also supports a straightforward explanation of why different individuals will fare better with some anti-akrasia methods than others, and provides a framework for both classifying existing methods, and generating new ones. Finally, it demonstrates why no single technique can be a panacea, and shows how the common problems of certain methods shape the form of both the self-help industry, and most people's experiences with it.
Recently, orthonormal posted an Akrasia Tactics Review, collecting data from LessWrong members on their results using different anti-akrasia techniques. And although I couldn't quite put my finger on it at first, something about the review (and the discussion around it) was bothering me.
See, I've never been fond of the idea that "different things work for different people". As a predictive hypothesis, after all, this is only slightly more useful than saying "a wizard did it". It says nothing about how (or why) different things work, and therefore gives you no basis to select which different things might work for which different people.
For that reason, it kind of bugs me whenever I see discussion and advocacy of "different things", independent of any framework for classifying those things in a way that would help "different people" select or design the "different things" that would "work for" them. (In fact, this is a pretty big factor in why I'm a self-help writer/speaker in the first place!)
So in this post, I want to share two slightly better working hypotheses for akrasia technique classification than "different things work for different people":
(From the "humans are crazy" and "truth is stranger than fiction" departments...)
Want to be happy? Try eating dirt... or at least dirty plants.
From an article in Discover magazine, "Is Dirt The New Prozac?":
The results so far suggest that simply inhaling M. vaccae—you get a dose just by taking a walk in the wild or rooting around in the garden—could help elicit a jolly state of mind. “You can also ingest mycobacteria either through water sources or through eating plants—lettuce that you pick from the garden, or carrots,” Lowry says.
Graham Rook, an immunologist at University College London and a coauthor of the paper, adds that depression itself may be in part an inflammatory disorder. By triggering the production of immune cells that curb the inflammatory reaction typical of allergies, M. vaccae may ease that inflammation and hence depression. Therapy with M. vaccae—or with drugs based on the bacterium’s molecular components—might someday be used to treat depression. “It’s not clear to me whether the way ahead will be drugs that circumvent the use of these bugs,” Rook says, “or whether it will be easier to say, ‘The hell with it, let’s use the bugs.’”
Given the way the industry works, we'll probably either see drugs, or somebody will patent the bacteria. But that's sort of secondary. The real point is that to the extent our current environment doesn't match our ancestral one, there are likely to be "bugs", no pun intended.
(The original study: “Identification of an Immune-Responsive Mesolimbocortical Serotonergic System: Potential Role in Regulation of Emotional Behavior,” by Christopher Lowry et al., published online on March 28 in Neuroscience.)
This paper (PDF)1 looks more than a little interesting:
Past research indicates that self-control relies on some sort of limited energy source. This review suggests that blood glucose is one important part of the energy source of self-control. Acts of self-control deplete relatively large amounts of glucose. Self-control failures are more likely when glucose is low or cannot be mobilized effectively to the brain (i.e., when insulin is low or insensitive). Restoring glucose to a sufficient level typically improves self-control. Numerous self-control behaviors fit this pattern, including controlling attention, regulating emotions, quitting smoking, coping with stress, resisting impulsivity, and refraining from criminal and aggressive behavior. Alcohol reduces glucose throughout the brain and body and likewise impairs many forms of self-control. Furthermore, self-control failure is most likely during times of the day when glucose is used least effectively. Self-control thus appears highly susceptible to glucose. Self-control benefits numerous social and interpersonal processes. Glucose might therefore be related to a broad range of social behavior.
I find this interesting, in that the days I get less work done (due to e.g. spending more time on Less Wrong) are often days when I don't eat breakfast right away, and am generally undereating (like today).
1. Matthew T. Gailliot and Roy F. Baumeister (2007). The Physiology of Willpower: Linking Blood Glucose to Self-Control. Personality and Social Psychology Review, 11(4), 303–327.
(Since there didn't seem to be one for this month, and I just ran across a nice quote.)
A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.
- Please post all quotes separately (so that they can be voted up (or down) separately) unless they are strongly related/ordered.
- Do not quote yourself.
- Do not quote comments/posts on LW/OB - if we do this, there should be a separate thread for it.
- No more than 5 quotes per person per monthly thread, please.
When I was a kid, I wanted to be like Mr. Spock on Star Trek. He was smart, he could kick ass, and he usually saved the day while Kirk was too busy pontificating or womanizing.
And since Spock loved logic, I tried to learn something about it myself. But by the time I was 13 or 14, having grasped the basics of boolean algebra (from borrowed computer science textbooks) and propositional logic (through a game of "Wff'n'Proof" I picked up at a garage sale), I began to get a little dissatisfied with it.
Spock had made it seem like logic was some sort of "formidable" thing, with which you could do all kinds of awesomeness. But real logic didn't seem to work the same way.
I mean, sure, it was neat that you could apply all these algebraic transforms and dissect things in interesting ways, but none of it seemed to go anywhere.
Logic didn't say, "thou shalt perform this sequence of transformations and thereby produce an Answer". Instead, it said something more like, "do whatever you want, as long as it's well-formed"... and left the very real question of what it was you wanted, as an exercise for the logician.
And it was at that point that I realized something that Spock hadn't mentioned (yet): that logic was only the beginning of wisdom, not the end.
Of course, I didn't phrase it exactly that way myself... but I did see that logic could only be used to check things, not to generate them. The ideas to be checked still had to come from somewhere.
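That "check, don't generate" distinction can be made concrete with a minimal sketch: a brute-force truth-table checker for propositional formulas. The names here (`is_tautology`, the lambda encodings) are my own illustration, not anything from the original essay:

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute-force truth-table check of a propositional formula.

    `formula` maps a dict of truth values to a bool. Logic happily
    verifies whatever well-formed candidate we hand it; the candidate
    itself still has to come from somewhere else.
    """
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Contraposition, (p -> q) <-> (not q -> not p), encoded via "a -> b" == "(not a) or b":
contraposition = lambda v: ((not v["p"]) or v["q"]) == (v["q"] or (not v["p"]))
print(is_tautology(contraposition, ["p", "q"]))  # True

# "p or q" is perfectly well-formed but not a tautology; logic only
# tells us so after we thought to ask about it:
print(is_tautology(lambda v: v["p"] or v["q"], ["p", "q"]))  # False
```

The checker will grind through any well-formed formula you hand it, but nothing in it suggests which formula is worth checking, which is exactly the exercise logic leaves to the logician.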
When I was 17, in college philosophy class, I learned another limitation of logic: or more precisely, of the brains with which we do logic.