Humans can drive cars
There's been a lot of fuss lately about Google's gadgets. Computers can drive cars - pretty amazing, eh? I guess. But what amazed me as a child was that people can drive cars. I'd sit in the back seat while an adult controlled a machine taking us at insane speeds through a cluttered, seemingly quite unsafe environment. I distinctly remember thinking that something about this just doesn't add up.
It looked to me like there was just no adequate mechanism to keep the car on the road. At the speeds cars travel, a tiny deviation from the correct course would take us flying off the road in just a couple of seconds. Yet the adults seemed pretty nonchalant about it - the adult in the driver's seat could have relaxed conversations with other people in the car. But I knew that people were pretty clumsy. I was an ungainly kid but I knew even the adults would bump into stuff, drop things and generally fumble from time to time. Why didn't that seem to happen in the car? I felt I was missing something. Maybe there were magnets in the road?
Now that I am a driving adult, I could more or less explain this to my 12-year-old self:
1. Yes, the course needs to be controlled very exactly and you need to make constant tiny course corrections or you're off to a serious accident in no time.
2. Fortunately, the steering wheel is a really good instrument for making small course corrections. The design is somewhat clumsiness-resistant.
3. Nevertheless, you really are just one misstep away from death and you need to focus intently. You can't take your eyes off the road for even one second. Under good circumstances, you can have light conversations while driving but a big part of your mind is still tied up by the task.
4. People can drive cars - but only just barely. You can't do it safely even while only mildly inebriated. That's not just an arbitrary law - the hit to your reflexes substantially increases the risks. You can do pretty much all other normal tasks after a couple of drinks, but not this.
So my 12-year-old self was not completely mistaken but still ultimately wrong. There are no magnets in the road. The explanation for why driving works out is mostly that people are just somewhat more capable than I'd thought. In my sunnier moments I hope that I'm making similar errors when thinking about artificial intelligence. Maybe creating a safe AGI isn't as impossible as it looks to me. Maybe it isn't beyond human capabilities. Maybe.
Edit: I intended no real analogy between AGI design and driving or car design - just the general observation that people are sometimes more competent than I expect. I find it interesting that multiple commenters note that they have also been puzzled by the relative safety of traffic. I'm not sure what lesson to draw.
Dr. Jubjub predicts a crisis
Dr. Jubjub: Sir, I have been running some calculations and I’m worried about the way our slithy toves are heading.
Prof. Bandersnatch: Huh? Why? The toves seem fine to me. Just look at them, gyring and gimbling in the wabe over there.
Dr. Jubjub: Yes, but there is a distinct negative trend in my data. The toves are gradually losing their slithiness.
Prof. Bandersnatch: Hmm, okay. That does sound serious. How long until it becomes a problem?
Dr. Jubjub: Well, I’d argue that it’s already having negative effects but I’d say we will reach a real crisis in around 120 years.
Prof. Bandersnatch: Phew, okay, you had me worried there for a moment. But it sounds like this is actually a non-problem. We can carry on working on the important stuff – technology will bail us out here in time.
Dr. Jubjub: Sir! We already have the technology to fix the toves. The most straightforward way would be to whiffle their tulgey wood but we could also...
Prof. Bandersnatch: What?? Whiffle their tulgey wood? Do you have any idea what that would cost? And besides, people won’t stand for it – slithy toves with unwhiffled tulgey wood are a part of our way of life.
Dr. Jubjub: So, when you say technology will bail us out you mean you expect a solution that will be cheap, socially acceptable and developed soon?
Prof. Bandersnatch: Of course! Prof. Jabberwock assures me the singularity will be here around tea-time on Tuesday. That is, if we roll up our sleeves and don’t waste time with trivialities like your tove issue.
Maybe it’s just me but I feel like I run into a lot of conversations like this around here. On any problem that won’t become an absolute crisis in the next few decades, someone will take the Bandersnatch view that it will be more easily solved later (with cheaper or more socially acceptable technology) so we shouldn’t work directly on it now. The way out is forward - let’s step on the gas and get to the finish line before any annoying problems catch up with us.
For all I know, Bandersnatch is absolutely right. But my natural inclination is to take the Jubjub view. I think the chances of a basically business-as-usual future for the next 200 or 300 years are not epsilon. They may not be very high but they seem like they need to be seriously taken into account. Problems may prove harder than they look. Apparently promising technology may not become practical. Maybe we'll have the capacity for AI in 50 years - but need another 500 years to make it friendly. I'd prefer humanity to plan in such a way that things will gradually improve rather than gradually deteriorate, even in a slow-technology scenario.
We need new humans, please help
This topic is in vogue, so here's my pitch.
My fellow humans, I have some bad news and some good news. The bad news is that you are likely to eventually enter an enfeebled state, during which you will not be able to independently provide for yourself. Even worse, you will at some point altogether cease to function and then you can no longer contribute to the things you care about. The good news is that both of those problems can be ameliorated by the same scheme – the creation of new humans. The new humans can provide us with the assistance we need as our own abilities diminish. And when we cease to function, the new humans can carry on with the projects we value.
Now, the thing is, creating fully functioning new humans is a huge project, consuming many person-years of work. A person engaged in preparing and outfitting a new human will need to sacrifice a lot of time that could otherwise be devoted to personal leisure and other projects. We currently have a volunteer system for replenishing the population, and in many ways this works well. Not everyone is well-placed to create humans, while some people are in a good position to create many. But this system is not perfect and it can be exploited. There are some freeloaders who do not create humans even though they are in a suitable position to do so. Those same people almost always value receiving care in old age and value humanity having a future. But they are relying on the rest of us to provide enough new humans for this to happen while they can devote all their time to other projects and zero time to diapers with poop in them.
Sometimes the non-child-creators justify their decision by suggesting that the projects they are working on are especially socially valuable and thus they can spend time on them in preference to child-creation without violating their duty to society. While it is *possible* that this argument goes through in some cases, it seems suspiciously self-serving. What is especially worth taking into account is that if the humans in question really are so highly valuable, they would statistically have highly valuable offspring. Thus, it seems doubtful in the general case that high-value people refraining from procreating is a net gain for society.
[Poorly conceived section on my personal experiences removed.]
[LINK] Why I'm not on the Rationalist Masterlist
A long blog post explains why the author, a feminist, is not comfortable with the rationalist community despite thinking it is "super cool and interesting". It's directed specifically at Yvain, but it's probably general enough to be of some interest here.
http://apophemi.wordpress.com/2014/01/04/why-im-not-on-the-rationalist-masterlist/
I'm not sure if I can summarize this fairly but the main thrust seems to be that we are overly willing to entertain offensive/taboo/hurtful ideas and this drives off many types of people. Here's a quote:
In other words, prizing discourse without limitations (I tried to find a convenient analogy for said limitations and failed. Fenders? Safety belts?) will result in an environment in which people are more comfortable speaking the more social privilege they hold.
The author perceives a link between LW type open discourse and danger to minority groups. I'm not sure whether that's true or not. Take race. Many LWers are willing to entertain ideas about the existence and possible importance of average group differences in psychological traits. So, maybe LWers are racists. But they're racists who continually obsess over optimizing their philanthropic contributions to African charities. So, maybe not racists in a dangerous way?
An overly rosy view, perhaps, and I don't want to deny the reality of the blogger's experience. Clearly, the person is intelligent and attracted to some aspects of LW discourse while turned off by other aspects.
The Noddy problem
An episode of the Noddy animated series has the following plot.
Noddy needs to go pick up Martha Monkey at the station. But it's such a nice, sunny day that he would prefer to play around outside. He gets an idea to solve this dilemma. He casts a duplication spell on himself and his car and tells the duplicate to go fetch Martha while he goes out to play. Later, Noddy is out having fun when he suddenly spots his duplicate. It turns out that the duplicate also preferred playing outside to doing the errand so he also cast a duplication spell. Then they see another duplicate, and another...
I think this story makes for a nice simple illustration of one of our perennial decision-theoretic issues: when making decisions, you should take into account that agents identical to yourself will make the same decision in the same situation. A common real-life example of the Noddy problem is when we try to pawn off our dietary problems on our future selves.
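The regress in the story can be sketched in a few lines of code. This is just an illustrative toy (all names here are hypothetical, not from the episode): a policy that delegates the errand to an identical copy is necessarily followed by every copy too, so the errand is never done. We cap the recursion to keep the sketch finite.

```python
def noddy_decides(depth=0, max_depth=5):
    """Each Noddy prefers playing, so he duplicates himself and hands the
    errand to the copy -- who, being identical, reasons the same way."""
    if depth == max_depth:
        # Cap the infinite regress of duplicates for illustration.
        return "errand never done", depth
    prefers_playing = True  # identical agents share this preference
    if prefers_playing:
        # The copy faces the same situation and makes the same decision.
        return noddy_decides(depth + 1, max_depth)
    return "errand done", depth

result, copies = noddy_decides()
print(result, "after", copies, "duplications")
```

The fixed point is the thing to notice: since every duplicate runs the same decision procedure, "delegate to my copy" can never be a stable way to get the errand done, just as "my future self will diet" fails for the same reason.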
Lecturing congressmen on cognitive biases
A new session of Iceland's parliament convened on Saturday, opening with a religious service as is traditional. For the last couple of years, a local humanist group has offered alternatives to the religious ceremony. On Saturday they had a psychologist give a lecture on cognitive biases, principally on confirmation bias and the availability heuristic. This was attended by 13 out of 63 members of parliament. (Source in Icelandic).
I'm more pro-religion than most people who read Less Wrong and I am generally not excited about atheist activism. This, however, struck me as a good idea.
The Science of Cutting Peppers
Summary: Rigorous scientific experiments are hard to apply in daily life but we still want to try out and evaluate things like self-improvement methods. In doing so we can look for things such as a) effect sizes that are so large that they don't seem likely to be attributable to bias, b) a deep understanding of the mechanism of a technique, c) simple non-rigorous tests.
Hello there! This is my first attempt at a top-level post and I'll start it off with a little story.
Five years ago, in a kitchen in London...
My wife: We're going to have my friends over for dinner and we're making that pasta sauce everyone likes. I'm going to need you to cut some red peppers.
Me: Can do! *chop chop chop*
My wife: Hey, Mr. Engineer, you've got seeds all over! What are you doing to that pepper?
Me: Well, admittedly this time I was a bit clumsy and there's more seed spillage than usual - but it's precisely to avoid spilling seeds that I start by surgically removing the core and then...
My wife: Stop, just stop. That's got to be the worst possible way to do this. See, this is how you cut a pepper, *chop chop chop*. Nice slices, no mess.
Me: *is humiliated* *learns*