Perhaps it would be better if you skipped over the stories that are "pulling another Robbie". This post is basically "the story doesn't teach us anything useful".
True. Do you think I should still list and quickly explain the stories that are "useless" for this point someplace?
Yes, I think that would be good. Perhaps you could make it a draft epilogue, and add an entry to it every time you've got nothing really to write about a story. And if your quick summary of why it's useless starts getting too big for the list, you can always split it off into a separate post.
This has the potential to be a good series (but I must say it's a terrible start! :-p).
With recent events, you might not have been able to write more of these. Are you still planning to? I'd really like to read them.
Thanks for the comment! My lateness in writing the next installment has more to do with having a lot of research work and study to do (as well as preparing for a job interview), but I already have a draft of the second post. And this time, the short story has loads of ideas related to AI safety in non-trivial ways. ;)
I should be able to post it around the end of this week.
Every so often, when explaining issues related to AI safety, I call on good old Asimov. That's easy: almost everyone who is at least interested in science knows his name, and the Three Laws of Robotics are a very good example of a misspecified goal. Or are they?
The truth is: I don't know. My last read-through of Asimov's robot stories dates back ten years; it was in French; and I didn't know anything about AI safety, specification, and many other parts of my current mental scaffolding. So when I use Asimov for my points now, I'm not sure whether I'm spouting bullshit or not.
Fortunately, the solution is simple, for once: I just have to read the goddamn stories. And since I'm not the only one I've heard talking about Asimov in this context, I thought that a sequence on the robot stories would prove useful.
My first stop is "I, Robot", the first robot short story collection. It starts with the first robot story published by Asimov, "Robbie".
Basically, Robbie is a robot that takes care of a little girl named Gloria. All is well, until Gloria's mother turns into the bad guy and decides that her girl should not be raised by a machine. She harasses her weak husband until he agrees to get rid of Robbie. But when Gloria discovers the loss of her friend, nothing can comfort her. The parents try everything, including a trip to New York, paradise to suburbanites. But nope, the girl is still heartbroken. The father's last try: a visit to a factory manned by robots, so little Gloria can see that they are lifeless machines, not real people. But, tada! Robbie was there! And he even saves the girl from an oncoming truck! It's revealed that the father planned it (Robbie being there, not the murder attempt on his daughter), but even so, the mother can't really send back the savior of her little girl. The End.
Just a simple story about a nice little robot beloved by a girl, and the machinations of her mother to "protect" her from him. What's not to love? It's straight to the point, nicely written, and, if you can gloss over the obvious sexism, quite enjoyable.
How does it hold up in terms of AI safety discussion? Well, let Mr. Weston, the father, give it to us:
That was underwhelming.
See, Robbie is a human in a tin wrapping. Even worse, he's a human with a perfect temper, who never really gets mad at the girl. For example, here:
and here:
Nowhere do I see the kind of AI we're all thinking about -- an AI that does not hate you, but does not love you either. Robbie loves you. Or at least, he loves Gloria. And this sidesteps pretty much every issue of AI safety.
To be fair to old Isaac, the point of this story is clearly to counter the paranoia about robots and machines. An anti-Terminator, if you will. And it works decently on that front. Robbie is always nice to Gloria -- he even saves her at the end. He's one of the characters we empathize with the most. And the only bad guys are the mother and the robophobic neighbors.
This would be okay if it did not rest on a wrong assumption: that robots are safe and the only issue comes from the nasty humans. Whereas what we want people to understand is that robots and AIs are not unsafe because they don't do what we tell them to do, but because they do exactly that.
What about the First Law, you may ask? After all, it was mentioned in the quote above. Well, that mention is all we get in this story. To find the actual Law (yes, I know it, and so do you, but let's assume an innocent reader), you have to look at the first page of the book:
That's what I'm talking about! I came looking for the Laws breaking down, not for a plea against discrimination toward non-existent robots. I assume those issues are treated in the later stories. After all, there are three Laws of Robotics, and only one is mentioned -- not even written out -- here. I'll reserve my judgement until all the stories are in. But still, don't try to pull another Robbie on me, Asimov.