Why 'of course'? This doesn't seem obvious to me.
HungryHobo gave good arguments from tradition and liability; here's an argument from utility:
Google's cars are over a million autonomously-driven km without an accident. That's not proof that they're safer than the average human-driven car (something like 2 accidents per million km in the US?), but it's mounting evidence. If car AI written to prioritize its own passengers still turns out to be an order of magnitude safer for third parties than human drivers, then the direct benefit of optimizing for total safety may be outweighed by an indirect benefit of own-passenger optimization: buyers trust it more, so the technology gets adopted faster.
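To make the tradeoff concrete, here's a back-of-envelope sketch. Every number except the ~2 accidents per million km human baseline is made up for illustration; the point is only that a slightly-less-safe-per-km policy can still win if it raises adoption enough.

```python
# Hypothetical sketch of the adoption argument. All AV numbers and
# adoption fractions are invented for illustration.

HUMAN_RATE = 2.0  # accidents per million km for human drivers (rough US figure)

def fleet_rate(av_rate, adoption):
    """Overall accident rate when a fraction `adoption` of km are AV-driven."""
    return adoption * av_rate + (1 - adoption) * HUMAN_RATE

# Policy A: AI optimizes for total safety -- safest per km, but suppose
# wary buyers mean only 20% of km end up AV-driven.
total_safety = fleet_rate(av_rate=0.2, adoption=0.2)

# Policy B: AI optimizes for its own passengers -- somewhat worse per km
# for third parties, but suppose trust pushes adoption to 50%.
own_passenger = fleet_rate(av_rate=0.3, adoption=0.5)

print(round(total_safety, 2))   # 1.64 accidents per million km overall
print(round(own_passenger, 2))  # 1.15 -- fewer total accidents despite
                                # the worse per-km AV rate
```

Under these (invented) assumptions, the own-passenger policy produces fewer accidents overall purely because more human-driven km get replaced.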
When a collision is unavoidable, should a self-driving car try to maximize the survival chances of its occupants, or of all people involved?
http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/