In this video, Julian Savulescu from the Uehiro Centre for Practical Ethics argues that human beings are "unfit for the future" - that radical technological advance, liberal democracy and human nature will combine to make the 21st century a century of global catastrophes, perpetrated by terrorists and psychopaths with tools such as engineered viruses. He goes on to argue that enhanced intelligence and a reduced urge toward violence and defection in large commons problems could be achieved using science, and may be a way out for humanity.
Skip to 1:30 to avoid the tedious introduction.
Genetically enhance humanity or face extinction - PART 1 from Ethics of the New Biosciences on Vimeo.
Genetically enhance humanity or face extinction - PART 2 from Ethics of the New Biosciences on Vimeo.
Well, I have already said something rather like this. Perhaps this really is a good idea - more important, even, than coding a friendly AI? AI timelines where super-smart AI doesn't get invented until 2060 or later would leave enough room for human intelligence enhancement to happen and have an effect. When I collected some SIAI volunteers' opinions on this, though, most thought there was a very significant chance that super-smart AI will arrive sooner than that.
A large portion of the video is devoted to the very strong scientific case that our behavior is a product of the way our brains are structured, and hence that changing our behavior means changing the way our brains are wired.
If random moral drift were a good hypothesis, steps "forwards" (from our point of view) would be about as common as steps "backwards". Are those "backwards" steps really that common?
If we model morality as a one-dimensional scale and moral change as a random walk, then what you say is true. However, if we instead model it as a million-dimensional space in which each step affects only one dimension, then after a thousand steps we would expect to find that nearly every step brought us closer to our current position, simply because most dimensions get touched at most once.
EDIT: a simulation seems to indicate I'm wrong about this. Will investigate further. EDIT: it was a bug in the simulation; NumPy code available on request.
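The NumPy code itself isn't included in the post, but a minimal sketch of the kind of simulation described might look like the following. It assumes a walk in which each step changes one randomly chosen dimension by ±1, and counts a step as "forwards" if it reduced the distance to wherever the walk eventually ended up; the dimension and step counts are just illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

DIMS = 1_000_000   # number of independent "moral dimensions"
STEPS = 1_000      # length of the random walk

# Each step picks one dimension uniformly at random and nudges it by +1 or -1.
dims_hit = rng.integers(0, DIMS, size=STEPS)
deltas = rng.choice([-1, 1], size=STEPS)

# The "current" (final) position is just the net movement in each dimension.
final = np.zeros(DIMS)
np.add.at(final, dims_hit, deltas)

# Replay the walk and count how many steps reduced the distance to the
# final position (only the touched dimension changes, so only it matters).
pos = np.zeros(DIMS)
forwards = 0
for d, delta in zip(dims_hit, deltas):
    before = abs(final[d] - pos[d])
    pos[d] += delta
    after = abs(final[d] - pos[d])
    if after < before:
        forwards += 1

print(f"{forwards} of {STEPS} steps moved toward the final position")
```

With DIMS much larger than STEPS, almost every dimension is touched at most once, so the single step that touched it necessarily left it at its final value - which is why nearly every step counts as a move "toward" where we ended up.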