After learning more about the math behind quantum mechanics, I'm pretty sure indeterminacy doesn't work that way. :P
From the Azkaban chapters:
From what Amelia heard, Dumbledore had gotten smarter toward the end of the war, mostly due to Mad-Eye's nonstop nagging; but had relapsed into his foolish mercies the instant Voldemort's body was found.
Dumbledore's lesson from his room isn't that you need to shut up and multiply; it's that war is so terrible that you must be willing to sacrifice anything to prevent it from occurring again. He traded people's lives to stop a war, but he's not willing to sacrifice anyone except to prevent more violence. Dumbledore never wanted to sacrifice his sacred values for the greater good; he was forced to by the war. From "Taboo Tradeoffs":
He had to choose between losing his war and his brother. Albus Dumbledore knows, he learned in the worst possible way, that there are limits to the value of one life; and it almost broke his sanity to admit it.
In "Pretending to Be Wise", Dumbledore says that the reason he doesn't subscribe to purely utilitarian ethics is that he doesn't trust himself:
"Grindelwald was my dark mirror, the man I could so easily have been, had I given in to the temptation to believe that I was a good person, and therefore always in the right. For the greater good, that was his slogan; and he truly believed it himself, even as he tore at all Europe like a wounded animal."
So he sticks to his virtue ethics, since he doesn't trust his morality enough to do non-virtuous things in its service, lest he become another Grindelwald. It is only when forced that he abandons his principles, and only to prevent further violence. Choosing to sacrifice someone is against his nature; his room might remind him of the costs of that course of action, but it doesn't change who he is, only makes him regret his failure in the War.
Add to that the fact that Filch is someone Dumbledore feels much sympathy toward, and the fact that he wasn't facing Lucius or anyone on the other side, and his taking Filch's side is understandable, if not expected.
Very interesting. When I was 10, a friend and I got together to "crack" the problem of indeterminacy. We also came up with this hypothesis (I fail to recall how).
(On a tangentially related note: After reading a couple of Wikipedia articles, we decided we were wrong and moved on to the hypothesis that the universe was a giant simulation, and quantum indeterminacy was floating-point error.)
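(For what it's worth, the kernel of truth in that childhood hypothesis is just that floating-point arithmetic really is inexact. A toy illustration, nothing to do with actual quantum indeterminacy:)

```python
# 0.1 has no finite binary representation, so repeatedly
# adding it accumulates a small rounding error.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # not exactly 1.0
print(total == 1.0)  # False
```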
I found this very interesting because when I was 12, I read a book very similar to the Childcraft book you mention, and also vowed never to do drugs, drink, give in to peer pressure, act angry and emotional, etc. Except later on, when I became a teenager, my guardians took this behavior as evidence of my "abnormality" and tried very hard to quash it, even going so far as to push me to drink and "fit in". Unfortunately they've been partially successful; at the very least, I resented them for a very long time.
Much like the NSA is considered ahead of the public because its leaked cipher tech is years ahead of publicly available tech, SI/MIRI is ahead of us because the things that leak out of them show that they figured out long ago what we've only just figured out.
I don't think that's a foregone conclusion. After all, there seem to be many proposals for getting around the problem of individuals competing with each other. For example, there's Eliezer's idea of using humanity's coherent extrapolated volition to guide the AI. I also think that it's in no one's interest to have hostile AI, that no one will try to bring about explicitly hostile AI on purpose, and that anyone sufficiently intelligent to program a working AI will probably recognize the dangers AI poses.
Yes, humans will fight amongst each other, and there is temptation for seed AI programmers to abuse the resulting AI to destroy their rivals. But I don't agree with the idea that AIs will always be hostile to their programmers' enemies. Under some of the proposals researchers have, it doesn't seem like individuals can abuse the AI to compete with other humans at all. A large potential for abuse doesn't mean there is no potential for a good result.
Interesting post! A relevant post might be Eliezer's Harder Choices Matter Less.