Good post!
In this metaphor, it seems to me that Qui-Gon also has a strong positive prior on [a child with a lot of midi-chlorians] being good. In other words, because of the prophecy, he believes that developing a lot of capabilities is likely to turn out well. The council doesn't seem to share this prior. As you say, they are worried about the misalignment and the bad consequences it may lead to.
Of course, this is only fiction; the fact that this kind of optimism led to a terrible outcome says little about how our future will go. A bad outcome made for a better story.
Thanks!
Yup I think you're right about Qui-Gon and the council having different priors here and that being the reason for their different reactions. And yeah, definitely gotta be careful about The Logical Fallacy of Generalizing from Fictional Evidence here.
In Star Wars: Episode I - The Phantom Menace, Master Jedi Qui-Gon Jinn discovers Anakin. Anakin is just a young child slave on the forgettable planet of Tatooine. However, Qui-Gon feels something noteworthy about the boy.
He feels... the Force. To confirm this feeling, he takes a sample of blood from Anakin.
So then, Qui-Gon wants to take advantage of this. He basically has a budding superweapon in front of him and wants to utilize it in the name of Good.
More specifically, he believes that Anakin is the Chosen One.
Given this, Qui-Gon wants to train Anakin as a Jedi so that the prophecy can be fulfilled. But when Qui-Gon presents this proposal to the Jedi Council, they are not eager to jump on it.
Damn Yoda. What a good Bayesian.
New scene.
And so, the Jedi Council declines to allow Anakin to be trained as a Jedi.
When I think about these scenes, I think about alignment vs capabilities.
In the context of AI, we talk about alignment and capabilities. Capabilities research is what makes AIs more powerful. It's what takes us from GPT-2 to GPT-3 to GPT-4 to GPT-N. Alignment is what increases the probability of the AI doing good instead of bad.
In the context of Star Wars, Qui-Gon seems to be ignoring the question of alignment. He has faith that, if trained, Anakin's power would be used for good. That it would be a net positive. The Jedi Council, on the other hand, reacts with fear, worry, skepticism, and a desire to explore the question of alignment before pursuing developments in capabilities.
This question of alignment vs capabilities is not specific to AI and Star Wars, of course. It is more general than that. It can be applied to anything, really.
Knowledge of the physics of subatomic particles led to both nuclear energy and nuclear weapons. Knowledge of the chemistry of molecules led to both medicine and cigarettes. Knowledge of human psychology led to both cognitive behavioral therapy and Facebook.
It's not just knowledge though. An apple can be consumed as a delicious snack (good) or thrown at the head of your younger brother (bad).
If things can be used for both good and bad, surely we have to think about whether or not the good will outweigh the bad. I don't think this happens nearly enough, though. We currently have too much Qui-Gon and not enough Yoda.