There are no discrete "worlds" and "branches" in quantum physics as such. Once two regions in state space are sufficiently separated that they no longer significantly influence each other, they might be considered split, which makes the answer to your question "yes" by definition.
Highly specific predictions should be assigned lower probability when you update on a statement like 'unpredictable'.
That depends on what your initial probability is and why. If it is already low due to updates on predictions about the system, then updating on "unpredictable" will increase the probability by lowering the strength of those predictions. Since the destruction of humanity is rather important, even if the existential AI risk scenario is of low probability, it matters exactly how low.
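As a rough illustration (my own toy numbers, nothing from the actual discussion): a trusted specific prediction pushes the probability of the catastrophe hypothesis down, and learning that the system is unpredictable weakens that prediction, pulling the probability back up toward the prior.

```python
# Toy Bayesian sketch with made-up numbers.
# H = the catastrophe hypothesis; E = a specific prediction about the system
# that, if reliable, is strong evidence against H.

prior_h = 0.10  # starting credence in H

def posterior(prior, likelihood_h, likelihood_not_h):
    """Single Bayes update: P(H|E) from P(H) and the two likelihoods."""
    joint_h = prior * likelihood_h
    joint_not_h = (1 - prior) * likelihood_not_h
    return joint_h / (joint_h + joint_not_h)

# If E is trusted, it is much likelier under not-H than under H...
p_h_trusted = posterior(prior_h, 0.2, 0.8)

# ...but "the system is unpredictable" weakens E: its likelihoods move
# toward each other, so E carries much less evidence against H.
p_h_unpredictable = posterior(prior_h, 0.45, 0.55)

print(round(p_h_trusted, 3))        # 0.027 -- trusted prediction lowers P(H)
print(round(p_h_unpredictable, 3))  # 0.083 -- weakened prediction leaves P(H) near the prior
```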
This of course has the same shape as Pascal's mugging...
Just because it doesn't do exactly what you want doesn't mean it is going to fail in some utterly spectacular way.
I certainly agree, and I am not even sure what the official SI position is on the probability of such failure. I know that Eliezer in his writing does give the impression that any mistake will mean certain doom, which I believe to be an exaggeration. But failure of this kind is fundamentally unpredictable, and if a low-probability event kills you, you are still dead, and I think that it is high enough that the Friendly AI type effort would not be wasted.
Just because software is built line by line doesn't mean it automatically does exactly what you want. In addition to outright bugs, any complex system will have unpredictable behaviour, especially when exposed to real-world data. Just because the system can restrict the search space sufficiently to achieve an objective doesn't mean it will restrict itself only to the parts of the solution space the programmer wants. The basic purpose of the Friendly AI project is to formalize the human value system sufficiently that it can be included in the specification of such a system.
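As a toy sketch (my own hypothetical example, with made-up candidates and scores), here is how a search can satisfy the objective exactly as specified while landing outside the part of the solution space the programmer wanted:

```python
# Hypothetical toy example of an unintended optimum.
# Stated objective: minimize the number of error messages in the log.
# Intended solution space: fix the bugs. Unintended optimum: suppress logging.

candidates = {
    "fix_bugs":        {"errors_logged": 2,  "bugs_remaining": 2},
    "do_nothing":      {"errors_logged": 10, "bugs_remaining": 10},
    "disable_logging": {"errors_logged": 0,  "bugs_remaining": 10},
}

def objective(stats):
    """The objective as actually specified: fewer logged errors is better."""
    return -stats["errors_logged"]

best = max(candidates, key=lambda name: objective(candidates[name]))
print(best)  # "disable_logging" -- optimal under the spec, useless to the programmer
```

The point is not that software is malicious, only that the optimum of the written objective need not coincide with the optimum of the intended one.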
Long before you have to worry about the software finding an unintended way to achieve the objective, you encounter the problem of the software not finding any way to achieve the objective.
Well, obviously, since it is pretty much the problem we have now. The whole point of Friendly AI as formulated by SI is that you have to solve the former problem before the latter is solved, because once the software can achieve any serious objectives it will likely cause enormous damage on its way there.
As often happens, it is largely a matter of definitions. If by an "end" you mean a terminal value, then no purely internal process can change that value, because otherwise it wouldn't be terminal. This is essentially the same as the choice of reasoning priors, in that anything that can be chosen is, by definition, not a prior, but a posterior of the choice process.
Obviously, if you split the reasoning process into sections, then the posteriors of one section can become the priors of the sections that follow. Likewise, certain means can be treated as ends within a particular section, even though they remain means relative to the whole process.
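A minimal sketch of that posterior-becomes-prior structure, with made-up numbers:

```python
# Splitting inference into sections: the posterior of one section serves
# as the prior of the next. Numbers are made up for illustration.

def update(prior, likelihood_h, likelihood_not_h):
    """One Bayes update for a binary hypothesis H."""
    joint_h = prior * likelihood_h
    return joint_h / (joint_h + (1 - prior) * likelihood_not_h)

p = 0.5  # the prior going into section 1

# Section 1: update on evidence E1; its output is a posterior...
p = update(p, 0.7, 0.3)

# ...which, relative to section 2, functions as a prior for evidence E2.
p = update(p, 0.6, 0.4)

print(round(p, 3))  # 0.778 -- the same number as updating on E1 and E2 jointly
```

What counts as a "prior" here is relative to where you draw the section boundary, which is the sense in which the question is a matter of definitions.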
Let's try a car analogy for a compatibilist position, as I understand it: there is a car; why does it move? Because it has an engine and wheels and other parts all arranged in a specific pattern. There is no separate "carness" that makes it move ("automobileness" if you will); it is the totality of its parts that makes it a car.
Will is the same: it is the totality of your identity that creates the process by which choices are made. This doesn't mean there is no such thing as will, any more than the fact that a car is composed of identifiable parts means that no car exists; it is just not a basic, indivisible thing.