Vratko_Polak
Vratko_Polak has not written any posts yet.
Why, then, don't more people realize that many worlds is correct?
I am going to try to provide a short answer, as I see it. (Fighting the urge to write about different levels of "physical reality".)
Many Worlds is an interpretation. An interpretation should translate from the mathematical formalism to practical algorithms, but MWI does not go all the way. Namely, it does not specify which quantum state an agent should use for computation. One possible state agrees with "Schroedinger's experiment was definitely set up and started"; another state implies "the cat definitely turned out to be alive"; but those certainties cannot hold simultaneously.
Bayesian inference in non-quantum physics also changes the (probabilistic) state, but we can interpret it as...
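To make the comparison concrete, here is a minimal sketch of what "Bayesian inference changes the probabilistic state" means classically. The hypotheses and numbers are illustrative, not from the original comment:

```python
# Minimal sketch: a classical Bayesian update, where observing evidence
# changes the agent's probabilistic state (posterior over hypotheses).
# Hypothesis names and probabilities here are illustrative assumptions.

def bayes_update(prior, likelihood):
    """Return posterior P(H|E) given prior P(H) and likelihood P(E|H)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

prior = {"cat_alive": 0.5, "cat_dead": 0.5}
likelihood = {"cat_alive": 0.9, "cat_dead": 0.1}  # P(evidence | hypothesis)
posterior = bayes_update(prior, likelihood)
# posterior["cat_alive"] == 0.9
```

The update replaces the old state (the prior) with a new one (the posterior); nothing "collapses", the agent's description of the world simply changes.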
political systems (such as democracy) are about power.
Precisely. Democracy allows competition between governing ideas, grants legitimacy to the winner (to become the government), and keeps the system stable.
I see the idea of democracy as detecting power shifts without open conflict. How many fighters would this party have if civil war erupted? An election will show. The number of votes may be very far from actual power (e.g. military strength), but it can still make the weaker side stop seeking conflict.
Without being explicit about what power ems will have, specifically in the meatworld, the question seems too ill-defined to me.
Well, I am not even sure about the powers of individual humans today. But I am sure...
Thanks for the tip and for the welcome. Now I see that what I really needed was just to read the manual first. By the way, where is the appropriate place to write comments about how misleading the sandbox (in contrast with the manual) actually is?
Yes, CEV is a slippery slope. We should make sure to be as aware of the possible consequences as is practical before taking the first step. But CEV is the kind of slippery slope intended to go "upwards", in the direction of greater good and less biased morals. In the hands of a superintelligence, I expect CEV to extrapolate values beyond "weird", to "outright alien" or "utterly incomprehensible", very fast. (Abandoning Friendliness on the way, for something less incompatible with The Basic AI Drives. But that is for a completely different topic.)
There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to...
I was reading Outlawing Anthropics, and this subconversation in particular caught my attention. I have some ideas, but that thread is nearly four years old, so I am commenting here instead of there.
My version of the simplified situation: There is an intelligent rational agent (her name is Abby; she is well versed in Bayesian statistics) and there are two urns, each containing two marbles. Three of the marbles are green. Being macroscopic, they are distinguishable in principle, but not by Abby's senses. Abby can number them marbles 1, 2, and 3; she is just unable to "read" the number even on close examination. One marble is red, which she can distinguish...
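The full setup is truncated above, but the urn composition alone supports a small worked example. A minimal sketch, assuming Abby picks one of the two urns uniformly at random and draws a single marble; this drawing protocol and question are my assumptions, not necessarily those of the original thread:

```python
# Hedged sketch: Bayesian update on the urn setup described above.
# Urn A holds two green marbles; urn B holds one green and one red
# (the only split consistent with "two urns, two marbles each, three green").
# The protocol (uniform urn choice, single draw) is an assumption.

prior = {"A": 0.5, "B": 0.5}     # P(chosen urn)
p_green = {"A": 1.0, "B": 0.5}   # P(draw green | urn)

# Posterior over urns after observing a green marble:
unnorm = {u: prior[u] * p_green[u] for u in prior}
total = sum(unnorm.values())
posterior = {u: p / total for u, p in unnorm.items()}
# posterior["A"] == 2/3: a green draw makes the all-green urn twice as likely.
```

This is the straightforward non-anthropic update; the interesting part of the original thread is whether Abby's reasoning should change when copies or indistinguishable observers enter the picture.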