Comment author: Vratko_Polak 17 June 2013 10:16:53PM *  0 points [-]

I was reading Outlawing Anthropics, and this subconversation in particular caught my attention. I have some ideas, but that thread is nearly four years old, so I am commenting here instead of there.

My version of the simplified situation: there is an intelligent rational agent (her name is Abby; she is well versed in Bayesian statistics), and there are two urns, each containing two marbles. Three of the marbles are green; being macroscopic, they are distinguishable in principle, but not to Abby's senses. Abby can still number them 1, 2 and 3, she is just unable to "read" the number even on close examination. The fourth marble is red, so she can distinguish it, and it gets the number 0. One urn holds marbles 0 and 2 (the "even" urn); the other holds marbles 1 and 3 (the "odd" urn). Again, Abby cannot distinguish the urns without examining the marbles. Now an assistant takes both urns to another room, computes the 256th binary digit of exp(-1), and returns with only the urn of the corresponding parity. Abby is allowed to draw one marble (which turns out to be green); then the urn is taken away and Abby is, in effect, asked to state her subjective probability that the urn is odd (by accepting or refusing some bets). Only then is she told that in another room there is another person (Bart) who is being presented with the same choices after drawing the other marble from the very same urn. Finally, Abby is asked (informally) what her averaged expectation of Bart's subjective probability of the urn being odd is, now that she sees her marble is green. And, if this average differs from her own subjective probability, why is she not taking that value as indirect evidence in her calculations (which clearly means that the assistant is just messing with her)?

The assumptions are that neither Abby nor Bart has a clue about the binary digits of exp(-1); they are unable to compute that far, so they assign a prior probability of 50% to the urn being odd. Another assumption is that Abby and Bart each chose their marble at random; in fact, they do not even know which of them drew first. So there are four "possible" worlds, numbered by the marble Abby "would" have drawn, all of them appearing equally probable before the marble is drawn.

The question is (of course) what subjective probability Abby should use when accepting or refusing bets, and how to give a witty retort to the assistant's "why" question where applicable; or else, how to explain why Boltzmann brains are not such a big obstacle to rationality.
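For concreteness, here is a small Python sketch of the arithmetic as I understand the setup (marble 0 is red, the even urn holds {0, 2}, the odd urn holds {1, 3}, and the four worlds are equiprobable a priori):

```python
from fractions import Fraction

# Possible worlds: (urn parity, marble Abby draws), each with prior 1/4.
urns = {"even": [0, 2], "odd": [1, 3]}
green = {1, 2, 3}  # marble 0 is red

worlds = [(parity, m, Fraction(1, 4))
          for parity, marbles in urns.items()
          for m in marbles]

def p_odd_given(observed_green):
    """Posterior P(urn is odd | colour of the drawn marble)."""
    consistent = [(p, m, pr) for p, m, pr in worlds
                  if (m in green) == observed_green]
    total = sum(pr for _, _, pr in consistent)
    return sum(pr for p, _, pr in consistent if p == "odd") / total

abby = p_odd_given(True)  # Abby sees green -> 2/3

# Abby's averaged expectation of Bart's answer: Bart draws the other marble,
# which is red in one of the three surviving worlds.
surviving = [(p, m, pr) for p, m, pr in worlds if m in green]
total = sum(pr for _, _, pr in surviving)
expected_bart = sum(
    pr * p_odd_given(next(x for x in urns[p] if x != m) in green)
    for p, m, pr in surviving
) / total

print(abby)           # 2/3
print(expected_bart)  # 4/9
```

So Abby's own posterior is 2/3, while her expectation of Bart's posterior comes out 4/9; the mismatch is exactly the assistant's "why" question.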

And here I am, way over my time budget, having finished around one third of my planned comment. So I guess I shall leave you with the questions for now, and I will resume commenting later.

Edit: Note to self: Do not forget to include http:// in links. RTFM.

Edit: "possible" worlds, numbered by marble Abby has drawn -> "possible" worlds, numbered by marble Abby "would" have drawn

Comment author: endoself 05 June 2013 09:20:22AM 2 points [-]

From If Many-Worlds had Come First:

the thought experiment goes: 'Hey, suppose we have a radioactive particle that enters a superposition of decaying and not decaying. Then the particle interacts with a sensor, and the sensor goes into a superposition of going off and not going off. The sensor interacts with an explosive, that goes into a superposition of exploding and not exploding; which interacts with the cat, so the cat goes into a superposition of being alive and dead. Then a human looks at the cat,' and at this point Schrödinger stops, and goes, 'gee, I just can't imagine what could happen next.' So Schrödinger shows this to everyone else, and they're also like 'Wow, I got no idea what could happen at this point, what an amazing paradox'. Until finally you hear about it, and you're like, 'hey, maybe at that point half of the superposition just vanishes, at random, faster than light', and everyone else is like, 'Wow, what a great idea!'"

Obviously this is a parody and Eliezer is making an argument for many worlds. However, this isn't that far from how the thought experiment is presented in introductory books and even popularizations. Why, then, don't more people realize that many worlds is correct? Why aren't tons of bright middle-school children who read science fiction and popular science spontaneously rediscovering many worlds?

Comment author: Vratko_Polak 10 June 2013 08:22:04PM -1 points [-]

Why, then, don't more people realize that many worlds is correct?

I am going to try to provide a short answer, as I see it. (Fighting the urge to write about different levels of "physical reality".)

Many Worlds is an Interpretation. An interpretation should translate from the mathematical formalism towards practical algorithms, but MWI does not go all the way. Namely, it does not specify the quantum state an agent should use for computation. One possible state agrees with "Schroedinger's experiment was definitely set up and started"; another state implies "the cat definitely turned out to be alive"; but those certainties cannot hold simultaneously.

Bayesian inference in non-quantum physics also changes a (probabilistic) state, but we can interpret that as a mere change of our beliefs, not a change in the physical system. In quantum mechanics, however, upon observation the "objective" state fitting our knowledge changes. MWI says that "fitting our knowledge" is not a good criterion for choosing the quantum state to compute with (because no state can be fitting enough, as the example of Schroedinger's cat shows), and that we should compute with a superposition of agents. MWI may be more "objectively correct", but it does not seem to be more "practical" than the Copenhagen interpretation. So physicists like to cautiously agree with MWI, then wave their hands, proclaim "Decoherence!", and in the end use the Copenhagen interpretation as before.
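A toy contrast of the two kinds of update (all numbers are made up; the point is only which object gets updated):

```python
import math

# Classical Bayes: an observation updates *beliefs*; the system is unchanged.
prior = {"heads": 0.5, "tails": 0.5}
likelihood = {"heads": 0.9, "tails": 0.1}          # P(evidence | state)
unnorm = {s: prior[s] * likelihood[s] for s in prior}
z = sum(unnorm.values())
posterior = {s: p / z for s, p in unnorm.items()}  # beliefs move; coin does not

# Copenhagen-style quantum update: the *state used for computation* changes.
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # amplitudes for |alive>, |dead>
p_alive = psi[0] ** 2                       # Born rule: probability 1/2
psi_after_seeing_alive = [1.0, 0.0]         # all further predictions use this
```

In the classical half only the probability dictionary moves; in the quantum half the very state vector fed into future calculations is replaced, which is the step MWI objects to.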

Introductory books emphasize experiments, and experimental results do not come in the form of superposed bits. So by the time a student gets familiar enough with the mathematical formalism to think about detectors in superposition, Copenhagen already occupies the slot for Interpretation.

Comment author: Lumifer 06 June 2013 06:33:09PM *  1 point [-]

...democracy?

/flicks the OFF switch.

(to be a bit more clear, political systems (such as democracy) are about power. Without being explicit about what power ems will have, specifically in the meatworld, the question seems too ill-defined to me)

Comment author: Vratko_Polak 10 June 2013 06:18:45AM 1 point [-]

political systems (such as democracy) are about power.

Precisely. Democracy allows governing ideas to compete, granting legitimacy to the winner (to become the government) and making the system stable.

I see the idea of democracy as detecting power shifts without open conflict. How many fighters would this party have if civil war erupted? An election will show. The number of votes may be very far from actual power (e.g. military strength), but it can still make the weaker side stop seeking conflict.

Without being explicit about what power ems will have, specifically in the meatworld, the question seems too ill-defined to me

Well, I am not even sure about the powers of individual humans today. But I am sure that counting adult = 1 vote, adolescent = 0 votes is not precise. On the other hand, it does not need to be precise. Every form of power can be roughly transformed into "ability to campaign for more votes". Making votes more sophisticated would add a subgoal of "increasing voting power" that could become as taxing as actual conflict. Or not; I really have no idea; sociology is difficult.

Back on topic. I see problems when ems are more varied in personal power than the children-versus-adults variance of today. Would "voting weight" have to be more fine-grained? Would this weight be measured in friendly competition, akin to the sports of today? Or would there be a privileged caste, with everyone else having no voting rights? Would voting rights be granted not to persons but to military platforms instead? (Those platforms would not actually be used; they would exist just for signalling purposes.) Or would any simpleton barely managing a digital signature be a voter, subject to brainwashing by those with actual power?

I hope that these low-quality questions can help someone else give high-quality answers.

But I want to stress that I do not see any problems specific to the copy-ability of ems. Democracy only measures the power of a political party; it does not reflect on which methods led to that power.

Comment author: Kawoomba 08 June 2013 11:03:04AM 0 points [-]

P.S.: Does this web interface have anything like "preview" button?

There's a sandbox here; it's also linked when you click "Show help", the button at the lower right corner of the text box that opens when you start a reply. Welcome, yay for more PhD-level physicists.

Comment author: Vratko_Polak 08 June 2013 11:34:37AM 1 point [-]

Thanks for the tip and for the welcome. Now I see that what I really needed was just to read the manual first. By the way, where is the appropriate place to write comments about how misleading the sandbox (in contrast with the manual) actually is?

Comment author: nigerweiss 07 June 2013 10:02:18PM *  2 points [-]

There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.

One obvious way to solve the problem you raise is to treat 'modifying your current value approximation' as an object-level action by the AI, and one that requires it to compute your current EV - meaning that, if the logical consequences of the change (including all the future changes that the AI predicts will result from that change) don't look palatable to you, the AI won't make the first change. In other words, the AI will never assign you a value set that you find objectionable right now. This is safe in some sense, but not ideal. The profoundly racist will never accept a version of their values which, because of its exposure to more data and fewer cognitive biases, isn't racist. Ditto for the devoutly religious. This model of CEV doesn't offer the opportunity for growth.

It might be wise to compromise by locking the maximum number of edges in the graph between you and your EV to some small number, like two or three - a small enough number that value drift can't take you somewhere horrifying, but not so tightly bound up that things can never change. If your CEV says it's okay under this schema, then you can increase or decrease that number later.

Comment author: Vratko_Polak 08 June 2013 10:35:28AM *  3 points [-]

Yes, CEV is a slippery slope. We should make sure to be as aware of the possible consequences as practical before taking the first step. But CEV is the kind of slippery slope intended to go "upwards", in the direction of greater good and less biased morals. In the hands of a superintelligence, I expect CEV to extrapolate values beyond "weird", to "outright alien" or "utterly incomprehensible", very fast. (Abandoning Friendliness on the way for something less incompatible with The Basic AI Drives. But that is for a completely different topic.)

There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.

Thank you for mentioning "childish foolishness". I was not sure whether such suggestive emotional analogies would be welcome. This is my first comment on LessWrong, you know.

Let me just state that I was surprised by my strong emotional reaction while reading the original post. As long as the higher versions are extrapolated to be more competent, moral, responsible and so on, they should be allowed to be extrapolated further.

If anyone considers the original post a formulation of a problem (and ponders possible solutions), and if said anyone is interested in counter-arguments based on shallow, emotional and biased analogies, here is one such analogy: imagine children pondering their future development. They envision growing up, but they also see themselves starting to care more about work and less about play. The children consider those extrapolated values unwanted, so they formulate the scenario as "the problem of growing up" and try to come up with a safe solution. Of course, you may substitute "play versus work" with any "children versus adults" trope of your choice. Or "adolescents versus adults", and so on.

Readers may wish to counter-balance any emotional "aftertaste" by focusing on The Legend of Murder-Gandhi again.

P.S.: Does this web interface have anything like "preview" button?

Edit: typo and grammar.