Comment author: 19 February 2013 11:11:00AM *  2 points [-]

you cannot apply the category of "quantum random" to an actual coin flip, because for an object to be truly so, it must be in a superposition of at least two different pure states, a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).

Given the level of subtlety in the question, which gets at the relative nature of superposition, this claim doesn't quite make sense. If I am entangled with a state that you are not entangled with, it may "be superposed" from your perspective but not from any of my various perspectives.

For example: a projection of the universe can be in state

(you observe NULL)⊗(I observe UP)⊗(photon is spin UP) + (you observe NULL)⊗(I observe DOWN)⊗(photon is spin DOWN) = (you observe NULL)⊗((I observe UP)⊗(photon is spin UP) + (I observe DOWN)⊗(photon is spin DOWN))

The fact that your state factors out means you are disentangled from the joint state of me and the particle, and so together the particle and I are "in a superposed state" from "your perspective". However, my state does not factor out here; there are (at least) two of me, each observing a different outcome rather than a superposed photon.
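The factoring claim can be checked numerically. Here is a small NumPy sketch; the finite-dimensional registers for "you", "me", and the photon are my own toy encoding, not anything from the comment:

```python
import numpy as np

# Toy finite-dimensional stand-ins: each party is a 2-dimensional register.
null = np.array([1.0, 0.0])   # |you observe NULL>
up, down = np.eye(2)          # |UP>, |DOWN> basis vectors

# Left-hand side: the sum of two fully expanded tensor products.
lhs = (np.kron(np.kron(null, up), up)
       + np.kron(np.kron(null, down), down))

# Right-hand side: "your" state factored out of the me+photon pair.
pair = np.kron(up, up) + np.kron(down, down)
rhs = np.kron(null, pair)

# The joint state factors, so "you" are disentangled from me and the photon,
# while the me+photon pair itself is a Bell-type state that does not factor.
assert np.allclose(lhs, rhs)
```

The non-factoring of the me+photon pair can be seen the same way: reshaped into a 2×2 matrix, `pair` has rank 2, so it is not a single tensor product of two one-party states.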

Anyway, having cleared that up, I'm not convinced that there is enough mutual information connecting my frontal lobe and the coin for the state of the coin to be entangled with me (i.e. not "in a superposed state") before I observe it. I realize this is testable, e.g., if the state amplitudes of the coin can be forced to have complex arguments differing in a predictable way so as to produce an expected and measurable interference pattern. This is what we have failed to produce at a macroscopic level, and it is this failure that you are talking about when you say

a situation that with a coin at room temperature has yet to be achieved (and will continue to be so for a very long time).

I do not believe I have been shown a convincing empirical test ruling out the possibility that the coin's state is, from my brain's perspective, a superposition of vastly many states, with amplitudes whose complex arguments are too difficult to predict or control to produce clear interference patterns, half of which are "heads" states and half of which are "tails" states. But I am very ready to be corrected on this, so if anyone can help me out, please do!
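To illustrate why uncontrollable complex arguments matter here, a toy two-path calculation (my own illustration, not from the comment): with a controllable relative phase you get visible fringes, while averaging over unpredictable phases reproduces flat 50/50 statistics indistinguishable from a classical coin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two paths with equal weight; the observable intensity |(1 + e^{i*phi})/2|^2
# depends on the relative phase phi between the two amplitudes.
def intensity(phi):
    return np.abs((1 + np.exp(1j * phi)) / 2) ** 2

# Controllable phases -> visible interference fringes (swings from 1 down to 0).
fringes = intensity(np.linspace(0, 2 * np.pi, 9))

# Unpredictable phases -> the fringes average away to ~1/2, which looks
# exactly like a classical 50/50 mixture of heads and tails.
avg = intensity(rng.uniform(0, 2 * np.pi, 100_000)).mean()
```

This is the operational content of "no measurable interference pattern": not that the phases are absent, but that they cannot be predicted or controlled well enough for the fringes to survive averaging.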

Comment author: 12 January 2013 08:07:55PM 0 points [-]

I disagree. Justification is the act of explaining something in a way that makes it seem less dirty.

Comment author: 11 January 2013 11:40:57PM *  3 points [-]

If you're curious about someone else's emotions or perspective, first, remember that there are two ways to encode knowledge of how someone else feels: by having a description of their feelings, or by empathizing and actually feeling them yourself. It is more costly --- in terms of emotional energy --- to empathize with someone, but if you care enough about them to afford them that cost, I think it's the way to go. You can ask them to help you understand how they feel, or help you to see things the way they do. If you succeed, they'll appreciate having someone who can share their perspective.

In response to Macro, not Micro
Comment author: 08 January 2013 07:21:54PM *  2 points [-]

My summary of this idea has been that life is a non-convex optimization problem. Hill-climbing will only get you to the top of the hill that you're on; getting to other hills requires periodic re-initializing. Existing non-convex optimization techniques are often heuristic rather than provably optimal, and the ones that do come with guarantees tend to be slow.
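A minimal sketch of that framing (the objective, step size, and restart count are made up for illustration): greedy hill-climbing stalls on whichever peak it starts under, while random restarts reach the global one.

```python
import random

# Toy non-convex landscape: a local peak at x = -1 (height 1) and the
# global peak at x = 2 (height 3).
def f(x):
    return max(1 - (x + 1) ** 2, 3 - (x - 2) ** 2)

def hill_climb(x, step=0.05, iters=500):
    # Greedy local search: only ever moves uphill, so it tops out on
    # whichever hill the starting point happens to sit on.
    for _ in range(iters):
        x = max((x - step, x, x + step), key=f)
    return x

random.seed(0)
# Periodic re-initialization: restart from fresh random points, keep the best.
starts = [random.uniform(-4, 4) for _ in range(10)]
best = max((hill_climb(x0) for x0 in starts), key=f)
```

A single run started at, say, x = -3 converges to the local peak at -1; the restarted version finds the peak near 2.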

Comment author: 07 January 2013 06:46:50PM *  2 points [-]

And the point of CFAR is to help people become better at filtering good ideas from bad. It is plainly not to produce people who automatically believe the best verbal argument anyone presents to them without regard for what filters that argument has been through, or what incentives the Skilled Arguer might have to utter the Very Convincing Argument for X instead of the Very Very Convincing Argument for Y. And certainly not to have people ignore their instincts; e.g., CFAR constantly recommends Thinking, Fast and Slow by Kahneman, and teaches exercises for extracting more information from emotional and physical senses.

Comment author: 07 January 2013 06:32:04PM *  3 points [-]

What if we also add a requirement that the FAI doesn't make anyone worse off in expected utility compared to no FAI?

I don't think that seems reasonable at all, especially when some agents want to engage in massively negative-sum games with others (like those you describe), or have massively discrete utility functions that prevent them from compromising with others (like those you describe). I'm okay with some agents being worse off with the FAI, if that's the kind of agents they are.

Luckily, I think people, given time to reflect and grow and learn, are not like that, which is probably what made the idea seem reasonable to you.

Comment author: 07 January 2013 06:22:57PM 2 points [-]

Non-VNM agents satisfying only axiom 1 have coherent preferences... they just don't mix well with probabilities.

Comment author: 07 January 2013 06:14:35PM *  0 points [-]

Dumb solution: an FAI could have a sense of justice which downweights the utility function of people who are killing and/or procreating to game their representation in the AI's utility function, or something like that, to disincentivize it. (It's dumb because I don't know how to operationalize justice; maybe enough people would not cheat and want to punish the cheaters that the FAI would figure that out.)

Also, given what we mostly believe about moral progress, I think defining morality in terms of the CEV of all people who ever lived is probably okay... they'd probably learn to dislike slavery in the AI's simulation of them.

Comment author: 07 January 2013 06:06:55PM 0 points [-]

Thanks for writing this up!

Comment author: 06 January 2013 04:32:06PM *  3 points [-]

I don't see how it could be true even in the sense described in the article without violating Well Foundation somehow

Here's why I think you don't get a violation of the axiom of well-foundation from Joel's answer, starting from way-back-when-things-made-sense. If you want to skim and intuit the context, just read the bold parts.

1) Humans are born and see rocks and other objects. In their minds, a language forms for talking about objects, existence, and truth. When they say "rocks" in their head, sensory neurons associated with the presence of rocks fire. When they say "rocks exist", sensory neurons associated with "true" fire.

2) Eventually the humans get really excited and invent a system of rules for making cave drawings like "∃" and "x" and "∈" which they call ZFC, which asserts the existence of infinite sets. In particular, many of the humans interpret the cave drawing "∃" to mean "there exists". That is, many of the same neurons fire when they read "∃" as when they say "exists" to themselves. Some of the humans are careful not to necessarily believe the ZFC cave drawing, and imagine a guy named ZFC who is saying those things... "ZFC says there exists...".

3) Some humans find ways to write a string of ZFC cave drawings which, when interpreted --- when allowed to make human neurons fire --- in the usual way, mean to the humans that ZFC is consistent. Instead of writing out that string, I'll just write <ZFC is consistent> in place of it.

4) Some humans apply the ZFC rules to turn the ZFC axiom-cave-drawings and the cave drawing <ZFC is consistent> into a cave drawing that looks like this:

"∃ a set X and a relation e such that <(X,e) is a model of ZFC>"

where <(X,e) is a model of ZFC> is a string of ZFC cave drawings that means to the humans that (X,e) is a model of ZFC. That is, for each axiom A of ZFC, they produce another ZFC cave drawing A' where "∃y" is always replaced by "∃y∈X", and "∈" is always replaced by "e", and then derive that cave drawing from the cave drawing "<ZFC axioms> and <ZFC is consistent>" according to the ZFC rules.

Some cautious humans try not to believe that X really exists... only that ZFC and the consistency of ZFC imply that X exists. In fact if X did exist and ZFC meant what it usually does, then X would be infinite.
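The replacement step in (4) can be sketched as pure string rewriting. This is only a toy of the relativization described above (a real treatment would parse formulas and bound every quantifier, not just the literal string "∃y"):

```python
# Toy of the relativization in step 4, as naive string rewriting.
def relativize(axiom: str) -> str:
    # Rename the axiom's own membership symbol to the relation e on X...
    out = axiom.replace("∈", "e")
    # ...then bound the existential so it ranges over elements of X
    # (this "∈" is real membership in X, which is why it is done second).
    return out.replace("∃y", "∃y∈X")
```

For example, `relativize("∃y (x ∈ y)")` gives `"∃y∈X (x e y)"`: the quantifier now ranges over X, and the membership claim has become a claim about e.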

5) The humans derive another cave drawing from ZFC+<ZFC is consistent>:

"∃Y∈X and f∈X such that <(Y,f) is a model of ZFC>".

6) The humans derive yet another cave drawing,

"∃ZeY and geX such that <(Z,g) is a model of ZFC>".

Some of the humans, like me, think for a moment that Z∈Y∈X, and that if ZFC can prove this pattern continues then ZFC will assert the existence of an infinite regress of sets violating the axiom of well-foundation... but actually, we only have "ZeY∈X"... ZFC only says that Z is related to Y by the extra-artificial e-relation that ZFC said existed on X.

I think that's why you don't get a contradiction of well-foundation.
