Comment author: Strilanc 14 January 2015 08:06:28AM *  0 points [-]

I think this is just a more-involved version of the Elitzur-Vaidman bomb tester. The main difference seems to be that they're going out of their way to make sure the photons that interact with the object are at a different frequency.

The quantum bomb tester works by relying on the fact that the two arms interfere with each other to prevent one of the detectors from going off. But if there's a measure-like interaction on one arm, that cancelled-out detector starts clicking. The "magic" is that it can click even when the interaction doesn't occur. (I think the many-worlds view here is that the bomb blew up in one world, creating differences that prevented that world from ending up in the same state as, and thus interfering with, the non-bomb-blowing-up worlds.)
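A toy amplitude calculation makes the cancellation concrete. This is just my own numpy sketch of a Mach-Zehnder interferometer, using the usual convention that reflection picks up a factor of i:

```python
import numpy as np

# 50:50 beam splitter: the reflected amplitude picks up a factor of i
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0])  # photon enters port 0

# No bomb: two beam splitters in a row, and the arms interfere
out = B @ B @ photon
p_dark = abs(out[0])**2  # the "cancelled-out" detector
print(p_dark)  # ~0.0: destructive interference keeps it silent

# Live bomb in arm 1: it acts as a which-path measurement, killing interference
after_bs1 = B @ photon
p_boom = abs(after_bs1[1])**2           # ~0.5: photon took the bomb arm
survived = np.array([after_bs1[0], 0])  # unnormalized branch where it didn't
out_bomb = B @ survived
p_dark_bomb = abs(out_bomb[0])**2
print(p_boom, p_dark_bomb)  # ~0.5, ~0.25: the dark detector now clicks
```

So a quarter of the time the dark detector fires even though the photon never touched the bomb, which is the whole trick.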

Comment author: calef 07 January 2015 11:52:44PM *  2 points [-]

Here's a discussion of the paper by the authors. For a sort of critical discussion of the result, see the comments in this blog post.

Comment author: Strilanc 08 January 2015 12:02:38AM 2 points [-]

This is an attempt at a “plain Jane” presentation of the results discussed in the recent arxiv paper

... [No concrete example given] ...

Urgh...

Comment author: Omid 05 January 2015 04:47:59PM 1 point [-]

So I signed up for a password manager, and even got a complex password. But how do I remember the password? It's a random combination of upper- and lower-case letters plus numbers. I suppose I could use spaced repetition software to memorize it, but wouldn't that be insecure?

Comment author: Strilanc 05 January 2015 07:26:48PM 3 points [-]
  • Write the password down on paper and keep that paper somewhere safe.
  • Practice typing it in. Practice writing it down. Practice singing it in your head.
  • Set things up so you have to enter it periodically.
Comment author: Luke_A_Somers 19 September 2013 05:14:27PM 6 points [-]

You can build arbitrarily-phase-shifting optical components. There's no reason one couldn't make half-silvered mirrors with a coating that makes them act like Eliezer's... and any physicist ought to know this. Plus, the real issue is the total difference in phase across the two paths, and you can tweak that however you like by adjusting the path lengths.
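The path-length point is easy to check numerically. A quick sketch (my own toy model of a Mach-Zehnder interferometer with an adjustable relative phase on one arm):

```python
import numpy as np

# 50:50 beam splitter with i on the reflected amplitude
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def detector_probs(phase_difference):
    # Split, apply a relative phase to one arm (e.g. via a longer
    # path or a phase-shifting coating), then recombine
    shift = np.array([[1, 0], [0, np.exp(1j * phase_difference)]])
    out = B @ shift @ B @ np.array([1, 0])
    return np.abs(out)**2

print(detector_probs(0))      # ~[0, 1]: all photons at one detector
print(detector_probs(np.pi))  # ~[1, 0]: tweaking the phase swaps them
```

Only the total relative phase across the two paths matters, so any convention for the mirrors can be compensated elsewhere.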

So, either fix it numerically or include a note to that effect, because there's no reason this needs to fall to a silly nitpick.

Comment author: Strilanc 17 December 2014 09:03:04AM *  1 point [-]

A concrete example of a paper using the add-i-to-reflected-part type of beam splitter is the "Quantum Cheshire Cats" paper:

A simple way to prepare such a state is to send a horizontally polarized photon towards a 50:50 beam splitter, as depicted in Fig. 1. The state after the beam splitter is |Psi>, with |L> now denoting the left arm and |R> the right arm; the reflected beam acquires a relative phase factor i.

The figure from the paper:

The figure

I also translated the optical system into a similar quantum logic circuit:

My recreation of the circuit they described

Note that I also included the left-path detector they talk about later in the paper, and some read-outs that show (among other things) that the conditional probability of the left-path detector having gone off, given that D1 went off, is indeed 100%. (The circuit editor I fiddle with is here.)

It's notable that my recreation uses gates with different global phase factors (the beam splitter is 1/2-i/2 and 1/2+i/2 instead of 1/sqrt(2) and i/sqrt(2)). It also ignores the mirrors that appear once on both paths. The effect is the same because global phase factors don't matter.
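To spell out the global-phase claim, here's a quick numpy check (the matrices are my transcription of the two conventions):

```python
import numpy as np

# "Textbook" 50:50 beam splitter: i on the reflected amplitude
A = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

# The gate from my circuit: entries (1 - i)/2 and (1 + i)/2
M = np.array([[1 - 1j, 1 + 1j], [1 + 1j, 1 - 1j]]) / 2

# They differ only by the global phase e^{-i pi/4}
phase = np.exp(-1j * np.pi / 4)
print(np.allclose(M, phase * A))  # True

# Global phase never changes measurement probabilities
state = np.array([1, 0])
print(np.allclose(np.abs(A @ state)**2, np.abs(M @ state)**2))  # True
```
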

edit: My ability to make sign errors upon sign errors is legendary, and hopefully now fixed.

Comment author: KatjaGrace 16 December 2014 02:21:40AM 2 points [-]

Might it be unethical to make creatures who want to serve your will?

Comment author: Strilanc 16 December 2014 07:36:32PM 4 points [-]

Possible analogy: Was molding the evolutionary path of wolves, so they turned into dogs that serve us, unethical? Should we stop?

Comment author: Strilanc 13 December 2014 04:10:51PM 1 point [-]

Wait, I had the impression that this community had come to the consensus that SIA vs. SSA is a problem along the lines of "If a tree falls in the woods and no one's around, does it make a sound?": it finds an ambiguity in what we mean by "probability", and forces us to grapple with it.

In fact, there's a well-upvoted post with exactly that content.

The Bayesian definition of "probability" is essentially just a number you use in decision-making algorithms, constrained to satisfy certain optimality criteria. The optimal number to use in a decision obviously depends on the problem, but the unintuitive and surprising thing is that it can depend on details like how forgetful you are, whether you've been copied, and how payoffs are aggregated.

The post I linked gave some examples:

  • If Sleeping Beauty is credited a cumulative dollar every time she guesses correctly, she should act as if she assigns a probability of 1/3 to the proposition.

  • If Sleeping Beauty is given a dollar only if she guesses correctly in all cases, otherwise nothing, then she should act as if she assigns a probability of 1/2 to the proposition.

Other payoff structures give other probabilities. If you never recombine Sleeping Beauty, then the problem starts to become about whether or not she values her alternate self getting money and what she believes her alternate self will do.
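Here's a sketch of that calculation. This is my own toy model: tails means two awakenings, heads means one, and the amnesia forces Beauty to use one fixed guess throughout:

```python
from fractions import Fraction

half = Fraction(1, 2)
wakings = {"heads": 1, "tails": 2}  # tails -> Beauty is woken twice

def ev(guess, per_guess):
    # per_guess=True: $1 for every correct guess (cumulative payoff)
    # per_guess=False: $1 only if every guess was correct
    total = Fraction(0)
    for coin, n in wakings.items():
        if per_guess:
            payoff = n if guess == coin else 0
        else:
            payoff = 1 if guess == coin else 0
        total += half * payoff
    return total

# Cumulative: tails guesses pay twice as often, so her betting
# behavior matches assigning heads a probability of 1/3
print(ev("heads", True), ev("tails", True))    # 1/2 1
# All-or-nothing: symmetric, matching a probability of 1/2
print(ev("heads", False), ev("tails", False))  # 1/2 1/2
```

The probability she "should assign" is just whatever number makes the betting come out right, and that number shifts with the payoff structure.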

Comment author: Yvain 05 March 2009 06:17:05PM 23 points [-]

When I first read "Belief in Belief", I liked it, and agreed with it, but I thought it was describing a curiosity; an exotic specimen of irrationality for us to oooh and aaah over. I mentally applied it to Unitarians and Reform Jews and that was about it.

I've since started wondering more and more if it actually describes a majority of religious people. I don't know if this is how Eliezer intended it, but it was two things that really convinced me:

The first reason was behavior. Most theists I know occasionally deviate from their religious principles; not egregiously, but they're far from perfect. But when I imagine a world that would make me believe religion with certainty - a world where angels routinely descend to people's bedsides to carry their souls to Heaven, or where Satan allows National Geographic into Hell to film a documentary - I find it hard to imagine people sleeping in on Sundays. Not even the most hardened criminal will steal when the policeman's right in front of him and the punishment is infinite.

The second was a webcomic: http://www.heavingdeadcats.com/wp-content/uploads/2009/02/file1126-2.jpg It wasn't so much that theists wouldn't drink the poison as that they'd be surprised, even offended at being asked. It would seem like a cheap trick. Whereas (for example) I would be happy to prove my "faith" in science by ingesting poison after I'd taken an antidote proven to work in clinical trials.

I see two ways this issue is directly important to rationalists:

  1. Is this solely a religious phenomenon, or are our own beliefs vulnerable to this kind of self-deception?

  2. What kind of tests can we create to determine whether a belief is sincerely held?

Comment author: Strilanc 09 December 2014 05:50:09PM 1 point [-]

I would be happy to prove my "faith" in science by ingesting poison after I'd taken an antidote proven to work in clinical trials.

This is one of the things James Randi is known for. He'll take a "fatal" dose of homeopathic sleeping pills during talks (e.g. his TED talk) as a way of showing they don't work.

Comment author: Luke_A_Somers 20 November 2014 08:06:42PM 6 points [-]

Given our ignorance we cannot rationally give zero probability to this possibility, and probably not even give it less than 1% (since that is about the natural lowest error rate of humans on anything)

I am pretty sick of 1% being given as the natural lowest error rate of humans on anything. It's not.

In this particular case, we've made balls of stuff much colder than this, though smaller. So not only does this killer effect have to exist, but it also needs to be size-dependent like fission.

If you give me 100 theories as far-fetched as this, I'd be more confident that all of them are false, than that any are true.

Comment author: Strilanc 21 November 2014 05:51:28PM *  2 points [-]

I am pretty sick of 1% being given as the natural lowest error rate of humans on anything. It's not.

Hmm. Our error rate moment to moment may be that high, but it's low enough that we can do error correction and do better over time or as a group. Not sure why I didn't realize that until now.

(If the error rate were too high, error correction would be so error-prone that it would just introduce more error. Something analogous happens in quantum error correction codes.)
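The classical version of that threshold effect is easy to simulate. A toy 3-bit repetition code with majority-vote decoding (the numbers are illustrative, not the actual quantum threshold):

```python
import random

def majority_vote_error(p, n=3, trials=100_000, seed=0):
    """Chance a 1-bit message is decoded wrong after an n-copy
    repetition code, with each copy independently flipped w.p. p."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n))
        wrong += flips > n // 2  # majority of copies flipped
    return wrong / trials

# Below the threshold, redundancy suppresses error...
print(majority_vote_error(0.1))  # ~0.028, better than the raw 0.1
# ...above it, "error correction" actively makes things worse
print(majority_vote_error(0.6))  # ~0.65, worse than the raw 0.6
```

Below the threshold you can repeat the encoding recursively and drive the error rate as low as you like; above it, each layer compounds the damage.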

Comment author: Strilanc 17 November 2014 06:41:17PM 4 points [-]

Oh, so M is not a stock-market-optimizer; it's a verify-that-the-stock-market-gets-optimized-er.

I'm not sure how this differs from a person just asking the AI if it will optimize the stock market. The same issues with deception apply: the AI realizes that M will shut it off, so it tells M the stock market will totally get super optimized. If you can force it to tell M the truth, then you could just do the same thing to force it to tell you the truth directly. M is perhaps making things more convenient, but I don't think it's solving any of the hard problems.

Comment author: Strilanc 17 November 2014 06:30:22PM *  3 points [-]

It's extremely premature to leap to the conclusion that consciousness is some sort of unobservable opaque fact. In particular, we don't know the mechanics of what's going on in the brain as you understand and say "I am conscious". We have to at least look for the causes of these effects where they're most likely to be, before concluding that they are causeless.

People don't even have a good definition of consciousness that cleanly separates it from nearby concepts like introspection or self-awareness in terms of observable effects. The lack of observable effects goes so far that people posit they could get rid of consciousness and everything would happen the same (i.e. p-zombies). That is not an unassailable strength making consciousness impossible to study; it is a glaring weakness implying that p-zombie-style consciousness is a useless or malformed concept.

I completely agree with Eliezer on this one: a big chunk of this mystery should dissolve under the weight of neuroscience.
