Comment author: Strilanc 01 March 2015 02:56:38AM *  1 point [-]

I made my suggestion.

Assuming you can take down the death eaters, I think the correct follow-up for despawning LV is... massed somnium.

We've seen somnium be effective at range in the past, taking down an actively dodging broomstick rider. We've seen the resonance hit LV harder than Harry, requiring tens of minutes to recover versus seconds.

LV is not wearing medieval armor to block the somnium. LV is way up high, too far away to have good accuracy with a handgun. If LV dodges behind something, Harry has time to expecto patronum a message out.

... I think the main risk is LV apparating away, apparating back directly behind Harry, and pulling the trigger.

Comment author: Diadem 26 February 2015 04:29:16PM 17 points [-]

I disagree that the writing has deteriorated.

People complain a lot about the lack of foreshadowing of the mirror and the "Riddle can't kill Riddle" curse. But I don't think the lack of foreshadowing matters, because both of these things are minor details in the overall story line. Let's start with the "Riddle can't kill Riddle" curse. Voldemort wasn't refraining from killing Harry only because of this curse. After all, now that the curse is lifted he still isn't killing Harry. The curse is entirely unneeded to explain either his earlier or his current behavior. Nor was the curse needed to resolve the current plot. Voldemort was in complete control of the situation all along.

So there's no deus ex machina. It's a sudden unexpected development, yes, but one that doesn't really affect the story. Its purpose was to drive home how utterly defeated Harry is, how he is now completely at the mercy of Voldemort, with no clever tricks or last-minute saves. It also gave us a nice cliffhanger. But you can take out the final lines from 111 and the first few lines from 112 and the story continues exactly as it does now.

The same goes for the mirror scene where Dumbledore gets defeated. Take it out, have Dumbledore never show up, and the story still continues exactly the same as it does now. Dumbledore is a side character. He needed to be gotten rid of, so neither Harry nor the reader would expect or hope for Dumbledore to show up at the last minute and save the day, but ultimately he's not important to the story. And Voldie getting rid of Dumbledore with relative ease is entirely expected anyway. He is established as being much stronger.

Anyway, bottom line: I really like the story so far. Eliezer is doing a terrific job of driving home just how utterly screwed Harry is, how completely outplayed and outgunned he is.

I'm really looking forward to the resolution. I have no idea what it is going to be, but I fully expect it to be glorious. I do know it won't be Harry casting "Problemsolvius" or someone showing up casting "Savethedayius". I know this because Eliezer went to great lengths to crush that expectation at every possible opportunity.

Of course, my disappointment if I am mistaken and the final solution turns out to be some completely unexpected deus ex machina shall be big indeed.

And for the record: My prediction is still that Voldemort shall not be dead by the end of the story. I give that 80%. Up to a few chapters ago my theory was that Voldemort wanted to team up with Harry to permanently get rid of death, but that seems increasingly less likely.

Comment author: Strilanc 26 February 2015 06:02:52PM 4 points [-]

Dumbledore is a side character. He needed to be got rid of, so neither Harry nor the reader would expect or hope for Dumbledore to show up at the last minute and save the day

There are technically six more hours of story time for a time-turned Dumbledore to show up before going on to get trapped. He does mention that he's in two places during the mirror scene.

Dumbledore has previously stated that trying to fake situations goes terribly wrong, so there could be some interesting play with that concept and him being trapped by the mirror.

Comment author: Squark 03 February 2015 06:13:20AM 2 points [-]

Thx for commenting!

I'm talking about Levin-Kolmogorov complexity. The LK complexity of x is defined to be the minimum over all programs p producing x of log(T(p)) + L(p) where T(p) is the execution time of p and L(p) is the length of p.
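A toy sketch of how those two terms trade off (the program lengths and running times below are made up for illustration; the real quantity minimizes over all programs and is uncomputable):

```python
import math

# Toy illustration only: score a few hypothetical candidate programs
# that all produce the same output x, each given as (L(p) in bits,
# T(p) in steps). LK complexity takes the minimum score.
def lk_score(length_bits, time_steps):
    return length_bits + math.log2(time_steps)

candidates = [
    (1000, 10),      # long but fast
    (100, 2**20),    # short but slow
    (50, 2**500),    # very short, astronomically slow
]

best = min(lk_score(L, T) for L, T in candidates)
print(best)  # 120.0 -- the short-but-slow candidate wins here
```

Note how the logarithm makes even astronomical running times contribute only modestly to the score.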

Comment author: Strilanc 03 February 2015 03:13:04PM *  0 points [-]

Sorry for getting that one wrong (I can only say that it's an unfortunately confusing name).

Your claim is that AGI programs have large min-length-plus-log-of-running-time complexity.

I think you need more justification for this being a useful analogy for how AGI is hard. It would also help to clarify the distinction between problems getting harder as their inputs grow (true for any program) and a single program taking a lot of space to specify.

Unless we're dealing with things like the Ackermann function or Ramsey numbers, the log-of-running-time component of LK complexity is going to be negligible compared to the space component.

Even in the case of search problems, this holds. Sure, it takes 2^100 years to solve a huge 3-SAT problem, but that contribution of ~160 time-bits pales in comparison to the several kilobytes of space-bits you needed to encode the input into the program.
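A quick back-of-the-envelope check of that time-bits figure (assuming, arbitrarily, a solver doing ~10^9 steps per second; on a log scale the exact rate barely matters):

```python
import math

# 2^100 years expressed as solver steps, then as bits of log-time.
seconds = 2**100 * 365.25 * 24 * 3600
steps = seconds * 1e9          # assumed step rate: 10^9 steps/second
time_bits = math.log2(steps)
print(round(time_bits))        # 155 -- the same ballpark as ~160

# Versus the space-bits of merely writing down a multi-kilobyte
# 3-SAT instance:
space_bits = 4 * 1024 * 8      # 4 KB of input
print(space_bits)              # 32768
```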

Or suppose we're looking at the complexity of programs that find an AGI program. Presumably high, right? Except that the finder can bypass the time cost by pushing the search into the returned AGI's bootstrap code. Basically, you replace "run this" with "return this" at the start of the program and suddenly AGI-finding's LK complexity is just its K complexity. (See also: the P=NP algorithm that brute-force finds programs that work, and so only "works" if P=NP.)

I think what I'm getting at is: just use length plus running time, without the free logarithm. That will correctly capture the difficulty of search, instead of making it negligible compared to specifying the input.

Plus, after you move to non-logarithmed time complexity, you can more appropriately appeal to things like the no free lunch theorem and NP-completeness as weak justification for expecting AGI to be hard.

Comment author: Strilanc 02 February 2015 10:52:54PM *  7 points [-]

Kolmogorov complexity is not (closely) related to NP completeness. Random sequences maximize Kolmogorov complexity but are trivial to produce. 3-SAT solvers have tiny Kolmogorov complexity despite their exponential worst case performance.
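To make the 3-SAT point concrete, here's a sketch: a complete brute-force 3-SAT solver fits in a handful of lines (tiny description length), even though its running time is 2^n in the worst case.

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """clauses: tuples of literals; literal k means x_k, -k means NOT x_k."""
    # Try all 2^n assignments: exponential time, but the program is tiny.
    for assignment in product([False, True], repeat=n_vars):
        def value(lit):
            v = assignment[abs(lit) - 1]
            return v if lit > 0 else not v
        if all(any(value(lit) for lit in clause) for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # unsatisfiable

# (x1 OR x2 OR x3) AND (NOT x1 OR NOT x2 OR x3)
print(brute_force_sat(3, [(1, 2, 3), (-1, -2, 3)]))  # (False, False, True)
```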

I also object to thinking of intelligence as "being NP-Complete", unless you mean that incremental improvements in intelligence should take longer and longer (growing at a super-polynomial rate). When talking about achieving a fixed level of intelligence, complexity theory is a bad analogy. Kolmogorov complexity is also a bad analogy here because we want any solution, not the shortest solution.

Comment author: Metus 02 February 2015 03:40:49PM 1 point [-]

π seems like half the size it should be

You found that one out already; it would make π much more consistent with how similar constants are used.

The gravitational constant looks like off by a factor of 4π

Not sure what you mean. Do you mean when comparing the equation for gravitational force to the electric force? Or do you mean when looking at the 'intuitive' way of writing the differential equation ∇²φ = 4πGρ?

In either case it seems that the choice of 4π is arbitrary on one equation or the other. For example choosing Gaussian units introduces a 4π in the electrical equation and makes it look more like the gravitational equation.

cosine seems more primitive than sine

They seem equally primitive by

and

The Riemann Zeta function ζ(s) negates s for reasons beyond me

It doesn't, according to Wikipedia.

The Gamma function has this -1 I don't understand

I haven't read up on that so I don't really know. Seems arbitrary to me too.

Comment author: Strilanc 02 February 2015 08:03:00PM 2 points [-]

I would say cos is simpler than sin because its Taylor series has a factor of x knocked off.

In practice they tend to show up together, though. Often you can replace the pair with something like e^(i x), so maybe that should be considered the simplest.
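A quick numerical sanity check of that pairing, via Euler's formula e^(i x) = cos(x) + i sin(x):

```python
import cmath
import math

x = 0.7
z = cmath.exp(1j * x)  # e^(i x) carries cos and sin as real and imaginary parts
print(abs(z.real - math.cos(x)) < 1e-12)  # True
print(abs(z.imag - math.sin(x)) < 1e-12)  # True
```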

Comment author: Strilanc 02 February 2015 07:50:46PM 2 points [-]

Here's another interesting example.

Suppose you're going to observe Y in order to infer some parameter X. You know that P(x=c | y) = 1/2^(c-y) for c ≥ y.

  • You set your prior to P(x=c) = 1 for all c. Very improper.
  • You make an observation, y=1.
  • You update: P(x=c) = 1/2^(c-1)
  • You can now normalize P(x) so its area under the curve is 1.
  • You could have done that, regardless of what you observed y to be. Your posterior is guaranteed to be well formed.

You get well formed probabilities out of this process. It converges to the same result that Bayesianism does as more observations are made. The main constraint imposed is that the prior must "sufficiently disagree" in predictions about a coming observation, so that the area becomes finite in every case.
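A discrete sketch of the steps above (assuming the parameter is an integer c ≥ y, and truncating the sum at an arbitrary cutoff since the tail is negligible):

```python
y = 1
cutoff = 200  # arbitrary truncation; terms beyond this are negligible

# Improper prior P(x=c) = 1 times likelihood 1/2^(c-y), for c >= y.
unnormalized = {c: 1 * 2.0 ** -(c - y) for c in range(y, cutoff)}
total = sum(unnormalized.values())       # finite (~2), so we can normalize
posterior = {c: p / total for c, p in unnormalized.items()}

print(round(total, 6))         # 2.0
print(round(posterior[1], 6))  # 0.5 -- normalized posterior is ~1/2^c
```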

I think you can also get these improper priors by running the updating process backwards. Some posteriors are only accessible via improper priors.

Comment author: Strilanc 30 January 2015 11:27:27PM 1 point [-]

I did notice that they were spending the whole time debating a definition, and that the article failed to get to any consequences.

I think that existing policies are written in terms of "broadband", perhaps such as benefits to ISPs based on how many customers have access to broadband? That would make it a debate about conditions for subsidies, minimum service requirements, and the wording of advertising.

Comment author: Luke_A_Somers 17 January 2015 05:27:34PM -1 points [-]

The photon does not get converted into red OR yellow. It gets converted into red AND yellow.

Comment author: Strilanc 17 January 2015 07:02:15PM *  1 point [-]

Hrm... reading the paper, it does look like NL1 goes from |a> to |cd> instead of |c> + |d>. This is going to move all the numbers around, but you'll still find that it works as a bomb detector. The yellow coming out of the left non-interacting-with-bomb path only interferes with the yellow from the right-and-mid path when the bomb is a dud.

Just to be sure, I tried my hand at converting it into a logic circuit. Here's what I get:

circuit

Having it create both the red and yellow photon, instead of either-or, seems to have improved its function as a bomb tester back up to the level of the naive bomb tester. Half of the live bombs will explode, a quarter will trigger g, and the other quarter will trigger h. None of the dud bombs will explode or trigger g; all of them trigger h. Anytime g triggers, you've found a live bomb without exploding it.

If you're going to point out another minor flaw, please actually go through the analysis to show it stops working as a bomb tester. It's frustrating for the workload to be so asymmetric, and hints at motivated stopping (and I suppose motivated continuing for me).

Comment author: Luke_A_Somers 17 January 2015 01:41:06AM -1 points [-]

I do not see a way that a live bomb can trigger nothing, or for an exploded bomb to trigger either g or h.

Comment author: Strilanc 17 January 2015 03:29:00AM 0 points [-]

A live bomb triggers nothing when the photon takes the left leg (50% chance), gets converted into red instead of yellow (50% chance), and gets reflected out.

An exploded bomb triggers g or h because I assumed the photon kept going. That is to say, I modeled the bomb as a controlled-not gate with the photon passing by being the control. This has no effect on how well the bomb tester works, since we only care about the ratio of live-to-dud bombs for each outcome. You can collapse all the exploded-and-triggered cases into just "exploded" if you like.

Comment author: Luke_A_Somers 15 January 2015 04:20:05PM 0 points [-]

If you used their current camera as a bomb tester, it would blow up 50% of the time.

Comment author: Strilanc 17 January 2015 12:20:36AM *  2 points [-]

Okay, I've gone through all the work of checking if this actually works as a bomb tester. What I found is that you can use the camera to remove more dud bombs than live bombs, but it does worse than the trivial bomb tester.

So I was wrong when I said you could use it as a drop-in replacement. You have to be aware that you're getting less evidence per trial, and so the tradeoffs for doing another pass are higher (since you lose half of the good bombs with every pass with both the camera and the trivial bomb tester; better bomb testers can lose fewer bombs per pass). But it can be repurposed into a bomb tester.

I do still think that understanding the bomb tester is a stepping stone towards understanding the camera.

Anyways, on to the clunky analysis.

Here's the (simpler version of the) interferometer diagram from the paper:

interferometer

Here's my interpretation of the state progression:

  • Start

    |green on left-toward-BS1>
    
  • Beam splitter is hit. s = sqrt(2)

    |green on a>/s + i |green on left-downward-path>/s
    
  • non-linear crystal 1 is hit, splits green into (red + yellow) / s

    |red on a>/2 + |yellow on a>/2 + i |green on left-downward-path>/s
    
  • hit frequency-specific-mirror D1 and bottom-left mirror

    i |red on d>/s^2 + |yellow on c>/s^2 - |green on b>/s
    
  • interaction with O, which is either a detector or nothing at all

    i |red on d>|O yes>/s^2 + |yellow on c>|O no>/s^2 - |green on b>|O no>/s
    
  • hit frequency-specific-mirror D2, and top-right mirror

    -|red on b>|O yes>/s^2 + i |yellow on right-toward-BS2>|O no>/s^2 - |green on b>|O no>/s
    
  • hit non-linear crystal 2, which acts like NL1 for green but also splits red into red-yellow. Not sure how this one is unitary... probably a green -> [1, 1] while red -> [1, -1] thing so that's what I'll do:

    -|red on f>|O yes>/s^3 + |yellow on e>|O yes>/s^3 + i |yellow on right-toward-BS2>|O no>/s^2 - |red on f>|O no>/s^2 - |yellow on e>|O no>/s^2
    
  • red is reflected away; call those "away" and stop caring about color:

    |e>|O yes>/s^3 + i |right-toward-BS2>|O no>/2 - |e>|O no>/2 - |away>|O yes>/s^3 - |away>|O no>/s^2
    
  • yellows go through the beam splitter, only interferes when O-ness agrees.

    |h>|O yes>/s^4 + i|g>|O yes>/s^4 + i |g>|O no>/s^3 - |h>|O no>/s^3 - |h>|O no>/s^3 - i|g>|O no>/s^3 - |away>|O yes>/s^3 - |away>|O no>/s^2
    |h>|O yes>/s^4 + i|g>|O yes>/s^4 - |h>|O no>/s - |away>|O yes>/s^3 - |away>|O no>/s^2
    ~ 6% h yes, 6% g yes, 50% h no, 13% away yes, 25% away no
    

CONDITIONAL upon O not having been present, |O yes> is equal to |O no> and there's more interference before going to percentages:

 |h>/s^4 + i|g>/s^4 - |h>/s - |away>/s^3 - |away>/s^2
|h>(1/s^4-1/s) + i|g>/s^4 - |away>(1/s^2 + 1/s^3)
~ 21% h, 6% g, 73% away
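A quick numeric check of these two outcome distributions, just squaring the amplitudes from the two final states above:

```python
import math

s = math.sqrt(2)

# Bomb present: amplitudes from the last line of the state progression.
with_bomb = {
    "h yes": 1 / s**4,
    "g yes": 1 / s**4,
    "h no": 1 / s,
    "away yes": 1 / s**3,
    "away no": 1 / s**2,
}
p_bomb = {k: a**2 for k, a in with_bomb.items()}
# h yes ~6%, g yes ~6%, h no 50%, away yes ~13%, away no 25%

# No bomb: |O yes> = |O no>, so the amplitudes interfere first.
no_bomb = {
    "h": 1 / s**4 - 1 / s,
    "g": 1 / s**4,
    "away": 1 / s**2 + 1 / s**3,
}
p_no_bomb = {k: a**2 for k, a in no_bomb.items()}
# h ~21%, g ~6%, away ~73%

print(round(sum(p_bomb.values()), 6))     # 1.0 -- both distributions normalize
print(round(sum(p_no_bomb.values()), 6))  # 1.0
```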

Ignoring the fact that I probably made a half-dozen repairable sign errors, what happens if we use this as a bomb tester on 200 bombs where a hundred of them are live but we don't know which? Approximately:

  • 6 exploded bombs that triggered h
  • 21 dud bombs that triggered h
  • 50 live bombs that triggered h
  • 6 exploded bombs that triggered g
  • 6 dud bombs that triggered g
  • 0 live bombs that triggered g
  • 13 exploded bombs that triggered nothing
  • 25 live bombs that triggered nothing
  • 73 dud bombs that triggered nothing

So, of the bombs that triggered h but did not explode, 50/71 are live. Of the bombs that triggered g but did not explode, none are live. Of the bombs that triggered nothing but did not explode, 25/98 are live.

If we keep only the bombs that triggered h, we have raised our proportion of good unexploded bombs from 50% to 70%. In doing so, we lost half of the good bombs. We can repeat the test again to gain more evidence, and each time we'll lose half the good bombs, but we'll lose proportionally more of the dud bombs.
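The arithmetic behind those proportions, for anyone checking:

```python
# Of the unexploded bombs that triggered h: 50 live, 21 dud.
live_h, dud_h = 50, 21
print(round(live_h / (live_h + dud_h), 2))  # 0.7 -- up from the prior 0.5

# Of the unexploded bombs that triggered nothing: 25 live, 73 dud.
live_none, dud_none = 25, 73
print(round(live_none / (live_none + dud_none), 2))  # 0.26
```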

Therefore the camera works as a bomb tester.
