Comment author: DanielLC 13 January 2013 11:01:48PM 2 points [-]

The problem with training isn't purposely creating things that pass. It's purposely creating things that don't. In order to figure out what doesn't pass, we need a predicate function. Once we've figured out how to find things that won't pass, we've already found the answer.

Comment author: anotherblackhat 15 January 2013 09:39:27PM *  0 points [-]

Doesn't follow. Consider:

I claim a rock is a non-person.
I expect you accept that statement, and that you therefore have a non-person predicate function; yet I also expect you haven't found the answer.

I accept that in order to classify something, we need to be able to classify it.

I'm suggesting there might be a function that classifies some things incorrectly, and is still useful.

Comment author: Qiaochu_Yuan 13 January 2013 11:22:45PM *  3 points [-]

Nitpick: strictly speaking, a computer is not a universal Turing machine because it has a finite amount of memory and is therefore a finite state machine (and in particular can therefore only run finitely many programs). When we say that computers are universal Turing machines we are talking about an idealized version of a computer that acquires more memory as necessary.

Regarding the problem of determining whether a Turing machine is universal, this is undecidable by Rice's theorem, which asserts more generally that any nontrivial property (in the sense that at least one Turing machine has it and at least one Turing machine doesn't have it) of Turing machines is undecidable. The best you can do is an algorithm which returns "is a UTM" on some Turing machines, returns "is not a UTM" on others, or doesn't halt (or outputs "unknown"). For example, it might search through proofs that a given Turing machine is or is not universal (possibly up to some upper bound).
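
A sketch of such a partial decider in Python (the machine encoding, the `known_universal` lookup, and the size cutoff are all illustrative assumptions, not a real universality test):

```python
def classify_universality(tm, known_universal, proof_bound=10_000):
    """Three-valued partial decider: 'is a UTM', 'is not a UTM', or 'unknown'.

    tm is a hypothetical encoding (num_states, num_symbols, transitions);
    known_universal is a set of encodings with already-verified proofs.
    """
    num_states, num_symbols, _ = tm
    if tm in known_universal:
        return "is a UTM"       # matches a machine with a known universality proof
    if num_states * num_symbols < 4:
        # Fewer than four state-symbol pairs: one-state machines (Shannon)
        # and one-symbol machines are known to be non-universal.
        return "is not a UTM"
    # A real implementation would search for proofs up to proof_bound here;
    # when the search is exhausted without a verdict, it gives up.
    return "unknown"
```

By Rice's theorem the "unknown" branch can never be eliminated; the proof bound just trades compute for coverage.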

Comment author: anotherblackhat 15 January 2013 09:20:45PM 0 points [-]

Yes, that was sort of the point - you can't make a function for "is a Turing machine" that works in all cases, and you can't make an "is a non-person" function that works in all cases either. Further, the set of things you can rule out with 100% certainty is too small to be useful.

Don't see how that relates to my suggestion of a probabilistic answer though. Has anyone proven that you can't make a statistically valid statement about the "Is a Turing machine" question?

Comment author: anotherblackhat 13 January 2013 10:02:46PM 0 points [-]

Consider the intuitively simpler problem of "is something a universal Turing machine?" Consider further this list of things that are capable of being a universal Turing machine:

  • Computers.
  • Conway's game of life.
  • Elementary cellular automata.
  • Lots of Nand gates.

Even a sufficiently complex shopping list might qualify. And it's even worse, because knowing that A doesn't have personhood, and that B doesn't have personhood, doesn't let us conclude that A+B doesn't have personhood. A single transistor isn't a computer, but 3510 transistors might be a 6502. If we want to be 100% safe, we have to rule out anything we can't analyze, which means we pretty much have to rule out everything. We might as well make the function always return 1.

OK, as bad as that sounds, it just means we shouldn't work too hard on solving the problem perfectly, because we know we'll never be able to do so in a meaningful way. But perhaps we can solve the problem imperfectly. SpamAssassin faces a very similar kind of problem: "how can we tell if a message is spam?" The technique it uses is conceptually simple: pick a test that some messages pass and some fail, run the test on a corpus of messages classified as spam and a corpus classified as non-spam, and use the results to assign a probability that a message is spam if it passes the test. In addition to the obvious advantage of "I can see how to do that for a non-person predicate test", such a test could also give a score for "has some person-like properties". Thus we can meaningfully approach the problem of A + B being a person even though A and B aren't by themselves.
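
A minimal sketch of that SpamAssassin-style scoring, assuming independent tests combined naively via Bayes' rule (the corpus counts and the test name are made up):

```python
import math

def train(passed_in_pos, total_pos, passed_in_neg, total_neg):
    """Estimate P(pass | positive) and P(pass | negative) for one test,
    with add-one smoothing so no estimate is exactly 0 or 1."""
    p_pos = (passed_in_pos + 1) / (total_pos + 2)
    p_neg = (passed_in_neg + 1) / (total_neg + 2)
    return p_pos, p_neg

def score(results, tests, prior=0.5):
    """Combine test outcomes into P(positive), treating tests as independent.
    `results` maps test name -> True/False; `tests` maps name -> (p_pos, p_neg)."""
    log_odds = math.log(prior / (1 - prior))
    for name, outcome in results.items():
        p_pos, p_neg = tests[name]
        if outcome:
            log_odds += math.log(p_pos / p_neg)
        else:
            log_odds += math.log((1 - p_pos) / (1 - p_neg))
    return 1 / (1 + math.exp(-log_odds))
```

For example, a "mentions money" test trained on 100 spam messages (90 passing) and 100 non-spam messages (5 passing) pushes the score well above the 0.5 prior when it fires, and below when it doesn't. The same machinery works whether the corpora are labeled "spam"/"not spam" or "person-like"/"not person-like".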

What kind of tests can we run? Beats me, but presumably we'll have something before we can make an AI by design.

One problem with this approach is it could be wrong. It might even be very wrong. Also, training the predicate function might be an evil process - that is, training may involve purposely creating things that pass.

Comment author: Blackened 31 December 2012 06:29:17PM 0 points [-]

It doesn't fit my model of human behavior. But that's possibly just me.

I'd imagine that if Snape got really angry, but only because Harry offended him without knowing, he wouldn't have been close to harming him. I suppose it could still be appropriate to say "you almost died" even if it's not true, but then Harry acted as if Snape might reconsider his decision not to kill him, rather than just being apologetic, or something like that. Or maybe he was indeed, and I am likely underestimating the strength of the impact that Harry's words had on Snape.

But if others interpreted it like me, then I got it right. Hmm.

Comment author: anotherblackhat 31 December 2012 10:57:50PM 2 points [-]

Canon!Snape has loved Lily since the two of them were children - considerably longer than 11 years. I don't think it's unrealistic at all. While I wouldn't call such a love typical human behavior, it's also not particularly rare. There are thousands of people who still profess love for Princess Di, for example.

I doubt that it was telling Snape what an idiot he is that angered him, but rather saying Lily was shallow and unworthy.

I agree that it's weird that someone who could carry a torch for that long would stop just because an 11-year-old boy gave them random advice. I think it's likely that when Snape kills Dumbledore, it's going to be because of his love for Lily and Dumbledore's interference in that. His love hasn't diminished at all.

Comment author: DanArmak 24 December 2012 12:10:40PM 2 points [-]

Well, that's not what I said in the part you quoted, but as a matter of fact my suggestion was that he tried and failed to teach others the secret. Because the conditions for immortality are narrow and rooted in virtue ethics and the like. That's my theory, anyway.

It was a rhetorical question. My point was that I believe Flamel has not dedicated his life to either teaching people to make Stones, or creating more Stones himself for others to use, or even using his one Stone on others. And that is because he doesn't value the immortality of others, which is probably because he is a hypocrite deathist. And that will bring him into conflict with Harry when Harry learns of it.

It's also possible that Flamel will have a background story of trying and failing to teach others to make Stones. But if he had Harry's values, he would have dedicated all his life over several centuries, all his (putative) unlimited wealth and all the friends he could make with the promise of more Stones, to overcoming this failure. I predict that if there was such a failure, he has not Tried Really Hard to overcome it - he did not behave as though literally the lives of everyone in the world depended on it.

However, I haven't gotten the impression that Flamel was being set up as a villain.

Not a deliberate villain, but almost inevitably someone who can be blamed for not making lots of people immortal.

it's worth having, but doesn't actually have a massive impact on the world

Incidentally, if it really grants unlimited wealth, that is also sufficient to have a massive impact on the world. Think what someone could achieve, just by influencing others, if he had the power to produce and withhold arbitrary amounts of money, and lived for several centuries and so could enact very long term plans.

Comment author: anotherblackhat 24 December 2012 04:14:59PM 3 points [-]

The P.S. doesn't grant unlimited wealth, it grants unlimited gold and/or silver. A large part of the value of gold is related to its scarcity, so teaching others how to make Stones would affect Flamel's personal wealth - oh, and probably destroy society too. And making everyone immortal includes the Voldemorts, the Grindelwalds, and the Baba Yagas of the world. And it's not like he personally is killing those people...

See how easy it is to rationalize letting everyone die? And I came up with those in just a few minutes - imagine having six centuries to make excuses.

Comment author: anotherblackhat 02 October 2012 06:31:22PM 1 point [-]

The "All possible worlds" picture doesn't include the case of a marble in both the basket and the box.

Comment author: DanielLC 06 September 2012 05:50:36AM 0 points [-]

It didn't block them "then" because they weren't going to send the information further back.

They weren't planning on it, but the information was sent nonetheless. P(Someone is going to go back and stop them from going back|They came back) < P(Someone is going to go back and stop them from going back|They did not come back)

But that only works up to a point.

Not really. The amount of time you can send back increases exponentially with the number of people sent back. If each person only gets it right two thirds of the time, sending one guy back only works two thirds of the time, but sending a hundred people back, you'd get about 67 ± 5 people sending the right bit, and the majority would be right about 99.98% of the time. With two hundred people, you'd get it right about 99.99997% of the time.
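
That arithmetic can be checked directly against the binomial distribution, assuming each traveler independently sends the right bit with probability 2/3 and the majority wins (ties on even n count as failure):

```python
from math import comb

def majority_correct(n, p):
    """P(a strict majority of n independent voters are right),
    where each voter is right with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 2/3: one traveler is right 2/3 of the time; a hundred travelers
# push the majority above 99.9%, and two hundred push it far higher still.
```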

Comment author: anotherblackhat 06 September 2012 11:02:16PM 0 points [-]

They weren't planning on it, but the information was sent nonetheless. P(Someone is going to go back and stop them from going back|They came back) < P(Someone is going to go back and stop them from going back|They did not come back)

That presupposes that P(Bob came back) is not affected by your decision to send the information further on. I'm postulating that IF you would have sent the information further back, THEN P(Bob came back) = 0. Of course, it might not actually work that way, but if my supposition is correct, then Bob not coming back tells you nothing. The event only carries information if you aren't going to make use of that information.

Comment author: DanielLC 05 September 2012 03:15:05AM 0 points [-]

Perhaps the reason he didn't is because you would have sent that information back in time, and so he couldn't.

But every time someone uses a time turner, they send that information into the past. If it didn't block them then, why would it block them now?

Another possibility is that information loses "coherence" the further back it travels.

There are ways of fixing that. For example, you could send people back in groups of three. Then you have them go back unless they're stopped by at least two people.

Or maybe it is possible, but insanely dangerous.

That's possible. The longer the time stream, the more likely that the closed time loop you end up with involves a hurricane or worse. I believe there was a book where the world ended because someone didn't think about that. You could prevent it by allowing a "maybe", so long as you make it likely enough that something you didn't think of doesn't become more likely.

Comment author: anotherblackhat 05 September 2012 06:10:22PM *  1 point [-]

Perhaps the reason he didn't is because you would have sent that information back in time, and so he couldn't.

But every time someone uses a time turner, they send that information into the past. If it didn't block them then, why would it block them now?

Because you would have sent that information back in time. It didn't block them "then" because they weren't going to send the information further back. The effect could be more subtle - instead of preventing you from succeeding, it could prevent you from trying (don't mess with time) or even make you not think of trying.

Another possibility is that information loses "coherence" the further back it travels.

There are ways of fixing that.

No, you can't "fix" it, you can only reduce the effect. If a signal is weak, you can amplify it. But that only works up to a point. And apparently, that point is six hours, even with magical amplification and correction.

I believe there was a book where the world ended because ...

I remember a short story by Larry Niven, "Rotating Cylinders and the Possibility of Global Causality Violation". It first appeared in Analog, was reprinted in Convergent Series, and contains the immortal line "I imagine the sun has gone nova". Because the universe protects its cause-and-effect basis with humorless ferocity.

Comment author: DanielLC 04 September 2012 03:07:28AM 1 point [-]

Yeah. You can actually make arbitrarily long chains. You have each person go back in time and stop the next person if they're not stopped. You "start" the chain at the end, and depending on when you do it, you can send back one bit. For example, you give Alice, Bob, Carol, and Daniel time turners. At midnight, Alice goes back to stop Daniel if Bob doesn't stop her, at 6:00 AM, Bob goes back to stop Alice unless Carol stops him, etc. If the enemy attacks on the night of the 28th, Daniel stops Carol. If they don't, he doesn't. This means that if they attack, Daniel and Bob go back every day. If they don't, Carol and Alice go back. You'd actually need a fifth person to make up for the fact that none of this is instantaneous, but if you have enough time-turners, you can send arbitrarily long messages arbitrarily far into the past.
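
The alternating chain can be sketched as a simulation: the first person travels back iff the enemy attacked, and each later person travels iff the person before them didn't (being stopped means staying put). Names follow the example above.

```python
def chain_travelers(attack, people=("Daniel", "Carol", "Bob", "Alice")):
    """Return who actually travels back in the chain for one night.
    The first person acts iff the enemy attacks; each later person
    travels iff the previous person did not (and so didn't stop them)."""
    travelers = []
    went = attack  # Daniel goes back only on an attack night
    if went:
        travelers.append(people[0])
    for person in people[1:]:
        went = not went  # stopped exactly when the previous person traveled
        if went:
            travelers.append(person)
    return travelers
```

Observing which pair traveled - {Daniel, Bob} versus {Carol, Alice} - reads off the one transmitted bit.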

Comment author: anotherblackhat 05 September 2012 01:10:08AM 2 points [-]

You don't actually know that Bob didn't see the enemy at the pass, you only know that for some reason, Bob didn't come back and tell you. Perhaps the reason he didn't is because you would have sent that information back in time, and so he couldn't.

Another possibility is that information loses "coherence" the further back it travels. (or forward, depending on which side you're standing on) Think of it as a signal to noise problem - six hours isn't the limit, it's the limit of what we can correct for with the magic of the time turners. Prophecy seems to defeat the limit, but only by being nearly incomprehensible.

Or maybe it is possible, but insanely dangerous. There are hints that Atlantis was destroyed by something involving the time stream.

Comment author: moritz 15 May 2012 02:03:45PM 0 points [-]

I dimly recall that in canon, Squibs are actually the children of two wizards. That contradicts Harry's finding directly.

But then Rowling probably didn't have any rules in mind about how magic inherits, so it might be impossible to come up with a good theory that explains everything we know from canon.

Comment author: anotherblackhat 16 May 2012 05:46:33PM 2 points [-]

If Harry's theory is right, squibs can't be normal genetic descendants (mutations notwithstanding) of wizards, but adultery is a very real, very common thing. Canon does not rule out the possibility, though given that the books were meant to be accessible to children it's not surprising that Rowling doesn't go into detail on the matter.
