I just noticed the Recent Karma Awards link in the sidebar. Has it been there for long?
Ringmione seems to be the most popular hypothesis at the moment. It strikes me as an extremely careless plan for Harry to attempt; recall Quirrell's comments after the battle with the transfigured armor, and the first battle, where passing out ended the transfiguration on the marshmallow he was practicing with.
However, I took "Harry's parents come to Hogwarts" as a completely insane move that Dumbledore/McGonagall would be highly unlikely to pull, and yet the elder wizards thought rather differently from me and did it anyway. I still think using the ring involves Harry assigning way too little cleverness to Dumbledore, but in light of the ordeal with his parents I'll give it a little more probability weight.
(Note also that Harry did something under the covers when Flitwick showed up.)
I took "Harry's parents come to Hogwarts" as a completely insane move
I did too at first, but when Harry reads the follow-up letter from his father we see that it turned out for the best.
I’ll mention what I’ll call the “radio theory” of brains. Imagine that you are a Kalahari Bushman and that you stumble upon a transistor radio in the sand. You might pick it up, twiddle the knobs, and suddenly, to your surprise, hear voices streaming out of this strange little box. If you’re curious and scientifically minded, you might try to understand what is going on. You might pry off the back cover to discover a little nest of wires. Now let’s say you begin a careful, scientific study of what causes the voices. You notice that each time you pull out the green wire, the voices stop. When you put the wire back on its contact, the voices begin again. The same goes for the red wire. Yanking out the black wire causes the voices to get garbled, and removing the yellow wire reduces the volume to a whisper. You step carefully through all the combinations, and you come to a clear conclusion: the voices depend entirely on the integrity of the circuitry. Change the circuitry and you damage the voices.
Proud of your new discoveries, you devote your life to developing a science of the way in which certain configurations of wires create the existence of magical voices. At some point, a young person asks you how some simple loops of electrical signals can engender music and conversations, and you admit that you don’t know—but you insist that your science is about to crack that problem at any moment.
Your conclusions are limited by the fact that you know absolutely nothing about radio waves and, more generally, electromagnetic radiation. The fact that there are structures in distant cities called radio towers—which send signals by perturbing invisible waves that travel at the speed of light—is so foreign to you that you could not even dream it up. You can’t taste radio waves, you can’t see them, you can’t smell them, and you don’t yet have any pressing reason to be creative enough to fantasize about them. And if you did dream of invisible radio waves that carry voices, who could you convince of your hypothesis? You have no technology to demonstrate the existence of the waves, and everyone justifiably points out that the onus is on you to convince them.
So you would become a radio materialist. You would conclude that somehow the right configuration of wires engenders classical music and intelligent conversation. You would not realize that you’re missing an enormous piece of the puzzle.
I’m not asserting that the brain is like a radio—that is, that we’re receptacles picking up signals from elsewhere, and that our neural circuitry needs to be in place to do so—but I am pointing out that it could be true. There is nothing in our current science that rules this out. Knowing as little as we do at this point in history, we must retain concepts like this in the large filing cabinet of ideas that we cannot yet rule in favor of or against. So even though few working scientists will design experiments around eccentric hypotheses, ideas always need to be proposed and nurtured as possibilities until evidence weighs in one way or another.
--David Eagleman, Incognito: The Secret Lives of the Brain, Random House, pp. 221-222
I like the premise. Last month's Douglas Hofstadter quote comes to mind. Some problems:
At some point, a young person asks you how some simple loops of electrical signals can engender music and conversations... you insist that your science is about to crack that problem at any moment.
Why would I insist this? I don't even know how the electrical signals (the what?!) change the volume. I just know how to make the wires change the volume, and I know how to make them change the music too.
You would conclude that somehow the right configuration of wires engenders classical music and intelligent conversation. You would not realize that you’re missing an enormous piece of the puzzle.
Some inquisitive Bushman I turned out to be. This is still a very magical radio.
Also, I think a clever Bushman could figure out that the radio is transmitting sounds from somewhere else. It is the reality, after all, so there are clues. He hears a person talking when no one's there; the circuitry is too simple to write symphonies or simulate most human conversation; the radio doesn't work in caves...
OK. In that scenario, the correct thing to do would be:
1) If I currently believe in ghosts (that is, if my confidence that ghosts exist rises above the threshold of belief), get the hell out of there.
2) Ask myself what I would differentially expect to observe if ghosts existed or didn't, and look for those things (while continuing to follow #1), and modify my confidence that ghosts exist based on my observations. If at any point my confidence crosses the threshold of belief in either direction, re-evaluate rule #1.
I don't see what value committing to a belief (either way) without reference to observed evidence would provide in that scenario.
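Rule #2 above can be made concrete with a toy Bayesian update. This is only a sketch: the prior, the likelihoods, the belief threshold, and the "observations" are all made-up numbers for illustration, not claims about actual ghost evidence.

```python
# Toy Bayesian update for the ghost hypothesis (rule #2 above).
# All probabilities here are illustrative assumptions.

def update(prior, p_obs_given_ghost, p_obs_given_no_ghost):
    """Posterior P(ghost | observation) via Bayes' theorem."""
    numerator = p_obs_given_ghost * prior
    evidence = numerator + p_obs_given_no_ghost * (1 - prior)
    return numerator / evidence

belief = 0.01    # prior: ghosts are unlikely
THRESHOLD = 0.5  # the "threshold of belief" from rule #1

# Each observation: (P(obs | ghost), P(obs | no ghost))
observations = [
    (0.8, 0.10),  # cold spot where there shouldn't be one
    (0.5, 0.40),  # creaking floorboards (nearly uninformative)
    (0.9, 0.05),  # object moves with no visible cause
]

for p_g, p_ng in observations:
    belief = update(belief, p_g, p_ng)
    action = "get out" if belief > THRESHOLD else "keep observing"
    print(f"P(ghost) = {belief:.3f} -> {action}")
```

Note that the nearly uninformative observation (likelihood ratio close to 1) barely moves the belief, while the strongly differential ones dominate; committing to a belief in advance would ignore exactly that structure.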
2) Ask myself what I would differentially expect to observe if ghosts existed or didn't, and look for those things
The tricky part about this is establishing how much weird stuff you'd expect to see in the absence of ghosts. There will always be unexplained phenomena, but how many is too many?
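One rough way to put a number on "how many is too many": model the count of unexplained events under the no-ghost hypothesis as a Poisson process and ask how surprising the observed count is. The baseline rate and observed count below are assumptions for illustration.

```python
# How surprising is the observed count of unexplained events, assuming
# no ghosts? Model the count as Poisson with an assumed baseline rate.
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

baseline_rate = 3.0  # assumed unexplained events/year with no ghosts
observed = 9         # events actually seen this year

p = poisson_tail(observed, baseline_rate)
print(f"P(>= {observed} events | no ghosts) = {p:.4f}")
```

A small tail probability alone isn't a posterior, of course: you would still need the likelihood of those events given ghosts, and your prior, before updating. It only quantifies the "too many" half of the question.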
Many things in AI that look like they ought to be easy have hidden gotchas which only turn up once you start trying to code them
I don't disagree (though I think that I'm less confident on this point than you are).
Human beings don't make billions of sequential self-modifications, so they're not existence proofs that human-quality reasoning is good enough for that.
Why do you think that an AI would need to make billions of sequential self-modifications when humans don't need to?
I'm not sure how to go about convincing you that stable-goals self-modification is not something which can be taken for granted to the point that there is no need to try to make the concepts crisp and lay down mathematical foundations.
I agree that it can't be taken for granted. My questions are about the particular operationalization of a self-modifying AI that you use in your publication. Why do you think that the particular operationalization is going to be related to the sorts of AIs that people might build in practice?
"...need to make billions of sequential self-modifications when humans don't need to" to do what? Exist, maximize utility, complete an assignment, fulfill a desire...? Some of those might be better termed as "wants" than "needs" but that info is just as important in predicting behavior.
I think I understand that a little better now. So thank you for taking the time to explain that to me.
Even so, it seems all I must do is add to my counterexample a prior track record of the little boy changing strategies while pretending to go along with authority. Reconsidering my little boy example with that in mind, does that change your reply?
Also, I fail to see how your response ameliorates my objection to the claim "it is impossible for A and ~A to both be evidence for B." By your own explanation, they are both evidence, albeit offering unequal relative probabilities (forgive me if I'm getting the password wrong there, but I think you can surmise what I'm getting at). Maybe if we say "it is impossible for A and ~A to both offer the same relative probability for B at the same time, concerning the same situation, and given the same subjective view of the facts," we have something that doesn't lead us to claim untrue things about someone else's argument; in the case above, the accusation was that their argument depends on A and ~A at the same time and in the same way, when the precise claim in question is actually that A can be evidence for B in one situation, and that, depending on what is subsequently observed, ~A could at some later date also end up being evidence for B. I'm not sure I've explained that clearly, but I'll keep trying until either I see what I'm missing or I manage to express clearly what may well be coming out as gibberish. Either way, I get a little slice of the self-improvement I'm looking for.
Thanks again, and I hope you can forgive my wet ears on this and bear with me. The benefits of our exchanges here will probably be pretty one-sided: I have almost nothing to offer a more experienced rationalist, and lots to gain. I realize that, so bear with me, and please know I am grateful for the feedback.
based upon the expectation set upon the observance of subsequent facts, at some later date, ~A could also end up being evidence for B
Here's a contradiction with A and ~A both being evidence for the same thing. You could tell your spouse "Go up and check if little Timmy went to bed". Before ze comes back you already have an estimate of how likely Timmy is to go to bed on time (your prior belief). But then your spouse, who was too tired to climb the stairs, comes back and tells you "Little Timmy may or may not have gone to bed". Now, if both of those possibilities would be evidence of Timmy's staying up late then you should update your belief accordingly. But how can you do that without receiving any new information?
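The point of the Timmy example is conservation of expected evidence: the prior P(B) is the weighted average of the posteriors P(B|A) and P(B|~A), so both branches cannot shift your belief in the same direction. A minimal sketch, with all numbers invented for illustration:

```python
# Conservation of expected evidence, using the Timmy example.
# A = the check finds Timmy in bed; B = Timmy is staying up late.
# All probabilities are illustrative assumptions.

p_A = 0.7             # prior that the check would find him in bed
p_B_given_A = 0.1     # P(staying up late | found in bed)
p_B_given_notA = 0.9  # P(staying up late | not found in bed)

# Law of total probability: the prior is the expected posterior.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
print(f"P(B) = {p_B:.2f}")

# If both P(B|A) and P(B|~A) were above P(B), their weighted average
# would also be above P(B) -- a contradiction. So one branch must
# lower your belief:
assert min(p_B_given_A, p_B_given_notA) <= p_B <= max(p_B_given_A, p_B_given_notA)
```

This is exactly why the tired spouse's report "he may or may not have gone to bed" licenses no update: it is the same uninformative mixture you already averaged over.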
Math works well with the kids. I write "1 + 1 = 2" and say "simple", then write the quadratic equation and say "not simple".
Finally, Lucas implicitly assumes that if the mind is a formal system, then our “seeing” a statement to be true involves the statement being proved in that formal system.
To me this seems like the crux of the issue (in fact, I perceive it to be the crux of the issue, so QED). Of course there are LW posts like Your Intuitions are not Magic, but surely a computer could output something like "arithmetic is probably consistent for the following reasons..." instead of a formal proof attempt if asked the right question.
Same here, utterly sacrilicious.
Steven Landsburg at TBQ has posted a seemingly elementary probability puzzle that has us all scratching our heads! I'll be ignominiously giving Eliezer's explanation of Bayes' Theorem another read, and in the meantime I invite all you Bayes-warriors to come and leave your comments.