What a clusterfuck. I love it. Reminds me of how Sam Hughes made his heroine summon a demon into her bed, explaining that stories are more interesting when characters don't have perfect reasoning.
I've enjoyed the Mindhacks tip to write in books -- If you can see how to write it better, summarize it better, index it better, or organize it better, doing so is an active use of the information.
I gather that another common speculation is that HPMOR!Voldemort has to 'synchronize' himself to all of his Horcruxes.
If so, what are the chances that part of the motivation for drawing Harry into the spell is that he can synchronize himself to the plaque and to Harry at the same time?
Incremental synchronizations are interesting -- if Horcruxes can get out of sync, then the "soul" recovered from each may develop conflicting objectives.
Prediction for Chapter 90: Time Pressure, Part 3:
"Wait a moment," you say. "Time Pressure, Part 3? Harry already lost his race against the clock. Why would Chap. 90 be called 'Time Pressures'?"
Because Harry's race against the clock to save Hermione's life has only just begun, and he has slightly less than six hours left. Eliezer mentioned that one of his most significant purposes of Chap. 86 was to update characters' states of knowledge before the next arc. If you recall, in that chapter, Harry learned the word "horcrux." And in Chap. 87, Harry learned of the philosopher's stone.
So what will Harry do? Get the shell removed from his time turner, or obtain a time turner from someone else. Learn about the Horcrux ritual as quickly as possible, travel back in time, get Hermione to create a horcrux, and erase her memory of doing so, so that her death plays out just as before. Then start working on the stone to restore Hermione to life. (He could also take the "bone of the father, flesh of the servant, blood of the enemy" route, but positively identifying Hermione's enemy could be difficult. Lucius Malfoy and Company, who were tricked into antagonizing Hermione, might not count for purposes of the ritual.)
The hard part, of course, will be getting Hermione to kill, but Harry can probably find someone in a hospital who has only days to live and convince Hermione that creating a horcrux is a net ethical positive.
Without Hermione's death, murder would have been a line Harry was unwilling to cross. I think that whoever is behind this plot really wants Harry to cross the Moral Event Horizon and/or create the stone (the second possibility is less likely though, since Hermione was already working on the stone, but that fact could have been unknown to the plotter).
Edit: As of Chapter 101, this prediction has probably been proven wrong, unless Harry's memory of executing this plan has been erased (not completely impossible; there's a moment when he becomes disoriented). But I think this would make a totally awesome piece of recursive fanfiction. After HPMoR is finished, I might write this.
Perhaps Harry will do something with his personal copy of Hermione and a hack of Merlin's computer.
Just hours before:
"Of course there is!" Harry said. The boy suddenly looked a bit more vulnerable. "You mean there isn't a copy of me living in your head?"
There was, she realized; and not only that, it talked in Harry's exact voice.
Given Voldemort's novel formatting of his brain, Harry apparently already has the hardware to contain or access one extra soul; how much more would he need for another?
It seems that it would be easier to keep one's identity small the less one deviates from the norms.
Literally screaming racial slurs in a person's face is an offensive act. Acting cool may be one good defensive strategy, but other strategies are not unwarranted.
Maybe I'm having a problem with 'offended' as a mental state as opposed to something like 'angry'. 'Angry' seems more of a mental state or feeling within yourself, while 'offended' seems less like a feeling and more like a description of an act that you are attributing to the other person.
I read this post more as "Don't get angry" than as "Don't get offended" or "Don't feel attacked".
How much loss is acceptable in the reconstruction?
I'd imagine the reconstructed minds would be happier with their own fidelity than the deconstructed minds. And that the reconstructor might trade off some fidelity for utility towards whatever purpose they had in doing the reconstruction.
I see http://lesswrong.com/lw/b93/brain_preservation/ and http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/ -- are there other good discussions?
The gap between creating a working mind and producing an exact reconstruction seems large.
A more mundane example:
The Roomba cleaning robot is scarcely an agent. While running, it does not build up a model of the world; it only responds to immediate stimuli (collisions, cliff detection, etc.) and generates a range of preset behaviors, some of them random.
It has some senses about itself — it can detect a jammed wheel, and the "smarter" ones will return to dock to recharge if the battery is low, then resume cleaning. But it does not have a variable anywhere in its memory that indicates how clean it believes the room is — an explicit representation of a utility function of cleanliness, or "how well it has done at its job". It does, however, have a sensor for how dirty the carpet immediately below it is, and it will spend extra time on cleaning especially dirty patches.
Because it does not have beliefs about how clean the room is, it can't have erroneous beliefs about that either — it can't become falsely convinced that it has finished its job when it hasn't. It just keeps sweeping until it runs out of power. (We can imagine a paperclip-robot that doesn't think about paperclips; it just goes around finding wire and folding it. It cannot be satisfied, because it doesn't even have a term for "enough paperclips"!)
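A minimal sketch of this kind of purely reactive control loop (the sensor and motor method names here are hypothetical, not the actual iRobot firmware interface):

```python
import random

def reactive_vacuum_step(sensors, motors):
    """One tick of a purely stimulus-response cleaner.

    Note what is absent: no map of the room, no estimate of how clean
    the room is, no "done" condition other than the battery running out.
    """
    if sensors.battery_low():
        motors.seek_dock()                       # the one self-protective behavior
    elif sensors.cliff_detected():
        motors.back_up_and_turn()
    elif sensors.bump_detected():
        motors.turn(random.uniform(90, 180))     # preset, partly random response
    elif sensors.dirt_detected():
        motors.spot_clean()                      # extra passes over the dirty patch
    else:
        motors.drive_forward()

def run(sensors, motors):
    # Keeps sweeping until the power runs out; it never "believes" it is finished.
    while sensors.battery_remaining() > 0:
        reactive_vacuum_step(sensors, motors)
```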
It is scarcely an agent. To me it seems even less "agenty" than an arbitrage daemon, but that probably has more to do with the fact that it's not designed to interact with other agents. But you can set it on the floor and push the go button, and in an hour come back to a cleaner floor. It doesn't think it's optimizing anything, but its behavior has the result of being useful for optimizing something.
Whether an entity builds up a model of the world, or is self-aware or self-protecting, is to some extent an implementation detail, which is different from the question of whether we want to live around the consequences of that entity's actions.
The agent/tool distinction is in the map, not the territory — it's a matter of adopting the intentional stance toward whatever entity we're talking about. To some extent, saying "agent" means treating the entity as a black box with a utility function printed on the outside: "the print spooler wants to send all the documents to the printer" — or "this Puppet config is trying to put the servers in such-and-so state ..."
My roomba does not just keep sweeping until it runs out of power. It terminates quickly in a small space and more slowly in a large space. To terminate, it must somehow sense the size of the space it is working in and compare it to some register of how long it has operated.
Roombas try to build up a (very limited) model of how big the room is from the longest uninterrupted traversal they can sense. See "Can you tell me more about the cleaning algorithm that the Roomba uses?" in http://www.botjunkie.com/2010/05/17/botjunkie-interview-nancy-dussault-smith-on-irobots-roomba/
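Something like the scheme described in that interview could be sketched as follows (the particular formula relating traversal length to run time is my guess, not iRobot's):

```python
def plan_run_time(longest_clear_run_m, minutes_per_metre=3.0, max_minutes=60.0):
    """A very limited 'model' of the room: a single number.

    The robot tracks the longest uninterrupted straight-line traversal it
    has managed, treats that as a proxy for room size, and scales its
    total cleaning time accordingly (capped so that it always terminates).
    """
    return min(longest_clear_run_m * minutes_per_metre, max_minutes)

# A small closet (1.5 m clear run) gets a short run; a large open room a longer one.
print(plan_run_time(1.5))   # 4.5 minutes
print(plan_run_time(8.0))   # 24.0 minutes
```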
I'm confused about the "hide" part of the initial task, or the "fooling" that needs to be unfooled. The objective function rewards ineffective fooling.
It seems you simply mean "store" such that you can find it.
A simple GLUT cannot be conscious and/or intelligent because it has no working memory or internal states. For example, suppose the GLUT was written at t = 0. At t = 1, the system is told that "x = 4" and has to remember it; no operation is taken, since the GLUT is already set. At t = 2, the system is queried "what is x?". Since the GLUT was written before the information that "x = 4" was supplied, the GLUT cannot know what x is. If the GLUT somehow has the correct answer, then the GLUT goes beyond just having precomputed outputs to precomputed inputs: somehow the GLUT author also knew an event from the future, in this case that "x = 4" would be supplied at t = 1.
It would have to be a Cascading Input Giant Lookup Table (CIGLUT). E.g.:
At t = 1, input = "1) x = 4"
At t = 2, input = "1) x = 4" (previous inputs) + "what is x?" (new input)
We would have to postulate infinite storage and reaffirm our commitment to ignoring combinatorial explosions.
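A toy illustration of the difference (the inputs and entries here are hypothetical, purely for illustration): a plain GLUT keyed only on the current input cannot answer correctly, while a cascading-input table keyed on the entire history can, at the cost of an exponentially larger key space.

```python
# Plain GLUT: keyed on the current input only. There is no row whose
# answer to "what is x?" depends on what was said earlier, so it cannot
# "remember" that x = 4.
glut = {
    "x = 4": "okay",
    "what is x?": "I don't know",
}

# Cascading-Input GLUT: keyed on the whole input history. Memory is
# faked by precomputing an answer for every possible history, which is
# where the combinatorial explosion comes in.
ciglut = {
    ("x = 4",): "okay",
    ("x = 4", "what is x?"): "4",
    ("what is x?",): "I don't know",
}

history = ()
for utterance in ["x = 4", "what is x?"]:
    history = history + (utterance,)
    print("GLUT:  ", glut[utterance])
    print("CIGLUT:", ciglut[history])
```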
Think about it. I need to go to sleep now, it's 3 AM.
Eliezer covered some of this in his description of the twenty-ply GLUT as not infinite, but still much larger than the universe. The number of plies in the conversation is the number of "iterations" simulated by the GLUT. For an hour-long Turing test, the GLUT would still not be infinite (i.e., it would still describe the Chinese Room thought experiment) and, for the purposes of the thought experiment, it would still be computable without infinite resources.
Certainly, drastic economies could be had by using more complicated programming, but the outputs would be indistinguishable.
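To get a back-of-the-envelope sense of the scale (all the numbers here are assumed for illustration): even with a modest vocabulary and a bounded conversation length, the table dwarfs the roughly 10^80 atoms in the observable universe.

```python
vocabulary = 1000        # assumed distinct words the interlocutor might use
words_per_reply = 20     # assumed maximum reply length
plies = 20               # conversational turns the GLUT must cover

inputs_per_ply = vocabulary ** words_per_reply   # possible replies at one ply
table_entries = inputs_per_ply ** plies          # one output per possible 20-ply history (dominant term)

print(f"entries ~ 10^{len(str(table_entries)) - 1}")   # ~10^1200
print("atoms in observable universe ~ 10^80")
```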
If the models depend on factors which cannot be reliably forecast (e.g. "PDO, AMO, and solar cycles" above), then they are a bit of a fake explanation, and you can't use those factors as reliable inputs to a forecast model. Would it be reasonable to use Akasofu's sine-wave extrapolation of the multi-decadal oscillation in light of the prior two observed "cycles"?
Also, the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation indices are measures of the response of the system, and treating them as drivers of the system smuggles some of the dependent response variables into the supposedly independent predictor variables.
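For reference, a sine-plus-trend extrapolation of the kind attributed to Akasofu could be sketched like this (the synthetic data and parameter values are mine, purely illustrative, not Akasofu's actual fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def trend_plus_oscillation(year, slope, amplitude, period, phase, offset):
    """Linear recovery trend plus a single multidecadal sine 'cycle'."""
    return (slope * (year - 1900)
            + amplitude * np.sin(2 * np.pi * (year - phase) / period)
            + offset)

# Synthetic "observed" anomalies standing in for the two observed cycles.
rng = np.random.default_rng(0)
years = np.arange(1900, 2011)
obs = (0.005 * (years - 1900)
       + 0.1 * np.sin(2 * np.pi * (years - 1910) / 60)
       + rng.normal(0, 0.05, years.size))

params, _ = curve_fit(trend_plus_oscillation, years, obs,
                      p0=[0.005, 0.1, 60, 1910, 0.0])

# The worry raised above: extrapolating a curve fit to only two observed
# cycles well past the data it was fit on.
future = np.arange(2011, 2101)
forecast = trend_plus_oscillation(future, *params)
print(forecast[:5])
```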