Correct me if I'm wrong, but there seem to be two separate challenges on the Potions room parchment: a simple one consistent with canon and with the skills and abilities of the target audience, and a complex one requiring an hour or so of careful and precise work. It looks like Harry and Quirrellmort focus exclusively on the long formula, ignoring the puzzle.
On rereading the relevant part of Ch. 107, Harry seems to get an idea he doesn't want to share shortly after the broomstick conversation. On a close reading, he manages to avoid the topic...
Hmm. How about having someone else die in Hermione's place?
I don't recall offhand if the death burst was recognizable as Hermione, but otherwise it seems doable. Dumbledore said he felt a student die and only realized it was Hermione once he saw her.
You'd need Polyjuice for the visual appearance, and either Hermione's presence or a fake Patronus for past-Harry to follow. Hermione is unlikely to go along with the plan willingly, so she'd need to be tricked or incapacitated. Hard to tell which would be easier.
Given the last words, Hermione's doppelganger mig...
LessWrongers are surprised by this? It appears figuring out metabolism and nutrition is harder than I thought.
I believe that obesity is a problem of metabolic regulation, not overeating, and this result seems to support my belief. Restricting calories to regulate your weight is akin to opening the fridge door to regulate its temperature. It might work for a while, but in the long run you'll end up breaking both your fridge and your budget. Far better to figure out how to adjust the thermostat.
Some of the things that upregulate your fat set point are a histor...
You have it backwards. The bet you need to look at is the risk you're insuring against, not the insurance transaction.
Every day you're betting that your house won't burn down today. You're very likely to win but you're not making much of a profit when you do. What fraction of your bankroll is your house worth, how likely is it to survive the day and how much will you make when it does? That's what you need to apply the Kelly criterion to.
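To make that concrete, here's a minimal sketch in Python with made-up numbers (the house value, burn probability, and premium are purely illustrative); it compares expected log wealth, the quantity Kelly-style reasoning maximizes, with and without insurance:

```python
import math

# Illustrative numbers only: a house worth a large fraction of your net
# worth, a tiny daily chance of it burning down, and a premium priced
# above the expected loss.
bankroll = 400_000    # total net worth, house included
house = 300_000       # value at risk in today's implicit "bet"
p_burn = 1e-6         # probability the house burns down today
premium = 0.50        # daily insurance cost (> expected loss of 0.30)

# Expected log wealth (what the Kelly criterion maximizes) for each choice:
uninsured = (1 - p_burn) * math.log(bankroll) + p_burn * math.log(bankroll - house)
insured = math.log(bankroll - premium)

print(f"uninsured: {uninsured:.9f}")
print(f"insured:   {insured:.9f}")
# Even though the premium exceeds the expected loss, the insured option
# wins on expected log wealth, because the house is such a large share of
# the bankroll -- which is why you look at the risk, not the transaction.
```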
Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value.
The two can't be perfectly identical if they disagree. You have to additionally assume that the discrepancy is in the parts that reason about their values instead of the values themselves for the conclusion to hold.
Wouldn't the failure to acknowledge all the excitement nuclear war would cause be an example of the horns effect?
I immediately answered no and rated everyone who said yes as completely undateable
I can understand answering no for emotional or political reasons, but rating the epistemically correct answer as undateable? That's... a good reason for me to answer such questions honestly, actually.
Having such enemies in the first place? Definitely not.
There are entire cultural systems of tracking prestige based around having such enemies; the vestiges of them survive today as modern "macho" culture. Having enemies to crush mercilessly, and then doing so, is an excellent way to signal power to third parties.
I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.
Speech and reading seem to be at most 60 bits per second. A single neuron is faster than that.
Compare to the human brain. The optic nerve transmits 10 million bits per second and I'd expect interconnections between brain areas to generally fall within a few orders of magnitude.
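For scale, a quick back-of-the-envelope calculation with just those two quoted figures:

```python
import math

speech_bps = 60               # rough upper bound for speech/reading, as above
optic_nerve_bps = 10_000_000  # optic nerve estimate quoted above

ratio = optic_nerve_bps / speech_bps
print(f"{ratio:,.0f}x, about {math.log10(ratio):.1f} orders of magnitude")
# -> 166,667x, about 5.2 orders of magnitude
```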
I'd call five orders of magnitude a serious bottleneck and don't really see how it could be significant...
An organization could be viewed as a type of mind with an extremely redundant, modular structure. Human minds contain a large number of interconnected specialized subsystems; in an organization, the humans would be the subsystems. Comparing the two seems illuminating.
Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.
Intersystem communication is horrendously inefficient in o...
One of the advantages of bureaucracy is creating value from otherwise low-value inputs. The collection of people working in the nearest McDonald's probably isn't capable of figuring out from scratch how to run a restaurant. But following the bureaucratic blueprint issued from headquarters allows those same folks to produce a hamburger on demand and get paid for it.
That's a major value of bureaucratic structure - lowering the variance and raising the downside (i.e. a fast food burger isn't great, but it meets some minimum quality and won't poison you).
I haven't seen a single precise definition of what constitutes an "observation" that's supposed to collapse the wavefunction in the Copenhagen interpretation. Decoherence, OTOH, seems to describe the observed effects perfectly, including the consistency of macro-scale history.
This in my opinion proves that memory sticks with the branch my consciousness is in.
Actually it just proves that memory sticks with the branch it's consistent with. For all we know, our consciousnesses are flitting from branch to branch all the time and we just don't ...
Of course, this has its own moral dilemmas as well - such as the fact that you're as good as dead to your loved ones in the timeline that you just left - but they're generally smaller than erasing a universe entirely.
You could get around this by forking the time traveler with the universe: in the source universe it would simply appear that the attempted time travel didn't work.
That would create a new problem, though: you'd never see anyone leave a timeline but every attempt would result in the creation of a new one with a copy of the traveler added at the dest...
Interesting, I've occasionally experimented with something similar but never thought of contacting Autopilot this way. Yeah, that's what I'll call him.
I get the feeling that this might be useful in breaking out of some of my procrastination patterns: just call Autopilot and tell him which routine to start. Not tested yet, as then I'd forget about writing this reply.
I get the feeling that if Harry learns the Killing Curse he'll manage to tweak it somehow, on the order of Patronus 2.0 or partial Transfiguration.
I arrived at this idea by intuition - it seems to fit, but I don't think there's much explicit support. AFAICT I'm mostly pattern-matching on story logic, AK's plot significance and symmetry with Patronus, and Harry's talent for breaking things by thinking at them.
I think my probability estimate for this (given that Harry learns AK in the first place) is around 30%, but I suspect I'm poorly calibrated.
I've been meditating for about two weeks now, and been progressing surprisingly quickly. Concentration came easy, and I started having interesting experiences pretty much straight away. I'd like to share my latest sitting and hopefully get some input.
I sat cross-legged on my couch and started concentrating on my breath as usual. Soon there was a discontinuity: my concentration lapsed, it felt as if my attention was fully in a nonsensical, dreamlike thought for just a second and suddenly I was in a clearer, lighter, easier state. It's happened similarly sev...
My biggest problem when meditating is that when I focus on my breath, I switch to breathing consciously[...]
I've started to suspect that this difficulty is actually a feature. Observing without interfering seems like an important skill to learn if the goal is to be more aware of your thoughts and actions in general.
Imagine, say, being consciously aware of every detail of your leg movements while walking; it becomes a lot more difficult if you don't know how to stay out of your own way.
It seems to me as if you view terminal goals as universal, not mind-specific. Is this correct, or have I misunderstood?
The point, as I understand it, is that some humans seem to have happiness as a terminal goal. If you truly do not share this goal, then there is nothing left to explain. Value is in the mind, not inherent in the object it is evaluating. If one person values a thing for its own sake but another does not, this is a fact about their minds, not a disagreement about the properties of the thing.
Was this helpful?
Another data point: I wasn't too bothered with the general sales-pitchiness of the first two posts, possibly because I've occasionally gained useful knowledge by reading actual sales pitches from the self-help crowd.
That said, you had me hooked by the third paragraph of Part I and I've been going "get to the POINT already" since then. I do see some value in personal testimony, but it should be far more condensed.
You seem to currently have exactly one downvoted comment outside the HPMOR discussion and that at only -1. What makes you think the effects you see aren't simply a result of people actively participating in these threads noticing and responding to comments they deem poorly supported? No following around required.
As for the downvotes, I suspect an overwhelming majority of them result from your adversarial reactions to criticism, not the HPMOR content. How many downvotes had this received before you added this edit?
...What the hell is with the random neg reps, se
Here's a vote for not-mind-reading. This seems deliberately written to suggest Quirrell's reacting to body language, not thought:
Without any conscious decision, she shifted her weight to the other foot, her body moving away from the Defense Professor -
"So you think I am the one responsible?" said Professor Quirrell.
What's the in-story justification for the dementor's presence anyway? I thought it seemed awfully convenient in case Harry decided to demonstrate his Patronus 2.0 but I couldn't figure out how it'd help enough.
I'd forgotten about the potential for ruining others' patronuses, though. That makes a lot more sense, especially considering he'd just reached into his dark side - possibly deeper than he'd ever willingly done before.
My guess: it wouldn't be enough at this point to just demonstrate a superior patronus or tell people about the possibility of ruining ...
Both of those seem to fit the pattern perfectly when you consider evolution as an actor.
Maybe we should be discussing optimization power instead of intelligence; evolution seems a pretty decent manipulator considering how stupid it is.
(ch56)
Has the nature of Harry's mysterious dark side been established yet? If not, the latest chapter gives a strong hint toward it being a shard of Voldemort.
In chapter 56, Harry discovers that his vulnerability to Dementors is due to his dark side's fear of death. And, back in chapter 39, in the discussion between Harry and Dumbledore it was suggested that Voldemort was motivated by fear of death. Not quite proof, but interesting nonetheless.
Where do people get this "no depth cues" claim? The way her lifted leg moves so obviously suggests a clockwise motion that even Alicorn's link can't make my brain see the counterclockwise motion for longer than a round or two at most.
I mean, the only way the perspective makes any sense is if the lifted leg is furthest away when it's the highest up in the 2d image. Yes, the shadow/reflection is all wrong but for some reason my brain just refuses to give that priority.
What cues do others use? I'd love to see variations of this image with different ...
Sounds like your definition of "well-socialized" is closer to "well-adjusted" than RobinZ's.
As I understand them, skill in navigating social situations, epistemic rationality and psychological well-being are all separate features. They do seem to correlate, but the causal influences are not obvious.
ETA: Depends a lot on the standard you use, too. RobinZ is probably correct if you look at the upper quartile but less so for the 99th percentile.
In short, I used to believe that social skills are a talent you're born with, not a skill to be developed. Luckily just being around people and paying attention improved my eye for social cues enough that I eventually noticed.
This relates to Carol S. Dweck's book Mindset, which I've mentioned before. I'm thinking of writing more about it sometime soon.
It becomes a bit less surprising when you consider that I attribute my low score mostly to relatively recent changes in my social skills and preferences, and the criteria I checked were the ones about all-absorbing narrow interests and imposition of routines and interests. As a matter of fact, the changes I mention came about as a result of months of near-obsessive study and accidental practice. (I did not grasp the importance of practice at the time but that's a subject for another comment.)
Maybe I wasn't that far toward the autism end of the spectrum to begin with but it does make me wonder just how much others could improve their social skills given the right circumstances.
I originally looked at the poll but didn't answer until now.
I fit two of the Gillberg diagnostic criteria and scored 19 on the Wired test. I would've definitely scored much higher a few years back, when I suspected I might have Asperger's. My social skills have developed a lot since then, and I'm now more inclined to attribute my social deficiencies to lack of practice than anything else.
For what it's worth, I do seem to follow a pattern of intense pursuit of relatively few interests that change over time.
(Note: This post is speculation based on memory and introspection and possibly completely mistaken. Any help in clarifying my thinking and gathering evidence on this would be greatly appreciated.)
I suspect that I'm also affected by this and just haven't consciously noticed. It feels like I'm a lot more comfortable with analytical modes than with more intuitive/social ones, and I'm probably spending more time inducing them than I should.
I'd like to be more aware of my mental modes and find more effective ways of influencing them. Any suggestions?
ETA: Now that I think abo...
Blow up the paradox-causing FTL? Sounds like that could be weaponized.
I was about to go into detail about the implications of FTL and relativity but realized that my understanding is way too vague for that. Instead, I googled up a "Relativity and FTL travel" FAQ.
I love the idea of strategically manipulating the FTL simultaneity landscape for offensive/defensive purposes. How are you planning to decide what breaks, and how severely, if a paradox is detected?
Interesting. I thought that my thinking would be mostly words, like an inner monologue or talking to myself. Now that I pay attention, it is more like images, emotions, and concepts constantly flashing through my head, most gone before I even notice them.
Introspectively it seems that my thinking has changed and I just haven't noticed until now. Or that my conscious mind has finally learned to shut up and pay attention.
aversion to discomfort
This made me think of what pjeby calls the pain brain. In short, our actions can be motivated either by moving closer to what we want (pull) or by moving away from what we try to avoid (push). Generally, push overrides pull, so you may not even notice what you want if you're too busy avoiding what you don't.
It may be useful to explore your goals and motivations with relaxed mental inquiry and critically examine any fears or worries that may come up.
I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.
The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family) but teachers, parents and people with interest in self-improvement will likely benefit...
How do I know I'm not simulated by the AI to determine my reactions to different escape attempts? How much computing power does it have? Do I have access to its internals?
The situation seems somewhat underspecified to give a definite answer, but given the stakes I'd err on the side of terminating the AI with extreme prejudice. Bonus points if I can figure out a safe way to retain information on its goals so I can make sure the future contains as little utility for it as feasible.
The utility-minimizing part may be an overreaction but it does give me an idea: Maybe we should also cooperate with an unfriendly AI to such an extent that it's better for it to negotiate instead of escaping and taking over the universe.
As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.
Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.
I think Eliezer sees translating babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".
You worry about that all-important status when you fear losing it.
Want to win? Then focus on winning, not on not-losing. You need to if you want to be seen as high-status, anyway. Fear of loss is low-status, and so is worrying about what others think.
Navigate the minefield, sure. But do it from a position of strength, not of weakness.