Comment author: [deleted] 13 May 2015 07:54:13PM 2 points [-]

I started an HPMOR follow-up fanfic called Ginny Weasley and the Methods of Rationality and finished the first chapter. I am not a native English speaker but still decided to write in English. While I needed quite a lot of help with correcting mistakes, I was told by someone that my writing was better than what he had seen from some native speakers.

In response to comment by [deleted] on Bragging Thread May 2015
Comment author: bogdanb 02 June 2015 09:14:26PM *  0 points [-]

For what it’s worth, the grammar and spelling were much better than is usual for even the native English part of the Internet. That’s probably fainter praise than it deserves; I don’t remember actually noticing any such fault, which probably means there are few of them.

The phrasing and wording did sound weird, but I guess that’s at least one reason why you’re writing, so congratulations and I hope you keep it up! I’m quite curious to see where you’ll take it.

Comment author: tohu 28 February 2015 08:38:03PM 14 points [-]

Beneath the moonlight glints a tiny fragment of silver, a fraction of a line... (black robes, falling) ...blood spills out in litres, and someone screams a word.

I'm relatively confident that this quote is a part of the solution. Maybe Harry partially transfigures a monofilament blade and starts cutting down everything.

Comment author: bogdanb 28 February 2015 11:19:20PM 2 points [-]

Indeed, the only obvious “power” Harry has that is (as far as we know) unique to him is Partial Transfiguration. I’m not sure if Voldie “knows it not”; as someone mentioned last chapter, Harry used it to cut trees when he had his angry outburst in the Forbidden Forest, and in Azkaban as well. In the first case Voldie was nearby, allegedly to watch out for Harry, but far enough to be undetectable via their bond, so it’s possible he didn’t see what exact technique Harry used. In Azkaban he was allegedly unconscious.

I can’t tell if he could have deduced the technique only by examining the results. (At least for the forest occasion he could have made time to examine the scene carefully, and I imagine that given the circumstances he’d have been very interested to look into anything unusual Harry seemed to be able to do.)

On the plus side, Harry performed PT by essentially knowing that objects don’t exist; so it could well be possible to transfigure a thin thread of air into something strong enough to cut. For that matter, that “illusion of objects” thing should allow a sort of “reverse-Partial” transfiguration, i.e. transfiguring (parts of) many objects into a single thing. Sort of like what he did to the troll’s head, but applied simultaneously to a slice of air, wands, and Death Eaters. Dumbledore explicitly considers it as a candidate against Voldemort (hint: Minerva remembers Dumbledore using transfiguration in combat). And, interestingly, it’s a wordless spell (I’m not even sure if Harry can cast anything else wordlessly), and Harry wouldn’t need to raise his wand, or even move at all, to cast it on air (or on the space-time continuum, or the world wave-function, whatever).

On the minus side, I’m not sure if he could do it fast enough to kill the Death Eaters before he’s stopped. He did get lots of transfiguration training, and using it in anger in the forest suggests he can do it pretty fast, but he is being watched, and IIRC transfiguration is not instantaneous. He probably can’t cast it on Voldie or his wand, though he might be able to destroy the gun. And Voldemort can certainly find lots of ways to kill him without magic or touching him directly; hell, he probably knows kung fu and such. And even if Harry managed to kill this body, he’d have to find a way to get rid of the Horcruxes. (I still don’t understand exactly what the deal is with those. Would breaking the Resurrection Stone help?)

Comment author: Benito 17 February 2015 12:49:00AM 0 points [-]

And I'd managed to forget about their magics not touching. So yes, maybe it was body language after all.

Comment author: bogdanb 17 February 2015 10:18:34PM 0 points [-]

Well, we only know that Harry feels doom when near Q and/or his magic, that in one case in Azkaban something weird happened when Harry’s Patronus interacted with what appeared to be an Avada Kedavra bolt, and that Q appears to avoid touching Harry.

Normally I’d say that faking the doom sensations for a year, and faking being incapacitated while trying to break someone out of Azkaban, would be too complicated. But in this case...

Comment author: VAuroch 06 September 2014 08:48:40AM 2 points [-]

Any unbounded goal in the vein of 'Maximize concentration of <thing X> in this area' has local scope but may require potentially unbounded expenditure.

Also, as has been pointed out for general satisficing goals (which most naturally local-scale goals will be): acquiring more resources lets you keep working toward the goal, maximizing the chances that you have properly satisfied it. Even if the target is easy to hit, becoming increasingly certain that you've hit it can use arbitrary amounts of resources.

Comment author: bogdanb 17 February 2015 08:50:56PM 0 points [-]

Both good points, thank you.

Comment author: TylerJay 06 September 2014 05:20:27AM 2 points [-]

It was in a paper I read. Here it is.

Comment author: bogdanb 17 February 2015 08:38:22PM 0 points [-]

Thank you, that was very interesting!

In response to comment by Qwake on Truth vs Utility
Comment author: RichardKennaway 16 August 2014 06:47:34AM 0 points [-]

If that's the case it would be obvious (to me) to choose Option 2 and ask a question with a view to determining if this is a simulation and if so how to get out of it.

But I think you're just putting a hand on the scales here. In the OP you wrote that a perfect simulation is "reality for" the people living in it. There is no such thing as "reality for", only "reality". Their simulation is still a simulation. They just do not know it. If I believe the Earth is flat, is a flat Earth "my reality"? No, it is my error, whether I ever discover it or not.

Comment author: bogdanb 05 September 2014 11:57:56PM 0 points [-]

I sort of get your point, but I’m curious: can you imagine learning (with thought-experiment certainty) that there is actually no reality at all, in the sense that no matter where you live, it’s simulated by some “parent reality” (which in turn is simulated, etc., ad infinitum)? Would that change your preference?

Comment author: TylerJay 01 September 2014 06:26:07PM 3 points [-]

Even within the Milky Way, most "earthlike" planets in habitable zones around sunlike stars are on average 1.8 billion years older than the Earth. If the "heavy bombardment" period at the beginning of a rocky planet's life is approximately the same length for all rocky planets, which is likely, then each of those 11 billion potentially habitable planets still had 1.8 billion years during which life could have formed. On Earth, life originated almost immediately after the bombardment ended and the earth was allowed to cool. Even if the probability of each planet developing life in a period of 1 billion years is mind-bogglingly low, we still should expect to see life forming on some of them given 20 billion billion planet-years.

Comment author: bogdanb 05 September 2014 11:23:43PM 2 points [-]

most "earthlike" planets in habitable zones around sunlike stars are on average 1.8 Billion years older than the Earth

How do you know? (Not rhetorical; I have no idea and I’m curious.)

Comment author: VAuroch 01 September 2014 04:54:27AM 2 points [-]

Energy acquisition is a useful subgoal for nearly any final goal and has non-starsystem-local scope. This makes strong AIs which stay local implausible.

Comment author: bogdanb 05 September 2014 11:19:41PM *  1 point [-]

If the final goal is of local scope, energy acquisition from out-of-system seems to be mostly irrelevant, considering the delays of space travel and the fast time-scales a strong AI seems likely to operate at. (That is, assuming no FTL and the like.)

Do you have any plausible scenario in mind where an AI would be powerful enough to colonize the universe, but do it because it needs energy for doing something inside its system of origin?

I might see one perhaps extending to a few neighboring systems in a very dense cluster for some strange reason, but I can’t imagine likely final goals (again, for its birth star-system) for which it would need to spend hundreds of millennia even to take over a single galaxy, let alone leave it. (Which is of course no proof there aren’t any; my question above wasn’t rhetorical.)

I can imagine unlikely accidents causing some sort of paperclipper scenario, and maybe vanishingly rare cases where two or more AIs manage to fight each other over long periods of time, but it’s not obvious to me why this class of scenarios should be assigned a lot of probability mass in aggregate.

Comment author: ChristianKl 31 August 2014 08:24:58PM 0 points [-]

My example about the Spanish Inquisition was supposed to indicate that it assumes God exists and does certain things. Those aren't beliefs that any reasonable person holds. If you judge the actions of the Spanish Inquisition while presuming that their beliefs are true, you miss the core issue: that their beliefs aren't true.

The OP did advocate certain beliefs about the nature of memory and experience that I consider wrong. We live in a world where people make real decisions about tradeoffs between experience and memories. I do think you are likely to get those decisions wrong if you train yourself to think about memory based on thought experiments that ignore how memory and experience work.

You don't get an accurate idea about memory by ignoring scientific research about memory. If you want to discuss examples, there are a bunch of real world examples where you increase the pain that people experience but don't give them painful memories. Discussing them based on what we know from scientific research would bring you much more relevant knowledge about the nature of memory.

Saying that you are unsure about memory and then assuming that memory works a certain way is not a good road to go down if you want to understand it better. Especially when you are wrong about how memory works in the first place.

Comment author: bogdanb 05 September 2014 11:06:02PM 0 points [-]

Honestly, I can’t really find anything significant in this comment I disagree with.

Comment author: ChristianKl 31 August 2014 09:44:16AM 1 point [-]

If you are confused about memory then go read cognitive psychology. It's a science that among other things studies memory.

Don't engage in thought experiments based on flawed folk psychology concepts of memory when science is available.

The part about torturing children I don’t even get at all.

It's simply the history of the subject. Doctors did surgery on small children without full anesthesia because the children wouldn't remember it anyway.

We do live today (or at least did a decade ago) in a world where people inflict pain, then erase the memories of the experience and argue that this means the pain they inflicted doesn't matter.

It's a bit like opening a thread arguing that the Spanish Inquisition was right to torture nonbelievers because they acted under the assumption that they could save souls from eternal damnation by doing so.

Comment author: bogdanb 31 August 2014 01:41:03PM 0 points [-]

It's a bit like opening a thread arguing that the Spanish Inquisition was right to torture nonbelievers because they acted under the assumption that they could save souls from eternal damnation by doing so.

But the OP didn’t argue in support of torturing people, as far as I can tell. In the terms of your analogy, my reading of the OP was a bit like:

“Hey, if the Spanish Inquisition came to you and offered the following two options, would you pick either of them, or refuse both? The options are (1) you’re excommunicated, then you get all the cake you want for a week, then you forget about it, or (2) you’re sanctified, then you’re tortured for a week, then you forget about it. Option (3) means nothing happens, they just leave.”

Which sounds completely different to my ears.
