Comment author: cousin_it 15 March 2015 09:44:00PM 3 points

That was a great fanfic. The characters were amazingly written, and many of the scenes were genuinely emotional and moving. What's more, the ending actually worked, in a way that I hadn't expected it to work. We didn't get a resolution to the prophecies about apocalypse, the prophecies about ending death, the nature of magic, the nature of time travel, phoenix fire, and many other things... but that actually feels okay. We've been reading an origin story all along, and a great origin story it is. Eliezer, thank you!

That said, now I have a wishlist for some other fanfics I'd like to read:

1) A fanfic where Harry and Hermione use the scientific method to research the nature of magic.

2) A fanfic where smarter versions of canon characters fight each other with complicated plots.

3) A fanfic where most of Harry's successes come from making surprising but rational decisions.

Much of the promise of HPMOR was that it hinted at being all of these things, but now I feel that it doesn't do any of them really well. Maybe my standards have gone up, or maybe the promise is still there to be realized by some other author. Or has it already been done?

Comment author: khafra 16 March 2015 11:42:44AM 2 points

2) A fanfic where smarter versions of canon characters fight each other with complicated plots.

Hogwarts Battle School

Comment author: khafra 03 March 2015 04:18:41PM 7 points

...supporters say the opposition leader was assassinated to silence him...

I see headlines like this fairly regularly.

Does anybody know of a list of notable opposition leaders, created when all members of the list were alive? Seems like it could be educational to compare the death rate of the list (a) across countries, and (b) against their respective non-notable demographics.
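If anyone wanted to attempt comparison (b), the standard epidemiological tool is a standardized mortality ratio (SMR): observed deaths on the list divided by the deaths the baseline demographic rates would predict. A minimal sketch in Python; the function name and every number below are invented for illustration:

```python
# Hypothetical sketch of comparison (b): deaths among a fixed list of
# opposition leaders vs. their demographic baseline. All numbers invented.

def smr(observed_deaths, person_years, baseline_rate_per_year):
    """Standardized mortality ratio: >1 means more deaths than the
    demographic baseline predicts."""
    expected_deaths = person_years * baseline_rate_per_year
    return observed_deaths / expected_deaths

# Say 120 opposition figures were tracked for 10 years, 9 of them died,
# and the matched age/sex baseline death rate is 0.4% per year:
print(smr(observed_deaths=9, person_years=120 * 10,
          baseline_rate_per_year=0.004))  # 1.875: ~1.9x baseline
```

Comparison (a) would then just be this ratio computed per country, side by side.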

Comment author: palladias 14 December 2014 02:56:16PM 8 points

an atheist turned Catholic

Ditto. :)

Comment author: khafra 15 December 2014 11:58:25AM 0 points

I just want to know about the actuary from Florida; I didn't think we had any other LW'ers down here.

Comment author: Lumifer 04 December 2014 09:12:05PM 0 points

On your account, is my observation false?

Your observation of the reading on the scale is true, of course. Your observation that the weight is 51 grams is false.

The distinction between accuracy and precision is relevant here. I am assuming your scale is sufficiently precise.

does your judgment change if it's a standard weight that I'm using to calibrate the scale?

No, it does not. I am using "false" in the sense of the map not matching the territory. A miscalibrated scale doesn't help you with that.

Comment author: khafra 05 December 2014 11:42:54AM 2 points

Your observation of the reading on the scale is true, of course. Your observation that the weight is 51 grams is false.

"This weight masses 51 grams" is not an observation, it's a theory attempting to explain an observation. It just seems so immediate, so obvious and inarguable, that it feels like an observation.

Comment author: Nominull 21 November 2014 08:33:37AM 3 points

That seems like a failure of noticing confusion; some clear things are actually false.

Comment author: khafra 04 December 2014 07:44:11PM 1 point

No observation is false. Any explanation for a given observation may, with nonzero probability, be false, no matter how obvious and inarguable it may seem.

Comment author: khafra 02 December 2014 12:33:21PM 0 points

an AI will not have a fanatical goal of taking over the world unless it is programmed to do this.

It is true that an AI could end up going “insane” and trying to take over the world, but the same thing happens with human beings

Are you asserting that all the historic conquerors and emperors who've tried to take over the world were insane? Is it physically impossible for an agent to rationally plan to take over the world, as an intermediate step toward some other, intrinsic goal?

there is no reason that humans and AIs could not work together to make sure this does not happen

If the intelligence difference between the smartest AI and other AIs and humans remains similar to the intelligence difference between an IQ 180 human and an IQ 80 human, Robin Hanson's Malthusian hellworld is our primary worry, not UFAI. A strong singleton taking over the world is only a concern if a strong singleton is possible.

If you program an AI with an explicit or implicit utility function which it tries to maximize...

FTFY.

But if you program an AI without an explicit utility function, just programming it to perform a certain limited number of tasks, it will just do those tasks.

Yes, and then someone else will eventually, accidentally, create an AI which behaves like a utility maximizer, and your AI will be turned into paperclips just like everything else.

Comment author: khafra 30 October 2014 06:29:33PM 2 points

Are there lists of effective charities for specific target domains? For social reasons, I sometimes want to donate to a charity focused on some particular cause; but given that constraint, I'd still like to make my donation as effective as possible.

Comment author: [deleted] 28 October 2014 05:51:18PM -3 points

Who is talking about spamming anyone? You are completely missing my point. The goal is not to help Elon navigate the terrain. I know he can do that. The point is to humbly ask for his advice as to what we could be doing given his track record of good ideas in the past.

Comment author: khafra 29 October 2014 11:55:23AM 7 points

Do not spam high-status people, and do not communicate with high-status people in a transparent attempt to affiliate with them and claim some of their status for yourself.

Comment author: khafra 23 October 2014 01:14:03PM 45 points

I would have given a response for digit ratio if I'd known about the steps to take the measurement before opening the survey, or if it were at the top of the survey, or if I could answer on a separate form after submitting the main survey. I didn't answer because I was afraid that if I took the time to do so, the survey form, or my https connection to it, or something else would time out, and I would lose all the answers I had entered.

Comment author: Skeptityke 04 October 2014 06:38:26PM 1 point

Question for AI people in the crowd: To implement Bayes' Theorem, the prior of something must be known, and the conditional likelihood must be known. I can see how to estimate the prior, but for real-life cases, how could accurate estimates of P(A|X) be obtained?

Also, we talk about world-models a lot here, but what exactly IS a world-model?

Comment author: khafra 10 October 2014 02:43:59PM 0 points

To implement Bayes' Theorem, the prior of something must be known

Not quite the way I'd put it. If you know the exact prior for the unique event you're predicting, you already know the posterior. All you need is a non-pathologically-terrible prior, although better ones will get you to a good prediction with fewer observations.
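A minimal sketch of that claim, using a Beta-Bernoulli coin model of my own choosing (nothing here is from the original comments): agents with different non-pathological priors see the same flips, and their posterior means converge as observations accumulate, the better prior just getting there sooner.

```python
import random

random.seed(0)
TRUE_BIAS = 0.7  # hypothetical coin

# Beta(a, b) priors are conjugate to Bernoulli flips, so updating is just
# counting heads and tails. A prior that puts zero probability on the true
# value would be the "pathologically terrible" case: no amount of evidence
# can move it.
agents = {"vague Beta(1,1)": (1.0, 1.0),
          "wrong Beta(2,20)": (2.0, 20.0),
          "good Beta(7,3)": (7.0, 3.0)}

for n_flips in (10, 100, 1000):
    heads = sum(random.random() < TRUE_BIAS for _ in range(n_flips))
    report = ", ".join(
        f"{name}: {(a + heads) / (a + b + n_flips):.2f}"  # posterior mean
        for name, (a, b) in agents.items())
    print(f"after {n_flips:4d} flips -> {report}")
```

Note that the likelihood P(heads | bias) is handed to us by the model here, which sidesteps Skeptityke's harder question of estimating P(A|X) for messy real-world events.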
