
Comment author: palladias 14 December 2014 02:56:16PM 9 points [-]

an atheist turned Catholic

Ditto. :)

Comment author: khafra 15 December 2014 11:58:25AM 0 points [-]

I just want to know about the actuary from Florida; I didn't think we had any other LW'ers down here.

Comment author: Lumifer 04 December 2014 09:12:05PM 0 points [-]

On your account, is my observation false?

Your observation of the reading on the scale is true, of course. Your observation that the weight is 51 grams is false.

The distinction between accuracy and precision is relevant here: a miscalibrated scale can give the same wrong reading every time, which makes it precise but not accurate. I am assuming your scale is sufficiently precise.

does your judgment change if it's a standard weight that I'm using to calibrate the scale?

No, it does not. I am using "false" in the sense of the map not matching the territory. A miscalibrated scale doesn't help you with that.

Comment author: khafra 05 December 2014 11:42:54AM 2 points [-]

Your observation of the reading on the scale is true, of course. Your observation that the weight is 51 grams is false.

"This weight masses 51 grams" is not an observation, it's a theory attempting to explain an observation. It just seems so immediate, so obvious and inarguable, that it feels like an observation.

Comment author: Nominull 21 November 2014 08:33:37AM 3 points [-]

That seems like a failure of noticing confusion; some clear things are actually false.

Comment author: khafra 04 December 2014 07:44:11PM 1 point [-]

No observation is false. Any explanation for a given observation may, with finite probability, be false, no matter how obvious and inarguable it may seem.

Comment author: khafra 02 December 2014 12:33:21PM 0 points [-]

an AI will not have a fanatical goal of taking over the world unless it is programmed to do this.

It is true that an AI could end up going “insane” and trying to take over the world, but the same thing happens with human beings

Are you asserting that all the historic conquerors and emperors who've taken over the world were insane? Is it physically impossible for an agent to rationally plan to take over the world, as an intermediate step toward some other, intrinsic goal?

there is no reason that humans and AIs could not work together to make sure this does not happen

If the intelligence difference between the smartest AI and the rest of the AIs and humans remains similar to the difference between an IQ 180 human and an IQ 80 human, Robin Hanson's Malthusian hellworld is our primary worry, not UFAI. A strong singleton taking over the world is only a concern if a strong singleton is possible.

If you program an AI with an explicit or implicit utility function which it tries to maximize...

FTFY.

But if you program an AI without an explicit utility function, just programming it to perform a certain limited number of tasks, it will just do those tasks.

Yes, and then someone else will, eventually, accidentally create an AI which behaves like a utility maximizer, and your AI will be turned into paperclips just like everything else.

Comment author: khafra 30 October 2014 06:29:33PM 2 points [-]

Are there lists of effective charities for specific target domains? For social reasons, I sometimes want to donate to a charity focused on some particular cause; but given that constraint, I'd still like to make my donation as effective as possible.

Comment author: [deleted] 28 October 2014 05:51:18PM -3 points [-]

Who is talking about spamming anyone? You are completely missing my point. The goal is not to help Elon navigate the terrain. I know he can do that. The point is to humbly ask for his advice as to what we could be doing given his track record of good ideas in the past.

Comment author: khafra 29 October 2014 11:55:23AM 7 points [-]

Do not spam high-status people, and do not communicate with high-status people in a transparent attempt to affiliate with them and claim some of their status for yourself.

Comment author: khafra 23 October 2014 01:14:03PM 44 points [-]

I would have given a response for digit ratio if I'd known about the steps to take the measurement before opening the survey, or if it were at the top of the survey, or if I could answer on a separate form after submitting the main survey. I didn't answer because I was afraid that if I took the time to measure, the survey form, my https connection to it, or something else would time out and I would lose all the answers I had entered.

Comment author: Skeptityke 04 October 2014 06:38:26PM 1 point [-]

Question for AI people in the crowd: To implement Bayes' Theorem, the prior of something must be known, and the conditional likelihood must be known. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?

Also, we talk about world-models a lot here, but what exactly IS a world-model?

Comment author: khafra 10 October 2014 02:43:59PM 0 points [-]

To implement Bayes' Theorem, the prior of something must be known

Not quite the way I'd put it. If you know the exact prior for the unique event you're predicting, you already know the posterior. All you need is a non-pathologically-terrible prior, although better ones will get you to a good prediction with fewer observations.
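
To make that concrete, here's a rough sketch (my own illustration with made-up numbers, not anything from the thread): a beta-binomial update on simulated coin flips, comparing a vague prior against an informed one. Both posteriors end up near the true bias; the informed prior just gets there with fewer observations.

    # Minimal sketch: two different Beta priors updated on the same data.
    # The true bias of 0.7 and the prior parameters are illustrative assumptions.
    import random

    random.seed(0)
    true_bias = 0.7  # hidden parameter we are trying to estimate

    # Beta(alpha, beta) priors: one vague, one already close to the truth.
    priors = {"vague prior Beta(1, 1)": (1.0, 1.0),
              "informed prior Beta(7, 3)": (7.0, 3.0)}

    observations = [1 if random.random() < true_bias else 0 for _ in range(100)]

    for label, (a, b) in priors.items():
        for n, x in enumerate(observations, start=1):
            a += x       # heads count
            b += 1 - x   # tails count
            if n in (5, 20, 100):
                print(f"{label}, after {n} obs: posterior mean = {a / (a + b):.3f}")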

Comment author: bramflakes 03 October 2014 10:06:28PM 2 points [-]

"They exist but we don't have the tech to detect them"?

Comment author: khafra 04 October 2014 01:03:12AM 2 points [-]

That one shows up in fiction every now and then, but if they're galaxy-spanning, there's no particular reason for them to have avoided eating all the stars unless we're completely wrong about the laws of physics. The motivation might not exactly be "hiding," but it'd have to be something along the lines of a nature preserve, and would require a strong singleton.

Comment author: khafra 03 October 2014 05:17:38PM 3 points [-]

Alien-wise, most of the probability-mass not in the "Great Filter" theory is in the "they're all hiding" theory, right? Are there any other big events in the outcome space?

I intuitively feel like the "they're all hiding" theories are weaker and more speculative than the Great Filter theories, perhaps because including agency as a "black box" within a theory is bad, as a rule of thumb.

But, if most of the proposed candidates for the GF look weak, how do the "they're all hiding" candidates stack up? What is there, besides the Planetarium Hypothesis and Simulationism? Are there any that don't require a strong singleton?
