That seems like a failure of noticing confusion; some clear things are actually false.
No observation is false. Any explanation for a given observation may, with finite probability, be false, no matter how obvious and inarguable it may seem.
an AI will not have a fanatical goal of taking over the world unless it is programmed to do this.
It is true that an AI could end up going “insane” and trying to take over the world, but the same thing happens with human beings
Are you asserting that all the historical conquerors and emperors who've taken over the world were insane? Is it physically impossible for an agent to rationally plan to take over the world, as an intermediate step toward some other, intrinsic goal?
there is no reason that humans and AIs could not work together to make sure this does not happen
If the intelligence difference between the smartest AI and other AIs and humans remains similar to the intelligence difference between an IQ 180 human and an IQ 80 human, Robin Hanson's Malthusian hellworld is our primary worry, not UFAI. A strong singleton taking over the world is only a concern if a strong singleton is possible.
If you program an AI with an explicit or implicit utility function which it tries to maximize...
FTFY.
But if you program an AI without an explicit utility function, just programming it to perform a certain limited number of tasks, it will just do those tasks.
Yes, and then someone else will, eventually, accidentally create an AI which behaves like a utility maximizer, and your AI will be turned into paperclips just like everything else.
Are there lists of effective charities for specific target domains? For social reasons, I sometimes want to donate to a charity focused on some particular cause; but given that constraint, I'd still like to make my donation as effective as possible.
Who is talking about spamming anyone? You are completely missing my point. The goal is not to help Elon navigate the terrain. I know he can do that. The point is to humbly ask for his advice as to what we could be doing given his track record of good ideas in the past.
Do not spam high-status people, and do not communicate with high-status people in a transparent attempt to affiliate with them and claim some of their status for yourself.
I would have given a response for digit ratio if I'd known about the steps to take the measurement before opening the survey, or if it were at the top of the survey, or if I could answer on a separate form after submitting the main survey. I didn't answer because I was afraid that if I took the time to do so, the survey form, or my https connection to it, or something else would time out, and I would lose all the answers I had entered.
Question for AI people in the crowd: To implement Bayes' Theorem, the prior of something must be known, and the conditional likelihood must be known. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?
Also, we talk about world-models a lot here, but what exactly IS a world-model?
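On the P(A|X) question: one common approach (not the only one) is to estimate conditional probabilities by counting frequencies in labeled historical data. A minimal sketch, with entirely hypothetical toy data:

```python
# Estimate P(A|X) empirically: among past cases where X held,
# how often did A occur? (Assumes you have labeled data pairs.)
from collections import Counter

# Hypothetical data: (did we observe X?, did A occur?)
data = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]

counts = Counter(data)
# P(A|X) ≈ count(X and A) / count(X)
n_x = counts[(True, True)] + counts[(True, False)]
p_a_given_x = counts[(True, True)] / n_x
print(p_a_given_x)  # 2/3 on this toy data
```

With small samples this raw frequency is a noisy estimate; in practice people often add smoothing (pseudocounts), which is itself equivalent to imposing a prior.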
To implement Bayes' Theorem, the prior of something must be known
Not quite the way I'd put it. If you know the exact prior for the unique event you're predicting, you already know the posterior. All you need is a non-pathologically-terrible prior, although better ones will get you to a good prediction with fewer observations.
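The point about non-terrible priors can be illustrated with a conjugate-prior sketch (assumption: we're estimating a binary event's frequency, Beta-Binomial model):

```python
# Two quite different (but non-pathological) priors converge to nearly
# the same posterior once enough observations accumulate.
def posterior_mean(alpha, beta, successes, failures):
    # Beta(alpha, beta) prior updated on observed successes/failures
    return (alpha + successes) / (alpha + beta + successes + failures)

optimist = (8, 2)   # prior mean 0.8
skeptic = (2, 8)    # prior mean 0.2

# After 1000 observations of a roughly 50/50 process:
m1 = posterior_mean(*optimist, 500, 500)
m2 = posterior_mean(*skeptic, 500, 500)
print(m1, m2)  # both close to 0.5
```

A pathologically terrible prior, by contrast, would be one that assigns zero probability to the truth: no amount of evidence can update away from a zero.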
"They exist but we don't have the tech to detect them"?
That one shows up in fiction every now and then, but if they're galaxy-spanning, there's no particular reason for them to have avoided eating all the stars unless we're completely wrong about the laws of physics. The motivation might not exactly be "hiding," but it'd have to be something along the lines of a nature preserve; and it would require a strong singleton.
Alien-wise, most of the probability-mass not in the "Great Filter" theory is in the "they're all hiding" theory, right? Are there any other big events in the outcome space?
I intuitively feel like the "they're all hiding" theories are weaker and more speculative than the Great Filter theories, perhaps because including agency as a "black box" within a theory is bad, as a rule of thumb.
But, if most of the proposed candidates for the GF look weak, how do the "they're all hiding" candidates stack up? What is there, besides the Planetarium Hypothesis and Simulationism? Are there any that don't require a strong Singleton?
What does this mean?
Don't let a summary of reality distract you from reality, even if it's an accurate summary.
-- Steven Kaas
Your observation of the reading on the scale is true, of course. Your observation that the weight is 51 grams is false.
The distinction between accuracy and precision is relevant here. I am assuming your scale is sufficiently precise.
No, it does not. I am using "false" in the sense of the map not matching the territory. A miscalibrated scale doesn't help you with that.
"This weight masses 51 grams" is not an observation, it's a theory attempting to explain an observation. It just seems so immediate, so obvious and inarguable, that it feels like an observation.