You could write crosswords for the NYT... They pay $200 per crossword XD
As an aspiring scientist, I hold the Truth above all.
That will change!
More seriously though...
As one can see, the biggest problem is determining burden of proof. Statistically speaking, this is much like the problem of defining the null hypothesis.
Well, not really. The null and alternative hypotheses in frequentist statistics are defined in terms of their model complexity, not our prior beliefs (that would be Bayesian!). Specifically, the null hypothesis represents the model with fewer free parameters.
You might still face some sort of statistical disagreement with the theist, but it would have to be a disagreement over which hypothesis is more/less parsimonious--which is really a rather different argument than what you've outlined (and IMO, one that the theist would have a hard time defending).
I'm not saying that frequentist statistical belief logic actually goes like that above. What I'm saying is that this is how many people tend to misinterpret such statistics, defining their own null hypothesis in the way I outlined in the post.
As I've said before, the MOST common problem is not the actual statistics, but how the ignorant interpret those statistics. I am merely saying that I would prefer Bayesian statistics to be taught, because it is much harder to botch up and read our own interpretation into it. (For one, it is ruled by a relatively easy formula.)
Also, isn't model complexity quite hard to determine for the statements "God exists" and "God does not exist"? Isn't complexity in this sense subject to easy bias?
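To illustrate the "relatively easy formula" mentioned above, here is a minimal sketch of a single Bayesian update. The function name and every number are my own invention, purely for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H)P(H) + P(E|not H)P(not H).
# All numbers below are made up for illustration only.

def bayes_update(prior, likelihood, likelihood_given_not_h):
    """Return the posterior P(H|E) given a prior P(H),
    a likelihood P(E|H), and P(E|not H)."""
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# Example: a weak prior, with evidence twice as likely under H as under not-H.
posterior = bayes_update(prior=0.01, likelihood=0.10, likelihood_given_not_h=0.05)
print(round(posterior, 4))  # 0.0198
```

The point is that there is nowhere in this calculation to hide a smuggled-in null hypothesis: the prior and both likelihoods must be stated explicitly before any conclusion comes out.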
Michael:
Underestimating the significance of superintelligence. People have a delusion that humanity is some theoretically optimum plateau of intelligence (due to brainwashing from Judeo-Christian theological ideas, which also permeate so-called “secular humanism”), which is the opposite of the truth. We’re actually among the stupidest possible species smart enough to launch a civilization.
This doesn't seem to be a part of standard Christian or Jewish theology, so blaming that attitude on it seems misguided. His last sentence is also problematic: how does he know that with a sample size of one?
Michael:
Would you rather your AI be based on Hitler or Gandhi?
Seems to understate the case. Mindspace is large. The problem isn't an AI that acts like Hitler. That's not such a bad failure as things go. The worst case scenario more resembles Cthulhu than Hitler.
Anissimov correctly calls out Goertzel on his claim that he's sure he could design a properly functioning nanny AI.
Goertzel does correctly point out that it seems likely that a nanny AI would take less understanding than a full Friendly AI with stable goals under self-modification.
"This doesn't seem to be a part of standard Christian or Jewish theology,"
~Actually, even if there is no outright statement in the Bible, through the years it has been commonly accepted that human supremacy is stated in Genesis 1:26 - "Then God said, 'Let us make mankind in our image'". Man is created in God's image, making him superior to all. Also in Gen. 3:22 - "The man has now become like one of us, knowing good and evil". Man is like a God; the only difference is that humans are not immortal.
Not necessarily my opinion, just what I believe the theology says, and what I have heard from theist friends that the theology says.
I've tried to explain this when arguing with theists, and it sometimes creates the following unintentional side effect:
Me: /explains Bayesian framework
Theist: Ah ha! So if we use a simple, uniform prior, then we start out with a 50% chance that God exists!
The problem, of course, is that the theist forgot (or doesn't understand) the distinction between "(the Judeo-Christian) God exists" and "at least one deity exists." It's really important to stress that the search space of possible gods is huge, otherwise you will create even more confusion.
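The dilution effect of that huge search space can be made concrete with a toy calculation. This is my own illustration, and the hypothesis count is an arbitrary number, not an estimate of anything:

```python
# If a "uniform prior" is spread over mutually exclusive god-hypotheses
# (plus the "no deity" hypothesis), the prior on any *specific* deity
# shrinks as the hypothesis space grows.

def prior_for_specific_god(num_god_hypotheses):
    # One extra hypothesis is "no deity exists"; the rest are distinct deities.
    return 1.0 / (num_god_hypotheses + 1)

print(prior_for_specific_god(1))        # 0.5  -- the theist's framing
print(prior_for_specific_god(999_999))  # 1e-06 -- with a large search space
```

The theist's "50% chance" only falls out if you treat "(the Judeo-Christian) God exists" and "no god exists" as the entire hypothesis space; once mutually exclusive alternatives are enumerated, the uniform prior on any one specific deity collapses.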
Overall, though, I definitely agree with the main point of this post. Upvoted.
Quite honestly, I think a bigger problem is theists assuming that P(E|D) = 100% - that, given one or more deities exist, the world would automatically turn out like this. I would actually argue the opposite: that the probability is very low.
Even assuming an omniscient, omnipotent, omnibenevolent God, he could still, I argue at least, have made the choice to remove our free will "for our own good". Even if P(E|D) is high, it is in no way close to 100%.
Furthermore, you can never assume a 100% probability!!! (http://yudkowsky.net/rational/technical). You could go to rationalist hell for that!
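To show how much the conclusion depends on the assumed P(E|D), here is a small sketch of the posterior P(D|E) under Bayes' rule. The function name and all the probabilities are my own illustrative choices:

```python
# Posterior P(D|E) via Bayes' rule; all numbers are purely illustrative.
def posterior_deity(p_d, p_e_given_d, p_e_given_not_d):
    p_e = p_e_given_d * p_d + p_e_given_not_d * (1 - p_d)
    return p_e_given_d * p_d / p_e

# Assuming P(E|D) = 1 (the mistake described above):
print(round(posterior_deity(0.5, 1.0, 0.5), 3))  # 0.667
# With a lower P(E|D), the same evidence supports D far less:
print(round(posterior_deity(0.5, 0.1, 0.5), 3))  # 0.167
```

Holding everything else fixed, dropping P(E|D) from 1 to 0.1 turns evidence *for* a deity into evidence *against* one, which is exactly why the 100% assumption matters.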
Wife, child, family, friends, business.
That is sad - I know a friend in rural Nebraska who is in a similar predicament (college), and he says that if it weren't for LW, he might have concluded that people were just un-awesome.
It is sad that demographics limit potential awesome-seekers. That is another reason why I admire Eliezer so much for making this online community.
Lukeprog's explanation about the ancestral setting makes sense, but I still believe that the modern capacity for spreading information is great. Take a modern college setting: Person A asks B out and gets rejected; she gossips to all her friends, it goes all around the college, reducing the number of further possible dates.
I am not trying to say that said fear is rational, because the possibility that she is that much of a gossip is relatively low. I am merely saying that when huge negative utilities are in consideration, it should not be taken lightly. When there is even a 0.1% chance of death, (rational) people refuse to attempt the activity. Similarly, if getting shut down will ruin your future chances - and we have been conditioned in a school setting for most of our lives, where it can affect future chances - we develop an instinctive hesitation to making the first step.
Part of the great trouble of being a rationalist is the great, great trouble of finding like-minded people. I am thrilled at the news of such successful meetups taking place - the reason rationalists don't have the impact they should is poor organization.
On the other hand, I really like what Eliezer says about courage. It is one thing to preach and repeat meaningless words about being courageous and facing the Truth, but if we are too afraid to look like a fool in society - who says we won't be too afraid to speak the Truth in the scientific community?
Dust specks – I completely disagree with Eliezer's argument here. The hole in Yudkowsky's logic, I believe, is not only the curved utility function, but also the main fact that discomfort cannot be added like numbers. The dust speck incident is momentary. You barely notice it, you blink, it's gone, and you forget about it for the rest of your life. Torture, on the other hand, leaves lasting emotional damage on the human psyche. Furthermore, discomfort is different from pain. If, for example, the hypothetical replaced the torture with 10000 people getting a non-painful itch for the rest of their lives, I would agree with Eliezer's theory. But pain, I believe (and this is where my logic might be weak), is different from discomfort, and Eliezer treats pain as just an extreme discomfort.

Another argument would be the instantaneous utilitarian framework. Let us now accept the assumption that pain is merely extreme discomfort. Eliezer's framework is that the total "discomfort" in Scenario 1 (the torture) is less than that in Scenario 2 (the dust specks). If you simply add up the discomfort-points, then maybe such a conclusion would be reached. But now consider an arbitrary time t_i during that 50-year period, more than, say, 2 minutes from the start. The momentary discomfort in Scenario 1 is some P(t_i) > 0; in Scenario 2 it is 0. This holds until the end of the 50-year period:

Scenario 1: Discomfort(t) = P(t)
Scenario 2: Discomfort(t) = 0

Integrating both functions with respect to t, the total discomfort in Scenario 1 is INTEG(P(t)dt), while the total discomfort in Scenario 2 is 0. Put in terms a non-mathematician can follow: the pain of the torture is experienced continuously; the pain of the dust is momentary.
One can raise the 0 * infinity counterargument – that the tiny quantity produced by integrating a single dust speck can be outweighed by the huge 3^^^3 multiplier. However, this is answered by my earlier thesis that pain is different from discomfort. I could invoke the Kantian societal "categorical imperative" as my third piece of logic, but everyone else has already mentioned it. If I have made any error in judgment, please let me know.
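The integral argument above can be sketched numerically. This is a toy model only: the intensity values are arbitrary numbers I invented, not claims about actual utilities, and it uses a constant intensity so the integral reduces to a product:

```python
# Toy version of the integral argument: total discomfort as the
# time-integral of momentary discomfort intensity.

def total_discomfort(intensity, duration_seconds):
    # Constant intensity for simplicity, so the integral is just a product.
    return intensity * duration_seconds

FIFTY_YEARS = 50 * 365.25 * 24 * 3600   # seconds of continuous torture
BLINK = 0.1                             # seconds a dust speck is noticed

torture = total_discomfort(intensity=1000.0, duration_seconds=FIFTY_YEARS)
one_speck = total_discomfort(intensity=0.001, duration_seconds=BLINK)

# Under naive addition, a 3^^^3 multiplier on one_speck would still dwarf
# the torture; the argument above must therefore deny that "pain" and
# "discomfort" live on the same additive scale at all.
print(torture > one_speck)  # True
```

The code makes the shape of the disagreement explicit: if everything is on one additive scale, the 3^^^3 multiplier wins; the commenter's position is that the two quantities are not commensurable in the first place.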
On Less Wrong, I found thoroughness. Society today advocates speed over effectiveness - 12-year-old college students over soundly rational adults, people who can Laplace-transform diff-eqs in their heads over people who can solve logical paradoxes. On Less Wrong, I found people who could detach themselves from emotions and appearances, and look at things with an iron rationality.
I am sick of people who presume to know more than they do. Those that "seem" smart rather than actually being smart.
People on Less Wrong do not seem to be something they are not. ~"Seems, madam! Nay, it is; I know not 'seems.'" (Hamlet)
Conditional probabilities are allowed to be 100%, because they are probability ratios. In particular, P(A|A) is 100% by definition.
But P(E|D) is not 100% by any definition. A conditional probability is only 100% if D --> E. And if that were true, why does this argument exist?
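This can be checked mechanically on a toy joint distribution. The four world-state probabilities below are invented for illustration; the only point is that P(E|D) = P(E and D) / P(D), which reaches 100% only when D guarantees E:

```python
# P(E|D) = P(E and D) / P(D); it equals 1 only when D guarantees E.
from fractions import Fraction

joint = {                              # invented joint distribution
    ("D", "E"): Fraction(1, 4),
    ("D", "not E"): Fraction(1, 4),    # a deity exists, but the world differs
    ("not D", "E"): Fraction(1, 4),
    ("not D", "not E"): Fraction(1, 4),
}

p_d = sum(p for (d, e), p in joint.items() if d == "D")
p_d_and_e = joint[("D", "E")]
print(p_d_and_e / p_d)  # 1/2 -- well short of 100%
```

As soon as any probability mass sits on ("D", "not E") worlds, P(E|D) drops below 1, which is exactly the case the comment above describes.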