How could "true randomness" be required, given that it's computationally indistinguishable from pseudorandomness?
If there is a feasible pseudorandomness generator that is computationally indistinguishable from true randomness, then true randomness is indeed not necessary. However, the existence of such a pseudorandomness generator is still an open problem.
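To make "indistinguishable" concrete: a weak statistical test can't tell a good PRG apart from OS entropy, even though the PRG is fully deterministic. Here is a toy sketch in Python (the helper `bit_fraction` is mine, and a single bit-frequency test is of course far weaker than what "computational indistinguishability" demands; a real distinguisher would be an arbitrary polynomial-time algorithm):

```python
import os
import random

def bit_fraction(data: bytes) -> float:
    """Fraction of 1-bits in a byte string."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (len(data) * 8)

n = 100_000
# Pseudorandom bytes from a seeded Mersenne Twister (deterministic, not cryptographic).
prg = random.Random(42)
pseudo = bytes(prg.getrandbits(8) for _ in range(n))
# Bytes from the OS entropy pool.
true_rand = os.urandom(n)

# Both streams have a 1-bit frequency very close to 0.5, so this
# particular test cannot distinguish them.
print(abs(bit_fraction(pseudo) - 0.5) < 0.01)
print(abs(bit_fraction(true_rand) - 0.5) < 0.01)
```

Passing this test is necessary but nowhere near sufficient; the open question is whether some generator fools *every* efficient test.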
Wouldn't it still be possible for a constructivist to embrace classical logic and the theoremhood of TND? The constructivist would just have to admit that (A or B) could be true even if neither A nor B is true. (A or B) would still not be meaningless: its truth would imply that there is a proof of neither (not A) nor (not B), so this reinterpretation of "or" doesn't seem like a big deal.
Constructively, (not ((not A) and (not B))) is weaker than (A or B). While you could call the former "A or B", you then have to come up with a new name for the latter.
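The one-way implication in this reply can be checked mechanically. A sketch in Lean 4 (using only its constructive core): (A or B) constructively entails not ((not A) and (not B)), but the converse direction is not provable without a classical axiom such as excluded middle.

```lean
-- Constructively provable: the classical "or" is implied by the real one.
theorem or_implies_not_both_not {A B : Prop} (h : A ∨ B) : ¬(¬A ∧ ¬B) :=
  fun ⟨na, nb⟩ => h.elim na nb

-- The converse, ¬(¬A ∧ ¬B) → A ∨ B, has no constructive proof;
-- it requires Classical.em (or an equivalent axiom).
```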
The Metamorphosis of Prime Intellect. The chapters aren't in chronological order; the bootstrapping and power leveling happen in chapters two and four.
Having no training in probability, and having come upon the present website less than a day ago, I'm hoping someone here will be able to explain to me something basic. Let's assume, as is apparently assumed in this post, a 50-50 boy-girl chance. In other words, the chance is one out of two that a child will be a boy -- or that it will be a girl. A woman says, "I have two children." You respond, "Boys or girls?" She says, "Well, at least one of them is a boy. I haven't yet been informed of the sex of the other, to whom I've just given birth." You're saying that the chance that the newborn is a boy is one out of three, not one out of two? That's what I gather from the present post, near the beginning of which is the following:
In the correct version of this story, the mathematician says "I have two children", and you ask, "Is at least one a boy?", and she answers "Yes". Then the probability is 1/3 that they are both boys.
No. To get the 1/3 probability you have to assume that she would be just as likely to say what she says if she had 1 boy as if she had 2 (and that she wouldn't say it if she had none). In your scenario she's only half as likely to say what she says if she has one boy as if she has two boys, because if she only has one there's a 50% chance it's the one she's just given birth to.
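This distinction is easy to check numerically. A quick Monte Carlo sketch (assuming the post's fair 50-50 sex ratio; variable names are mine): conditioning on "at least one boy" gives about 1/3 for both boys, while conditioning on "the older child is a boy" (the questioner's newborn scenario, where the known boy is specifically the non-newborn) gives about 1/2 for the newborn being a boy.

```python
import random

random.seed(0)
trials = 200_000

at_least_one = [0, 0]   # [families with at least one boy, of those: both boys]
older_is_boy = [0, 0]   # [families whose older child is a boy, of those: newborn is a boy]

for _ in range(trials):
    older = random.choice("BG")
    newborn = random.choice("BG")
    # Case 1: she answers "yes" to "is at least one a boy?"
    if "B" in (older, newborn):
        at_least_one[0] += 1
        if older == "B" and newborn == "B":
            at_least_one[1] += 1
    # Case 2: the known boy is the older child; we ask about the newborn.
    if older == "B":
        older_is_boy[0] += 1
        if newborn == "B":
            older_is_boy[1] += 1

print(at_least_one[1] / at_least_one[0])    # close to 1/3
print(older_is_boy[1] / older_is_boy[0])    # close to 1/2
```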
I don't think that's quite a terminal value.
Then you and I do not share terminal values. I value being "right" because it is "right". I disvalue being "wrong" because it is wrong. These are practically tautological. :)
I say this is vague and sounds true but not important.
Well... there is some significance in judging an arbitrary individual from a given culture's ability to achieve excellence in one quality by examining the availability of the other, especially when attempting to compare the 'greatness' of their achievements. But that has less to do with the hardness of the limits and more to do with the strength of the correlation. (With the caveat that the correlations are only statistical; individuals can and do violate them quite frequently -- a testament to how skilled human beings are at being inherently contradictory.)
The body of knowledge that today comprises cognitive science and behavioral economics is something our predecessors of a century ago did not have. As a result, I should expect (should the information be widely disseminated, with fidelity, over time) to see something equivalent to the Flynn Effect in terms of what Eliezer calls the "sanity waterline". (With people like Cialdini and Ariely newly entering the arena of the 'grand marketplace of ideas', we might see a better result toward that goal than the folks at Snopes have achieved with their individual, piecemeal approach.)
Correspondence of beliefs to reality being desirable is no closer to being a tautology than financial institutes being on the side of rivers, undercover spies digging tunnels in the ground, or spectacles being drinking vessels.
The paper gives what it describes as the “AGI Apocalypse Argument” - which ends with the following steps:
12. For almost any goals that the AGI would have, if those goals are pursued in a way that would yield an overwhelmingly large impact on the world, then this would result in a catastrophe for humans.
13. Therefore, if an AGI with almost any goals is invented, then there will be a catastrophe for humans.
14. If humans will invent an AGI soon, and if [if an AGI with almost any goals is invented, then there will be a catastrophe for humans], then there will be an AGI catastrophe soon.
15. Therefore, there will be an AGI catastrophe soon.
It is hard to tell whether anyone took this seriously - but it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes. The “AGI Apocalypse Argument” as stated thus appears to be rather silly.
If the stated aim was: "to convince my students that all of us are going to be killed by an artificial intelligence" - why start with such a flawed argument?
It is hard to tell whether anyone took this seriously - but it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes. The “AGI Apocalypse Argument” as stated thus appears to be rather silly.
I don't see why this makes the argument seem silly. It seems to me that the isomorphic argument is correct, and that computer programs do crash.
My question is whether he meant to say
- moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame
or
- moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A slower than the speed of light in another reference frame
both of which involve moving faster than light.
He's not talking about impossibility
I know Owen was not talking about impossibility; I brought up impossibility to show that what you thought Owen meant could not be true.
both of which involve moving faster than light.
Moving from B to A slower than the speed of light does not involve moving faster than light.
Second 'faster' should be 'slower', I think.
It shouldn't. Moving from B to A slower than light is possible*, moving from A to B faster than light isn't, and you can't change whether something is possible by changing reference frames.
*(Under special relativity without tachyons)
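The claim in this exchange can be checked with a concrete Lorentz boost (a minimal sketch in units where c = 1; the specific events and boost velocity are my own choice of example): an A-to-B trip at coordinate speed 2 becomes, in a frame moving at v = 0.75, a B-to-A trip whose coordinate speed is still greater than 1. So the first reading is the consistent one: faster than light in one frame stays faster than light in the other.

```python
import math

def boost(t, x, v):
    """Lorentz-boost event (t, x) into a frame moving at velocity v (units c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

# Event A at the origin; event B reached at coordinate speed 2 (> c).
tA, xA = 0.0, 0.0
tB, xB = 1.0, 2.0

# View both events from a frame moving at v = 0.75.
tA2, xA2 = boost(tA, xA, 0.75)
tB2, xB2 = boost(tB, xB, 0.75)

# In the new frame, B happens before A, so the trip runs B -> A...
print(tB2 < tA2)
# ...and its coordinate speed is still faster than light.
print(abs(xA2 - xB2) / abs(tA2 - tB2) > 1.0)
```

Time ordering flips because the interval between A and B is spacelike; a subluminal (timelike) trip can never have its ordering reversed this way.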
I am bothered by the fact that the reasoning that leads to PrudentBot seems to contradict the reasoning of decision theory. Specifically, the most basic and obvious fact of behavior in these competitive games is: if you can prove that the opponent cooperates if and only if you do, then you should cooperate. But this reasoning gives the wrong answer vs. CooperateBot, for Lobian reasons. Is there an explanation for this gap?
It's true that if you can prove that your opponent will cooperate counterfactual-if you cooperate and defect counterfactual-if you defect, then you should cooperate. But we don't yet have a good formalization of logical counterfactuals, and the reasoning that cooperates with CooperateBot just uses material-if instead of counterfactual-if.