Comment author: Will_Sawin 10 June 2013 04:24:33AM 2 points [-]

I am bothered by the fact that the reasoning that leads to PrudentBot seems to contradict the reasoning of decision theory. Specifically, the most basic and obvious fact of behavior in these competitive games is: if you can prove that your opponent cooperates if and only if you do, then you should cooperate. But this reasoning gives the wrong answer against CooperateBot, for Löbian reasons. Is there an explanation for this gap?

Comment author: shinoteki 11 June 2013 10:24:31AM 2 points [-]

It's true that if you can prove that your opponent will cooperate counterfactual-if you cooperate and defect counterfactual-if you defect, then you should cooperate. But we don't yet have a good formalization of logical counterfactuals, and the reasoning that cooperates with CooperateBot uses material-if rather than counterfactual-if.

Comment author: jsteinhardt 29 May 2013 05:27:07PM 3 points [-]

How could "true randomness" be required, given that it's computationally indistinguishable from pseudorandomness?

Comment author: shinoteki 29 May 2013 06:11:51PM 2 points [-]

If there is a feasible pseudorandomness generator that is computationally indistinguishable from true randomness, then true randomness is indeed not necessary. However, the existence of such a pseudorandomness generator is still an open problem.

Comment author: prase 28 February 2013 07:21:02PM 0 points [-]

Wouldn't it still be possible for a constructivist to embrace classical logic and the theoremhood of TND? The constructivist would just have to admit that (A or B) could be true even if neither A nor B is true. (A or B) would still not be meaningless: its truth would imply that there is a proof of neither (not A) nor (not B), so this reinterpretation of "or" doesn't seem to be a big deal.

Comment author: shinoteki 28 February 2013 07:47:06PM 0 points [-]

Constructively, (not ((not A) and (not B))) is weaker than (A or B). While you could call the former "A or B", you then have to come up with a new name for the latter.
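The asymmetry between the two connectives can be checked mechanically. Here is a minimal Lean 4 sketch (theorem names are mine): the implication from (A or B) to (not ((not A) and (not B))) goes through constructively, while the converse direction needs a classical principle such as proof by contradiction.

```lean
-- Constructively provable: (A ∨ B) → ¬(¬A ∧ ¬B).
theorem or_to_weak_or (A B : Prop) : A ∨ B → ¬(¬A ∧ ¬B) :=
  fun h hn => h.elim hn.1 hn.2

-- The converse is only provable with a classical axiom,
-- here Classical.byContradiction:
theorem weak_or_to_or (A B : Prop) : ¬(¬A ∧ ¬B) → A ∨ B :=
  fun h => Classical.byContradiction fun hn =>
    h ⟨fun a => hn (Or.inl a), fun b => hn (Or.inr b)⟩
```

Deleting the `Classical.byContradiction` step from the second proof leaves a goal that cannot be closed constructively, which is exactly the gap between the two readings of "or".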

In response to Singularity Fiction
Comment author: shinoteki 27 February 2013 11:30:30PM 3 points [-]

The Metamorphosis of Prime Intellect. The chapters aren't in chronological order; the bootstrapping and power leveling happen in chapters two and four.

Comment author: JohnBonaccorsi 02 December 2012 06:23:31PM 0 points [-]

Having no training in probability, and having come upon the present website less than a day ago, I'm hoping someone here will be able to explain to me something basic. Let's assume, as is apparently assumed in this post, a 50-50 boy-girl chance. In other words, the chance is one out of two that a child will be a boy -- or that it will be a girl. A woman says, "I have two children." You respond, "Boys or girls?" She says, "Well, at least one of them is a boy. I haven't yet been informed of the sex of the other, to whom I've just given birth." You're saying that the chance that the newborn is a boy is one out of three, not one out of two? That's what I gather from the present post, near the beginning of which is the following:

In the correct version of this story, the mathematician says "I have two children", and you ask, "Is at least one a boy?", and she answers "Yes". Then the probability is 1/3 that they are both boys.

Comment author: shinoteki 02 December 2012 08:50:51PM 1 point [-]

No. To get the 1/3 probability you have to assume that she would be just as likely to say what she says if she had 1 boy as if she had 2 (and that she wouldn't say it if she had none). In your scenario she's only half as likely to say what she says if she has one boy as if she has two boys, because if she only has one there's a 50% chance it's the one she's just given birth to.
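The difference between the two announcement mechanisms can be checked with a quick simulation. This is a sketch with made-up variable names, assuming independent 50-50 births: in the first scenario the mother says "at least one is a boy" whenever it is true of either child; in the second (the grandparent's story) she only knows the sex of one specific child, so the statement means that particular child is a boy.

```python
import random

random.seed(0)
N = 100_000

# Each family: two children, each independently 'B' or 'G'.
families = [[random.choice("BG") for _ in range(2)] for _ in range(N)]

# Scenario 1: she says "at least one is a boy" whenever either child is a boy.
said_1 = [f for f in families if "B" in f]
p_both_1 = sum(f == ["B", "B"] for f in said_1) / len(said_1)

# Scenario 2: she only knows child #1's sex, so the statement
# means child #1 in particular is a boy.
said_2 = [f for f in families if f[0] == "B"]
p_both_2 = sum(f == ["B", "B"] for f in said_2) / len(said_2)

print(p_both_1)  # close to 1/3
print(p_both_2)  # close to 1/2
```

The two conditioning events pick out different sets of families, which is why the same words yield 1/3 in one story and 1/2 in the other.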

Comment author: shinoteki 10 November 2012 03:49:20PM 15 points [-]

I took it.

Comment author: Logos01 24 November 2011 07:40:41AM -1 points [-]

I don't think that's quite a terminal value.

Then you and I do not share terminal values. I value being "right" because it is "right". I disvalue being "wrong" because it is wrong. These are practically tautological. :)

I say this is vague and sounds true but not important.

Well... there is significance in judging the ability of an arbitrary individual from a given culture to achieve excellence in either quality by examining the availability of the other, especially when attempting to compare the 'greatness' of their achievements. But that has less to do with the hardness of the limits and more to do with the strength of correlation. (With the caveat that the correlations are only statistical; individuals can and do violate those correlations quite frequently -- a testament to how skilled human beings are at being inherently contradictory.)

The body of knowledge that today comprises cognitive science and behavioral economics is something our predecessors of a century ago did not have. I should expect, as a result of this -- should the information be widely disseminated (with fidelity) over time, to see something equivalent to the Flynn Effect in terms of what Eliezer calls the "sanity waterline". (With people like Cialdini and Ariely newly entering into the arena of the 'grand marketplace of ideas', we might see a superior result to that goal than the folks at Snopes have achieved with their individual/piecemeal approach.)

Comment author: shinoteki 26 November 2011 12:23:39PM 0 points [-]

Correspondence of beliefs to reality being desirable is no closer to being a tautology than financial institutions ("banks") lying beside rivers, undercover spies ("moles") digging tunnels in the ground, or spectacles ("glasses") being drinking vessels.

Comment author: timtyler 18 October 2011 01:45:48PM *  5 points [-]

The paper gives what it describes as the “AGI Apocalypse Argument” - which ends with the following steps:

12. For almost any goals that the AGI would have, if those goals are pursued in a way that would yield an overwhelmingly large impact on the world, then this would result in a catastrophe for humans.

13. Therefore, if an AGI with almost any goals is invented, then there will be a catastrophe for humans.

14. If humans will invent an AGI soon, and if (an AGI with almost any goals being invented implies a catastrophe for humans), then there will be an AGI catastrophe soon.

15. Therefore, there will be an AGI catastrophe soon.

It is hard to tell whether anyone took this seriously - but it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes. The “AGI Apocalypse Argument” as stated thus appears to be rather silly.

If the stated aim was: "to convince my students that all of us are going to be killed by an artificial intelligence" - why start with such a flawed argument?

Comment author: shinoteki 18 October 2011 04:35:56PM *  3 points [-]

It is hard to tell whether anyone took this seriously - but it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes. The “AGI Apocalypse Argument” as stated thus appears to be rather silly.

I don't see why this makes the argument seem silly. It seems to me that the isomorphic argument is correct, and that computer programs do crash.

Comment author: PhilGoetz 24 September 2011 02:33:14PM *  0 points [-]

My question is whether he meant to say

  • moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame

or

  • moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A slower than the speed of light in another reference frame

both of which involve moving faster than light.

Comment author: shinoteki 24 September 2011 02:53:28PM *  0 points [-]

He's not talking about impossibility

I know Owen was not talking about impossibility; I brought up impossibility to show that what you thought Owen meant could not be true.

both of which involve moving faster than light.

Moving from B to A slower than the speed of light does not involve moving faster than light.

Comment author: PhilGoetz 23 September 2011 10:07:49PM 0 points [-]

Second 'faster' should be 'slower', I think.

Comment author: shinoteki 23 September 2011 10:37:32PM 1 point [-]

It shouldn't. Moving from B to A slower than light is possible*, moving from A to B faster than light isn't, and you can't change whether something is possible by changing reference frames.

*(Under special relativity without tachyons)
