Comment author: [deleted] 25 March 2015 08:20:27PM *  0 points [-]

Or the benefits could slightly outweigh the harm.

You have to treat this option as a net win of 0 then, because you have no more information to go on, so the probabilities are 50/50. Option A: torture. Net win is negative. Option B: dust specks. Net win is zero. Make your choice.

In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: Quill_McGee 25 March 2015 09:43:26PM 4 points [-]

In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...)

Comment author: V_V 25 March 2015 06:46:46PM *  1 point [-]

I don't think he was talking about self-PA, but rather an altered decision criterion, such that rather than "if I can prove this is good, do it" it is "if I can prove that if I am consistent then this is good, do it"

Yes.

and it still can't /increase/ in proof strength.

Mmm, I think I can see it.
What about "if I can prove that if a version of me with unbounded computational resources is consistent then this is good, do it"? (*) It seems to me that this allows an increase in proof strength up to the proof strength of that particular ideal reference agent.

(* There should probably be additional constraints specifying that the current agent, and the successor if present, must provably approximate the unbounded agent in some conservative way.)

Comment author: Quill_McGee 25 March 2015 09:42:00PM 1 point [-]

"if I can prove that if a version of me with unbounded computational resources is consistent then this is good, do it"

In this formalism we generally assume infinite resources anyway. And even if that were not the case, consistency doesn't depend on resources, only on the axioms and the rules of deduction. So this still doesn't let you increase in proof strength, although, again, it should help avoid losing it.

Comment author: orthonormal 25 March 2015 06:24:50PM 1 point [-]

Good question! Translating your question to the setting of the logical model, you're suggesting that instead of using provability in Peano Arithmetic as the criterion for justified action, or provability in PA + Con(PA) (which would have the same difficulty), the agent uses provability under the assumption that its current formal system (which includes PA) is consistent.

Unfortunately, this turns out to be an inconsistent formal system!

Thus, you definitely do not want an agent that makes decisions on the criterion "if I assume that my own deductions are reliable, then can I show that this is the best action?", at least not until you've come up with a heuristic version of this that doesn't lead to awful self-fulfilling prophecies.

Comment author: Quill_McGee 25 March 2015 06:31:24PM 2 points [-]

I don't think he was talking about self-PA, but rather an altered decision criterion, such that rather than "if I can prove this is good, do it" it is "if I can prove that if I am consistent then this is good, do it", which I think doesn't have this particular problem, though it does have others, and it still can't /increase/ in proof strength.

Comment author: christopherj 08 April 2014 04:53:44AM *  1 point [-]

I'm having trouble understanding how something generally intelligent in every respect, except for failing to understand death or that it has a physical body, could be incapable of ever learning this, or at least of acting indistinguishably from one that does know.

For example, how would AIXI act if given the following as part of its utility function: 1) the utility function gets multiplied by zero should a certain computer cease to function; 2) the utility function gets multiplied by zero should certain bits be overwritten, except if a sanity check is passed first?

Seems to me that such an AI would act as if it had a genocidally dangerous fear of death, even if it doesn't actually understand the concept.
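A toy sketch of the two proposed clauses (all names here — `guarded_utility`, `computer_alive`, `sanity_check_passed` — are invented for illustration; real AIXI variants are defined over reward channels, not Python functions):

```python
def guarded_utility(base_utility, computer_alive, bits_intact, sanity_check_passed):
    """Toy version of the two clauses: utility is zeroed out if the host
    computer stops functioning, or if protected bits are overwritten
    without the sanity check having passed first."""
    if not computer_alive:
        return 0.0  # clause 1: the computer ceased to function
    if not bits_intact and not sanity_check_passed:
        return 0.0  # clause 2: bits overwritten, no sanity check
    return base_utility

# Any action that risks triggering either clause costs the agent
# everything it values, so it behaves as if afraid of "death" without
# needing the concept.
print(guarded_utility(10.0, True, True, False))   # 10.0
print(guarded_utility(10.0, False, True, False))  # 0.0
print(guarded_utility(10.0, True, False, True))   # 10.0
```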

Comment author: Quill_McGee 25 March 2015 01:00:51AM 0 points [-]

That AI doesn't drop an anvil on its head (I think...), but it also doesn't self-improve.

Comment author: [deleted] 24 March 2015 09:49:33AM 0 points [-]

Failing to halt and going into an infinite loop are not the same thing.

I'd appreciate some explanation on that, to see if you're saying something I haven't heard before or if we're talking past each other. I don't just include while(true); under "infinite loop", I also include infinitely-expanding recursions that cannot be characterized as coinductive stepwise computations. Basically, anything that would evaluate to the ⊥ type in type theory counts as an "infinite loop" here: just plain computational divergence.

In response to comment by [deleted] on Second-Order Logic: The Controversy
Comment author: Quill_McGee 24 March 2015 05:13:03PM *  0 points [-]

I think that what Joshua meant by 'infinite loop' is 'passing through the same state an infinite number of times.' That is, a /loop/, rather than just a line with no endpoint. Although this would rule out (some arbitrary-size int type) x = 0; while(true){ x++; } on a machine with infinite memory, as it never passes through the same state twice. So maybe I'm still misunderstanding.
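The distinction can be made concrete with a small sketch (`revisits_state` is an invented helper, not from the thread): with unbounded integers the incrementing program diverges without ever revisiting a state, while a finite-memory counter both fails to halt and genuinely loops.

```python
def revisits_state(step, x0, steps):
    """Run a state machine for a bounded number of steps and report
    whether any state recurs."""
    seen, x = set(), x0
    for _ in range(steps):
        if x in seen:
            return True   # a /loop/: the same state came around again
        seen.add(x)
        x = step(x)
    return False          # no repetition observed: a line, not a loop

# Unbounded counter: never halts, but never repeats a state.
print(revisits_state(lambda x: x + 1, 0, 10_000))          # False
# 8-bit counter: never halts AND cycles through the same states.
print(revisits_state(lambda x: (x + 1) % 256, 0, 10_000))  # True
```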

Comment author: Kutta 24 March 2015 05:02:07PM *  7 points [-]

Suppose "mathematics would never prove a contradiction". We can write this out as ¬Provable(⊥). This is logically equivalent to Provable(⊥) → ⊥, and it also implies Provable(Provable(⊥) → ⊥) by the rules of provability. But Löb's theorem states ∀ A (Provable(Provable(A) → A) → Provable(A)), which we can instantiate to Provable(Provable(⊥) → ⊥) → Provable(⊥), and now we can apply modus ponens with our assumption to get Provable(⊥).
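The same argument, written out as numbered steps (a restatement of the comment's reasoning, assuming the system in question can prove its own assumption):

```latex
\begin{align*}
&(1)\quad \neg\mathrm{Provable}(\bot) && \text{assumption: no contradiction is provable} \\
&(2)\quad \mathrm{Provable}(\bot) \to \bot && \text{logically equivalent to (1)} \\
&(3)\quad \mathrm{Provable}\big(\mathrm{Provable}(\bot) \to \bot\big) && \text{(2) is itself provable, by the rules of provability} \\
&(4)\quad \mathrm{Provable}\big(\mathrm{Provable}(\bot) \to \bot\big) \to \mathrm{Provable}(\bot) && \text{L\"ob's theorem at } A = \bot \\
&(5)\quad \mathrm{Provable}(\bot) && \text{modus ponens on (3), (4)}
\end{align*}
```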

Comment author: Quill_McGee 24 March 2015 05:11:04PM 5 points [-]

Wasn't Löb's theorem ∀ A (Provable(Provable(A) → A) → Provable(A))? So you get Provable(⊥) directly, rather than passing through ⊥ first. This is good, as, of course, ⊥ is always false, even if it is provable.

Comment author: [deleted] 14 March 2015 07:45:38PM 1 point [-]

10 would still be incorrect.

In response to comment by [deleted] on Rationality: From AI to Zombies
Comment author: Quill_McGee 19 March 2015 11:51:19PM 1 point [-]

Darn it, and I counted like five times to make sure there really were 10 visible before I said anything. I didn't realize that the stone the middle-top stone was on top of was one stone, not two.

Comment author: Coscott 13 March 2015 03:17:28PM *  22 points [-]

The cover is incorrect :(

EDIT: If you do not understand this post, read essay 268 from the book!

Comment author: Quill_McGee 13 March 2015 06:30:11PM 1 point [-]

There might be one more stone not visible?

Comment author: JonahSinick 20 February 2015 06:38:34PM *  2 points [-]

Thanks for the detailed comment.

  • I don't think that exceptional intelligence is either necessary or sufficient to be an exceptional mathematician. Tao's statement "But an exceptional amount of intelligence has almost no bearing on whether one is an exceptional mathematician." is a very strong statement: if he had said "plays only a moderate role in whether one is an exceptional mathematician" he would have been on much more solid ground.

  • I agree that the Langlands quote is by itself not strong evidence against Tao's assertion for the reasons that you give, but it's still evidence. I'm relying on many weak arguments. I'll gradually flesh them out in my sequence.

  • I share your intuition re: combinatorialists vs. geometers. One of my friends spent a lot of time with Chern, who struck him as being quite ordinary with respect to R, while being exceptional on a number of other dimensions. Grothendieck's self-assessment suggests that it is in fact possible to be amongst the greatest mathematicians without exceptional R.

  • A key point that you might be missing (certainly I did for many years) is that there just aren't many people of exceptional intelligence. Suppose it were true that IQ is normally distributed: then the number of people of IQ 145+ would be 60x larger than the number of people of IQ 160+. Under this hypothesis, even if only 1 in 20 exceptional mathematicians had IQ 160+, people in that range would be 3x as likely as their IQ 145+ counterparts to become exceptional mathematicians. It's been suggested that the distribution of IQ is in fact fat-tailed because of assortative mating, which blunts the force of the aforementioned argument, but it's also true that more than 5% of exceptional mathematicians have IQ 160+: I think the actual figure is closer to 50%.
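A quick check of the arithmetic in the last bullet, using only the standard library. Under a strict Normal(100, 15) the 145+/160+ population ratio actually comes out nearer 43x than 60x, though the qualitative "several times as likely" conclusion goes through with either figure:

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal."""
    return erfc(z / sqrt(2)) / 2

# SD-15 convention: IQ 145 is +3 SD, IQ 160 is +4 SD.
pop_ratio = upper_tail(3) / upper_tail(4)
print(round(pop_ratio))  # 43 under a strict normal

# If 1 in 20 exceptional mathematicians have IQ 160+, and the 145+
# population is 60x the 160+ population (the comment's figure), the
# relative rate of becoming one, 160+ vs. the rest of the 145+ range:
relative_rate = (0.05 * 60) / (1 - 0.05)
print(round(relative_rate, 1))  # 3.2, i.e. roughly "3x as likely"
```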

Comment author: Quill_McGee 20 February 2015 07:02:35PM 1 point [-]

It should be noted that if measured IQ is fat-tailed, this is because there is something wrong with IQ tests. IQ is defined to be normally distributed with a mean of 100 and a standard deviation of either 15 or 16, depending on which definition you're using. So if measured IQ is fat-tailed, then the tests aren't calibrated properly (of course, if your test goes all the way up to 160, it is almost inevitably miscalibrated, because there just aren't enough people to calibrate it with).
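The calibration being described can be sketched with the standard library (SD-15 convention; SD-16 tests just swap the constant):

```python
from statistics import NormalDist

# IQ is defined by fiat: a raw score at population percentile p is
# assigned IQ = inv_cdf(p) under Normal(100, 15). Fat tails in measured
# IQ therefore indicate miscalibration, not a property of the trait.
iq = NormalDist(mu=100, sigma=15)

print(round(iq.inv_cdf(0.5)))   # 100: median performance by definition
print(round(iq.inv_cdf(0.98)))  # 131: the 98th percentile
# IQ 160 is +4 SD, roughly 1 person in ~32,000 -- which is why no test
# can be normed that far out.
print(round(1 / (1 - iq.cdf(160))))
```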

Comment author: Quill_McGee 19 February 2015 03:30:04AM 0 points [-]

I would disagree with the phrasing you use regarding 'human terminal values.' Now, I don't disagree that evolution optimized humans according to those criteria, but I am not evolution, and evolution's values are not my values. I would expect that only a tiny fraction of humans would say that evolution's values should be our values (I'd like to say 'none,' but radical neo-Darwinians might exist). Now, if you were just saying that those are the values of the optimization process that produced humanity, I agree, but that was not what I interpreted you as saying.
