Comment author: bigjeff5 03 March 2011 06:53:16PM 19 points

I hate to break it to you, but if setting two things beside two other things didn't yield four things, then number theory would never have contrived to say so.

Numbers were invented to count things; that is their purpose. The first numbers were simple scratches used as tally marks circa 35,000 BC. The way the counts add up was derived from the way physical objects add up when grouped together. The only way to change the way numbers work is to change the way physical objects work when grouped together. Physical reality is the basis for numbers, so to change number theory you must first show that it is inconsistent with reality.

Thus numbers have a definite relation to the physical world. Number theory grew out of this, and if putting two objects next to two other objects had only yielded three objects when numbers were invented some thirty-seven thousand years ago, then number theory would have to reflect that fact or it would never have been used. In that world, suggesting that 2+2=4 would be completely absurd, and number theorists would laugh in your face at the suggestion. There would, in fact, be a logical proof that 2+2=3 (much like there is a logical proof that 2+2=4 in number theory now).
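
As an aside, the "logical proof that 2+2=4" really is short. In Peano arithmetic, writing S for the successor function, 2 for S(S(0)), 4 for S(S(S(S(0)))), and using the defining equations a + 0 = a and a + S(b) = S(a + b), the derivation is just:

    \begin{aligned}
    2 + 2 &= S(S(0)) + S(S(0)) \\
          &= S\bigl(S(S(0)) + S(0)\bigr) \\
          &= S\bigl(S\bigl(S(S(0)) + 0\bigr)\bigr) \\
          &= S(S(S(S(0)))) = 4.
    \end{aligned}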

All of mathematics is, in reality, nothing more than extremely advanced counting. If it is not related to the physical world, then there is no reason for it to exist. It follows rules first derived from the physical world, even if the current principles of mathematics have been extrapolated far beyond the bounds of the strictly physical. I think people lose sight of this far too easily (or worse, never recognize it in the first place).

Mathematics is so firmly grounded in physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not because it is separate from it.

Comment author: Peterdjones 09 October 2014 03:32:14AM 0 points

Mathematics is so firmly grounded in physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not because it is separate from it.

On the other hand...

http://en.m.wikipedia.org/wiki/Is_logic_empirical%3F

Comment author: Stuart_Armstrong 16 May 2012 03:37:56PM *  2 points

"There are sets of objective moral truths such that any rational being that understood them would be compelled to follow them". The arguments seem mainly to be:

1) Playing around with the meaning of rationality until you get something like "any rational being would realise their own pleasure is no more valid than that of others" or "pleasure is the highest principle, and any rational being would agree with this, or else be irrational".

2) Convergence among human values.

3) Moral progress for society: we're better than we used to be, so there needs to be some scale to measure the improvements.

4) Moral progress for individuals: when we think about things a lot, we make better moral decisions than when we were young and naive. Hence we're getting better at moral reasoning, so there is some scale on which to measure this.

5) Playing around with the definition of "truth-apt" (able to have a valid answer) in ways that strike me, uncharitably, as intuition-pumping word games. When confronted with this, I generally end up saying something like "my definitions do not map exactly onto yours, so your logical steps are false dichotomies for me".

6) Realising things like "if you can't be money pumped, you must be an expected utility maximiser", which implies that expected utility maximisation is superior to other reasoning, hence that there are some methods of moral reasoning which are strictly inferior. Hence there must be better ways of moral reasoning and (this is the place where I get off) a single best way (though that argument is generally implicit, never explicit).
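
A minimal sketch of the money-pump step in (6), with invented names and numbers (not anyone's actual code): an agent whose preferences cycle A over B over C over A will pay a small fee for every "upgrade", so it can be walked around the cycle indefinitely, ending up holding what it started with but strictly poorer. Non-cyclic, expected-utility-like preferences are what rule this exploit out.

    # Cyclic (intransitive) preferences: each key is preferred over its value.
    prefers_over = {"A": "B", "B": "C", "C": "A"}

    FEE = 1  # the agent will pay this much for anything it strictly prefers

    def accepts_trade(holding, offered):
        # The agent trades (and pays FEE) iff it strictly prefers the offer.
        return prefers_over.get(offered) == holding

    def run_pump(start="A", cycles=3):
        holding, money = start, 0
        for _ in range(3 * cycles):
            # Offer whatever the agent prefers to what it currently holds.
            offer = next(x for x, worse in prefers_over.items() if worse == holding)
            if accepts_trade(holding, offer):
                holding, money = offer, money - FEE
        return holding, money

    print(run_pump())  # -> ('A', -9): back where it started, nine units poorer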

Comment author: Peterdjones 13 March 2014 08:08:50PM 1 point

I could add: Objective punishments and rewards need objective justification.

Comment author: Peterdjones 13 March 2014 07:35:20PM -1 points

From my perspective, treating rationality as always instrumental, and never a terminal value, is playing around with its traditional meaning. (And indiscriminately teaching instrumental rationality is like indiscriminately handing out weapons. The traditional idea, going back to at least Plato, is that teaching someone to be rational improves them... changes their values.)

Comment author: RobbBB 12 March 2014 10:41:32PM *  3 points

a suicidally foolish AIXI is only a waste of money.

It's also a waste of time and intellectual resources. I raised this point with Adele last month.

It isn't perfect, but it's good enough to train most humans to avoid doing suicidally stupid things. Why would an AIXI need anything better?

It's good enough for some purposes, but even in the case of humans it doesn't protect a lot of people from suicidally stupid behavior like 'texting while driving' or 'drinking immoderately' or 'eating cookies'. To the extent we don't rely on our naturalistic ability to reason abstractly about death, we're dependent on the optimization power (and optimization targets) of evolution. A Cartesian AI would require a lot of ad-hoc supervision and punishment from a human, in the same way young or unreflective humans depend for their survival on an adult supervisor or on innate evolved intelligence. This would limit an AI's ability to outperform humans in adaptive intelligence.

if it drops an anvil on that, it actually will survive as a mind!

Sure. In that scenario, the robot body functions like the robot arm I've used in my examples. Destroying the robot (arm) limits the AI's optimization power without directly damaging its software. AIXI will be unusually bad at figuring out for itself not to destroy its motor or robot, and may make strange predictions about the subsequent effects of its output sequence. If AIXI can't perceive most of its hardware, that exacerbates the problem.
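
A toy illustration of that blind spot (my own construction in Python, not actual AIXI code): the planner below scores actions purely through the rewards its environment model predicts for future percepts. Nothing in the model stands for "the hardware running this planner", so an action that wrecks the agent's own body is only avoided if past percepts happen to have taught the model to predict low reward for it.

    class PerceptModel:
        # Predicts reward per action from the percept history alone; it has no
        # variable anywhere that means "my own hardware".
        def predicted_reward(self, action):
            # Invented numbers: coffee-fetching has paid off before; the model
            # has seen nothing about anvils, so it guesses a neutral reward.
            return {"fetch_coffee": 1.0, "drop_anvil_on_own_robot": 0.0}.get(action, 0.0)

    def choose_action(model, actions):
        # Pure reward maximisation over predicted percepts; "does my body
        # survive?" never enters the calculation except via those predictions.
        return max(actions, key=model.predicted_reward)

    actions = ["fetch_coffee", "drop_anvil_on_own_robot", "do_nothing"]
    print(choose_action(PerceptModel(), actions))  # -> fetch_coffee, but only
    # because of the learned numbers, not because the agent knows it is embodied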

Comment author: Peterdjones 13 March 2014 06:43:47PM *  1 point

I am aware that humans have a non-zero level of life-threatening behaviour. If we wanted it to be lower, we could make it lower, at the expense of various costs. We don't, which seems to mean we are happy with the current cost-benefit ratio. Arguing, as you have, that the risk of AI self-harm can't be reduced to zero doesn't mean we can't hit an actuarial optimum.

It is not clear to me why you think safety training would limit intelligence.

Comment author: Peterdjones 12 March 2014 06:37:51PM *  -1 points

Regarding the anvil problem: you have argued with great thoroughness that one can't perfectly prevent an AIXI from dropping an anvil on its head. However, I can't see the necessity. We would need to get the probability of a dangerously unfriendly SAI as close to zero as possible, because it poses an existential threat. However, a suicidally foolish AIXI is only a waste of money.

Humans have a negative reinforcement channel relating to bodily harm, called pain. It isn't perfect, but it's good enough to train most humans to avoid doing suicidally stupid things. Why would an AIXI need anything better? You might want to answer that there is some danger related to an AIXI's intelligence, but its clock speed, or whatever, could be throttled during training.

Also, any seriously intelligent AI made with the technology of today, or the near future, is going to require a huge farm of servers. The only way it could physically interact with the world is through a remote-controlled body... and if it drops an anvil on that, it actually will survive as a mind!

Comment author: RobbBB 12 September 2013 06:46:39PM 6 points

Software that initially appears to care what you mean will be selected by market forces. But nearly all software that superficially looks Friendly isn't Friendly. If there are seasoned AI researchers who can't wrap their heads around the five theses, then how can I be confident that the Invisible Hand will both surpass them intellectually and recurrently sacrifice short-term gains on this basis?

Comment author: Peterdjones 13 September 2013 08:29:14AM -2 points

Software that initially appears to care what you mean will be selected by market forces. But nearly all software that superficially looks Friendly isn't Friendly.

So? Yudkowsky to the rescue, or people get more discerning?

If there are seasoned AI researchers who can't wrap their heads around the five theses,

Don't confuse disagreement with misunderstanding.

Comment author: ArisKatsaris 12 September 2013 05:24:32PM 5 points

Self-correcting software is possible if there's a correct implementation of what "correctness" means, and the module that has the correct implementation has control over the modules that don't have the correct implementation.

Self-improving software is likewise possible if there's a correct implementation of the definition of "improvement".

Right now, I'm guessing that it'd be relatively easy to programmatically define "performance improvement" and difficult to define "moral and ethical improvement".
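
A sketch of that asymmetry (the function names and the benchmark are invented for illustration, not a real system): "performance improvement" reduces to a check the controlling module can actually run, whereas the corresponding check for "moral and ethical improvement" has no comparably crisp definition to implement.

    import time

    def benchmark(program, workload=100_000):
        # A programmatic notion of "better": seconds taken on a fixed workload.
        start = time.perf_counter()
        program(workload)
        return time.perf_counter() - start

    def is_performance_improvement(old, new):
        return benchmark(new) < benchmark(old)

    def is_moral_improvement(old, new):
        # The hard part the comment points at: there is no agreed score
        # function to call here, so this check cannot currently be written.
        raise NotImplementedError("no programmatic definition available")

    def self_improve(current, candidate):
        # The module holding the definition of "improvement" decides whether
        # the candidate version replaces the current one.
        return candidate if is_performance_improvement(current, candidate) else current

    slow = lambda n: sum(i * i for i in range(n))    # naive implementation
    fast = lambda n: (n - 1) * n * (2 * n - 1) // 6  # same result, closed form

    print(self_improve(slow, fast) is fast)  # -> True (barring timing noise)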

Comment author: Peterdjones 13 September 2013 08:07:14AM 0 points

And it's also difficult to mathematically solve morality.

But self-correcting AGIs are still a neglected possibility.

Comment author: ArisKatsaris 12 September 2013 05:14:05PM 3 points

I'm superintelligent in comparison to wasps, and I still chose to kill them all.

Comment author: Peterdjones 12 September 2013 05:31:53PM 0 points

Then a general directive towards friendliness would be needed as well...but I already said that.

Comment author: Fronken 12 September 2013 02:42:59PM *  1 point

Humans are made to do that by evolution; AIs are not. So you have to figure out what the heck evolution did, in ways specific enough to program into a computer.

Also, who mentioned giving AIs a priori knowledge of our preferences? It doesn't seem to be in what you replied to.

Comment author: Peterdjones 12 September 2013 05:16:46PM -2 points

So you have to figure out what the heck evolution did, in ways specific enough to program into a computer.

Is that going to be harder than coming up with a mathematical expression of morality and preloading it?

Humans are made to do that by evolution; AIs are not.

Yes. But that doesn't mean it is necessarily complicated or arbitrary. We were made to be able to do arithmetic by evolution too.

Also, who mentioned giving AIs a priori knowledge of our preferences?

EY. It's his answer to friendliness.

Comment author: nshepperd 12 September 2013 12:28:16PM *  1 point

Whatever the brand, any "impossibilities" that happen should lower your confidence in the reasoning that deemed them "impossibilities" in the first place. I don't think IQ is so strongly protective against deception, for example, and I do not think that you can assess something based on how the postings look to you with sufficient reliability as to overcome Gaussian priors very far from the mean.

Further, in this case the whole purpose of the experiment was to demonstrate that an AI could "take over a gatekeeper's mind through a text channel" (something previously deemed "impossible"). As far as that goes it was, in my view, successful.
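
A back-of-the-envelope version of the Gaussian-priors point above (the likelihood ratio is an invented number, chosen generously): even strong-seeming evidence from how someone's posts read moves a three-sigma prior only a little.

    from math import erf, sqrt

    def normal_tail(z):
        # P(X > z) for a standard normal distribution.
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))

    prior = normal_tail(3.0)   # ~0.0013: chance a random poster is 3 sigma above the mean
    likelihood_ratio = 20.0    # assumed strength of "their posts look brilliant to me"

    posterior_odds = likelihood_ratio * prior / (1.0 - prior)
    posterior = posterior_odds / (1.0 + posterior_odds)

    print(f"prior     = {prior:.2%}")      # ~0.13%
    print(f"posterior = {posterior:.2%}")  # ~2.6%: still long odds against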

Comment author: Peterdjones 12 September 2013 12:48:15PM 0 points

something previously deemed "impossible"

It's clearly possible for some values of "gatekeeper", since some people fall for 419 scams. The test is a bit meaningless without information about the gatekeepers.
