Comment author: 03 March 2011 06:53:16PM 19 points [-]

I hate to break it to you, but if setting two things beside two other things didn't yield four things, then number theory would never have contrived to say so.

Numbers were invented to count things, that is their purpose. The first numbers were simple scratches used as tally marks circa 35,000 BC. The way the counts add up was derived from the way physical objects add up when grouped together. The only way to change the way numbers work is to change the way physical objects work when grouped together. Physical reality is the basis for numbers, so to change number theory you must first show that it is inconsistent with reality.
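The claim that addition merely formalizes the grouping of physical objects can be sketched in a few lines. This is a toy illustration of my own; the names `tally` and `add_tallies` are made up, not from any real library:

```python
# A toy model of tally-mark arithmetic: "addition" is just placing one
# group of scratches next to another and counting the result.

def tally(n):
    """Represent a count as a string of scratch marks."""
    return "|" * n

def add_tallies(a, b):
    """Group two tallies together; the sum is whatever you count."""
    return a + b

two = tally(2)
combined = add_tallies(two, two)
print(len(combined))  # counting the combined scratches gives 4
```

Nothing in the code "knows" arithmetic; 2+2=4 falls out of concatenation and counting, which is the point being made about physical grouping.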

Thus numbers have a definite relation to the physical world. Number theory grew out of this, and if putting two objects next to two other objects only yielded three objects when numbers were invented over forty thousand years ago, then number theory must reflect that fact or it would never have been used. Consequently, suggesting 2+2=4 would be completely absurd, and number theorists would laugh in your face at the suggestion. There would, in fact, be a logical proof that 2+2=3 (much like there is a logical proof that 2+2=4 in number theory now).

All of mathematics is, in reality, nothing more than extremely advanced counting. If it were not related to the physical world, there would be no reason for it to exist. It follows rules first derived from the physical world, even if the current principles of mathematics have been extrapolated far beyond the bounds of the strictly physical. I think people lose sight of this far too easily (or worse, never recognize it in the first place).

Mathematics is so firmly grounded in physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not because it is separate from it.

Comment author: 09 October 2014 03:32:14AM 0 points [-]

Mathematics is so firmly grounded in physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math. This is because math is inextricably tied to reality, not because it is separate from it.

On the other hand...

http://en.m.wikipedia.org/wiki/Is_logic_empirical%3F

Comment author: 16 May 2012 03:37:56PM *  2 points [-]

"There are sets of objective moral truths such that any rational being that understood them would be compelled to follow them". The arguments seem mainly to be:

1) Playing around with the meaning of rationality until you get something ("any rational being would realise their own pleasure is no more valid than that of others" or "pleasure is the highest principle, and any rational being would agree with this, or else be irrational")

2) Convergence among human values.

3) Moral progress for society: we're better than we used to be, so there needs to be some scale to measure the improvements.

4) Moral progress for individuals: when we think about things a lot, we make better moral decisions than when we were young and naive. Hence we're getting better at moral reasoning, so there is some scale on which to measure this.

5) Playing around with the definition of "truth-apt" (able to have a valid answer) in ways that strike me, uncharitably, as intuition-pumping word games. When confronted with this, I generally end up saying something like "my definitions do not map exactly onto yours, so your logical steps are false dichotomies for me".

6) Realising things like "if you can't be money pumped, you must be an expected utility maximiser", which implies that expected utility maximisation is superior to other reasoning, and hence that some methods of moral reasoning are strictly inferior. From there the argument runs that there must be better ways of moral reasoning and (this is the place where I get off) a single best way (though that last step is generally implicit, never explicit).
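The money-pump idea in (6) can be made concrete with a toy model. This is a sketch of my own; the cyclic preference and the fee are illustrative values, not anything from the original argument:

```python
# An agent with cyclic (intransitive) preferences A > B > C > A will pay
# a small fee for each "upgrade" around the cycle and end up holding its
# original item, strictly poorer -- the classic money pump.

# Each key is preferred over its value: B > A, C > B, A > C.
prefers = {"B": "A", "C": "B", "A": "C"}

def run_money_pump(start, fee, rounds):
    """Repeatedly offer the agent its preferred swap for a small fee."""
    holding, paid = start, 0.0
    for _ in range(rounds):
        # Find the item the agent prefers over what it currently holds...
        better = next(k for k, v in prefers.items() if v == holding)
        holding, paid = better, paid + fee  # ...it accepts and pays the fee
    return holding, paid

holding, paid = run_money_pump("A", fee=1.0, rounds=3)
print(holding, paid)  # after one full lap: back to "A", 3.0 units poorer
```

An expected utility maximiser has acyclic preferences, so it refuses at least one of these trades; that is the sense in which it is immune to the pump.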

Comment author: 13 March 2014 08:08:50PM 1 point [-]

I could add: Objective punishments and rewards need objective justification.

Comment author: 13 March 2014 07:35:20PM -1 points [-]

From my perspective, treating rationality as always instrumental, and never a terminal value, is playing around with its traditional meaning. (And indiscriminately teaching instrumental rationality is like indiscriminately handing out weapons. The traditional idea, going back to at least Plato, is that teaching someone to be rational improves them...changes their values.)

In response to a comment on The Problem with AIXI
Comment author: 12 March 2014 10:41:32PM *  3 points [-]

a suicidally foolish AIXI is only a waste of money.

It's also a waste of time and intellectual resources. I raised this point with Adele last month.

It isn't perfect, but it's good enough to train most humans to avoid doing suicidally stupid things. Why would an AIXI need anything better?

It's good enough for some purposes, but even in the case of humans it doesn't protect a lot of people from suicidally stupid behavior like 'texting while driving' or 'drinking immoderately' or 'eating cookies'. To the extent we don't rely on our naturalistic ability to reason abstractly about death, we're dependent on the optimization power (and optimization targets) of evolution. A Cartesian AI would require a lot of ad-hoc supervision and punishment from a human, in the same way young or unreflective humans depend for their survival on an adult supervisor or on innate evolved intelligence. This would limit an AI's ability to outperform humans in adaptive intelligence.

if it drops an anvil on that, it actually will survive as a mind!

Sure. In that scenario, the robot body functions like the robot arm I've used in my examples. Destroying the robot (arm) limits the AI's optimization power without directly damaging its software. AIXI will be unusually bad at figuring out for itself not to destroy its motor or robot, and may make strange predictions about the subsequent effects of its output sequence. If AIXI can't perceive most of its hardware, that exacerbates the problem.

In response to a comment on The Problem with AIXI
Comment author: 13 March 2014 06:43:47PM *  1 point [-]

I am aware that humans have a non-zero level of life-threatening behaviour. If we wanted it to be lower, we could make it lower, at the expense of various costs. We don't, which seems to mean we are happy with the current cost-benefit ratio. Arguing, as you have, that the risk of AI self-harm can't be reduced to zero doesn't mean we can't hit an actuarial optimum.

It is not clear to me why you think safety training would limit intelligence.

Comment author: 12 March 2014 06:37:51PM *  -1 points [-]

Regarding the anvil problem: you have argued with great thoroughness that one can't perfectly prevent an AIXI from dropping an anvil on its head. However, I can't see the necessity. We would need to get the probability of a dangerously unfriendly SAI as close to zero as possible, because it poses an existential threat. However, a suicidally foolish AIXI is only a waste of money.

Humans have a negative reinforcement channel relating to bodily harm called pain. It isn't perfect, but it's good enough to train most humans to avoid doing suicidally stupid things. Why would an AIXI need anything better? You might want to answer that there is some danger related to an AIXI's intelligence, but its clock speed, or whatever, could be throttled during training.

Also, any seriously intelligent AI made with the technology of today, or the near future, is going to require a huge farm of servers. The only way it could physically interact with the world is through a remote-controlled body...and if it drops an anvil on that, it actually will survive as a mind!

Comment author: 11 September 2013 06:06:39PM 1 point [-]

But to be a good epistemic rationalist, an entity must value certain things, like consistency and lack of contradiction.

You appear to not understand the Orthogonality Thesis, since you have misstated it. The orthogonality thesis deliberately refers to preferences, not values, because values could also refer to instrumental values, whereas preferences can only refer to terminal values. (Obviously, consistency and lack of contradiction are only generally valued instrumentally.)

an entity that thinks contradictions are valuable will be a poor epistemic rationalist and therefore a poor instrumental rationalist.

No; if the entity values itself believing contradictions, then its having contradictory beliefs would mean it is a good instrumental rationalist.

Comment author: 12 March 2014 02:09:43PM *  1 point [-]

An entity that has contradictory beliefs will be a poor instrumental rationalist. It looks like you would need to engineer a distinction between instrumental beliefs and terminal beliefs. While we're on the subject, you might need a firewall to stop an AI acting on intrinsically motivating ideas, if they exist. In any case, orthogonality is an architecture choice, not an ineluctable fact about minds.

The OT has multiple forms, as Armstrong notes. An OT that says you could make arbitrary combinations of preference and power if you really wanted to can't plug into an argument that future AI will, with high probability, be a Lovecraftian horror, at least not unless you also argue that an orthogonal architecture will be chosen, with high probability.

Comment author: 12 September 2013 05:14:05PM 3 points [-]

I'm superintelligent in comparison to wasps, and I still chose to kill them all.

Comment author: 12 September 2013 05:31:53PM 0 points [-]

Then a general directive towards friendliness would be needed as well...but I already said that.

Comment author: 12 September 2013 02:42:59PM *  1 point [-]

Humans are made to do that by evolution; AIs are not. So you have to figure out what the heck evolution did, in ways specific enough to program into a computer.

Also, who mentioned giving AIs a priori knowledge of our preferences? It doesn't seem to be in what you replied to.

Comment author: 12 September 2013 05:16:46PM -2 points [-]

So you have to figure out what the heck evolution did, in ways specific enough to program into a computer.

Is that going to be harder than coming up with a mathematical expression of morality and preloading it?

Humans are made to do that by evolution

Yes. But that doesn't mean it is necessarily complicated or arbitrary. We were made to be able to do arithmetic by evolution too.

Also, who mentioned giving AIs a priori knowledge of our preferences?

EY. It's his answer to friendliness.

Comment author: 12 September 2013 12:28:16PM *  1 point [-]

Whatever the brand, any "impossibilities" that happen should lower your confidence in the reasoning that deemed them "impossibilities" in the first place. I don't think IQ is so strongly protective against deception, for example, and I do not think that you can assess something based on how the postings look to you with sufficient reliability as to overcome Gaussian priors very far from the mean.

Further, in this case the whole purpose of the experiment was to demonstrate that an AI could "take over a gatekeeper's mind through a text channel" (something previously deemed "impossible"). As far as that goes it was, in my view, successful.
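The "Gaussian priors" point above can be made concrete with a standard normal-normal Bayesian update. The numbers here are toy values of my own choosing, not anything from the comment:

```python
# With a normal prior and a noisy normal "measurement", the posterior
# mean is a precision-weighted average: a single noisy impression from
# forum posts only partially shifts an estimate away from the prior mean.

def posterior_mean(prior_mu, prior_sd, obs, obs_sd):
    """Conjugate normal-normal update: precision-weighted average."""
    w_prior = 1.0 / prior_sd**2
    w_obs = 1.0 / obs_sd**2
    return (w_prior * prior_mu + w_obs * obs) / (w_prior + w_obs)

# Prior: IQ ~ N(100, 15). A post-reading "measurement" of 160 with the
# same noise level (sd 15) lands the posterior only halfway there,
# at roughly 130 -- pulled strongly back toward the mean.
print(posterior_mean(100, 15, 160, 15))
```

The noisier the assessment relative to the prior, the closer the posterior stays to the population mean, which is the sense in which an extreme judgment from reading posts "can't overcome" the prior.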

Comment author: 12 September 2013 12:48:15PM 0 points [-]

something previously deemed "impossible"

It's clearly possible for some values of "gatekeeper", since some people fall for 419 scams. The test is a bit meaningless without information about the gatekeepers.

Comment author: 12 September 2013 10:59:45AM *  4 points [-]

People manage to be friendly without a priori knowledge of everyone else's preferences. Human values are very complex...and one person's preferences are not another's.

Being the same species comes with certain advantages for the possibility of cooperation. But I wasn't very friendly towards a wasp nest I discovered in my attic. People aren't very friendly to the vast majority of different species they deal with.

Comment author: 12 September 2013 12:36:10PM 0 points [-]

Being superintelligent is just the thing for bridging inferential gaps.
