In response to Mundane Magic
Comment author: Ben_Jones 03 November 2008 11:24:48AM 5 points

Oh, and don't forget the Mystical Intertubes of Communication, which allow any person with access to the Tubes to 'post' their opinions for others to peruse. Even better, other Intertube users can append inanities to any of these essays with the minimum of thought and effort!

Comment author: Ben_Jones 02 November 2008 10:28:13AM 1 point

Wow. At around 20 minutes in, Jaron wraps his irrationality up in so much floral language it's impossible to follow. There's no arguing with that, but you had a really good stab, Eliezer. I'd have snapped at all the implied barbs. Fascinating all the way through. Three cheers for physical reality!

In response to Mundane Magic
Comment author: Ben_Jones 01 November 2008 11:04:38AM 2 points

Possession of a single Eye is said to make the bearer equivalent to royalty.

Very good.

How about the miraculous ability to synthesise or isolate compounds of chemicals from the world that recreate sensations, or even push perception beyond the sensations for which it was designed? I'm always pretty impressed by that one.

Comment author: Ben_Jones 31 October 2008 04:07:49PM 0 points

A general theory of intelligence designed for constructing AIs does not need to be universally applicable.

I think the idea is that once that AI is running, it would be nice to have an objective measure of just how powerful it is, over and above how efficiently it can build a car.

Comment author: Ben_Jones 28 October 2008 12:41:37PM 1 point

From The Bedrock of Morality:

For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X.

Does the same apply to optimisation processes? In other words, for every mind that sees you flicking the switch to save the universe, does another mind see only the photon of 'waste' brain heat and think 'photon maximiser accidentally hits switch'? Does this question have implications for impartial measurements of, say, 'impressiveness' or 'efficiency'?

Emile, that's what I thought when I read Tim's comment, but then I immediately asked myself at what point between water flowing and neurons firing does a process become simple and deterministic? As Eliezer says, to a smart enough mind, we would look pretty basic. I mean, we weren't even designed by a mind, we sprung from simple selection! But yes, it's possible that optimisation isn't involved at all in water, whereas it pretty obviously is with going to the supermarket etc.

peeper, you score 2 on the comment incoherency criterion but an unprecedented 12 for pointlessness, for an average of 7.0. Congrats!

In response to Aiming at the Target
Comment author: Ben_Jones 27 October 2008 12:59:59PM 0 points

an outcome that ranks high in your preference ordering

Well if Garry's wins are in the centre of your preference ordering circle of course you'll lose! Some fighting spirit please!

Oh, and if something maximising entropy is a valid optimisation process, then surely everything is an optimisation process and the term becomes useless? Optimisation processes lead (locally) away from maximal entropy, not towards it, right?

In response to Crisis of Faith
Comment author: Ben_Jones 13 October 2008 08:36:00AM 2 points

I would rather not be around people who kept telling me true minutiae about the world and the cosmos, if they have no bearing on the problems I am trying to solve.

Will, not wishing to be told pointless details is not the same as deluding yourself.

I was discussing the placebo effect with a friend last night though, and found myself arguing that this could well be an example of a time when more true knowledge could hurt. Paternalistic issues aside, people appear to get healthier when they believe falsehoods about the effectiveness of, say, homeopathy or sugar pills.

Would I rather live in a world where doctors seek to eliminate the placebo effect by disseminating more true knowledge; or one where they take advantage of it, save more lives, but potentially give out misinformation about what they're prescribing? I honestly don't know.

Comment author: Ben_Jones 09 October 2008 04:42:29PM 3 points

From a strictly Bayesian point of view that seems to me to be the overwhelmingly more probable explanation.

Now that's below the belt.... ;)

Too much at stake for that sort of thing I reckon. All it takes is a quick copy and paste of those lines and goodbye career. Plus, y'know, all that ethics stuff.

Comment author: Ben_Jones 08 October 2008 08:41:13AM 0 points

David,

Throttling an AI to human intelligence is like aiming your brand new superweapon at the world with the safety catch on. Potentially interesting, but really not worth the risk.

Besides, Eliezer would probably say that the F in FAI is the point of the code, not a module bolted into the code. There's no 'building the AI and tweaking the morality'. Either it's spot on when it's switched on, or it's unsafe.

Comment author: Ben_Jones 07 October 2008 03:40:19PM 0 points

David, the concept behind the term Singularity refers to our inability to predict what happens on the other side.

However, you don't even have to hold with the theory of a technological Singularity to appreciate the idea that an intelligence even slightly higher than our own (not to mention orders of magnitude faster, and certainly not to mention self-optimizing) would probably be able to do things we can't imagine. Is it worth taking the risk?
