
In response to You Only Live Twice
Comment author: bambi 12 December 2008 10:19:54PM 0 points [-]

burger flipper, making one decision that increases your average statistical lifespan (signing up for cryonics) does not compel you to trade off every other joy of living in favor of further increases. And if the hospital or government or whoever can't be bothered to wait for my organs until I am done with them, that's their problem, not mine.

Comment author: bambi 11 December 2008 10:45:41PM 0 points [-]

Carl, Robin's response to this post was a critical comment about the proposed content of Eliezer's AI's motivational system. I assumed he had a reason for making the comment; my bad.

Comment author: bambi 11 December 2008 08:57:10PM 0 points [-]

Oh, and Friendliness theory (to the extent it can be separated from specific AI architecture details) is like the doomsday device in Dr. Strangelove: it doesn't do any good if you keep it secret! [in this case, unless Eliezer is supremely confident of programming AI himself first]

Comment author: bambi 11 December 2008 08:49:03PM 2 points [-]

Regarding the 2004 comment, AGI Researcher was probably referring to the Coherent Extrapolated Volition document, which Eliezer marked as slightly obsolete in 2004; not a word has been said since about any progress in the theory of Friendliness.

Robin, if you grant that a "hard takeoff" is possible, it follows that one will eventually become likely (humans being curious and inventive creatures). Such an AI would "rule the world" in the sense of having the power to do whatever it wants. Now, suppose you get to pick what it wants (and program that in). What would *you* pick? I can see arguing with the feasibility of a hard takeoff (I don't buy it myself), but if you accept that step, Eliezer's intentions seem correct.

Comment author: bambi 04 December 2008 05:35:48PM 0 points [-]

When Robin wrote, "It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions," he got it exactly right (though it is not necessarily so easy to make *good* ones; that isn't really the point).

This should have been clear from the sequence on the "timeless universe" -- just as that interesting abstraction is not going to convince more than a few credulous fans of its *truth*, nobody is going to be convinced of the *truth* of the magical super-FOOM without more substantial support than an appeal to a very specific way of looking at "things in general", which few are going to share.

On a historical time frame, we can grant pretty much everything you suppose and still be left with a FOOM that "takes" a century (a mere eyeblink in comparison to everything else in history). If you want to frighten us sufficiently about a FOOM of shorter duration, you're going to have to get your hands dirtier and move from abstractions to specifics.

Comment author: bambi 19 November 2008 10:25:32PM 0 points [-]

The issue, of course, is not whether AI is a game-changer. The issue is whether it will be a game-changer soon and suddenly. I have been looking forward to somebody explaining why this is likely, so I've got my popcorn popped and my box of wine in the fridge.

Comment author: bambi 17 November 2008 04:59:19PM 2 points [-]

Perhaps Eliezer goes to too many cocktail parties:

X: "Do you build neural networks or expert systems?" E: "I don't build anything. Mostly I whine about people who do." X: "Hmm. Does that pay well?"

Perhaps Bayesian Networks are the hot new delicious lemon glazing. Of course they have been around for 23 years.

Comment author: bambi 25 June 2008 05:24:56PM 0 points [-]

Silas: you might find this paper of some interest:

http://www.agiri.org/docs/ComputationalApproximation.pdf

Comment author: bambi 25 June 2008 02:35:48PM 3 points [-]

Perhaps "mind" should just be tabooed. It doesn't seem to offer anything helpful, and leads to vast fuzzy confusion.

Comment author: bambi 25 June 2008 02:32:11PM 1 point [-]

What do you mean by a mind?

All you have given us is that a mind is an optimization process. And: what a human brain does counts as a mind. Evolution does not count as a mind. AIXI may or may not count as a mind (?!).

I understand your desire not to "generalize", but can't we do better than this? Must we rely on Eliezer-sub-28-hunches to distinguish minds from non-minds?

Is the FAI you want to build a mind? That might sound like a dumb question, but why should it be a "mind", given what we want from it?
