Comment author: Gray_Area 11 November 2007 11:27:54PM 1 point [-]

Tom McGabe: "Evolution sure as heck never designed people to make condoms and birth control pills, so why can't a computer do things we never designed it to do?"

That's merely unpredictability/non-determinism, which is not necessarily the same as free will.

Comment author: Gray_Area 11 November 2007 11:13:20PM 0 points [-]

Stefan Pernar said: "I argue that morality can be universally defined."

As Eliezer points out, evolution is blind, and so 'fitness' can have as a side-effect what we would intuitively consider unimaginable moral horrors (much worse than parasitic wasps and cats playing with their food). I think if you want to define 'the Good' in the way you do, you need to either explain how such horrors are to be avoided, or educate the common intuition.

In response to Fake Selfishness
Comment author: Gray_Area 08 November 2007 07:15:21AM 2 points [-]

Stephen: the altruist can ask the Genie the same thing as the selfish person. In some sense, though, I think these sorts of wishes are 'cheating,' because you are shifting the computational/formalization burden from the wisher to the wishee. (Sorry for the thread derail.)

In response to Fake Selfishness
Comment author: Gray_Area 08 November 2007 05:10:34AM 9 points [-]

"My definition of an intelligent person is slowly becoming 'someone who agrees with Eliezer', so that's all right."

That's not in the spirit of this blog. Status is the enemy; only facts are important.

Comment author: Gray_Area 05 November 2007 05:29:24AM 1 point [-]

Scott said: "25MB is enough for pretty much anything!"

Have people tried to measure the complexity of the 'interpreter' for the 25MB DNA 'tape'? The replication machinery is pretty complicated, possibly much more so than any genome.

Comment author: Gray_Area 29 October 2007 03:26:39AM 0 points [-]

Eliezer, are you familiar with Russell and Wefald's book "Do the Right Thing"?

It's fairly old (1991), but it's a good example of how people in AI view limited rationality.

Comment author: Gray_Area 23 October 2007 09:16:41AM 7 points [-]

This reminds me of teaching. I think good teachers understand short inferential distances, at least intuitively if not explicitly. The 'shortness' of inference is why good teaching must be interactive.

Comment author: Gray_Area 20 October 2007 06:09:42AM -1 points [-]

Pascal's-wager-type arguments fail because of their symmetry: for any hypothesis promising a huge payoff, one can construct an equally improbable counter-hypothesis with the opposite payoff, and this symmetry is preserved even in the finite cases.
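The symmetry point can be made concrete with a toy expected-utility calculation (the payoff `V` and credence `p` below are illustrative numbers, not from the comment):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# A Pascal-style wager in isolation: even a tiny credence in a huge
# reward makes the expected utility of believing large and positive.
V, p = 10**9, 1e-6  # hypothetical payoff and credence
wager = expected_utility([(p, V), (1 - p, 0)])

# But the hypothesis space also admits a mirrored 'anti-wager' with
# the same credence and the opposite payoff, and the two expected
# utilities cancel exactly.
anti_wager = expected_utility([(p, -V), (1 - p, 0)])
```

Because `p * V` and `p * (-V)` are exact negatives, `wager + anti_wager` is exactly zero: the decision-relevant quantity vanishes once the mirrored hypothesis is included.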

Comment author: Gray_Area 17 October 2007 04:40:23AM 4 points [-]

What circles do you run in, Eliezer? I meet a fair number of people who work in AI (you could say I "work in AI" myself), and so far I can't think of a single person who was sure of a way to build general intelligence. Is the attitude you observe common among people who aren't actually doing AI research, but who think about AI?

Comment author: Gray_Area 16 October 2007 08:14:55AM 1 point [-]

Apparently what works fairly well in Go is to evaluate positions by 'randomly' running lots of games to completion: you evaluate a position as 'good' if you win many of the random games started from it. Random sampling of the future can work in some domains. I wonder whether this method is applicable to answering specific questions about the future (though naturally I don't think science fiction novels are a good sampling method).
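The playout idea can be sketched on a toy game rather than Go (the game here is one-pile Nim, chosen only because it fits in a few lines; the evaluation scheme is the same: score a position by the fraction of uniformly random games won from it):

```python
import random

def random_playout(pile, rng):
    """Play one game of one-pile Nim (take 1-3 stones; taking the last
    stone wins), with both sides moving uniformly at random.
    Returns True if the player to move at the start wins."""
    first_player_to_move = True
    while pile > 0:
        pile -= rng.randint(1, min(3, pile))
        if pile == 0:
            return first_player_to_move
        first_player_to_move = not first_player_to_move

def evaluate(pile, n_playouts=1000, seed=0):
    """Monte Carlo value of a position: the fraction of random
    playouts won by the player to move."""
    rng = random.Random(seed)
    wins = sum(random_playout(pile, rng) for _ in range(n_playouts))
    return wins / n_playouts
```

With a pile of 1 the mover always wins, so the estimate is exactly 1.0; with a pile of 4 (a theoretical loss for the mover) random playouts give a value near 1/3. The noisy estimates still rank positions sensibly, which is all a playout-based evaluator needs.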
