Comment author: Recovering_irrationalist 02 June 2008 10:33:23PM 1 point [-]

Ooo I love this game! How many inconsistencies can you get in a taxi...

2008: "Fifteen percent," said Nick. "I would have said twenty percent," I said.

2004: He named the probability he thought it was his (20%), I named the probability I thought it was mine (15%)

Any more? :)

We're all flawed and should bear that in mind in disagreements, even when the mind says it's sure.

Comment author: Recovering_irrationalist 01 June 2008 11:28:55AM 0 points [-]

@HA: Eliza has fooled grown-ups. It arrived 42 years ago.

@Eliezer: I oppose Venkat, please stick to the logically flowing, inferential distance strategy. Given the subject, the frustration's worth it to build a solid intuitive foundation.

In response to Timeless Causality
Comment author: Recovering_irrationalist 29 May 2008 10:34:48PM 0 points [-]

Dynamically Linked, that's cheating because M1 always equals M2. It's like those division by zero proofs.

Regardless, Eliezer's point here is utterly beautiful and blew my mind, but I just want to check its applicability in practice:

Suppose that we do know L1 and L2, but we do not know R1 and R2. Will learning M1 tell us anything about M2?

That is, will we observe the conditional dependence

P(M2|L1,L2) ≠ P(M2|M1,L1,L2)

to hold? The answer, on the assumption that causality flows to the right, and on the other assumptions previously given, is no.

True if we're sure we're perfectly reading L1/L2 and perfectly interpreting them to predict M2. But if not, then I think the answer's yes, because M1 provides additional implicit evidence about L1/L2 beyond what we get from an imperfect reading or interpretation of L1/L2 alone.

Then again, you still get evidence about the direction of causality from how closely P(M2|L1,L2) and P(M2|M1,L1,L2) tend toward equality in each direction, so even very imperfect knowledge could be got around with statistical analysis. I haven't read Judea Pearl's book yet, so sorry if this is naive or already discussed.
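The direction-dependence of this conditional independence can be checked numerically. Below is a minimal sketch, not from the original discussion: the binary variables, the XOR mechanism, and the noise level EPS are my own illustrative assumptions. It enumerates a small joint distribution under each causal direction and compares P(M2|L1,L2) with P(M2|M1,L1,L2):

```python
from itertools import product

EPS = 0.2  # assumed noise level for this toy model

def noisy(x, eps=EPS):
    """Distribution over a bit that equals x with probability 1 - eps."""
    return {x: 1 - eps, 1 - x: eps}

def joint_rightward():
    # Causality flows right: M1 and M2 are independent noisy functions of (L1, L2).
    joint = {}
    for l1, l2 in product([0, 1], repeat=2):
        for m1, p1 in noisy(l1 ^ l2).items():
            for m2, p2 in noisy(l1 ^ l2).items():
                key = (l1, l2, m1, m2)
                joint[key] = joint.get(key, 0.0) + 0.25 * p1 * p2
    return joint

def joint_leftward():
    # Causality flows left: M2 copies M1 noisily; L1, L2 are noisy readings of M1, M2.
    joint = {}
    for m1 in [0, 1]:
        for m2, pm2 in noisy(m1).items():
            for l1, pl1 in noisy(m1).items():
                for l2, pl2 in noisy(m2).items():
                    key = (l1, l2, m1, m2)
                    joint[key] = joint.get(key, 0.0) + 0.5 * pm2 * pl1 * pl2
    return joint

def max_cond_dep(joint):
    """Worst-case |P(M2=1|L1,L2) - P(M2=1|M1,L1,L2)| over all settings."""
    def p_m2_given(cond):
        num = sum(p for k, p in joint.items() if k[3] == 1 and cond(k))
        den = sum(p for k, p in joint.items() if cond(k))
        return num / den if den > 0 else None
    worst = 0.0
    for l1, l2, m1 in product([0, 1], repeat=3):
        a = p_m2_given(lambda k: k[:2] == (l1, l2))
        b = p_m2_given(lambda k: k[:2] == (l1, l2) and k[2] == m1)
        if a is not None and b is not None:
            worst = max(worst, abs(a - b))
    return worst

print(max_cond_dep(joint_rightward()))  # ~0: M1 adds nothing given L1, L2
print(max_cond_dep(joint_leftward()))   # > 0: M1 still informative about M2
```

In the rightward model the difference is zero by construction, matching the quoted claim; in the leftward model M1 carries information about M2 that the noisy L readings don't fully screen off, which is the asymmetry the statistical test would exploit.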

In response to Timeless Physics
Comment author: Recovering_irrationalist 27 May 2008 10:15:52PM 0 points [-]
Comment author: Recovering_irrationalist 26 May 2008 12:58:13PM 2 points [-]

The ideas in today's post are taken seriously by serious physicists

Roughly what proportion?

relative configuration space, which is not standard

Why isn't it? Is non-relative configuration space thought more representative of reality or just more practical to use?

I just want to know how non-standard you're getting, I don't expect justification yet. Thanks.

Comment author: Recovering_irrationalist 26 May 2008 10:33:00AM 0 points [-]

@Robin: Would you agree that what we label "intelligence" is essentially acting as a constructed neural category relating a bunch of cognitive abilities that tend to strongly correlate?

If so, it shouldn't be possible to get an exact handle on it as anything more than an arbitrary weighted average of whatever cognitive abilities we choose to measure, because there's nothing else there to get a handle on.

But, because of that real correlation between measurable abilities that "intelligence" represents, it's still meaningful to make rough comparisons, certainly enough to say humans > chimps > mice.

Comment author: Recovering_irrationalist 23 May 2008 09:47:55PM 1 point [-]

Caledonian: The following link is quite illuminative on Hofstadter's feelings on things: Interview

He's rather skeptical of the sort of transhumanist claims that are common among certain sorts of futurists.

I'm a Hofstadter fan too, but look at your evidence again, bearing in mind how existing models and beliefs shape perception and judgment...

"I think it's very murky"

"the craziest sort of dog excrement mixed with very good food."

"Frankly, because it sort of disgusts me"

"The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself"

"and the whole idea of humans is already down the drain?"

Comment author: Recovering_irrationalist 23 May 2008 08:28:52PM 1 point [-]

Eliezer's scale's more logarithmic, Carl Shulman's academics' is more linear, but neither quite makes its mind up which it is. Please take your origin point away from that poor mouse.

I wonder how much confusion and miscommunication comes from people being unaware they're comparing in different scales. I still remember being shocked when I realized 60 decibels was a thousand times more intense than 30.
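The decibel arithmetic is easy to check: every 10 dB is another factor of ten in intensity, so a 30 dB gap is a factor of 10^3. A quick sketch (the function name is my own):

```python
def intensity_ratio(db_a, db_b):
    """Intensity ratio implied by two sound levels in decibels.

    Decibels are logarithmic: each 10 dB step multiplies intensity by 10.
    """
    return 10 ** ((db_a - db_b) / 10)

print(intensity_ratio(60, 30))  # 1000.0
```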

In response to That Alien Message
Comment author: Recovering_irrationalist 23 May 2008 08:25:00AM 0 points [-]

Unknown: I'm not sure that RI's scenario, where the AI is conscious and friendly, is immoral at all

No time to answer properly now, but I wasn't objecting to it being friendly, I was objecting to its enslavement without due care given to its well-being. Eliezer's convinced me he cares, so I'll keep donating :)

In response to That Alien Message
Comment author: Recovering_irrationalist 22 May 2008 10:44:07PM 1 point [-]

Eliezer: it sounds like one of the most critical parts of Friendliness is stopping the AI from having nightmares! Blocking a self-improving AI from most efficiently mapping anything with consciousness or qualia, ever, without it knowing first hand what they are? Checking it doesn't happen by accident in any process?

I'm glad it's you doing this. It seems many people are only really bothered by virtual unpleasantness if it's to simulated people.
