All of Tom_Breton_(Tehom)'s Comments + Replies

@Retired Urologist: ISTM it's a combination of two things:

  1. It's a bit of nerd cultural heritage most of us have in common.
  2. Science fiction, more than other fiction, tends to deal with ideas, especially science ideas and future predictions. That's not to say it usually deals with them well; science flubs are more common than good treatments, and the aforementioned Star Wars is a big offender. But it's usually wrestling with them at least a little, and therein lies another reason it comes up frequently in idea discussions.

As a machine-learning problem, it would be straightforward: the second learning algorithm (scientist) did it wrong. He was supposed to train on half the data and test on the other half; instead he trained on all of it and skipped validation. We'd also be able to measure the relative complexity of the two theories, but the problem statement doesn't give us that information.
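In case the procedure is unclear, here's a minimal sketch of that held-out validation; the synthetic data, the scikit-learn models, and the 50/50 split are illustrative assumptions, not anything from the original problem:

```python
# A minimal sketch of the held-out validation described above.
# Data and model choice are made up for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                  # observations
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)   # noisy target

# The honest procedure: fit on half the data, validate on the other half.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)
honest = LinearRegression().fit(X_train, y_train)
print("held-out score:", honest.score(X_test, y_test))

# The second scientist's procedure: fit on everything, report the fit itself.
cheater = LinearRegression().fit(X, y)
print("in-sample score:", cheater.score(X, y))
```

The point of the split is that the held-out half can still falsify the theory; the in-sample fit never can.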

As a human learning problem, it's foggier. The second guy could still have honestly validated his theory against the data, or not. And it's not straightforward to show that one human-rea...

"Gentlemen, I do not mind being contradicted, and I am unperturbed when I am attacked, but I confess I have slight misgivings when I hear myself being explained." -- Lord Balfour, to the English Parliament

C. S. Lewis termed this "Bulverism": the device of explaining why X is so {dumb, crazy, misinformed, whatever} as to claim Y, without lowering oneself to arguing against Y. Lewis, however, was not above committing Bulverism himself.

The novice thinks that Friendly AI is a problem of coercing an AI to make it do what you want, rather than the AI following its own desires. But the real problem of Friendly AI is one of communication - transmitting category boundaries, like "good", that can't be fully delineated in any training data you can give the AI during its childhood.

Or more generally, it's not just a binary classification problem but a measurement problem: how to measure benefit to humans, or human satisfaction.
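A toy contrast between the two framings, with a made-up feature and a made-up "benefit" signal (everything below is a hypothetical illustration, not the actual FAI setup):

```python
# Classification framing vs. measurement framing of the same signal.
# The feature, the latent "benefit", and both models are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(300, 1))                          # observable feature
benefit = 2.0 * x[:, 0] + rng.normal(scale=0.2, size=300)      # latent benefit

# Binary framing: learn the category boundary "good" vs. "not good".
good = (benefit > 0).astype(int)
clf = LogisticRegression().fit(x, good)

# Measurement framing: learn *how much* benefit, so a small benefit
# and a large one stop being interchangeable.
reg = LinearRegression().fit(x, benefit)

probe = np.array([[0.2], [0.9]])
print("same category for both probes:", clf.predict(probe))
print("but different measured benefit:", reg.predict(probe))
```

The classifier collapses both probes into one category; the regressor preserves the difference in degree, which is the distinction the comment is pointing at.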

It has sometimes struck me that this FAI requirement has a lot i...

Fascinating that you could present Löb's theorem as a cartoon, Eliezer.

One tiny nitpick: the support for statement 9 seems to be wrong. It reads (1, MP), but that doesn't follow. Perhaps you mean (8, 1, MP).
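For readers without the cartoon in front of them, an annotation like (8, 1, MP) cites the earlier statements from which a line follows by modus ponens. Schematically (the letters are placeholders, and which premise comes from which line is my guess, not the cartoon's actual content):

```latex
% Schematic only: A and B stand in for the cartoon's actual statements.
\[
\frac{(8)\;\; A \to B \qquad (1)\;\; A}{(9)\;\; B}\ \text{MP}
\]
```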

Cyan
I was just working through the cartoon guide pursuant to this recent discussion post, and I found this minor mistake too.

"Yes, I am the last man to have walked on the moon, and that's a very dubious and disappointing honor. It's been far too long." -- Gene Cernan

That doesn't seem like a pro-rationality quote to me. It has a space-y, science-y theme, which may connote rationality, but its content seems anti-rational.