In response to Chaotic Inversion
Comment author: Vladimir_Slepnev 29 November 2008 01:26:24PM 3 points

The self-help route. I've seen good bloggers succumb to it. Please don't go there.

Comment author: Vladimir_Slepnev 21 November 2008 03:08:11PM 0 points

Eric, it's even more amusing that both often cite a theorem saying that agreeing to disagree is impossible. And more amusing still that in "Nature of Logic" Eliezer practically explained agreeing to disagree: our minds run more on cognition than on logic. Eliezer and Robin generalize from facts to concepts differently, which leads them to different predictions. When they try using logic to reconcile, the logic kinda bottoms out at the concepts, and there doesn't seem to be any way out except to test both theories. The argument goes on because both are polite and respectful, but it doesn't seem to shed any light.

(I apologize to the hosts for harping on the same topic repeatedly.)

Comment author: Vladimir_Slepnev 04 November 2008 08:10:08AM -1 points

+1 to Anatoly Vorobey. Using K-complexity to capture the human notion of complexity seems to be even worse than using game-theoretic rationality to capture human rationality - something that's been attacked to death already.

Comment author: Vladimir_Slepnev 03 November 2008 07:52:10PM 1 point

> So you're telling me I ought to stop doing that?

Cute counter, but fallacious IMO. There are systems of oughts that don't look and sound like religions. For example, I don't write sermons for mine. Anyway, you're not engaging my central point, just nitpicking an illustrative phrase.

Comment author: Vladimir_Slepnev 03 November 2008 07:28:04AM 1 point

> Vladimir, you haven't been reading this blog for long, have you?

Eliezer, I've lurked here for about a year. The quantum sequence was great (it turned me on to many-worlds), but it was already pretty religious - e.g. the rationale of "it came time to break your allegiance to Science". I ate the tasty intellectual parts and mentally discarded the nasty religious parts. (For example, attacking science by attacking the Copenhagen interpretation was pretty low - most physicists don't even consider interpretations science.) Your recent posts, however, are all nasty, no tasty. Talmudic.

Thanks for the reminder about "Is Humanism A Religion-Substitute?" - it's a perfect example of what I'm talking about. You seem to be instinctively religious - you want to worship something - while for me, say, that's just distasteful.

Religions don't go bad because they are false and stupid. Religions go bad because they live on the "ought" side of is/ought, where there is no true and false. (Cue your morality sequence.)

Comment author: Vladimir_Slepnev 02 November 2008 09:28:59PM 0 points

Eliezer, I wanna tell you something that will sound offensive; please keep in mind I'm not trying to offend you...

You're making a religion out of your stuff.

Your posts are very different from Robin's - he shows specific applications of rationality, while you preach rationality as a Way. Maybe it has to do with your ethnicity: inventing religions is the #1 unique, stellar specialty of Jews. (Quick examples: Abrahamic religions, socialism, Ayn Rand's view of capitalism. Don't get offended, don't.)

Not saying your personal brand of rationality is wrong - far from it! It's very interesting, and you have taught me much. But as the blog title says, notice and overcome the bias.

Because religions have a way of becoming ugly in the long run.

In response to Aiming at the Target
Comment author: Vladimir_Slepnev 27 October 2008 07:39:51AM 0 points

+1 to Will Pearson and Richard Kennaway. Humans mostly follow habit instead of optimizing.

Eliezer, this is interesting:

> my general theory of Newcomblike problems

Some kind of bounded rationality? Could you give us a taste?

In response to Aiming at the Target
Comment author: Vladimir_Slepnev 26 October 2008 08:22:57PM 1 point

This is very similar to an earlier post. Eliezer, go faster. I, for one, am waiting for some non-trivial FAI math - is there any?

In response to Ethical Injunctions
Comment author: Vladimir_Slepnev 21 October 2008 09:58:40PM 0 points

Tim Tyler, IMO you're wrong: a human mind does not act as if maximizing any utility function on world states. The mind just goes around in grooves. Nice things like culture and civilization fall out accidentally as side effects. But thanks for the "bright light" idea, it's intriguing.

In response to Ethical Injunctions
Comment author: Vladimir_Slepnev 21 October 2008 12:31:02PM 0 points

So AIs are dangerous, because they're blind optimization processes; evolution is cruel, because it's a blind optimization process... and still Eliezer wants to build an optimizer-based AI. Why? We human beings are not optimizers or outcome pumps. We are a layered cake of instincts, and precisely this allows us to be moral and kind.

I have no idea what I'm talking about, but the "subsumption architecture" papers seem to me much more promising - a more gradual, less dangerous, more incrementally effective path to creating friendly intelligent beings. I hope something like this will be Eliezer's next epiphany: the possibility of non-optimizer-based high intelligence, and its higher robustness compared to paperclip bombs.
