Comments

My visualization ability improves the closer I am to sleep, becoming near-perfect during a lucid dream.

You can generally throw unfalsifiable beliefs into your utility function, but you might consider this intellectually dishonest.

As a quick analogy, a solipsist can still care about other people.

I escape by writing a program that simulates 3^^3 copies of myself escaping and living happily ever after (generating myself by running Solomonoff Induction on a large amount of text I type directly into the source code).

I'm guessing Eliezer would lose most of his advantages against a demographic like that.

Oh god, remind me to never play the part of the gatekeeper… This is terrifying.

The lifespan dilemma applies to any unbounded utility function combined with expected value maximization; it does not require simple utilitarianism.
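
A minimal sketch of that structure, under assumed numbers of my own (utility taken to be linear in years lived, so unbounded, and each hypothetical offer multiplying lifespan by 1000 at the cost of a 50% chance of immediate death); nothing about it is specific to utilitarianism:

```python
# Sketch: an expected-value maximizer with an unbounded (linear-in-years)
# utility function accepts every offer, yet survival probability -> 0.

survival_prob = 1.0
years_if_alive = 1e10  # assumed starting guaranteed lifespan

for offer in range(30):
    accept_eu = (survival_prob * 0.5) * (years_if_alive * 1000)
    reject_eu = survival_prob * years_if_alive
    if accept_eu > reject_eu:       # 500x the expected value, so always accept
        survival_prob *= 0.5
        years_if_alive *= 1000

print(f"survival probability: {survival_prob:.2e}")                   # ~9.3e-10
print(f"expected years lived: {survival_prob * years_if_alive:.2e}")  # ~9.3e90 and growing
```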

Would your post on eating babies count, or is it too nonspecific?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1

(I completely agree with the policy, I'm just curious)

There are people who claim to be less confused about this than I am.

Solipsists should be able to dissolve the whole thing easily.

Thanks! Can you recommend a textbook for this stuff? I've mostly been learning from Wikipedia.

I can't find a textbook on logic in the LessWrong textbook list.
