
Comment author: Incorrect 22 February 2013 03:12:12PM 0 points [-]

My visualization ability improves the closer I am to sleep, being near perfect during a lucid dream.

Comment author: Incorrect 19 February 2013 09:44:34AM -1 points [-]

You can generally throw unfalsifiable beliefs into your utility function, but you might consider this intellectually dishonest.

As a quick analogy, a solipsist can still care about other people.

Comment author: Incorrect 15 February 2013 04:22:26AM 1 point [-]

I escape by writing a program that simulates 3^^3 copies of myself escaping and living happily ever after (generating myself by running Solomonoff Induction on a large amount of text I type directly into the source code).

Comment author: Incorrect 28 January 2013 09:49:34PM 3 points [-]

You might be able to contain it with a homomorphic encryption scheme.
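
To make the mechanism concrete, here is a toy sketch of an additively homomorphic scheme (a Paillier-style construction of my own, with illustratively tiny primes; none of this comes from the comment, and a real containment proposal would need a fully homomorphic scheme and serious hardening). The point it demonstrates is that whoever holds only the public key can combine ciphertexts into an encrypted result without ever seeing the plaintexts.

```python
# Toy Paillier-style additively homomorphic encryption.  Illustrative only:
# the primes are absurdly small and nothing here is hardened for real use.
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1789, q=2003):
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # modular inverse of lambda mod n (Python 3.8+)
    return n, (lam, mu, n)            # public key, private key

def encrypt(n, m):
    nsq = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:             # random blinding factor coprime to n
        r = random.randrange(2, n)
    return (pow(n + 1, m, nsq) * pow(r, n, nsq)) % nsq

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % (pub * pub)       # multiplying ciphertexts adds the plaintexts
assert decrypt(priv, c_sum) == 42     # 12 + 30 recovered without decrypting either input
```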

Comment author: shminux 21 January 2013 11:35:22PM 4 points [-]

This seems like a quick way to make money for CFAR/SI. After all, there are plenty of rich people around who would consider your proposal a guaranteed win for them, regardless of the stakes: "You mean I can say 'I win' at any point and win the challenge? What's the catch?"

Comment author: Incorrect 22 January 2013 12:22:23AM 11 points [-]

I'm guessing Eliezer would lose most of his advantages against a demographic like that.

Comment author: Incorrect 21 January 2013 01:58:35PM 2 points [-]

Oh god, remind me to never play the part of the gatekeeper… This is terrifying.

Comment author: crap 03 January 2013 11:33:01AM *  2 points [-]

Look. Simple utilitarianism doesn't have to be correct. It looks like a wrong idea to me. Often, when reasoning informally, people confabulate wrong formal-sounding things that loosely match their intuitions, and then declare them normative.

Is a library of copies of one book worth the same to you? Is a library of books by one author worth as much? Does variety ever truly count for nothing? There's no reason why u("AB") should equal u("A")+u("B"). People pick + because they are bad at math, or perhaps bad at knowing when they are being bad at math. edit: When you try to math-ize your morality, poor knowledge of math serves as Orwellian newspeak: it defines the way you think. It is hard to choose the correct function even if there were one, and years of practice on overly simple problems make wrong functions pop into your head.
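
A made-up toy example (mine, not the commenter's) of a coherent valuation that is not additive: count only distinct titles, so duplicate copies add nothing.

```python
# Toy non-additive utility: a library is worth the number of distinct titles
# it contains, so duplicate copies contribute nothing.
def u(library: str) -> int:
    return len(set(library))

print(u("A") + u("B"))   # 2 : naive additive combination
print(u("AB"))           # 2 : agreement here is a coincidence of the books being distinct
print(u("A") + u("A"))   # 2 : the naive sum double-counts
print(u("AA"))           # 1 : a library of copies of one book is worth less
```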

Comment author: Incorrect 03 January 2013 07:33:42PM 0 points [-]

The lifespan dilemma applies to any unbounded utility function combined with expected-value maximization; it does not require simple utilitarianism.
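
A minimal formalization of why unboundedness alone does the work (my own restatement of the standard argument, not part of the comment): if u is unbounded, some outcome is always extreme enough to compensate for an arbitrarily small probability.

```latex
\[
  u \ \text{unbounded} \;\Longrightarrow\;
  \forall A \;\; \forall \varepsilon > 0 \;\; \exists X :\;
  \varepsilon \, u(X) > u(A)
\]
```

So an expected-value maximizer prefers the gamble "X with probability ε, nothing otherwise" to keeping A, and iterating the trade drives the probability of getting anything at all toward zero while expected utility keeps rising, which is the structure the Lifespan Dilemma exploits.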

Comment author: Incorrect 24 December 2012 04:40:09AM *  8 points [-]

Would your post on eating babies count, or is it too nonspecific?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1

(I completely agree with the policy; I'm just curious.)

Comment author: Eliezer_Yudkowsky 01 December 2012 01:08:45AM 3 points [-]

1) If we ask whether the entities embedded in strings watched over by the self-consistent universe detector really have experiences, aren't we violating the anti-zombie principle?

We're not asking if they have experiences; obviously if they exist, they have experiences. Rather we're asking if their entire universe gains any magical reality-fluid from our universe simulating it (e.g., that mysterious stuff which, in our universe, manifests in proportion to the integrated squared modulus in the Born probabilities) which will then flow into any conscious agents embedded within.

Sadly, my usual toolbox for dissolving questions about consciousness does not seem to yield results on reality-fluid as yet - all thought experiments about "What if I simulate / what if I see..." either don't vary with the amount of reality-fluid, or presume that the simulating universe exists in the first place.

There are people who claim to be less confused about this than I am. They appear to me to be jumping the gun on what constitutes lack of confusion, and ought to be able to answer questions like e.g. "Would straightforwardly simulating the quantum wavefunction in sufficient detail automatically give rise to sentients experiencing outcomes in proportion to the Born probabilities, i.e., reproduce our current experience?" by something other than e.g. "But people in branches like ours will have utility functions that go by squared modulus" which I consider to be blatantly silly for reasons I may need to talk about further at some point.

Comment author: Incorrect 01 December 2012 02:18:06AM 1 point [-]

There are people who claim to be less confused about this than I am

Solipsists should be able to dissolve the whole thing easily.

Comment author: elseif 13 November 2012 02:57:01AM *  0 points [-]

Perhaps LessWrong is a place where I can say "Your question is wrong" without causing unintended offense. (And none is intended.)

Yes, for ω-consistency to even be defined for a theory it must interpret the language of arithmetic. This is a necessary precondition for the statement you quoted, and does not contradict it.

Work in PA, and take a family of statements P(n) where each P(n) is true but independent of PA, and not overly simple statements themselves; say, P(n) is "the function f_{ε_n} in the fast-growing hierarchy is total". (The important thing here is that each statement is at least Π_2: true pure existence statements are always provable, and if the statements were universal there would be a different ω-consistency problem. The exact statement isn't so important, but note that these statements are true yet not provable in PA.)

Now consider the statement T="there is an n such that P(n) is false". PA+T has no standard model (because T is false), but PA+T doesn't prove any of the P(n), let alone all of them, so there's no ω-consistency problem.
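
For reference, the standard definition the argument turns on (stated in my own words, not quoted from the parent comment):

```latex
\[
  T \ \text{is } \omega\text{-inconsistent} \iff \text{for some } \varphi :\;
  T \vdash \exists n\,\neg\varphi(n)
  \ \text{ and }\
  T \vdash \varphi(\bar{k}) \ \text{for every numeral } \bar{k}.
\]
```

PA+T above proves the existential half (that is just T itself) but, as noted, proves none of the individual instances P(k), so no ω-inconsistency arises even though the theory has no standard model.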

Comment author: Incorrect 13 November 2012 11:23:05PM 0 points [-]

Thanks, can you recommend a textbook for this stuff? I've mostly been learning off Wikipedia.

I can't find a textbook on logic in the lesswrong textbook list.
