
Comment author: Lumifer 24 April 2016 12:29:48AM 1 point [-]

You are looking at the wrong meta level.

When I say "VNM doesn't offer any formulation of rational behavior" I'm not disagreeing with any particular axiom. It's like I'm saying that an orange is not an apple and you respond by asking me what kind of apples I dislike.

Comment author: solipsist 24 April 2016 02:53:58AM *  0 points [-]

Which (possibly all) of the VNM axioms do you think are not appropriate as part of a formulation of rational behavior?

I think the Peano natural numbers are a reasonable model for the number of steins I own (with the possible exception that if my steins filled up the universe, a successor number of steins might not exist). But I don't think the Peano axioms are a good model for how much beer I drink. It is not the case that every quantity of beer can be expressed as a successor of 0 beers, so beer does not satisfy the axiom of induction.
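
For reference, the induction axiom I'm appealing to is, roughly and in my own rendering (not the exact first-order schema):

    \forall P\,\Bigl[\bigl(P(0) \wedge \forall n\,(P(n) \rightarrow P(S(n)))\bigr) \rightarrow \forall n\,P(n)\Bigr]

Every stein count is S(S(\dots S(0)\dots)) for some finite number of applications of S, so induction reaches it; a quantity like 0.7 liters of beer is not of that form.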

I think the ZFC axioms are a poor model of impressionist paintings. For example, it is not the case that for all impressionist paintings x and y, there exists an impressionist painting that contains both x and y. Therefore impressionist paintings violate the axiom of pairing.
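
Again roughly, in my own rendering, the pairing axiom says:

    \forall x\,\forall y\,\exists z\,(x \in z \wedge y \in z)

With x and y read as paintings and z as a painting that contains them both, the axiom plainly fails.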

Comment author: Lumifer 22 April 2016 01:24:32AM *  2 points [-]

Both.

VNM doesn't offer any "formulation of rational behavior". VNM says that a function with a particular set of properties must exist and relies on assumptions that do not necessarily hold in real life.

I also don't think that a utility function which condenses risk preferences into a single scalar is likely to be accurate enough for practical purposes.

Comment author: solipsist 23 April 2016 10:25:46PM 0 points [-]

Can you by chance pin down your disagreement to a particular axiom? You're modus tollensing where I expected you would modus ponens.

Comment author: solipsist 12 March 2016 03:14:02PM 0 points [-]

I didn't follow everything, but does this attempt to address self-fulfilling prophecies? Assume the oracle has a good track record and releases its information publicly. If I ask it "What are the chances Russia and the US will engage in nuclear war in the next 6 months?", answers of "0.001" and "0.8" would probably both be accurate.

Comment author: solipsist 18 February 2016 03:06:42PM *  0 points [-]

What sorts of output strings are you missing?

Calculating Kolmogorov complexities is hard because it is hard to differentiate between programs that run for a long time and halt and programs that run for a long time and never halt.

If God gave you a 1.01 MB text file and told you "This program computes BB(1000000)", then you could easily write a program to find the Kolmogorov complexity of any string less than 1 MB.

from collections import defaultdict
from math import inf

kolmogorov_map = defaultdict(lambda: inf)        # output string -> length of shortest program found so far
for p in all_strings_shorter_than(1000000):      # pseudocode: enumerate every program under 1 MB
    o, halted = run(p, max_steps=BB_1000000)     # pseudocode: run p for at most BB(1000000) steps
    if halted and kolmogorov_map[o] > len(p):
        kolmogorov_map[o] = len(p)               # found a smaller program that outputs o
    # else: p ran past BB(1000000) steps, so it never halts and has no output

Replace BB(1000000) with a smaller number, say A(Graham's number, Graham's number), and this calculator works for all programs which halt in less than A(Graham's number, Graham's number) steps. That includes pretty much every program I care about! For instance, it includes every program which could run under known physics in the age of the universe.

Comment author: lisper 09 February 2016 11:51:06PM 3 points [-]

I will take your advice to heart in the future. This was my first post to LW, and I was actually a little unsure about how appropriate it was. Now I know.

Comment author: solipsist 12 February 2016 08:48:58PM 3 points [-]

Eh, don't take it personally. I'm guessing commenters are implicitly taking the title question as a challenge and are pouncing to poke holes in your argument. I thought your essay was well written and thought-provoking. Keep posting!

Comment author: gjm 14 January 2016 05:37:27PM 0 points [-]

I do not believe the intention of the advice given is that emails in your inbox that you feel require some response, but that you don't see how to deal with completely in a few minutes, should be archived and forgotten. (Perhaps I misunderstood?)

Comment author: solipsist 14 January 2016 08:13:01PM 0 points [-]

Don't know, not the original author. What do you think the chances are that an email on the third page of your inbox will ever get a reply? Inbox purgatory seems to me like a way to give up on something without having to admit it to yourself.

If my inbox has more than 40 or 50 items in it I feel demoralized and find it harder to work through newer items, so the easiest way for me to stay at steady-state is to keep my inbox at zero or close to it.

Counterpoint: I've kept to an empty inbox for many years, but know people with ever-growing inboxes whom I consider more organized and responsive. I've never declared email bankruptcy during my professional life and don't know the consequences.

Comment author: gjm 12 January 2016 01:01:03PM 5 points [-]

This seems to cover everything about getting to "inbox zero" except the nontrivial bits of actually getting to "inbox zero".

That is: I bet most people with overflowing inboxes have lots of things in those inboxes that they can neither classify immediately as "no more to do" nor resolve in a few minutes. And what stops those people getting their inboxes down to zero is (1) all the work required to deal with those things, and (2) the psychological discomfort caused by thinking about #1. And nothing in here says anything about how to deal with that situation.

Comment author: solipsist 14 January 2016 01:59:51PM 1 point [-]

And nothing in here says anything about how to deal with that situation.

I read the advice as:

If you still have unresolved emails from 2015 in your inbox then keeping emails in your inbox isn't causing them to get resolved. Accept that, get a clean slate, and move on.

Make a folder called "old inbox" and put all your old emails there. Now you have an empty inbox! The costs of putting your old emails out of sight are less than the benefits of keeping an empty inbox going forward.

Comment author: Lumifer 16 December 2015 05:53:59AM 3 points [-]

Counter-evidence: affirmative action.

In any case, it's interesting that Obama's SAT (or ACT) scores are sealed as are his college grades, AFAIK.

Comment author: solipsist 08 January 2016 06:49:20PM 0 points [-]

HLS students of any skin color have high IQs as measured by standardized tests. The school's 25th-percentile LSAT score is 170, which is the 97.5th percentile among the subset of college graduates who take the LSAT. 44% of HLS students are people of color.

Comment author: solipsist 08 January 2016 02:16:37PM 2 points [-]

The book to read is Reasons and Persons by Derek Parfit.

In response to comment by [deleted] on Open Thread, January 4-10, 2016
Comment author: Usul 05 January 2016 08:55:45AM *  6 points [-]

I appreciate the reply. I recognize both of those arguments, but I am asking something different. If Omega tells me to give him a dollar or he will torture a simulation (a being separate from me, with no threat that I might be that simulation; I'm also thinking of the Basilisk here), why should I care whether that simulation is one of me as opposed to any other sentient being?

I see them as equally valuable. Both are not-me. Identical-to-me is still not-me. If I am a simulation and I meet another simulation of me in Thunderdome (Omega is an evil bastard) I'm going to kill that other guy just the same as if he were someone else. I don't get why sim-self is of greater value than sim-other. Everything I've read here (admittedly not too much) seems to assume this as self-evident but I can't find a basis for it. Is the "it could be you who is tortured" just implied in all of these examples and I'm not up on the convention? I don't see it specified, and in "The AI boxes you" the "It could be you" is a tacked-on threat in addition to the "I will torture simulations of you", implying that the starting threat is enough to give pause.

Comment author: solipsist 06 January 2016 03:13:51PM 4 points [-]

If you love your simulations as you love yourself, they will love you as they love themselves (and if you don't, they won't). You can choose to have enemies or allies through your own actions.

You and a thousand simulations of you play a game where pressing a button gives the presser $500 but takes $1 from each of the other players. Do you press the button?
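
To make the arithmetic explicit, here is a minimal sketch; the function name and the assumption that your simulations choose exactly as you do are mine, just for illustration:

from typing import Final

N_OTHERS: Final = 1000          # simulations of you, besides you
GAIN: Final = 500               # a presser gains $500
LOSS_PER_PRESS: Final = 1       # each other player loses $1 per press

def my_payoff(i_press: bool, others_press: bool) -> int:
    # others_press models the assumption that your copies decide the way you do
    gain = GAIN if i_press else 0
    presses_against_me = N_OTHERS if others_press else 0
    return gain - LOSS_PER_PRESS * presses_against_me

print(my_payoff(True, False))   # +500: only you press (copies treated as independent strangers)
print(my_payoff(True, True))    # -500: your copies press because you do
print(my_payoff(False, False))  # 0: nobody presses

If you treat your copies' choices as correlated with yours, not pressing wins; if you treat them as independent strangers, pressing looks like free money.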
