
Comment author: entirelyuseless 04 March 2017 03:28:16AM *  0 points [-]

It is not the responsibility of a decision theory to tell you how to form opinions about the world; it should tell you how to use the opinions you have. EDT does not mean reference class forecasting; it means computing expected utility according to the opinions you would actually have if you did the thing, rather than ignoring the fact that doing the thing would give you information.

Or in other words, it means acting on your honest opinion of what will give you the best result, and not a dishonest opinion formed by pretending that your opinions wouldn't change if you did something.

Comment author: Manfred 04 March 2017 08:10:37AM 0 points [-]

I think this deflationary conception of decision theory has serious problems. First is that, because it doesn't pin down a decision-making algorithm, it's hard to talk about what choices it makes: you can argue for choices, but you can't demonstrate them without showing how they're generated in full. Second is that it introduces more opportunities to fool yourself with verbal reasoning. Third is that, historically, I think it has resulted in a lot of wasted words in philosophy journals, although maybe this is just objection one again.

Comment author: Manfred 03 March 2017 06:35:16PM *  0 points [-]

My take on EDT is that it's, at its core, vague about probability estimation. If the probabilities are accurate forecasts based on detailed causal models of the world, then it works at least as well as CDT. But if there's even a small gap between the model and reality, it can behave badly.

E.g. if you like vanilla ice cream but the people who get chocolate really enjoy it, you might not endorse an EDT algorithm that thinks of probabilities as frequencies within a reference class. I see the smoking lesion as a more sophisticated version of this same issue.
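
For concreteness, here's a toy sketch of that vanilla/chocolate point in Python (the numbers and the names observed_enjoyment / my_utility are mine, purely for illustration, not anything from the thread): an agent that scores each flavor by the average enjoyment of the people who actually chose it gets dragged toward chocolate by the self-selected chocolate-lovers, even though its own tastes favor vanilla.

```python
# Toy illustration (invented numbers) of EDT-as-reference-class forecasting
# versus simply using your own utilities.
observed_enjoyment = {
    "chocolate": [0.90, 0.95, 0.92],  # reported by people who chose chocolate
    "vanilla":   [0.60, 0.65, 0.62],  # reported by people who chose vanilla
}
my_utility = {"chocolate": 0.3, "vanilla": 0.8}  # you actually prefer vanilla

def reference_class_value(flavor):
    # Expected utility estimated as the average outcome within the reference
    # class of people who took this action.
    outcomes = observed_enjoyment[flavor]
    return sum(outcomes) / len(outcomes)

reference_class_choice = max(observed_enjoyment, key=reference_class_value)
own_taste_choice = max(my_utility, key=my_utility.get)

print(reference_class_choice)  # "chocolate" -- what the reference class picks out
print(own_taste_choice)        # "vanilla"  -- what you'd actually endorse
```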

But then if probabilities are estimated via a causal model, EDT has exactly the same problem with Newcomb's Omega as CDT, because the problem with Omega lies in the incorrect estimation of probabilities when someone can read your source code.

So I see these as two different problems with two different methods of assigning probabilities in an underspecified EDT. This means that I predict there's an even more interesting version of your example where both methods fail. The causal modelers assume that the past can't predict their choice, and the reference class forecasters get sidetracked by options that put them in a good reference class without having causal impact on what they care about.

Comment author: Manfred 08 February 2017 07:56:58PM *  0 points [-]

What's the best way to invest in a UNU betting fund? Or is the answer to start one yourself?

Comment author: Thomas 06 February 2017 05:52:00PM *  0 points [-]

There is some wit here, but no proper solution.

Comment author: Manfred 06 February 2017 07:03:57PM 0 points [-]

Since you clearly have something in mind: once you reveal it, are we going to go "Oh, yeah, that's much more sensible than the gliders that are dropped near the boundary of glide vs. stall air pressure," or are we going to go "well, that's arbitrary"?

Rigid hot-air-balloon shapes that start out at 1500 Celsius and fall to earth once they are no longer keeping the air under them hot. Seed crystals in a hailstorm. Any object that falls faster if broken and will break if dropped from the higher altitude. Solid-state electrostatic thrusters pointed downward, that arc and fail if the pressure is too high. Spinning propeller craft thrusting downward that undergo a laminar to turbulent transition if the pressure is too high.

Comment author: Thomas 06 February 2017 08:59:35AM 0 points [-]
Comment author: Manfred 06 February 2017 05:34:35PM 0 points [-]

Airplanes dropped in different orientations, or in a way that's sensitive to initial conditions and leads to B gliding while A stalls. B is dropped from right above a passing eagle and gets carried off to Mordor. They're "dropped" far out past geostationary orbit, so that "stationary relative to the planet" in fact means they're flung off into space, and they only reach Earth by getting slingshotted around other planets. Both are dropped over Brazil with notes to please throw them into the Atlantic, and A is dropped so that its Coriolis motion as it falls will push it to a more visible area. They're microscopic black holes dropped from the other side of the Earth.

Comment author: The_Jaded_One 16 January 2017 04:49:03PM *  6 points [-]

Maybe you're just not rational enough to be shown that content? I see like 10 posts there.

MIRI has invented a proprietary algorithm that uses the third derivative of your mouse cursor position and click speed to predict your calibration curve, IQ and whether you would one-box on Newcomb's problem with a correlation of 95%. LW mods have recently combined those into an overall rationality quotient which the site uses to decide what level of secret rationality knowledge you are permitted to see.

Maybe you should do some debiasing, practice being well-calibrated, read the sequences and try again later?

EDIT: Some people seem to be missing that this is intended as humor...

Comment author: Manfred 16 January 2017 06:09:25PM 1 point [-]

It's a shame downvoting is temporarily disabled.

Comment author: Manfred 13 January 2017 10:02:29PM *  3 points [-]

Found some other interesting blog posts by him: 1 2.

Comment author: username2 13 January 2017 04:15:44PM 0 points [-]

This thread seems to not fit that pattern. The only annoying content is related to moderation.

Comment author: Manfred 13 January 2017 06:41:47PM *  4 points [-]

This thread doesn't fit that pattern largely because LW users are aware of the problems with talking about politics and are more likely to stay on the meta-level as a response to that. There is, in fact, not a single argument for/against Brexit in this thread, which I think is a shining advertisement for LW comment culture. On the other hand, I think this article is also particularly well-suited for not immediately inspiring object-level argument, at least as long as it's not posted on /r/news or similar.

Comment author: gjm 09 January 2017 02:30:29PM *  2 points [-]

Zvavzvmvat |fva(a)| vf rdhvinyrag gb zvavzvmvat |a-s(a)| jurer s(a) vf gur arnerfg zhygvcyr bs cv gb a; rdhvinyragyl, gb zvavzvmvat |a-z.cv| jurer a,z ner vagrtref naq 1<=a<=10^100; rdhvinyragyl, gb zvavzvmvat z|a/z-cv| jvgu gur fnzr pbafgenvag ba a. (Juvpu vf boivbhfyl zber be yrff rdhvinyrag gb fbzrguvat yvxr z<=10^100/cv+1.)

Gurer'f n fgnaqneq nytbevguz sbe guvf, juvpu lbh pna svaq qrfpevorq r.t. urer. V guvax gur erfhyg unf gur sbyybjvat qvtvgf:

bar fvk frira mreb svir gjb frira svir avar fvk guerr svir bar svir fvk svir bar svir fvk frira svir sbhe svir avar rvtug svir avar fvk bar bar mreb frira sbhe svir fvk fvk sbhe avar svir svir svir mreb frira guerr fvk gjb guerr bar guerr avar guerr rvtug bar rvtug rvtug rvtug svir avar mreb gjb frira bar mreb fvk gjb avar guerr frira rvtug sbhe svir gjb mreb avar svir gjb gjb avar svir mreb frira gjb sbhe mreb mreb rvtug gjb frira fvk bar frira svir fvk avar sbhe sbhe sbhe mreb fvk guerr

Comment author: Manfred 11 January 2017 05:38:57AM *  0 points [-]

I wonder if there's a simple worst-case proof that shows how complicated you need to let the seeds get in order to find the actual optimum. For example, if we look for the best integer under 10^85 rather than under 10^100, the seed that leads to this algorithm outputting the optimum is different, or at least the overlap seems small. But I'm having a hard time proving anything about this algorithm, because although small seed numerators could add up to almost anything, in practice they won't.
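
For concreteness, here is a minimal Python sketch of the kind of search being discussed, on the assumption that the standard algorithm gjm's rot13'd comment points at is the textbook continued-fraction expansion of pi (so this partly spoils it); the bound N and the name best_sin_argument are mine.

```python
# Minimal sketch (not gjm's actual code): among integers 1 <= n <= N, |sin(n)|
# is made tiny by taking n to be the numerator of a continued-fraction
# convergent of pi, since |sin(n)| ~ |n - m*pi| for the nearest multiple m*pi.
from mpmath import mp, floor, pi, sin

mp.dps = 300  # ample working precision for ~100-digit numerators

def best_sin_argument(N):
    """Largest convergent numerator of pi that does not exceed N."""
    x = +pi                       # evaluate pi at the current precision
    p_prev, q_prev = 1, 0         # "convergent" p_{-1}/q_{-1}
    p, q = int(floor(x)), 1       # convergent p_0/q_0 = 3/1
    frac = x - floor(x)
    while True:
        frac = 1 / frac
        a = int(floor(frac))      # next continued-fraction coefficient
        frac -= a
        p_next, q_next = a * p + p_prev, a * q + q_prev
        if p_next > N:
            return p
        p_prev, q_prev, p, q = p, q, p_next, q_next

n = best_sin_argument(10**100)
print(n)
print(sin(n))  # should come out extremely close to zero
```

Swapping 10**100 for 10**85 should return a different convergent, which is one way to read the point above about the optimum shifting with the bound.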

Comment author: Thomas 10 January 2017 08:09:10AM *  0 points [-]

Say its (decimal) name. Say it!

Comment author: Manfred 10 January 2017 07:58:43PM *  0 points [-]
