# Open Thread September, Part 3

28 September 2010 05:21AM

The September Open Thread, Part 2 has got nearly 800 posts, so let's have a little breathing room.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Sort By: Best
Comment author: 29 September 2010 09:11:04PM 19 points [-]

I expect people will be interested to hear that Eliezer's TDT document has now been released for general consumption.

Comment author: 30 September 2010 02:00:35AM *  8 points [-]

Does anyone else agree that, as a piece of expository writing, that document sucks bigtime?

111 pages! I got through about 25 and found myself wondering why Eliezer thought I needed to hear what each of his four friends decided when presented with the Newcomb's soda problem, and that some people refer to this problem as Solomon's problem. So, I decided to skim ahead until he started talking about TDT. So I skimmed and skimmed.

Finally, I got to section 14, entitled "The timeless decision procedure". "Aha!", I think. "Finally." The first paragraph consists of a very long and confusing sentence which at least seems to deal with the timeless decision procedure.

The timeless decision procedure evaluates expected utility conditional upon the output of an abstract decision computation - the very same computation that is currently executing as a timeless decision procedure - and returns that output such that the universe will possess maximum expected utility, conditional upon the abstract computation returning that output.

It might be easier to understand if expressed as an equation or formula containing, you know, variables and things. So I read on, hoping to find something I can sink my teeth into. But then the second paragraph begins:

I delay the formal presentation of a timeless decision algorithm because of some significant extra steps I wish to add ...

and closes with

Before adding additional complexities, I wish to justify this critical innovation from first principles.

As far as I can tell, the remainder of this section entitled "The timeless decision procedure" consists of this justification, though not from first principles, but rather using an example. And it doesn't appear that Eliezer ever gets back to the task of providing a "formal presentation of a timeless decision algorithm".

So, I skip forward to the end, hoping to read the conclusions. Instead I find:

This manuscript was cut off here, but interested readers are suggested to look at these sources for more discussion:

Followed by a bibliography containing one entry: a chapter from a 1978 collection of articles on applications of decision theory.

"...was cut off here ..."? Give me a break!

Let me know when you get it down to a dozen pages or so.

ETA: A cleaned up copy of the paper exists with a more complete bibliography and without the "manuscript was cut off here" closing.

Comment author: 26 June 2011 08:11:27PM *  0 points [-]

The first paragraph consists of a very long and confusing sentence which at least seems to deal with the timeless decision procedure.

The timeless decision procedure evaluates expected utility conditional upon the output of an abstract decision computation - the very same computation that is currently executing as a timeless decision procedure - and returns that output such that the universe will possess maximum expected utility, conditional upon the abstract computation returning that output.

I think this needs rewriting so it doesn't sound so circular - and only mentions the word "conditional" once.

It seems to me that we can just say that it maximises utility - while maintaining an awareness that there may be other agents running its decision algorithm out there, in addition to all the other things it knows.

I think the stuff about "conditional upon the abstract computation returning that output" is pretty-much implied by the notion of utility maximisation.
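For what it's worth, the sentence does compress into a single formula. The notation here (an action set A and an abstract computation C) is my own invention, not taken from the paper, but it mentions the conditional exactly once:

```latex
% Sketch only: TDT returns the action a that maximizes expected utility
% conditional on its own abstract computation C outputting a.
\mathrm{TDT}() \;=\; \operatorname*{arg\,max}_{a \in \mathcal{A}} \;
  \mathbb{E}\!\left[\, U \;\middle|\; \mathrm{output}(\mathcal{C}) = a \,\right]
```

The circularity survives only in the fact that C is the very computation performing the argmax.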

Comment author: 30 September 2010 02:08:34AM 0 points [-]

It might be easier to understand if expressed as an equation or formula containing, you know, variables and things.

Easier? That's the opposite of true for this kind of material!

Comment author: 30 September 2010 05:56:39AM 8 points [-]

Easier if also expressed that way. You need the prose to know what the symbols mean, but the math itself is incredibly clearer when done as symbols.

Comment author: 30 September 2010 02:16:48AM 5 points [-]

I guess this is a case of "different strokes for different folks". I will point out that it is fairly traditional for technical communication to contain formulas, equations, and/or pseudo-code. I believe the assumption behind this tradition is that such formal means of presentation are often clearer than expository text.

Comment author: 30 September 2010 04:11:19AM *  0 points [-]

I will point out that it is fairly traditional for technical communication to contain formulas, equations, and/or pseudo-code.

I am aware of the tradition. Yes, Eliezer's piece does not include any semblance of technical rigour.

I believe the assumption behind this tradition is that such formal means of presentation are often clearer than expository text.

There is a reason the formal presentations include accompanying explanations. The mathematics for this kind of thing would be nigh incoherent and quite possibly longer than a verbal description. Expository text is utterly critical.

Incidentally, I have almost no doubt that "might be easier to understand" is not your real reason for demanding "you know, variables and things". Some of your real reasons may actually be better in this instance.

Comment author: 30 September 2010 01:19:07AM *  7 points [-]

Thanks for the link! I just read it all. The good: it's very, very smooth reading - I know how well Eliezer can write, and even I was surprised at the quality - and it has some very lucid explanations of tricky matters (like why Pearlian causality is useful). The bad: it's kinda rambling, contains many standard sci-fi LW arguments that feel out of place in a philosophy paper, and it doesn't make any formal advances beyond what we already know here (I'd hoped to see at least one). The verdict: definitely read the first half if you're confused about this whole "decision theory" controversy, it'll get you unconfused in a pinch. Take the second half with a grain of salt because it's still very raw (unmixed metaphor award!)

Comment author: 30 September 2010 09:17:56AM *  5 points [-]

I wonder if it should be reformatted in LaTeX to pass item #1 from here.

Comment author: 03 October 2010 04:25:43AM *  2 points [-]

It should be reformatted in LaTeX so that it will look much, much nicer.

Comment author: 02 October 2010 10:40:09PM *  2 points [-]

I wonder if it should be reformatted in LaTeX

I'm currently reading through the document, and yes, it definitely should. The present format is an unprofessional-looking eyesore, and the references are presented in a weird, clumsy, and inconsistent way. Using LaTeX/BibTeX would solve both problems easily and effectively.

(Personally, I can't fathom why anyone capable of grasping the notion of a markup language would ever want to write a document longer than five pages in Word instead of LaTeX.)

Comment author: 30 September 2010 10:01:20AM 1 point [-]

7 and 8 are already a lost cause. :)

Comment author: 01 October 2010 12:27:43AM *  3 points [-]

From a list of warning signs of a FAIL in an attempt to solve a famous problem:
7. The paper doesn't build on (or in some cases even refer to) any previous work.
8. The paper wastes lots of space on standard material.

I would disagree that this paper doesn't build on or take notice of previous work. It takes note of EDT and CDT and quite properly puts the focus on the point of departure of this work - specifically, the handling of counterfactuals. I'm quite happy with that aspect of the paper. My complaint was (8): that it wasted far too much space doing it. And, perhaps as a result of wasting so much time and space in preparation, it never reached its proper conclusion.

Also, it is not completely clear that Aaronson's list of warning signs really applies here. Eliezer is not solving a famous problem here. Most non-philosophers don't think that a problem even exists. So, he does have to provide an explanation of why TDT is needed. Just not so much explanation.

Comment author: 01 October 2010 05:56:16AM *  0 points [-]

Also, it is not completely clear that Aaronson's list of warning signs really applies here.

Nor do I, and I would in any case suggest that some of them are screened off. There's only so many times you can count 'non-conventional' as evidence.

I incidentally found some of the extra explanation handy purely as revision of various topics that it hadn't particularly occurred to me were relevant.

And, perhaps as a result of wasting so much time and space in preparation, it never reached its proper conclusion.

I do hope someone goes ahead and finishes it. Including things like writing out that bibliography at the end and writing up the maths.

Comment author: 30 September 2010 12:53:09PM *  4 points [-]

I must say I'm disappointed by the lack of rigor. On the other hand, I'm slightly relieved that he didn't beat me to any of the stuff in the decision theory document I'm writing myself. So far, I have yet to see any formalization of decision theory that I would consider usable, other than my own unfinished one.

I notice there seems to be an issue with the bibliography - there's only one entry in it, but I've found at least one other citation in the text (Judea Pearl's Causality cited on page 58) that's not there. Are there any good collections of decision theory paper links out there?

Comment author: 01 October 2010 02:32:35PM *  4 points [-]

If you have new formal arguments about decision theory, it would be much more useful to me (and others, I think) if you just posted them here in their current state instead. Or emailed them to the interested people.

Comment author: 02 October 2010 10:10:44AM 0 points [-]

Give a quick soundbite without context?

Comment author: 02 October 2010 01:20:31PM *  3 points [-]

I'm approaching decision theory from the perspective from which compilers approach optimizations: no approach is guaranteed to work always, but each one comes with a list of preconditions that you can check. I'm also summarizing some of the relevant work from compilers: automatic provably correct simplification, translation between forms, and a handy collection of normal forms to translate into.

For CDT, the precondition is a partial ordering over the observation sets passed to the strategy, such that the world program calls the strategy with observation sets only in increasing order, and there are finitely many possible observation sets. Then you can translate the program into continuation-passing style, and enumerate the possible invocations of the strategy function and their ordering. The last one in the order is guaranteed to have a continuation with no further invocations of the strategy function, which means you can try each possibility, simulate the results, and use that to determine the best answer. Then you can look at the second-to-last invocation, substitute the best answer to the last invocation into its continuation, and repeat, and so on through the set of all invocations of the strategy function. This works because you have a guarantee that when you compute your current position within the world program, come up with a probability distribution over states to determine where you are, and then look at future continuations, changing the result of any invocation of the strategy in those continuations does not affect that probability distribution over states.
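As a rough illustration of the backward-induction idea described above, here is a toy sketch in Python. The two-invocation world program, the payoff table, and all names are invented for the example; nothing here is taken from the actual draft:

```python
ACTIONS = [0, 1]

def payoff(a, b):
    # hypothetical payoff table, for illustration only
    return {(0, 0): 1, (0, 1): 4, (1, 0): 3, (1, 1): 2}[(a, b)]

def world(strategy):
    """World program: calls the strategy with strictly growing observation sets."""
    a = strategy(frozenset())                # first invocation: no observations yet
    b = strategy(frozenset({("first", a)}))  # second invocation: has observed a
    return payoff(a, b)

def solve():
    # Backward induction over invocations, last one first: the final
    # invocation's continuation contains no further strategy calls, so
    # for each possible earlier answer a we can pick the best b outright...
    best_b = {a: max(ACTIONS, key=lambda b: payoff(a, b)) for a in ACTIONS}
    # ...then substitute those answers into the continuations and choose a.
    best_a = max(ACTIONS, key=lambda a: payoff(a, best_b[a]))
    policy = {frozenset(): best_a}
    for a in ACTIONS:
        policy[frozenset({("first", a)})] = best_b[a]
    return policy

policy = solve()
print(world(policy.__getitem__))  # 4
```

The finite, ordered observation sets are exactly what lets the loop terminate: each substitution removes one invocation from consideration.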

I also have an example of a formalized decision-theory problem for which no optimal answer exists: name a number and that number is your utility. A corollary is that no decision theory can always give optimal answers, even given infinite computing power. This can be worked around by applying size bounds in various places.
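The no-optimal-answer example can be made concrete in a few lines. The bit-bound workaround shown here is one possible size bound, chosen arbitrarily for illustration:

```python
def utility(n):
    return n  # you get exactly the number you name

def best_answer(bit_bound):
    # With a size bound the choice set is finite, so an argmax exists.
    return max(range(2 ** bit_bound), key=utility)

print(best_answer(8))  # 255, the largest number nameable in 8 bits
# Without a bound there is no optimum: for any candidate answer n,
# naming n + 1 does strictly better.
```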

I'm also drawing distinctions between strategies and decision theories (a strategy is an answer to one problem, a decision theory is an approach to generating strategies from problems); and between preference and utility (a preference is a partial order over outcomes; a utility function is a total order over outcomes where the outcomes are complete probability distributions, plus a linearity requirement).
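A minimal sketch of that last distinction, with invented outcomes and numbers; the lottery at the end illustrates the linearity requirement mentioned above:

```python
OUTCOMES = ["apple", "banana", "cherry"]

# A preference: a strict partial order, given as (worse, better) pairs.
# Note that banana and cherry are incomparable under this order.
PREFERENCE = {("apple", "banana"), ("apple", "cherry")}

def strictly_prefers(x, y):
    return (y, x) in PREFERENCE  # True iff x is ranked above y

# A utility function assigns each outcome a number, inducing a *total*
# order, and over lotteries it must be linear in the probabilities.
UTILITY = {"apple": 0.0, "banana": 2.0, "cherry": 1.0}

def expected_utility(lottery):
    # lottery: dict mapping outcome -> probability
    return sum(p * UTILITY[o] for o, p in lottery.items())

print(sorted(OUTCOMES, key=UTILITY.get, reverse=True))  # total order
print(strictly_prefers("banana", "cherry"))             # False: order is silent
print(expected_utility({"banana": 0.5, "cherry": 0.5})) # 1.5
```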

Comment author: 02 October 2010 02:36:40PM 1 point [-]

So far, doesn't sound good.

Comment author: 02 October 2010 02:45:42PM *  1 point [-]

By that, do you mean that it sounds wrong, or that it sounds confused? If the former, I may need to reconsider; if the latter, I'm unsurprised because it's much too short and doesn't include any of the actual formalization. (That was not an excerpt from the draft I'm writing, but an attempt to summarize it briefly. I don't think I did it justice.)

Comment author: 02 October 2010 02:55:02PM 1 point [-]

Comment author: 02 October 2010 03:18:10PM 0 points [-]

Ok, in that case I'm inclined to think that impression is just an artifact of how I summarized it, since my summary didn't address the questions, but the longer paper I'm working on does, albeit only after building up proof and formalization techniques, which are the main focus.

Comment author: 03 October 2010 12:40:41PM 0 points [-]

Would something like UDT fit into your framework?

Comment author: 03 October 2010 07:49:06PM 0 points [-]

As far as I know, there are no cases where UDT suggests a decision and disagrees with mine. The differences are all in cases where UDT alone can't be used to reach a decision.

Comment author: 30 September 2010 03:52:54PM 2 points [-]

I'm glad to have this to read. I was surprised to find many examples and arguments that EY hadn't given before (or at least formalized this way). I liked the Newcomb's soda problem in particular. I had been worried that EY had presented enough of his TDT justification that someone could "scoop" him, but there's a lot more depth to it. (Anyone up-to-date on the chance that he could get a PhD just for this?)

And I also appreciated that he modified the smoking lesion problem to be one where people aren't distracted by their existing knowledge of smoking, and that this was his stated reason for transforming the example.

I read up to ~p. 35, and I think I have a better understanding now of the relevance of time consistency and how it varies across examples.

That said, I agree with the others who say it could use some more polish.

Comment deleted 01 October 2010 07:32:51AM *  [-]
Comment author: 01 October 2010 12:02:23PM *  0 points [-]

Maybe the CGTA gene gives you an itchy throat or makes you like to chew things. At any rate, chewing the gum is always the right choice (assuming the others costs of gum-chewing are negligible).

Comment deleted 01 October 2010 07:37:14PM [-]
Comment author: 02 October 2010 11:30:49PM *  4 points [-]

One intuition pump: if someone else forced you to chew gum, this wouldn't have any bearing on whether you have CGTA, and it would lower your chances of abscess in either case, and so you'd be glad they'd done so. However, if someone else forced you to two-box, you'd be quite angry at having missed out on the million dollars.

Comment author: 02 October 2010 10:12:11PM 1 point [-]

In Newcomb's problem, the result depends directly on your decision making process (by the definition of Omega/the Predictor), whereas with the gum example it doesn't.

Comment author: 01 October 2010 12:56:45AM 0 points [-]

... the chance that he could get a PhD just for this?

A Ph.D. in what? The subject matter fits into some odd interdisciplinary combination of Philosophy, Economics, Operations Research, AI/CompSci, and Statistics. In general, the research requirements for a PhD in CompSci are roughly equivalent to something like 4 published research papers plus a ~200 page dissertation containing material that can be worked into either a monograph or another half-dozen publishable papers. But there are other requirements besides research, and many institutions don't like to allow people to "test out" of those requirements because it looks bad to the accrediting agencies.

Comment author: 01 October 2010 08:22:33PM *  3 points [-]

I notice that the ideal causal diagram used in Part 2 (and based on Pearl's) is isomorphic to an example I use to teach CLIP, once you apply the substitution:

sprinkler on -> a paperclip truck has over turned
rain -> a clippy has haphazardly used up a lot of metal wire
sidewalk wet -> paperclips are scattered across the ground
sidewalk slippery -> many paperclips need to be moved to the safe zone

Comment author: 05 October 2010 03:48:55AM *  0 points [-]

I scanned it. My initial reactions:

• Surprise that the document existed;
• TL;DR;
• Surprise at the quantity of work that had gone into it.

Alas, I totally failed to see the claimed "strange singularity at the heart of decision theory".

My favourite bit was probably the speculations about agent boundaries - starting on p.108. Alas, from my POV, no mention of the wirehead problem.

Update 2011-06-26 regarding the new version. The bit that reads:

This manuscript was cut off here, but interested readers are suggested to look at these sources for more discussion:

...seems to have been deleted, and 3 pages worth of references have been added. The document seems to have had negligible additions, though - the bit on p.108 has moved back onto page 107. There seem to be a few more extra lines at the end about how "change" is a harmful concept in decision theory.

Comment author: 18 October 2010 05:51:20PM 10 points [-]

A Redditor recently posted asking all atheists what they thought happened after death. The standard, obvious, and true response was given -- your mind is annihilated and you experience nothing. The OP then responded with "doesn't that scare you?"

Comment author: 28 September 2010 06:12:31AM *  9 points [-]

((moved here from the suffocating depths of open thread part 2))

Back when I first heard of "timeless decision theory", I thought it must have been inspired by Barbour's timeless physics. Then I got the idea that it was about treating yourself as an instance of a set of structurally identical decision-making agents from across all possible worlds, and making your decision as if you had an equal chance of being any one of them (which might be psychologically presented to yourself as making the decision on behalf of all of them, though that threatens to become very confused causally). But if the motivation was to have a new theory of rationality which would produce the right answer for Newcomb's "paradox" (and maybe other problems? though I don't know what other problems there are), then it sounded like a good idea.

But the discussion in this thread and this thread makes it look as if people want this "new decision theory" to account for the supposed success of "superrationality", or of cooperative acts in general, such as voting in a bloc. There are statements in those threads which just bemuse me. E.g. at the start of the second thread where Vladimir Nesov says

since voters' decisions are correlated, your decision accounts for behavior of other people as well, and so you are not only casting one vote with your decision, but many votes simultaneously

I should know enough about the possibilities of smart people tripping up over the intricacies of their own thoughts not to boggle at this, but still, I boggle at it. The decisions made by other people are caused by factors internal to their own brains. What goes on in your brain has nothing to do with it. Their guess or presumption of how you vote may affect their decision; your visible actions in the physical world may affect their decision; but the outcome of your decision process does not causally affect (or "acausally affect") other decision processes in the way that Vladimir seems to imply. At most, the outcome of your decision process provides you (not them) with very limited evidence about how similar agents may decide (Paul Almond may make this point in a forthcoming essay), but there is no way in which the particular decision-making process which you perform or instantiate is causally relevant to anyone else's in this magical way.

Then there are other dubious ideas in circulation, like "acausal trade" and its generalizations. I get the impression, therefore, that certain parties may be hoping for a grand synthesis which accommodates and justifies timeless ontology, superrationality (and even democracy?!), acausal interaction between possible worlds, and one-boxing on Newcomb's problem. The last of these items is the only one I take seriously (democracy may or may not be worth it, but you certainly don't need a new fundamental decision theory to explain why people vote), and the grand synthesis looks more like a grand trainwreck to me. Maybe I'm wrong about what's happening in TDT-land, but I thought I'd better speak up.

Comment author: 28 September 2010 06:39:35AM 4 points [-]

Are you implying that there is an irrational focus on cooperation? I could see how this claim could be made about Eliezer or Drescher but less so about Nesov or Wei. It's not so much a focus on the aesthetics of the shiny idea of cooperation as the realization that if cooperation yields the best results, our decision theory should probably cooperate. It's not so much accommodating cooperation or acausal interaction as capitalizing on them. If it's impossible in practice, then the decision theory should reflect that. Currently, it seems incredibly difficult to find or define isomorphisms between computations an agent would consider itself, though people are working on it with interesting approaches. It's the ideal we'd like our decision theory to reach.

Also, I don't believe that timeless ontology is necessary -- at least, I'm not sure that it actually changes anything decision theoretically speaking. At any rate Wei Dai's and I think others' decision theory work is being done under the assumption that the agent in question will be operating in a Tegmark multiverse (or generally some kind of ensemble universe), and the notion of time doesn't really make sense in that case, even if it does make sense in 'our' multiverse (though I don't know what postulating this 'time' thing gets you, really).

Acausal trade is just a way to capitalize on comparative advantage over vast distances... it's a brilliant and frighteningly logical idea. (I believe Carl Shulman thought it up? I'm rather jealous at any rate.) Why do you think acausal trade wouldn't be a good idea, decision theoretically speaking? Or why is the concept confused, metaphysically speaking? Practically speaking, the combinatorial explosion of potential trading partners is difficult to work with, but if a human can choose between branches in the combinatorial explosion of a multiverse via basic planning on stupid faulty hardware like brains, an AGI might very well be able to do similar simulation of trading partners in an ensemble universe (or just limit the domain, of course). (I think Vladimir Nesov came up with this analogy, or something like it.)

Comment author: 28 September 2010 08:07:00AM 3 points [-]

Are you implying that there is an irrational focus on cooperation?

I don't know what's going on, except that peculiar statements are being made, even about something as mundane as voting.

if cooperation yields the best results, our decision theory should probably cooperate... If it's impossible in practice, then the decision theory should reflect that.

That's what ordinary decision theory does. The one example of a deficiency that I've seen is Newcomb's problem, which is not really a cooperation problem. Instead, I see people making magical statements about the consequences of an individual decision (Nesov, quoted above) or people wanting to explain mundane examples of coordination in exotic ways (Alan Crowe, in the other thread I linked).

I don't know what postulating this 'time' thing gets you, really

Empirical adequacy? Talking about "time" strays a little from the real issue, which is the denial of change (or "becoming" or "flow"). It ends up being yet another aspect of reality filed under "subjectivity" and "how things feel". You postulate a timeless reality, and then attached to various parts of that are little illusions or feelings of time passing. This is not plausible as an ultimate picture. In fact, it's surely an inversion of reality: fundamentally, you do change; you are "becoming", you aren't just "being"; the timeless reality is the imagined thing, a way to spatialize or logicize temporal relations so that a whole history can be grasped at once by mental modalities which specialize in static gestalts.

We need a little more basic conceptual and ontological progress before we can re-integrate the true nature of time with our physical models.

Why do you think acausal trade wouldn't be a good idea, decision theoretically speaking? Or why is the concept confused, metaphysically speaking?

To a first approximation, for every possible world where a simulation of you existed in an environment where your thought or action produced an outcome X, there would be another possible world where it has the opposite effect. Also, for every world where a simulation of you exists, there are many more worlds where the simulated entity differs from you in every way imaginable, minor and major. Also, what you do here has zero causal effect on any other possible world.

The fallacy may be to equate yourself with the equivalence class of isomorphic computations, rather than seeing yourself to be a member of that class (an instantiation of an abstract computation, if you like). By incorrectly identifying yourself with the schema rather than the instantiation, you imagine that your decision here is somehow responsible for your copy's decision there, and so on. But that's not how it is, and the fact that someone simulating you in another world can switch at any time to simulating a variant who is no longer you highlights the pragmatic error as well. The people running the simulation have all the power. If they don't like the deal you're offering them, they'll switch to another you who is more accommodating.

Another illusion which may be at work here is the desire to believe that the simulation is the thing itself - that your simulators in the other world really are looking at you, and vice versa. But I find it hard to refute the thinking here, because it's so fuzzy and the details are probably different for different individuals. I actually had ideas like this myself at various times in the distant past, so it may be a natural thing to think of, when you get into the idea of multiple worlds and simulations.

Do you know the expression, folie a deux? It means a shared madness. I can imagine acausal trade (or other acausal exchanges) working in that way. That is, there might be two entities in different worlds who really do have a mutually consistent relationship, in which they are simulating each other and acting on the basis of the simulation. But they would have to share the same eccentric value system or the same logical errors. Precisely because it's an acausal relationship, there is no way for either party to genuinely enforce anything, threaten anything, or guarantee anything, and if you dare to look into the possible worlds nearby the one you're fixated on, you will find variations of your partner in acausal trade doing many wacky things which break the contract, or getting rewarded for doing so, or getting punished for fulfilling it.

Comment author: 28 September 2010 11:47:29AM *  6 points [-]

1) Why do you pull subjective experience into the discussion at all? I view decision theory as a math problem, like game theory. Unfeeling robots can use it.

2) How can an "instantiation" of a class of isomorphic computations tell "itself" from all the other instantiations?

3) The opposing effects in all possible worlds don't have to balance out, especially after we weigh them by our utility function on the worlds. (This is the idea of "probability as degree of caring", I'm a little skeptical about it but it does seem to work in toy problems.)

4) The most important part. We already have programs that cooperate with each other in the Prisoner's Dilemma while being impossible to cheat, and all sorts of other shiny little mathematical results. How can your philosophical objections break them?

Comment author: 28 September 2010 12:01:01PM 2 points [-]

1) Why do you pull subjective experience into the discussion at all? I view decision theory as a math problem, like game theory. Unfeeling robots can use it.

If you're referring to the discussion about time, that's a digression that doesn't involve decision theory.

2) How can an "instantiation" of a class of isomorphic computations tell "itself" from all the other instantiations?

It's a logical distinction, not an empirical one. Whoever you are, you are someone in particular, not someone in general.

3) The opposing effects in all possible worlds don't have to balance out, especially after we weigh them by our utility function on the worlds. (This is the idea of "probability as degree of caring", I'm a little skeptical about it but it does seem to work in toy problems.)

I disagree with "probability as degree of caring", but your main point is correct independently of that. However, it is not enough just to say that the effects "don't have to balance out". The nearby possible worlds definitely do contain all sorts of variations on the trading agents for whom the logic of the trade does not work or is interpreted differently. But it seems like no-one has even thought about this aspect of the situation.

4) The most important part. We already have programs that cooperate with each other in the Prisoner's Dilemma while being impossible to cheat, and all sorts of other shiny little mathematical results. How can your philosophical objections break them?

Are these programs and results in conflict with ordinary decision theory? That's the issue here - whether we need an alternative to "causal decision theory".

Comment author: 28 September 2010 12:14:07PM *  2 points [-]

It's a logical distinction, not an empirical one. Whoever you are, you are someone in particular, not someone in general.

Can't parse.

Are these programs and results in conflict with ordinary decision theory?

Yes, UDT and CDT act differently in Newcomb's Problem, Parfit's Hitchhiker, symmetric PD and the like. (We currently formalize such problems along these lines.) But that seems to be obvious, maybe you were asking about something else?

Comment author: 29 September 2010 01:50:02AM 0 points [-]

Can't parse.

Even if there are infinitely many subjective copies of you in the multiverse, it's a matter of logic that this particular you is just one of them. You don't get to say "I am all of them". You-in-this-world are only in this world, by definition, even if you don't know exactly which world this is.

Are these programs and results in conflict with ordinary decision theory?

Yes, UDT and CDT act differently in Newcomb's Problem, Parfit's Hitchhiker, symmetric PD and the like.

Parfit's Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won't keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.

I also don't believe that a new decision theory will consistently do better than CDT on PD. If you cooperate "too much", if you have biases towards cooperation, you will be exploited in other settings. It's a sort of no-free-lunch principle.

Comment author: 29 September 2010 04:00:52AM 4 points [-]

Parfit's Hitchhiker seems like a pretty ridiculous reason to abandon CDT. The guy will leave you to die because he knows you won't keep your word. If you know that, and you are capable of sincerely committing in advance to give him the money when you reach the town, then making that sincere commitment is the thing to do, and CDT should say as much.

It should, but it doesn't. If you get a ride to town, CDT tells you to break your promise and stiff the guy. So in order to sincerely commit yourself, you'd want to modify yourself to become an agent that follows CDT in all cases except when deciding whether to pay the guy in the end. So, strictly speaking, you aren't a CDT agent anymore. What we want is a decision theory that won't try to become something else.

I also don't believe that a new decision theory will consistently do better than CDT on PD. If you cooperate "too much", if you have biases towards cooperation, you will be exploited in other settings. It's a sort of no-free-lunch principle.

CDT always defects in one-shot PD, right? But it's obvious that you should cooperate with an exact copy of yourself. So CDT plus cooperating with exact copies of yourself is strictly superior to CDT in PD.
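The "exact copy" point can be sketched numerically. A minimal sketch, assuming the standard PD payoff ordering T=5 > R=3 > P=1 > S=0 (illustrative values, not from the thread):

```python
# One-shot Prisoner's Dilemma payoffs for the row player, using the
# conventional ordering: temptation > reward > punishment > sucker.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def payoff_vs_exact_copy(my_choice):
    # An exact copy running the same deterministic algorithm makes the
    # same choice, so only the diagonal outcomes (C,C) and (D,D) are
    # reachable; the off-diagonal outcomes CDT reasons about can't occur.
    return PAYOFF[(my_choice, my_choice)]

# Cooperating with an exact copy strictly beats defecting against one.
assert payoff_vs_exact_copy("C") > payoff_vs_exact_copy("D")
```

CDT treats the copy's choice as causally independent and so defects; recognizing that the two choices are logically linked is what makes "CDT plus cooperating with exact copies" strictly better here.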

Comment author: 29 September 2010 04:21:53AM 0 points [-]

I consider it debatable whether these amendments to naive CDT - CDT plus keeping a commitment, CDT plus cooperating with yourself - really constitute a new decision theory. They arise from reasoning about the situation just a little further, rather than importing a whole new method of thought. Do TDT or UDT have a fundamentally different starting point to CDT?

Comment author: 29 September 2010 08:02:09AM 1 point [-]

Well, I'm not sure what you're asking here. The problem that needs solving is this: We don't have a mathematical formalism that tells us what to do and which also satisfies a bunch of criteria (like one-boxing on Newcomb's problem, etc.) which attempt to capture the idea that "a good decision theory should win".

When we criticize classical CDT, we are actually criticizing the piece of math that can be translated as "do the thing that, if I-here-now did it, would cause the best possible situation to come about". There are lots of problems with this. "Reasoning about the situation" ought to go into formulating a new piece of math that has no problems. All we want is this new piece of math.
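The two pieces of math being contrasted can be sketched on Newcomb's Problem. A rough sketch with illustrative numbers (predictor accuracy 0.99, $1M opaque box, $1k transparent box; all assumed for the example):

```python
p = 0.99  # assumed predictor accuracy

def eu_conditional(action):
    # Evaluate EU conditional on the decision algorithm's output:
    # your action is evidence about what the predictor predicted.
    if action == "one-box":
        return p * 1_000_000
    return (1 - p) * 1_000_000 + 1_000

def eu_causal(action, prob_box_full):
    # The CDT reading: the box contents are causally independent of the
    # act, so they're held fixed at some belief prob_box_full, and
    # two-boxing dominates no matter what that belief is.
    base = prob_box_full * 1_000_000
    return base + (1_000 if action == "two-box" else 0)

# CDT two-boxes for any fixed belief about the box...
assert eu_causal("two-box", 0.5) > eu_causal("one-box", 0.5)
# ...while conditioning on the output recommends one-boxing.
assert eu_conditional("one-box") > eu_conditional("two-box")
```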

Comment author: 29 September 2010 04:59:56AM 0 points [-]

Err... the 'C'? 'Causal'.

Comment author: 29 September 2010 05:06:48AM 1 point [-]

I also don't believe that a new decision theory will consistently do better than CDT on PD. If you cooperate "too much", if you have biases towards cooperation, you will be exploited in other settings. It's a sort of no-free-lunch principle.

Only settings that directly reward stupidity (capricious Omega, etc). A sane DT will cooperate whenever that is most likely to give you the best result but not a single time more.

It is even possible to consider (completely arbitrary) situations in which TDT will defect while CDT will cooperate. There isn't an inherent bias in TDT itself (just some proponents.)

Comment author: 29 September 2010 08:12:07AM 1 point [-]

Can you give an example? (situation where CDT cooperates but TDT defects)

Comment author: 29 September 2010 07:23:15PM *  1 point [-]

Do you mean for PD variants?

I don't know what your method is for determining what cooperation maps to for the general case, but I believe this non-PD example works: costly punishment. Do you punish a wrongdoer in a case where the costs of administering the punishment exceed the benefits (including savings from future deterrence of others), and there is no other punishment option?

I claim the following:

1) Defection -> punish
2) Cooperation -> not punish
3) CDT reasons that punishing will cause lower utility on net, so it does not punish.
4) TDT reasons that "If this algorithm did not output 'punish', the probability of this crime having happened would be higher; thus, for the action 'not punish', the crime's badness carries a higher weighting than it does for the action 'punish'." (note: does not necessarily imply punish)
5) There exist values for the crime's badness, punishment costs, and criminal response to expected punishment for which TDT punishes, while CDT always doesn't.
6) In cases where TDT differs from CDT, the former has the higher EU.

Naturally, you can save CDT by positing a utility function that values punishing of wrongdoers ("sense of justice"), but we're assuming the UF is fixed -- changing it is cheating.

What do you think of this example?
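Claims 3 through 5 can be sketched with concrete numbers (all hypothetical: B = badness of the crime, c = punishment cost, and a deterrence effect where a known disposition to punish lowers the crime's probability):

```python
B, c = 100.0, 10.0                       # hypothetical badness and punishment cost
base_prob, deterred_prob = 0.5, 0.1      # hypothetical crime probabilities

def eu_cdt(punish):
    # CDT: the crime has already happened and can't be undone;
    # punishing only adds the cost c, so CDT never punishes.
    return -B - (c if punish else 0)

def eu_tdt(punish):
    # TDT-style: the algorithm's output also bears on how likely the
    # crime was to happen in the first place, so the crime's badness
    # is weighted by a probability that depends on the action.
    crime_prob = deterred_prob if punish else base_prob
    return crime_prob * (-B - (c if punish else 0))

assert eu_cdt(False) > eu_cdt(True)   # claim 3: CDT doesn't punish
assert eu_tdt(True) > eu_tdt(False)   # claim 5: with these numbers, TDT does
```

With other parameter values (say, deterred_prob close to base_prob) TDT also declines to punish, matching the note in claim 4 that the reasoning does not necessarily imply punishing.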

Comment author: 09 October 2010 02:22:24AM 0 points [-]

The forthcoming essay by me that is mentioned here is actually online now, and is a two-part series, but I should say that it supports an evidential approach to decision theory (with some fairly major qualifications). The two essays in the series are as follows:

Almond, P., 2010. On Causation and Correlation – Part 1: Evidential decision theory is correct. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation1.pdf or http://www.paul-almond.com/Correlation1.doc [Accessed 9 October 2010].

Almond, P., 2010. On Causation and Correlation – Part 2: Implications of Evidential Decision Theory. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation2.pdf or http://www.paul-almond.com/Correlation2.doc [Accessed 9 October 2010].

Comment author: 28 September 2010 05:59:11AM 8 points [-]

People understand aspects of life that they don't have good words for. Math could supply them with some names for these concepts.

Knowledge is a (pre)sheaf

Comment author: 28 September 2010 08:47:26AM 2 points [-]

I often wish I could use the terms "transitive" "equivalence relation" "partition" and "subset", and have people understand their technical meanings.

Comment author: 29 September 2010 11:56:13PM 1 point [-]

It is certainly worth considering the possibility that there is no global element in the Universal Sheaf of Theories.

This sounds like a blatant map/territory confusion. Maybe we haven't found a single theory that applies to all domains. That is, we may have to use multiple inconsistent maps, at least for now. But the territory doesn't refer to our maps to figure out what to do. The territory just does its thing.

Comment author: 28 September 2010 08:54:43PM *  0 points [-]

Pardon the self-promotion, but the point that post makes is similar to the structure of understanding I outlined here. The sheaf model of knowledge is what I call a Level 2 understanding, and the level that scientists can't yet achieve for General Relativity and Quantum Mechanics.

Ordinary people go through life having different theories about love, religion, politics, when you kick a table it hurts your foot, and so on, and don't seem to worry a bit about whether the restriction maps are compatible ...

That's what I call a Level 1 understanding.

I probably could have created a better hierarchy if I had been familiar with the sheaf concept -- it sounds like an ideal ontology for an AI to have, since it facilitates regeneration of knowledge (Level 3) and consilience (Level 2).

Comment author: 28 September 2010 06:27:39AM *  0 points [-]

I like the idea, but he seems to be using some nonstandard terminology - IIRC, restriction maps still have to be compatible in a presheaf, no?

Edit: Or maybe he's just using "compatible" to mean "can be glued together".

Comment author: 28 September 2010 11:49:44PM *  7 points [-]

I'd like to remind everyone that I have continued to work on predictionbook.com, and now it's up to ~1800 predictions, and for those of you in a hurry, there are dozens/hundreds of interesting predictions coming due in the next year or 3: http://predictionbook.com/predictions/future

Remember, signing up for Intrade is hard and it's not profitable to wager on many of its long-term contracts, but PB is absolutely free!

(One thing is for sure: with ~443 predictions registered, I should eventually be pretty well-calibrated...)

EDIT: Hanson on the value of track records: http://www.overcomingbias.com/2010/09/track-records.html

Comment author: 29 September 2010 12:05:11AM 0 points [-]

Also it would be nice if you had a small amount of additional explanation of how to use the interface.

Comment author: 29 September 2010 12:40:10AM *  0 points [-]

Well, it strikes me as pretty intuitive. The only part that seems to trip people up is that an x% prediction for something, where x is 0-49%, gets recorded as a (100-x)% prediction against it.

Comment author: 29 September 2010 12:42:12AM 1 point [-]

Right, that was the main issue. I had to think for a second to figure out how to put that in.

Comment author: 29 September 2010 05:38:10AM 0 points [-]

Unless something has changed, a UI problem is that people often "judge" when they should "estimate" or "wager" or something.

Comment author: 29 September 2010 12:25:45PM 0 points [-]

That explains some judgements, but there is a line of text below the judge buttons specifically to forestall that; I'm not sure what easy solutions there are to that.

Comment author: 29 September 2010 12:01:05AM 0 points [-]

A large number of these are things where my confidence on them is much too low to bet. Almost anyone already willing to bet on them would likely have a lot more of thought and relevant data. Still, some of these look interesting enough to maybe play with.

Comment author: 29 September 2010 12:35:03AM 0 points [-]

Well, isn't that all the more reason to use PB rather than Intrade?

And your confidence may be too low. For example, looking at my own profile, I am so far significantly underconfident, which is a problem to be fixed just like overconfidence.

Comment author: [deleted] 28 September 2010 08:35:38PM *  6 points [-]

Have people discussed the field complementary to the ugh field? We might call these "mmm fields".

An "mmm field" could be thought of as a mental cluster that has a tantalizing glow of positive affect. One subtly flinches toward such a cluster whenever possible, which results in one getting "stuck" there and cycling through the associated mental sequences.

Among other things, it could be used to describe those troublesome wireheading patterns. I'm personally interested in using it in the post I'm writing on meditation.

The name is a nod to pjeby's "mmm test".

Comment author: 28 September 2010 11:33:43PM *  4 points [-]
Comment author: 28 September 2010 06:23:51AM *  6 points [-]

There is a new discussion section on LessWrong.

This is to:

• provide a place you can post with lower karma consequences than the main site

• provide a place you can discuss things you think are not worthy of the main site

• provide a place you can work with the community to tune something up until it's ready for the main site

• give you guys an opportunity to make up your own uses for this part of the site.

(There's a link to there at the top right, under the banner)

Comment author: 28 September 2010 12:15:10PM *  2 points [-]

Someone should really top-level post that in the next 12 hours or so. It could be you, reader of this comment.

Comment author: 28 September 2010 01:04:11PM *  1 point [-]

Oh, OK, I posted one.

Comment author: 28 September 2010 01:48:55PM 1 point [-]

Is it supposed to replace the open threads?

Comment author: 28 September 2010 01:51:35PM *  0 points [-]

That's what I wondered too; according to Kevin:

The community might converge on that but with how things work around here both will probably be around for a while.

I imagine it will replace the open thread; once you have a discussion area I don't see what's left in the open thread :P

Comment author: 29 September 2010 10:01:08AM 0 points [-]

I would like to see the discussion section (or a more robust set of subreddits) replace the open threads, but let's try it out for a while and wait to see if we can form a consensus.

Comment author: 29 September 2010 10:39:23AM *  24 points [-]

I recently read an anecdote (so far unconfirmed) that Ataturk tried to ban the veil in Turkey, but got zero compliance from religious people, who simply ignored the law. Instead of cracking down, Ataturk decreed a second law: all prostitutes were required to wear a veil. The general custom of veil-wearing stopped immediately.

This might be the most impressive display of rationality I've ever heard of in a world leader.

Comment author: 01 October 2010 08:15:21AM *  8 points [-]

As a Turk, I strongly believe that story is fictional.

Where and how was this ban issued? Can you give more details?

You may be hearing some fictional story based on his social reforms.

See here

And the veil, currently banned in public universities, is still very much a hot button issue. Also, a large segment of the Turkish population still wears the veil. The country is deeply divided over this issue.

Comment author: 01 October 2010 11:28:56AM 8 points [-]

Now that I think about it, believing the story requires ignoring how strongly many people who follow modesty rules are apt to be attached to them.

If a western ruler announced that prostitutes were required to cover their breasts, do you think respectable women would start going topless?

Comment author: 02 October 2010 12:48:57AM 2 points [-]

Your wikipedia link claims that the fez & turban were banned in 1925 and the veil and (again!) turban in 1934. Do you know these laws? Could you confirm that the text matches wikipedia's description? or not - perhaps these are the famous laws that cover universities? (I can't follow google's translation) How does this fit in your understanding of history?

While Yvain's story doesn't sound terribly plausible to me, deducing law from the present state is tricky.

Comment author: 02 October 2010 05:48:15AM *  1 point [-]

Do you know these laws?

The laws I know ban wearing the veil/turban (I mean the same thing by these two words) in government-related places - you can't wear it in the workplace if you are working for the government, can't wear it in public universities, can't wear it in the TBMM (the Turkish congress), etc. etc... You are free to wear it on the street or in the workplace if you are working for a private company. I may be mistaken - the ban covering the universities is the most famous and contentious.

Could you confirm that the text matches wikipedia's description?

Which text? I've not read the wikipedia entry - just linked to it, thinking it would repeat what I already know.

How does this fit in your understanding of history?

You mean Yvain's story? It makes no sense. In the 1920s, Turkey was largely being rebuilt after WW1 and the Turkish War of Independence. The legal system/constitution was being overhauled. The Arabic script was replaced with the Latin script. It is said that in one day, the entire country became illiterate - i.e. nobody understood the new alphabet at first.

With so much going on, I find it funny that Atatürk would pause and decree laws about prostitution. Consider me biased, but I think Atatürk had more urgent things to attend to.

Comment author: 02 October 2010 10:27:37AM *  2 points [-]

Here is the 1925 law which wikipedia describes as banning men's hats. And here the 1934 law banning the veil and the (men's?) turban.

Yes, I don't think Yvain's story about prostitution is correct, but you seem to also claim that since many people wear veils, they must not be banned. I would not be at all surprised if there has been a law for 70 years banning them and even that no one talks about this law.

Comment author: 29 September 2010 12:10:08PM *  1 point [-]

I don't get it, why would prostitutes be more eager to obey the law? Especially seeing as their professional success depends on their perceived beauty?

Comment author: 29 September 2010 12:12:08PM 10 points [-]

I believe the point is that if prostitutes are required to wear veils, then whether they do or not, the veil is immediately stigmatized.

Comment author: 29 September 2010 12:17:34PM 1 point [-]

Thanks, I'd missed that.

Comment author: 01 October 2010 12:27:33AM 5 points [-]

I'm working on a top-level post about AI (you know what they say, write what you don't know), and I'm wondering about the following question:

Can we think of computer technologies which were only developed at a time when the processing power they needed was insignificant?

That is, many technologies are really slow when first developed, until a few cycles of Moore's Law make them able to run faster than humans can input new requests. But is there anything really good that was only thought of at a time when processor speed was well above that threshold, or anything where the final engineering hurdle was something far removed from computing power?

Comment author: 01 October 2010 10:10:29PM *  6 points [-]

To clarify the question a bit, I would consider dividing software technologies into three categories:

1. Technologies developed while the necessary computing resources were still unavailable or too expensive, which flourished later when the resources became cheap enough. For example, Alan Turing famously devised a chess program which he could only run using paper and pencil.

2. Technologies that appeared very soon after the necessary computing resources became available and cheap enough, suggesting that the basic idea was fairly straightforward after all, and it was only necessary to give smart people some palpable incentive to think about it. Examples such as the first browsers and spreadsheets would be in this category.

3. Technologies for which the necessary computing resources had been cheaply available for a long time before someone finally came up with them, suggesting an extraordinary intellectual breakthrough. I cannot think of any such examples, and it doesn't seem like anyone else in this thread can either.

This reinforces my cynical view of software technologies in general, namely that their entire progress in the last few decades has been embarrassingly limited considering the amount of intellectual power poured into them.

Here's an interesting related thought experiment that reinforces my cynicism further. Suppose that some miraculous breakthrough in 1970 enabled the production of computers equally cheap, powerful, compact, and easily networked as we have today. What do we have today in terms of software technology that the inhabitants of this hypothetical world wouldn't have by 1980?

Comment author: 28 June 2011 11:17:53PM *  1 point [-]

Chess had steady algorithmic improvements on the same order as the gains from hardware: Deep Fritz and Rybka both got to better performance per FLOP than Deep Blue, etc. More generally, I think that looking at quantitative metrics (as opposed to whole new capabilities) like game performance, face recognition, image processing, etc, will often give you independent hardware and software components to growth.

Comment author: 02 October 2010 08:20:02PM 0 points [-]

What do we have today in terms of software technology that the inhabitants of this hypothetical world wouldn't have by 1980?

Well, I'd guess our video game library is larger...

Comment author: 01 October 2010 10:15:23AM *  3 points [-]

An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published.

This kind of thing is particularly worrying in the context of AI, which may well need to be exactly right the first time!
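For reference, the famous latent bug was in the midpoint computation: in languages with fixed-width integers, `(low + high) / 2` can overflow even when both indices are individually valid. A sketch of the safe form (Python's unbounded ints don't overflow, but the idiom is the same):

```python
def binary_search(xs, target):
    """Return an index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        # lo + (hi - lo) // 2 rather than (lo + hi) // 2: equivalent in
        # Python, but the latter overflows in 32-bit-integer languages
        # once lo + hi exceeds 2**31 - 1.
        mid = lo + (hi - lo) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

This version of the bug sat in widely used library implementations for years, which is exactly the "decades of latent error" worry raised above.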

Comment author: 01 October 2010 04:31:26PM 2 points [-]

An example might be binary search, which is pretty trivial conceptually but which took many years for a correct, bug-free algorithm to be published.

That an incorrect algorithm persisted for decades is rather different from the claim that no correct algorithm was published. This bug only applies to low-level languages that treat computer words as numbers and pray that there is no overflow.

Comment author: 01 October 2010 11:35:32PM *  0 points [-]

"in section 6.2.1 of his 'Sorting and Searching,' Knuth points out while the first binary search was published in 1946, the first published binary search without bugs did not appear until 1962" (Programming Pearls 2nd edition, "Writing Correct Programs", section 4.1, p. 34).

Besides, it's not like higher-level languages are immune to subtle bugs, though in general they're less susceptible to them.

edit: Also, if you're working on something as crucial as FAI, can you trust the implementation of any existing higher-level language to be completely bug-free? It seems to me you'd have to write and formally verify your own language, unless you could somehow come up with a design robust enough that even serious bugs in the underlying implementation wouldn't break it.

Comment author: 01 October 2010 02:33:26PM 0 points [-]

That's terrifying in the context of get-it-right-the-first-time AI.

I hope there will be some discussion of why people think it's possible to get around that sort of unknown unknown, or at best, barely niggling on the edge of consciousness unknown.

Comment author: 03 October 2010 08:13:40AM 2 points [-]

What about BitTorrent or P2P file transfers in general? Anonymous peer to peer seems to have not emerged until 2001, or peer to peer in general until November 1998. That's a bit too far back for me to have any idea what computers were like but peer to peer file transfer is an amazing software development which could be implemented on any computers that can transfer files--at least as early as 1977.

Comment author: 01 October 2010 03:15:38PM *  2 points [-]

Would the first spreadsheet (VisiCalc) or the first browser (Mosaic) fit your bill? As far as I know, they didn't face difficult hardware bottlenecks when they appeared.

Comment author: 01 October 2010 04:15:12PM *  0 points [-]

VisiCalc is a great example, but Mosaic was hardly the first browser. Nelson and Engelbart certainly had hypertext browsers before. I'm not entirely sure that they had graphics, but I think so. Have you seen Engelbart's 1968 demo? (ETA: I'm not sure that Engelbart's demo counts, but even if it doesn't, he clearly cared about both hypertext and graphics, so he probably did it in the following decade or two)

Comment author: 01 October 2010 04:33:04PM 1 point [-]

Speaking of Engelbart, how about the mouse as an example? Did that take a nontrivial amount of the computing power when first demoed, or not?

Comment author: 01 October 2010 04:54:01PM *  1 point [-]

I'd guess that the computing power devoted to the mouse in a graphical environment is always small compared to that devoted to the screen, and thus it should be a good example. (if one used a mouse to manipulate an array of characters, as sounds like a good idea for a spreadsheet, the mouse might be relatively expensive, but that's not what Engelbart did in '68)

The mouse and the browser (and magfrump's similar example of AIM) are probably examples of a phenomenon that generalizes your original question, where the bottleneck to deployment was something other than computer power.

Comment author: 01 October 2010 09:23:36AM 2 points [-]

How significant of a technology are you thinking of?

For example, I would guess that most video game emulators came about when computers were much faster than the games they were emulating--if it weren't the case that fast computers were cheaper than the emulated consoles emulators wouldn't be very popular. Further, I can guarantee you that computers easily have more power than video game consoles, so any emulator produced of the latest generation of console was written when computers had far more power than necessary.

So: Does a new emulator count? It's a specific technology that is developed in a fast environment. Does an old emulator count? Emulators in general aren't new technology at all. Does an instant messenger count? Predecessors existed in times when text content was a big deal, but I would be mildly surprised to hear that the original AIM (or whatever the first instant messenger program was) was created at a time when text-over-the-internet was a big stress on computers.

Comment author: 01 October 2010 07:04:22PM 1 point [-]

Relatedly, I wish I could remember what I read recently about comparing the performance of a 2000s algorithm for a particular problem (factoring?) on 1980s hardware to a 1980s algorithm on 2000s hardware. It might've been on Accelerating Future.

Comment author: 01 October 2010 10:05:54PM 1 point [-]

There are lots of examples of improved algorithms, such as your example of factoring, the similarly timed example of chess algorithms, and the much earlier merge sort and simplex algorithms. But in none of these cases did the algorithm completely solve the problem; there are always harder instances that we care about. This is particularly clear with factoring, which is adversarial. (you might count human chess as a solved problem, though)

Comment author: 18 January 2011 01:50:43AM 4 points [-]

I used to post here on LessWrong and left for various reasons. Someone recognized my name earlier today from my activity here and I just so happened to have thought of LessWrong during a conversation I had with a friend of mine. The double hit was enough to make me curious.

So how's it going? I am just stopping by to say a quick, "Hello." It seems that Open Threads are no longer the way things work but I didn't notice anything else relevant in the Recent Posts. The community seems to be alive and thriving. Congratulations. :)

Comment author: 18 January 2011 02:45:57AM 2 points [-]

LW now has a discussion section that serves as a permanent open thread. The link is at the top right, next to the link to the wiki.

Comment author: 18 January 2011 03:41:38AM 1 point [-]

Aha! Thank you much! I figured something was up. :) I won't bother copying this over there, however.

Also, apparently there are some spammers about.

Comment author: 18 January 2011 03:32:24AM 1 point [-]

LW now has a discussion section that serves as a permanent open thread.

And within the discussion thread there are open threads for those things that are too small to warrant even a discussion post.

Comment author: 18 January 2011 03:48:18AM 1 point [-]

And then there are nested comment threads within the open threads within the discussion thread, for things that are... oh, never mind.

Comment author: 01 October 2010 05:37:04PM *  4 points [-]

Nate Silver has just begun a new series of posts on 538 addressing the conflict between his model numbers and intuition - the first part, The Uncanny Accuracy of Polling Averages*, Part I: Why You Can't Trust Your Gut, and second part, The Uncanny Accuracy of Polling Averages*, Part 2: What the Numbers Say, are up.

A money quote for Less Wrong users who remember The Bottom Line:

Politicians — the ones worth their salt, anyway — are exceptionally skilled at making believers out of people, and they'll try to make a believer out of you. Some of the time, they'll make a strong enough argument to persuade even the most seasoned observers. But a much smaller fraction of the time will they actually turn out to be right. That's what the data says, and it says so pretty clearly.

Edit: If people prefer, I can cross-post to the Discussion section.

Comment author: 28 September 2010 09:05:46PM *  4 points [-]

"Incredibly Depressing Mega Millions Lottery Simulator!" â€”Â this may be helpful to show to people who don't quite grasp probability theory well enough to break habits like playing the lottery and other forms of gambling.

Comment author: 28 September 2010 11:29:35PM 1 point [-]

"In the 156845 times this simulation has run, players have won \$1686353 And by won I mean they have won back \$1686353 of the \$156845 they spent (1075%)."

"In the 590873 times this simulation has run, players have won \$2761902 And by won I mean they have won back \$2761902 of the \$590873 they spent (467%)."

"In the 842587 times this simulation has run, players have won \$2788774 And by won I mean they have won back \$2788774 of the \$842587 they spent (330%)."

This part seems to fluctuate pretty wildly. But it's a very cool and intuitive way to show people the low chance of them winning the lottery.

Comment author: 29 September 2010 03:10:02PM 1 point [-]

Weird, why would it be showing you that? That's a message telling people they can at least triple their money playing the lottery. Mine instead shows the much more expected

"In the 3986493 times this simulation has run, players have won \$180090 And by won I mean they have won back \$180090 of the \$3986493 they spent (4%)."

Comment author: 29 September 2010 03:44:21PM *  0 points [-]

I'm still getting weird results on both Chrome and Firefox. Did you try more than once? Could you try again now?

Could someone else provide results?

Comment author: 29 September 2010 07:03:04PM 0 points [-]

In the 419991 times this simulation has run, players have won $1811922 And by won I mean they have won back $1811922 of the $419991 they spent (431%).

Comment author: 29 September 2010 06:59:36PM 0 points [-]

"In the 5617525 times this simulation has run, players have won \$664073 And by won I mean they have won back \$664073 of the \$5617525 they spent (11%)."

Either it's buggy or there is some tampering with data going on.

Also, several Redditors claim to have won - maybe the simulator is just poorly programmed.

Comment author: 29 September 2010 11:46:42PM 3 points [-]

It's an integer overflow - it wraps around at either 2^31, 2^32/100, or 2^32. I wasn't patient enough to refresh the page enough times to figure out which.
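A minimal sketch of the 2^31 case, assuming the running total is stored in a signed 32-bit integer (as it would be if, e.g., the page's JavaScript forced the value through a bitwise operation):

```python
def wrap32_signed(n):
    # Reduce n into the signed 32-bit range [-2**31, 2**31 - 1],
    # reproducing how a 32-bit int wraps on overflow.
    return (n + 2**31) % 2**32 - 2**31

# The largest representable total is 2**31 - 1 (about 2.1 billion);
# one dollar past that, the stored total snaps negative, which would
# produce nonsense "winnings" percentages like the ones quoted above.
assert wrap32_signed(2**31 - 1) == 2**31 - 1
assert wrap32_signed(2**31) == -2**31
```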

Comment author: 09 November 2010 12:14:49AM 3 points [-]

One thing people often seem to miss on LW, when discussing cryonics, is the cost of the operation. People often seem to operate under the illusion that if the cost of the process is, say, $50 000, you don't need to worry about it that much, since you can get insurance and thus pay only a few hundred a year or so. This has made me wonder, since insurance most likely makes it cost more, not less, and only offers protection in the case where you die a lot earlier than the insurance company would predict, which, you know, is unlikely.

This is combined with the fact that even if you pay that cost, $50 000, you still are not guaranteed to be awakened in a better future. If the chance is 1/10, your expected utility is the same as saving your life with a surefire method for $500 000, and the chance is by most estimates lower than 1/10.
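The implied arithmetic, using the comment's own numbers (the 1/10 revival probability is the commenter's assumption, not an established figure):

```python
cost = 50_000        # upfront cryonics cost from the comment
p_revival = 0.10     # assumed probability of being revived

# Paying $50,000 for a 10% chance of saving your life is equivalent,
# in expectation, to paying this much for a surefire method.
cost_per_expected_life = cost / p_revival
assert cost_per_expected_life == 500_000
```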

This is just to present my reasons for not signing up for cryonics even if I happened to live in a country where it would be realistically possible. Insurance doesn't make the expected utility of signing up any better, and even if you valued your life at more than $50 000, it would be at least problematic to say whether that was really the best you could do with all that money. Say you spent the same amount on food that was a bit healthier than what you're eating now, or paid someone to clean your house, avoiding that stress and thus increasing your expected lifespan - it could easily be argued that you're using that money better than our hypothetical cryonicist. And those are hardly the only or the best uses your $50 000 could have.

Of course, things get more difficult if you have only around 20 years or less to live. Still, I'm not sure that going through something so unlikely to help you, and with such huge costs, is the best option.

Comment author: 09 November 2010 04:39:06AM 12 points [-]

Don't forget that if it works, you probably get immortality too. If you were already immortal, would you be willing to become mortal for $500 000?

Comment author: 09 November 2010 07:51:03PM 1 point [-]

I really have to remember that frame for cryonics.

Comment author: 10 November 2010 02:24:33AM 0 points [-]

Don't forget that if it works, you probably get immortality too. If you were already immortal, would you be willing to become mortal for \$500 000?

I'm not sure the scenario can be reversed just like that. If immortality is possible in that world, you could just use part of the \$500 000 to buy your way back to immortality, and I'm a bit unsure how ruling that out affects our hypothetical situation and how it compares to the original dilemma.

But this would be quite close to the original for someone whose lifespan is almost over, so that the money doesn't have time to change anything else for him. Still, one point that makes me wonder: comparatively, we'd expect \$500 000 to be worth much less in a world where immortality is commonplace enough for you to have it, whereas now it's a great deal of money. Do we assume that in this new world \$500 000 has the same comparative edge it would have in our world, or in other words, that the amount of money and the number of people in the world remain the same?

Comment author: 09 November 2010 12:53:06AM 2 points [-]

The point of the insurance isn't to help you. Insurance is used because there were problems with the early cryonics organizations: people were prepped for cryonics with the money supposedly going to come from their estates, and then the money never materialized. The insurance makes sure that the organizations get enough funds. It doesn't make things less expensive for the person being preserved.

Comment author: 09 November 2010 01:08:21AM 1 point [-]

Sure, but I've gotten the impression that if someone mentions they're not sure cryonics is worth the money, people reply that "it's actually only <sum x> a year", which is the fallacy I wanted to point out.

Comment author: 09 November 2010 03:07:12AM 1 point [-]

I don't think that specific point is a fallacy. In that context, one just needs to remember that utility does not scale linearly with the amount of money.
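The parent's point can be illustrated concretely: insurance has negative expected *monetary* value, yet under a concave utility function (logarithmic here, as one common choice) paying a small certain premium to avoid a small chance of a large loss can still raise expected *utility*. All the numbers below are invented for illustration:

```python
import math

# With concave utility, a certain premium can beat a gamble of the same
# (or even better) expected monetary value. Premium 3,000 exceeds the
# expected loss p * loss = 0.05 * 50,000 = 2,500, so the insurer profits
# in expectation -- yet insuring still wins in expected log-utility.

def expected_log_utility(wealth, loss, p_loss, premium, insured):
    if insured:
        return math.log(wealth - premium)  # certain outcome
    return (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)

w, loss, p, premium = 100_000, 50_000, 0.05, 3_000
print(expected_log_utility(w, loss, p, premium, insured=True) >
      expected_log_utility(w, loss, p, premium, insured=False))  # True
```

So "insurance makes it cost more in expectation" and "insurance can still be rational" are compatible, which is the nonlinearity the parent is pointing at.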

Comment author: 02 October 2010 02:50:20PM *  3 points [-]

I seem to be unable to downvote. I don't downvote all that often, so there's no way I've used up my downvotes allowance. Is this because I'm trying to access LW from Austria, or for some other reason (e.g., is downvoting broken overall)? I am unable to downvote in either of the two browsers I tried.

Comment author: 02 October 2010 02:52:57PM 0 points [-]

I can't downvote either. I don't think I've used up my allotment.

Comment author: 02 October 2010 04:37:05PM 0 points [-]

Downvoting all these because the bug has been reported an excessive number of times. Wait, no I'm not because... downvoting myself for consistency... no... damn.

Comment author: 02 October 2010 04:58:43PM -1 points [-]

I can downvote. That was a test and you were randomly selected. Sorry. Using firefox on windows xp.

Comment author: 02 October 2010 06:14:44PM 0 points [-]

Louie pushed the fix to production last night and it looks like the update script triggered sometime between these two posts

Comment author: 02 October 2010 08:50:52PM 2 points [-]

Yes. Thank goodness I've fixed downvoting. It's my favorite part of this site.

Comment author: 02 October 2010 02:58:12PM *  0 points [-]

It seems that for three days now, downvotes don't work on Less Wrong.

Comment author: 30 September 2010 01:18:33PM *  3 points [-]

"My research team and I have found that highly skilled golfers are more likely to hole a simple 3-foot putt when we give them the tools to stop analyzing their shot, to stop thinking," Beilock said. "Highly practiced putts run better when you don't try to control every aspect of performance." Even a simple trick of singing helps prevent portions of the brain that might interfere with performance from taking over, Beilock's research shows.

In one study, researchers gave standardized tests to black and white students, both before and after President Obama was elected. Black test takers performed worse than white test takers before the election. Immediately after Obama's election, however, blacks' performance improved so much that their scores were nearly equal with whites'. When black students can overcome the worries brought on by stereotypes, because they see someone like President Obama who directly counters myths about racial variation in intelligence, their performance improves.

Beilock and her colleagues also have shown that when first-grade girls believe that boys are better than girls at math, they perform more poorly on math tests. One big source of this belief? The girls' female teachers. It turns out that elementary school teachers are often highly anxious about their own math abilities, and this anxiety is modeled from teacher to student. When the teachers serve as positive role models in math, their male and female students perform equally well.

In tests in her lab, Beilock and her research team gave people with no meditation experience 10 minutes of meditation training before they took a high-stakes test. Students with meditation preparation scored 87, or B+, versus the 82 or B- score of those without meditation training. This difference in performance occurred despite the fact that all students were of equal ability.

Comment author: [deleted] 30 September 2010 05:01:54PM 2 points [-]

Interestingly, they claim that choking is due to poor use of working memory:

Talented people often have the most working memory, but when worries creep up, the working memory they normally use to succeed becomes overburdened. People lose the brain power necessary to excel.

Comment author: 30 September 2010 10:24:29PM 1 point [-]

That is an interesting idea. But there are motor programs that don't use verbal working memory. Making conscious adjustments (different from how the program was practiced) could interfere, though.

I think physiological panic/fear has to be a large part of most choke experiences, distinct from any thoughts interfering w/ working memory.

I've also heard of people choking especially because they're worried that their social status may be threatened if they're too good or too bad at something. I don't know if that acts through a different mechanism; I'm just saying that such concerns seem especially distorting on performance.

Comment author: 02 October 2010 02:20:17AM 1 point [-]

I'd like to see someone compare college students' performance on important tests after, say, 0--3 drinks. If test anxiety hurts people's scores as much as it seems to, then perhaps cheap beer will be used as a nootropic.

(A quick check on Google Scholar doesn't show any studies that have been done on this, which isn't surprising.)

Comment author: 02 October 2010 11:20:57AM 0 points [-]

It might be worth checking, though it would surprise me if it works. I'm betting that if alcohol improves test performance, college students would have discovered it long ago.

Thanks for the link-- I didn't realize test anxiety was that common or that there were such effective methods of treating it.

Comment author: 28 September 2010 03:04:03PM 3 points [-]

The community doesn't seem to have resolved this. Should I model willpower as a muscle or as a battery? Or should I abandon both and model myself in terms of incentives/signaling?

If you fall into either camp, why do you believe what you believe? Links to studies where scientists used a particular framework obviously don't count. Is there any evidence that constantly challenging your willpower makes it stronger in the long run?

Comment author: 28 September 2010 03:13:27PM 6 points [-]

Links to studies where scientists used a particular framework obviously don't count. Is there any evidence that constantly challenging your willpower makes it stronger in the long run?

Well. Now I don't know what to say.

Comment author: 28 September 2010 10:05:22PM *  5 points [-]

Should I model willpower as a muscle or as a battery?

I'm firmly in the "muscle" camp. Here's why:

When you use a muscle a lot, it tires, and you need to rest for a while before exercising it again. This is the part that resembles a battery, giving rise to that model. The difference is that, after going through this process several times, the muscle's capacity for use is greater, whereas the battery's would be smaller. In my experience, exercising willpower carefully makes it easier to use it in the future. As a fellow once said, getting better at skills is a skill, which you can get better at.

So, by that model, the problem with this oft-referenced comic is obvious. The author is doing the equivalent of going to the gym and lifting the heaviest weights she can for as long as she can stand, then going back the next day and trying to do the same thing. Of course she crashes. The way to get stronger is to push at the edge of your comfort zone only a little bit, keep doing it until it becomes comfortable, and then push a little more next time. Ask anyone who plays an instrument. You don't rush through the tricky section over and over and expect to learn it; you slow it down, break it up until it's only a little harder than what you've been doing, and then work up to play speed again. And what do they call the thing you build up that way? Muscle memory.

Comment author: 30 September 2010 10:30:06PM 0 points [-]

I lean toward battery. I'm unaware of any 'willpower muscle' evidence.

Comment author: 28 September 2010 01:10:52PM 3 points [-]

I wonder if there would be a market for rationalist counselors who would talk one-on-one with their "patients" as psychologists and social workers do today. You would probably need some impressive credentials to get started, such as a Ph.D. from a well-known school. I have been thinking about running advertisements to see if I could get anyone to pay me for rationalist services. As I'm an economist, I would stress my capacity to give financial advice. I would want some way of protecting myself from lawsuits, however, by purchasing insurance or working through some personal service organization.

Comment author: 28 September 2010 04:07:41PM 4 points [-]

In the discussion on Prices or Bindings, EY mentioned that it may help to organize as something that is legally a "church" (apparently there can be atheist churches) so that you can give a vow of secrecy that's stronger than psychologists can give.

Comment author: 28 September 2010 08:46:27PM 3 points [-]

Let's make them wear hooded robes and call them Confessors.

Comment author: 29 September 2010 12:32:11AM 1 point [-]

Basically the idea behind REBT is to correct irrational thoughts and point out errors in thinking. One of my professors, a clinician, said he had great success treating panic disorder with agoraphobia using REBT.

Comment author: 28 September 2010 06:47:02AM 3 points [-]

After This Discussion I made a private google group to discuss working together for profit.

Email me at james.andrix@gmail.com and I'll add you to the list.

Comment author: 02 October 2010 12:40:17AM 2 points [-]

A discussion of paperclip maximizers (linking here) has made the front page of reddit.

Comment author: 02 October 2010 02:05:44AM *  3 points [-]

It's interesting to look at people's arguments against paperclip maximizers. There seem to be two related categories that make up most of the objections:

1. People who can't imagine that a sufficiently intelligent being could be that different from us. One guy tried to claim that morality is universal so of course an artificial intelligence would share our values. Another said that a superintelligence would inevitably realize that its existence was pointless, as if there could possibly be some point other than maximizing the number of paperclips. Another claimed that morality is an "emergent phenomenon", but didn't explain what that actually meant, or how humanlike morality would emerge from a being whose only goal is paperclip maximization.

2. People who think of it as a dumb machine, more akin to a drill press than an alien god. Just put an off button on it! Or require that there be a human operator with an axe ready to cut the power lines to the computer.

What these objections have in common is that they assume the world consists of humanlike intelligences or dumb machines. It's unintuitive to imagine something that is both intelligent and profoundly alien.

Comment author: 02 October 2010 12:01:46PM 1 point [-]

I'd split the difference-- I don't think it's that hard to imagine an AI which has about as much loyalty to AIs as people have to people.

Really alien minds are naturally much harder to imagine. Clippy seems more like a damaged human than a thoroughly alien mind.

This may be a matter of assuming that minds would naturally have a complex mix of entangled goals, the way humans do. Even an FAI has two goals (Friendliness and increasing its intelligence) which may come into conflict.

Faint memory: an Alexis Gilliland cartoon of an automated bomber redirecting its target from a robot factory to a maternity ward.

Comment author: 02 October 2010 01:31:03PM *  2 points [-]

Even an FAI has two goals (Friendliness and increasing its intelligence) which may come into conflict.

No, just Friendliness. Increasing intelligence has no weight whatsoever as a terminal goal. Of course, an AI that did not increase its intelligence to a level at which it could do anything practical to aid me (or whatever the AI is Friendly to) is trivially not Friendly a posteriori.

Comment author: 02 October 2010 01:41:32PM 0 points [-]

That leads to an interesting question-- how would an FAI decide how much intelligence is enough?

Comment author: 02 October 2010 01:51:12PM *  2 points [-]

I don't know. It's supposed to be the smart one, not me. ;)

I'm hoping it goes something like:

• Predict the expected outcome of choosing to self improve some more.
• Predict the expected outcome of choosing not to self improve some more.
• Do the one that gives the best probability distribution of expected results.
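The three bullets above amount to a plain expected-utility comparison between two options. A minimal sketch, with made-up probabilities and utilities standing in for the AI's actual predictions:

```python
# Each option is a predicted distribution over outcomes: a list of
# (probability, utility) pairs. Pick whichever option has the higher
# expected utility, per the decision rule sketched in the comment.

def choose(outcomes_if_improve, outcomes_if_not):
    eu = lambda dist: sum(p * u for p, u in dist)
    return "self-improve" if eu(outcomes_if_improve) > eu(outcomes_if_not) else "stay"

# Hypothetical numbers: improving usually helps a lot, with some risk.
print(choose([(0.9, 100), (0.1, -50)], [(1.0, 60)]))  # self-improve
```

The hard part, of course, is producing those distributions in the first place, which is the question the parent was asking.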
Comment author: 01 October 2010 09:51:15AM 2 points [-]

I recently started graduate school in mathematics, and I have been having some difficulty focusing on studying. Reading through various posts on Akrasia and a few posts on Overcoming Bias (I don't remember which ones and they aren't directly relevant) I came to the (tepid) conclusion that I don't feel like I will gain status by being good at math.

Has there been discussion of or does anyone have ideas about how to raise the status of an activity in one's own mind?

The specifics of my situation make it difficult for me to find students and faculty to become accountable to in the short term, so something that would be internal would probably be ideal for me. I'm also interested in seeing a more general discussion.

Comment author: 01 October 2010 07:56:44PM 0 points [-]

Could you elaborate on what you mean by status and how you reached the belief that it is relevant to your studying?

While being a professor has a certain amount of status in the eyes of the outside world, I'm not sure one should attribute this status to "being good at math"; indeed, it would probably be easiest to transform math into academic prestige through some other field, like bioinformatics. An anecdote about status and the outside world: there is a U of Chicago professor who is bitter that his parents think it a state school.

By being good at math, you'll gain status in the eyes of mathematicians. I think most people have the opposite problem: being in a graduate program, surrounded by people who care about the subject, causes them to think the world cares about it.

Comment author: 03 October 2010 01:02:21AM *  1 point [-]

In the past, my mind made me obsessed with number theory. Since then, I have decided that studying number theory is something that I actually want to do.

More recently, the set of people who were happy when I was good at math (esp. my parents) have had less influence in my life, and people who were less happy about it (i.e. math classmates who weren't going to grad school and felt like I was being a smart ass) have had more influence.

So as opposed to previously, when my underlying drives said "do math all the time" and I gained short term status from it, I now have a desire to do math all the time and my underlying drives aren't helping me out like they used to.

So what I want to do is harness the other kind of status.

Comment author: 03 October 2010 04:55:31AM 0 points [-]

An actual suggestion, before I continue interrogating you for my own curiosity:
Have you tried mathoverflow as a way of acquiring a community that promotes the status you want?

people who were less happy about it (i.e. math classmates who weren't going to grad school and felt like I was being a smart ass) have had more influence.

How do they have more influence in grad school than they had in undergrad? Did they only start being negative when your paths diverged? Or did you not need to study in undergrad and so you didn't notice that you were losing this ability? But if that's the case, how do you know it's status and not just being out of practice studying?

You explained what you mean by status, but you didn't really answer how you know. I'm skeptical of your introspection.

Comment author: 03 October 2010 05:06:36AM 1 point [-]

I spent time with students who didn't study math while I was an undergrad, so I wasn't in direct academic competition with them. Also, many of them were good students, so grades could be social status markers, whereas the graduate school I'm in is not hugely prestigious, and I am the youngest student in the program. Also, the friends I have gotten to know better recently have told me about these feelings explicitly, which they had not before.

I also had somewhat poor study skills, but my introspection springs from the fact that I learned about the singularity (read Kurzweil for the first time), failed my first math class, and experienced depression for the first time within a few months of each other. In the past, I sometimes felt that I could achieve figurative immortality and value by being a great mathematician because I could always succeed, now figurative immortality seems bitter and abstract math seems like fruit that hangs high relative to future changes in mental architecture (I think uploads or AI will easily advance mathematics beyond what I can accomplish.)

I'm being stingier with the details than I could be because the whole thing is somewhat personal.

Comment author: 01 October 2010 01:58:54PM *  0 points [-]

I recently started graduate school in mathematics, and I have been having some difficulty focusing on studying.

Difficulties focusing on studying math are common. There's an issue of many existing expositions being markedly suboptimal with respect to engaging the student.

Comment author: [deleted] 01 October 2010 01:45:24PM 0 points [-]

Well, you will gain status by being good at math, unless you have a more prestigious opportunity than "professor" lined up, in which case you should take that instead.

Comment author: 28 September 2010 09:19:43PM *  2 points [-]

Does anyone else remember a short article by Asimov presenting the idea of an intelligence explosion? I read it in the mid-80s in a collection of his essays I checked out from the library (so it wasn't necessarily recently published); if I remember right and I'm not confabulating, the essay had been written years before for an airline in-flight magazine. If it mentioned I.J. Good's paper, I don't recall it.

This was the first I encountered the idea, as a teenager, and obviously it stuck with me.

Comment author: 03 October 2010 05:08:39AM 1 point [-]

Is the Recent Comments sidebar stuck several hours back for anyone else?

Comment author: 01 October 2010 02:21:44PM *  1 point [-]

I like how the home page of the discussion sub-site (/r/discussion/) shows just the titles of the posts, not the whole post or all the paragraphs "above the fold" and wish there was a similar way to browse the main site.

Actually there is! I didn't know about it till now because it never occurred to me till now to click the title of the "Recent Posts" section of the sidebar on the right.

Comment author: 29 September 2010 07:27:02PM 1 point [-]

Today, my downvotes stopped working. I can press the buttons, but after refreshing downvote marks disappear, while upvote marks stay (which means that the scripts at least partially work; I also tried another browser). No message to the effect that I'm not allowed to downvote ever appears. I suppose I could've made 25000 downvotes, counting 10x post downvotes, to hit the 4*karma limit, but it seems unlikely, and there should be a message if that limit is reached in any case.

Comment author: 29 September 2010 07:52:18PM 1 point [-]

Same here. And upvotes do not have the same problem, so I don't think it's a caching issue.

Comment author: 29 September 2010 09:14:44PM 2 points [-]

hmmm I can confirm this both here and on a local copy of the codebase, I'll have a look and make sure Wes knows

Comment author: 29 September 2010 09:59:21PM 2 points [-]

It was a simple bug, fix is committed and a pull request is in, I'll send an email out now to get this into production.

Comment author: 01 October 2010 09:25:07AM 0 points [-]

I would be very interested in a one-sentence description of the bug. Especially if it was not just a side-effect of some other change in the codebase.

Comment author: 02 October 2010 11:52:55AM 1 point [-]

in one sentence... the vote processing mechanism required a reference to the global configuration for pylons and the pylons configuration import was missing.

not super interesting unfortunately :]

it was probably something like a munged automerge or something

Comment author: 30 September 2010 02:21:08AM 0 points [-]

This happens to me, as well.

Comment author: 28 September 2010 07:21:35AM 1 point [-]

Regarding optimal mental performance: I've bought some modafinil and piracetam recently. I think I remember hearing some people on LW use these drugs. Does anybody have any wisdom or experiences they would like to share? How significant are the effects? Were your experiences good or bad?

Comment author: 28 September 2010 07:33:08AM *  4 points [-]

Modafinil increased my blitz chess rating by 150 points the first time I took it. That is completely ridiculous. The effects weren't as strong after that, but still very noticeable. It definitely worked to keep me awake for long hours. One time it totally fucked me up for 3 days or so: I took it thinking I'd stay up, but then I was like nah, never mind, I'll go to sleep 'cuz I'm really tired. Like an idiot. I then got 5 or so hours of fitful sleep, followed by about 5 more hours of thrashing about on my mattress, followed by about 10 more hours of groggy sleep deprivation-induced pain, followed by a completely messed up sleep cycle for the next few days of constant tiredness. So, it's awesome when you use it right, but be careful to stay up for the duration of the effects (12 hours or so, I think.)

I am quite obviously not a doctor, though.

Comment author: 28 September 2010 02:07:56PM 3 points [-]

Also, there are idiosyncratic reactions-- I know one person who found that Modafinil made him sleep-- a lot. It was very refreshing-- possibly deeper than his usual sleep.

This seems to be very rare, or at least I haven't heard of it happening to anyone else, but I think it implies that you should try the drug for the first time when it isn't important that it works.

Comment author: 28 September 2010 01:12:17PM *  1 point [-]

I had sleep problems as well with Modafinil.

Comment author: 28 September 2010 08:45:43AM 0 points [-]

Interesting. I have a bad sleep cycle already, so that was a major concern. I thought eugeroics had a reputation for not damaging the sleep cycle the way amphetamines do (promoting wakefulness without interfering with normal sleep). Anyway, I guess as long as I take it in the morning, it should be metabolized by bedtime. Thanks for the info.

Comment author: 28 September 2010 08:13:25PM 1 point [-]

What I've heard is that modafinil has exactly the same set of effects as amphetamines, just with different proportions, particularly much less of the negative ones. But there's a lot of variation from person to person.

Comment author: 28 September 2010 08:51:45AM 1 point [-]

The messed up sleep cycle part could have been coincidence: my sleep schedule is both chaotic and easily perturbed, such that random things can screw it up for days. But yeah, if you take it in the morning you should be fine. I mostly used it for staying up for 2 days in a row instead of 1, which I think is a nonstandard use case.

Comment author: 28 September 2010 04:46:07PM *  1 point [-]

I would point out that if you are interested in general information, there are many larger sites to search for information. Some quick links:

It would be better if you asked on LW only if you had something more specific (for example, if you had developed a theory of akrasia and thought a particular nootropic might help as has been done with sulbutiamine).

(For completeness, I keep a record of my nootropics in http://www.gwern.net/Nootropics )

Comment author: 28 September 2010 06:06:34PM *  5 points [-]

Well, I already researched general info, but I thought I would ask on LW because I suspect the commenters here are less likely to forget to take placebo effect or coincidence into account when describing effects.

I know it's tangential to the site topic, but it is the open thread, not a top-level thread.

Comment author: 29 September 2010 04:58:10AM *  2 points [-]

Agreed, I'm curious to know what rational people think about specific nootropics. You should start a nootropics discussion in the discussion page.

For what it's worth, I've lurked around the ImmInst forums, and am generally impressed with the level of carefulness.

As for my own use:

I tried Modafinil a few times, at small doses (about 25 mg and 10 mg). Both times I had greatly enhanced focus, and reduced tolerance for nuanced complex thought. That is, I took it while programming and found I could focus and execute a design, but had a hard time stepping back to think a design through (I just wanted to get to work, damnit!).

Also, even on my small doses, taken in the morning, I was completely incapable of sleep for the next 40 hours or so (the last 20 hours of which were incredibly unproductive and almost painful). I'm sensitive to uppers in general though.

Needless to say, I don't plan on using Modafinil again, unless an extreme situation arises. On the other hand, I've met plenty of people who seem to be able to take it daily at high doses with no negative side effects.

Comment author: 29 September 2010 05:51:55AM 0 points [-]

Good idea, I created a discussion.

Comment author: 28 September 2010 06:02:08PM 0 points [-]

If I live in a universe which is not Turing computable but try and apply Solomonoff induction, I may end up in trouble--I may be unable to accept the existence of a black box for the halting problem, for example, even when believing in such a thing is useful. There are several possible solutions to this problem, but I have not seen any here which I find satisfactory.

My solution is this. Instead of considering a prior on the space of models for the universe (since you can't really have a prior on the uncountable space of ways the universe could work if it weren't as restricted as Solomonoff induction thinks it is) consider all computable strategies you could use to achieve whatever goal you are interested in (e.g., to predict what the universe will do next). Rather than disqualifying a model when it fails to conform to our expectations, we penalize a strategy (say by reducing its weight by some constant factor) when it fails to perform well (e.g., makes an incorrect prediction or fails to make a useful prediction) and follow the strategy which currently has the highest weight.

If your goal is predicting the next bit handed to you by the universe, this is the same as Solomonoff induction where every model includes random noise. However, it can be argued convincingly that this is the correct way to incorporate the possibility of e.g. a black box for the halting problem into your world view, while it is not at all clear why Solomonoff induction is reasonable if the universe isn't computable.

In general you get theoretical guarantees whenever you are trying to solve a problem which doesn't have too much state--i.e., in which it is very difficult to pursue a strategy so bad that it will ruin your current performance and all future performance. The proof is probably not surprising to anyone here, but it is worth pointing out that if you use a multiplicative update rule you get particularly fast convergence to a good prediction strategy (this is an infinite analog of multiplicative weights).
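The scheme described above can be sketched very concretely over a finite pool of strategies. This is only a toy instance: the strategies, the penalty factor beta, and the bit sequence are all invented for illustration, and a real version would enumerate all computable strategies rather than three hand-picked ones:

```python
# Multiplicative-weights sketch of the proposal: each prediction strategy
# keeps a weight; a wrong prediction multiplies its weight by beta < 1,
# and at each step we follow the current highest-weight strategy.

def follow_the_leader(strategies, bits, beta=0.5):
    """Predict each bit with the highest-weight strategy; return total mistakes."""
    weights = [1.0] * len(strategies)
    mistakes = 0
    for t, actual in enumerate(bits):
        leader = max(range(len(strategies)), key=lambda i: weights[i])
        if strategies[leader](t) != actual:
            mistakes += 1
        # penalize every strategy that got this bit wrong
        for i, s in enumerate(strategies):
            if s(t) != actual:
                weights[i] *= beta
    return mistakes

# Toy strategy pool: always 0, always 1, alternate.
strategies = [lambda t: 0, lambda t: 1, lambda t: t % 2]
bits = [t % 2 for t in range(20)]  # an alternating environment
print(follow_the_leader(strategies, bits))  # 1 -- quickly locks onto the alternating strategy
```

Note that a strategy is penalized for a bad prediction rather than eliminated for a falsified model, which is exactly the difference from naive Solomonoff-style induction that the comment is proposing.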

There is some question about why we restrict to computable strategies when implementing our induction procedure is already uncomputable. I don't really want to include a discussion of this here, but if we are going to think about induction as a useful tool to apply in reality then we should probably adopt a view of it which doesn't involve us solving the halting problem (for example, as a framework for evaluating suggestions as to what we should believe etc.). In this case it hopefully makes sense to restrict to computable strategies.