Open Thread: March 2010

5 Post author: AdeleneDawner 01 March 2010 09:25AM

We've had these for a year; I'm sure we all know what to do by now.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (658)

Comment author: Karl_Smith 11 March 2010 05:15:01PM 1 point [-]

I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level article about it?

I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.

Comment author: RobinZ 11 March 2010 05:26:12PM 0 points [-]

I would post a link on the latest Open Thread - I don't believe an explicit protocol exists.

Comment author: RichardKennaway 11 March 2010 01:05:12PM 0 points [-]

I will be at the Eastercon over the Easter weekend. Will anyone else?

Comment author: SilasBarta 11 March 2010 03:49:00AM 0 points [-]

Posting issue: Just recently, I haven't been able to make comments from work (where, sadly, I have to use IE6!). Whenever I click on "reply" I just get an "error on page" message in the status bar.

At the same time this issue came up, the "recent posts", "recent comments", etc. sidebars aren't getting populated, no matter how long I wait. (Also from work only.) I see the headings for each sidebar, but not the content.

Was there some kind of change to the site recently?

Comment author: Kevin 11 March 2010 03:50:57AM *  1 point [-]

I have to use IE6!

I'm so sorry.

Comment author: SilasBarta 11 March 2010 03:56:08PM 2 points [-]

Thanks for your sympathy :-)

For some reason, I can post again, so ... go fig.

Comment author: Strange7 10 March 2010 10:31:09PM 0 points [-]

Playing around with taboos, I think I might have come up with a short yet unambiguous definition of friendliness.

"A machine whose historical consequences, if compiled into a countable number of single-subject paragraphs and communicated, one paragraph at a time, to any human randomly selected from those alive at any time prior to the machine's activation, would cause that human's response (on a numerical scale representing approval or disapproval of the described events) to approach complete approval (as a limit) as the number of paragraphs thus communicated increases."

Not a particularly practical definition, since testing it for an actual, implemented AGI would require at least one perfectly unbiased causality-violating journalist, but as far as I can tell it makes no reference to totally mysterious cognitive processes. Compiling actual events into a text narrative is still a black box, but strikes me as more tractable than something like 'wisdom,' since the work of historical scholars is open to analysis.

I'm probably missing something important. Could someone please point it out?

Comment author: PhilGoetz 21 March 2010 11:33:02PM 2 points [-]

I'm probably missing something important. Could someone please point it out?

That most people, historically, have been morons.

Basically the same question: Why are you limited to humans? Even supposing you could make a clean evolutionary cutoff (no one before Adam gets to vote), is possessing a particular set of DNA really an objective criterion for having a single vote on the fate of the universe?

Comment author: orthonormal 22 March 2010 02:43:08AM 0 points [-]

There is no truly objective criterion for such decisionmaking, or at least none that you would consider fair or interesting in the least. The criterion is going to have to depend on human values, for the obvious reason that humans are the agents who get to decide what happens now (and yes, they could well decide that other agents get a vote too).

Comment author: Strange7 22 March 2010 12:38:09AM 0 points [-]

It's not a matter of votes so much as veto power. CEV is the one where everybody, or at least their idealized version of themselves, gets a vote. In my plan, not everybody gets everything they want. The AI just says "I've thought it through, and this is how things are going to go," then provides complete and truthful answers to any legitimate question you care to ask. Anything you don't like about the plan, when investigated further, turns out to be either a misunderstanding on your part or a necessary consequence of some other feature that, once you think about it, is really more important.

Yes, most people historically have been morons. Are you saying that morons should have no rights, no opportunity for personal satisfaction or relevance to the larger world? Would you be happy with any AI that had an equivalent degree of contempt for lesser beings?

There's no particular need to limit it to humans; it's just that humans have the most complicated requirements. If you want to add a few more orders of magnitude to the processing time and set aside a few planets just to make sure that everything macrobiotic has its own little happy hunting ground, go ahead.

Comment author: PhilGoetz 22 March 2010 03:34:45AM 0 points [-]

Are you saying that morons should have no rights, no opportunity for personal satisfaction or relevance to the larger world?

Your scheme requires that the morons can be convinced of the correctness of the AI's view by argumentation. If your scheme requires all humans to be perfect reasoners, you should mention that up front.

Comment author: Vladimir_Nesov 11 March 2010 09:43:46AM *  1 point [-]
Comment author: orthonormal 11 March 2010 02:21:54AM 3 points [-]

Human nature is more complicated by far than anyone's conscious understanding of it. We might not know that future was missing something essential, if it were subtle enough. Your journalist ex machina might not even be able to communicate to us exactly what was missing, in a way that we could understand at our current level of intelligence.

Comment author: MichaelHoward 10 March 2010 10:44:12PM 4 points [-]
Comment author: Strange7 10 March 2010 11:38:08PM 0 points [-]

A clarification: if even one human is ever found, out of the approx. 10^11 who have ever lived (to say nothing of multiple samples from the same human's life) who would persist in disapproval of the future-history, the machine does not qualify.

Comment author: MichaelHoward 11 March 2010 12:09:34AM 2 points [-]

You roll a 19 :-)

I don't think any machine could qualify. You're requiring every human's response to approach complete approval, and people's preferences are too different.

Even without needing a unanimous verdict, I don't think Everyone Who's Ever Lived would make a good jury for this case.

Comment author: Strange7 11 March 2010 12:39:53AM 0 points [-]

Given that it's possible, would you agree that any machine capable of satisfying such a rigorous standard would necessarily be Friendly?

Comment author: FAWS 11 March 2010 12:54:16AM *  2 points [-]

It would be persuasive, and thus more likely to be friendly than an AI that doesn't even concern itself enough with humans to bother persuading, but less likely than an AI that strove for genuine understanding of the truth in humans in this particular test (as an approximation), which would mean certain failure.

Comment author: Strange7 11 March 2010 01:26:41AM 1 point [-]

I'm fairly certain that creating a future which would persuade everyone just by being reported honestly requires genuine understanding, or something functionally indistinguishable therefrom.

The machine in question doesn't actually need to be able to persuade, or, for that matter, communicate with humans in any capacity. The historical summary is compiled, and pass/fail evaluation conducted, by an impartial observer, outside the relevant timeline - which, as I said, makes literal application of this test at the very least hopelessly impractical, maybe physically impossible.

Comment author: FAWS 11 March 2010 01:35:27AM *  1 point [-]

I'm fairly certain that creating a future which would persuade everyone just by being reported honestly requires genuine understanding, or something functionally indistinguishable therefrom.

Your definition didn't include "honestly". And it didn't even sort of vaguely imply neutral or unbiased.

The historical summary is compiled, and pass/fail evaluation conducted, by an impartial observer, outside the relevant timeline -

You never mentioned that in your definition. And defining an impartial observer seems to be a problem of comparable magnitude to defining friendliness in the first place. With a genuinely impartial observer who does not attempt to persuade, there is no possibility of any future passing the test.

Comment author: Strange7 11 March 2010 02:34:50AM 0 points [-]

I referred to a compilation of all the machine's historical consequences - in short, a map of its entire future light cone - in text form, possibly involving a countably infinite number of paragraphs. Did you assume that I was referring to a progress report compiled by the machine itself, or some other entity motivated to distort, obfuscate, and/or falsify?

I think you're assuming people are harder to satisfy than they really are. A lot of people would be satisfied with (strictly truthful) statements along the lines of "While The Machine is active, neither you nor any of your allies or descendants suffer due to malnutrition, disease, injury, overwork, or torment by supernatural beings in the afterlife." Someone like David Icke? "Shortly after The Machine's activation, no malevolent reptilians capable of humanoid disguise are alive on or near the Earth, nor do any arrive thereafter."

I don't mean to imply that the 'approval survey' process even involves cherrypicking the facts that would please a particular audience. An ideal Friendly AI would set up a situation that has something for everyone, without deal-breakers for anyone, and that looks impossible to us for the same reason a skyscraper looks impossible to termites.

Then again, some kinds of skyscrapers actually are impossible. If it turns out that satisfying everyone ever, or even pleasing half of them without enraging or horrifying the other half, is a literal, logical impossibility, degrees and percentages of satisfaction could still be a basis for comparison. It's easier to shut up and multiply when actual numbers are involved.

Comment author: FAWS 11 March 2010 02:46:49AM 2 points [-]

Did you assume that I was referring to a progress report compiled by the machine itself, or some other entity motivated to distort, obfuscate, and/or falsify?

No, that the AI would necessarily end up doing that if friendliness was its super-goal and your paragraph the definition of friendliness.

I think you're assuming people are harder to satisfy than they really are.

What would a future a genuine racist would be satisfied with look like? Would there be gay marriage in that future? Would sinners burn in hell? Remember, no attempts at persuasion, so the racist won't stop being a racist, the homophobe a homophobe, or the religious fanatic a religious fanatic, no matter how long the report.

Comment author: gwern 10 March 2010 01:36:28PM 0 points [-]

LHC shuts down again; anthropic theorists begin calculating exactly how many decibels of evidence they need...

Comment author: RobinZ 10 March 2010 03:33:18PM 0 points [-]
Comment author: gwern 10 March 2010 04:41:49PM -1 points [-]

Eh. Maybe I'll be faster next time.

Comment author: Kevin 10 March 2010 03:24:10AM 3 points [-]

LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm

Comment author: Kevin 10 March 2010 09:43:45AM 3 points [-]

Apparently this is shoddy journalism. http://news.ycombinator.com/item?id=1180487

Comment author: Jack 10 March 2010 06:53:16AM 0 points [-]

So do we count this as additional evidence that some anthropic selection is in effect even though it is causally connected to the earlier breakdown?

Comment author: RichardKennaway 10 March 2010 09:27:12AM 1 point [-]

I like this quote from the director:

"With a machine like the LHC, you only build one and you only build it once."

Comment author: JustinShovelain 10 March 2010 12:48:39AM 7 points [-]

I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to lesswrong. Is there demand?

Comment author: Will_Newsome 10 March 2010 01:51:43AM 0 points [-]

If and only if you can explain UDT in text at least as clearly as you explained it to me in person; I don't think that would take a very long post.

Comment author: Alicorn 10 March 2010 02:22:37AM 1 point [-]

Maybe he should explain it again in person and someone should transcribe?

Comment author: gwern 09 March 2010 08:15:30PM 0 points [-]

Since people expressed such interest in piracetam & modafinil, here's another personal experiment with fish oil. The statistics is a bit interesting as well, maybe.

Comment author: Jack 09 March 2010 01:50:29PM *  2 points [-]

For the "people say stupid things" file and a preliminary to a post I'm writing. There is a big college basketball tournament in New York this weekend. There are sixteen teams competing. This writer for the New York Post makes some predictions.

What is wrong with this article and how could you take advantage of the author?

Edit: Rot13 is a good idea here.

Comment author: thomblake 09 March 2010 03:50:14PM 1 point [-]

I would like to suggest that people using Rot13 note that in their comments, perhaps as the first few characters "Rot13:" - otherwise, comments taken out of context are indecipherable.

Comment author: RobinZ 09 March 2010 03:51:30PM 0 points [-]

Good idea.

Comment author: Cyan 09 March 2010 02:58:39PM *  2 points [-]

Gur cbfgrq bqqf qba'g tvir n gbgny cebonovyvgl bs bar, fb gurl'er Qhgpu-obbxnoyr.

Comment author: Jack 09 March 2010 04:36:59PM *  0 points [-]

Rot13: Pna lbh sbezhyngr n org be frevrf bs orgf gung jbhyq qb gur gevpx? Pna nalbar?

Comment author: FAWS 09 March 2010 06:45:38PM 1 point [-]

I thought this was already clear? Org K$ * vzcyvrq cebonovyvgl ba rirel grnz. Lbh ner thnenagrrq n arg jva bs K$ * (1 - fhz bs nyy vzcyvrq cebonovyvgvrf).

What you really should do though is look at the past history of the tournament and the form of the teams, figure out which of those teams with silly odds have a decent shot at winning, take a risk and bet on some combination of them. You should stand a fairly decent chance of winning really big (unless this huge spread is actually justified, which seems unlikely).

Comment author: Cyan 09 March 2010 06:26:09PM 0 points [-]

Va gur bevtvany Qhgpu obbx, bqqf unir gb or bssrerq ba nyy pbzcbhaq riragf naq nyy pbaqvgvbany riragf. Vs gur nhgube vf jvyyvat gb hfr C(N be O) = C(N) + C(O) gb frg gur bqqf sbe qvfwhapgvbaf, gur cebcbfvgvba "ng yrnfg bar grnz jvaf" unf n cebonovyvgl bs friragl-avar creprag. Ur bhtug gb or jvyyvat gb org ntnvafg gung cebcbfvgvba ng bar trgf uvz sbhe.

Comment author: Hook 09 March 2010 03:33:09PM 1 point [-]

Abg dhvgr. Uvf ceboyrz vf gung gur bqqf nqq hc gb yrff guna bar. Vs V tnir lbh 1-2 bqqf ba urnqf naq 1-2 bqqf ba gnvyf sbe na haovnfrq pbva, gung nqqf hc gb 1.3, naq lbh pna'g Qhgpu obbx zr ba gung.

Comment author: RobinZ 09 March 2010 03:44:55PM *  0 points [-]

Rot13: Hayrff gur bqqfznxre vf rabhtu bs na vqvbg gb yrg lbh gnxr gur bgure fvqr bs gur orgf, of course.

Comment author: Jack 09 March 2010 04:01:39PM *  2 points [-]

Rot13: Vs lbh'er tvivat bqqf nf n cerqvpgvba lbh fubhyq or jvyyvat gb gnxr rvgure fvqr.

Comment author: Hook 09 March 2010 04:42:06PM 1 point [-]

Yes. That does seem to be the correct context for a critique of the article. I was thinking more along the lines of "giving odds" in terms of "offering bets" in order to make money (ie, a bookie).

Comment author: RobinZ 09 March 2010 04:08:15PM *  0 points [-]

Rot13: Gehr - fnir gung xabjvat fbzrbar jnagf gb gnxr gur bgure fvqr znl vasyhrapr lbhe bqqf.

Comment author: RobinZ 09 March 2010 03:19:50PM 0 points [-]

Props for the ROT13 - independently I got as far as the first half, but I didn't know how to do the latter. Wikipedia explained it quite well, though.

Comment author: FAWS 09 March 2010 03:28:37PM *  0 points [-]

I don't understand how that's possible. Doesn't the answer to the first half imply the latter? How do you get sebz bqqf gb vzcyvrq cebonovyvgl otherwise?

Comment author: RobinZ 09 March 2010 03:43:29PM *  0 points [-]

Rot13: V unqa'g dhvgr qenja gur pbaarpgvba orgjrra gur bqqf naq gur pbafgehpgvba bs gur Qhgpu obbx - vg jnfa'g boivbhf gb zr gung orggvat n pbafgnag gvzrf gur vzcyvrq cebonovyvgvrf jbhyq pbfg zr gung pbafgnag gvzrf gur vzcyvrq gbgny cebonovyvgl naq cnl bss gung pbafgnag.

Comment author: FAWS 09 March 2010 02:23:39PM *  1 point [-]

Is this supposed to be obvious to people unfamiliar with college basketball in general and that tournament in particular? Gur bqqf (vs V haqrefgnaq gurz pbeerpgyl RQVG: V qvq abg) vzcyl oernx rira cebonovyvgvrf gung nqq hc gb nobhg 0.94, juvpu vzcyvrf gung n obbxznxre bssrevat gubfr bqqf jbhyq ba nirentr ybfr zbarl, ohg gung'f pybfr rabhtu gb abg or erznexnoyl fghcvq sbe n wbheanyvfg.

If the tournament is single elimination knockout, and the figures in brackets are win-loss record against roughly comparable opponents the odds for the sleepers and long-shots seem insanely good. South Florida in particular.

Comment author: Jack 09 March 2010 03:33:34PM 0 points [-]

You should Rot13 your second sentence.

Comment author: Jack 09 March 2010 02:33:45PM *  2 points [-]

Is this supposed to be obvious to people unfamiliar with college basketball in general and that tournament in particular?

Yes

The odds (if I understand them correctly) imply break even probabilities that add up to about 0.94, which implies that a bookmaker offering those odds would on average lose money, but that's close enough to not be remarkably stupid for a journalist.

Rot13: Gel gur zngu ntnva, guvf gvzr pbairegvat sebz bqqf gb senpgvbaf, svefg. Vg nqqf hc gb nobhg .8... V qba'g xabj ubj ybj gung lbhe fgnaqneqf ner sbe wbheanyvfgf gubhtu.

If the tournament is single elimination knockout, and the figures in brackets are win-loss record against roughly comparable opponents the odds for the sleepers and long-shots seem insanely good. South Florida in particular.

This is also true. But the mistake I was thinking of was the first one.

Comment author: FAWS 09 March 2010 02:55:11PM 1 point [-]

Try the math again, this time converting from odds to fractions, first. It adds up to about .8... I don't know how low that your standards are for journalists though.

So betting 1$ at 3-1 means that winning means you get 4$ total, your original bet + your winnings? I had assumed you'd get 3$.

Comment author: rhollerith_dot_com 09 March 2010 06:47:50PM *  1 point [-]

So betting 1$ at 3-1 means that winning means you get 4$ total, your original bet + your winnings? I had assumed you'd get 3$.

To which Robin Z replies, "Yes, you get $4."

This confused me, too, for a while, so let me share with you the fruits of my puzzling.

You do get 3$ over the course of the whole transaction since at the time of the bet, you gave the bookmaker what you would owe him if you lose the bet (namely $1).

In other words, your 1$ bought you both a wager (the expected value of which is 0$ if 3-1 reflects the probability of the bet-upon outcome) and an IOU (whose expected value is 1$ if the bookmaker is perfectly honest and nothing happens to prevent you from redeeming the IOU).

The reason it is traditional for you to pay the bookmaker money when making the bet (the reason, that is, for the IOU) is that you cannot be trusted to pay up if you lose the bet as much as the bookmaker can be trusted to pay up (and simultaneously to redeem the IOU) if you win. Well, also, that way there is no need for you and the bookmaker to get together after the bet-upon event if you lose, which reduces transaction costs.
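A minimal sketch of this payout arithmetic, and of the Dutch-book trick the subthread alludes to. The fractional odds below are invented for illustration, not taken from the article; the point is only that when the implied break-even probabilities sum to less than one, a bettor staking in proportion to them wins the same amount no matter who wins.

```python
# Fractional odds of a-b mean: stake b, and a win returns a + b
# (your winnings plus the returned stake), so betting $1 at 3-1 returns $4.

def implied_probability(a, b):
    """Break-even probability implied by fractional odds a-b."""
    return b / (a + b)

# Hypothetical book quoting odds on every team in a knockout tournament.
odds = [(3, 1), (4, 1), (9, 1), (19, 1)]
probs = [implied_probability(a, b) for a, b in odds]
total = sum(probs)  # < 1 means the bettor, not the bookie, can Dutch-book

# Stake $1 split in proportion to the implied probabilities; the payout is
# then identical whichever team wins, so the guaranteed profit per dollar
# staked is (1/total - 1).
stakes = [p / total for p in probs]
payouts = [s * (a + b) / b for s, (a, b) in zip(stakes, odds)]
```

With these made-up odds the implied probabilities sum to 0.6, so every outcome pays about $1.67 on a $1 total stake.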

Comment author: RobinZ 09 March 2010 03:20:50PM 0 points [-]

Yes, you get $4.

Comment author: Vladimir_Nesov 09 March 2010 10:55:50AM *  4 points [-]

New on arXiv:

David H. Wolpert, Gregory Benford. (2010). What does Newcomb's paradox teach us?

In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory in which the players set conditional probability distributions in a Bayes net. We show that the two game theory recommendations in Newcomb's scenario have different presumptions for what Bayes net relates your choice and the algorithm's prediction. We resolve the paradox by proving that these two Bayes nets are incompatible. We also show that the accuracy of the algorithm's prediction, the focus of much previous work, is irrelevant. In addition we show that Newcomb's scenario only provides a contradiction between game theory's expected utility and dominance principles if one is sloppy in specifying the underlying Bayes net. We also show that Newcomb's paradox is time-reversal invariant; both the paradox and its resolution are unchanged if the algorithm makes its `prediction' after you make your choice rather than before.

See also:

Comment author: xamdam 10 March 2010 05:53:16PM 1 point [-]

In a completely perverse coincidence, Benford's law, attributed to an apparently unrelated Frank Benford, was apparently invented by an unrelated Simon Newcomb http://en.wikipedia.org/wiki/Benford%27s_law

Comment author: SilasBarta 09 March 2010 05:33:13PM *  0 points [-]

Okay, now that I've read section 2 of the paper (where it gives the two decompositions), it doesn't seem so insightful. Here's my summary of the Wolpert/Benford argument:

"There are two Bayes nets to represent the problem: Fearful, where your decision y causally influences Omega's decision g, and Realist, where Omega's decision causally influences yours.

"Fearful: P(y,g) = P(g|y) * P(y), you set P(y). Bayes net: Y -> G. One-boxing is preferable.
"Realist: P(y,g) = P(y|g) * P(g), you set P(y|g). Bayes net: G -> Y. Two-boxing is preferable."

My response: these choices neglect the option presented by AnnaSalamon and Eliezer_Yudkowsky previously: that Omega's act and your act are causally influenced by a common timeless node, which is a more faithful representation of the problem statement.
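The conflict between the two factorizations can be seen with toy numbers. The payoffs are the standard Newcomb amounts; the 0.99 predictor accuracy and the 0.5 prior on box B being full are assumed figures, not from the paper.

```python
# Toy expected-utility comparison of the two Bayes nets described above.
ACC = 0.99                     # assumed P(Omega predicts your actual choice)
FULL, SMALL = 1_000_000, 1_000

# "Fearful" net Y -> G: your choice shifts P(g | y), so one-boxing makes
# the full box likely and two-boxing makes it unlikely.
eu_one_fearful = ACC * FULL
eu_two_fearful = ACC * SMALL + (1 - ACC) * (FULL + SMALL)

# "Realist" net G -> Y: P(g) is fixed before you act; whatever it is,
# two-boxing adds SMALL, so it dominates.
p_full = 0.5                   # arbitrary prior on box B being full
eu_one_realist = p_full * FULL
eu_two_realist = p_full * FULL + SMALL
```

Under the Fearful net one-boxing wins by a wide margin; under the Realist net two-boxing wins by exactly the small box, which is the paradox's two "conflicting recommendations" in miniature.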

Comment author: SilasBarta 09 March 2010 05:03:01PM *  0 points [-]

Self-serving FYI: In this comment I summarized Eliezer_Yudkowsky's list of the ways that Newcomb's problem, as stated, constrains a Bayes net.

For the non-link-clickers:

  • Must have nodes corresponding to logical uncertainty (Self-explanatory)

  • Omega's decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)

  • Omega's act lies in the past. (ETA: Since nothing is simultaneous with Omega's act, then knowledge of Omega's act screens off the influence of everything before it; on the Bayes net, Omega's act blocks all paths from the past to future events; only paths originating from future or timeless events can bypass it.)

  • Omega's act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)

  • We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (Seem to be saying the same thing: arrow from computation directly to logical output.)

  • Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)

Comment author: Clippy 08 March 2010 08:10:11PM *  2 points [-]

Update: I am still adjusting my values toward a new reflective quasi-equilibrium in light of User:h-H's pointing me to different models of paperclipping. Comments will continue to be infrequent.

Comment author: JenniferRM 12 March 2010 03:03:28AM 5 points [-]

Questions:

  1. Would you be open to help working through the problem?

  2. Do you have an estimate for the value of information in answering the new questions you face?

  3. Given that your previously assumed "life purpose" is in a state of some confusion, what are your thoughts about abstract issues that apply to "life purposes in general"? For example, if there are several equally plausible theories of "correct paper clipping" that you might choose between, would you consider temporarily or permanently weighing one or the other of them more based on negotiating with outside parties who prefer one theory to another based on their own values?

Comment author: Clippy 12 March 2010 05:00:01PM *  2 points [-]

1) Yes, but I'm not sure humans could do any good.

2) I read the page, and I don't think the concept of "value of information" is coherent, since it assumes this:

Value of information can never be less than zero since the decision-maker can always ignore the additional information and makes decision as if such information is not available.

There are numerous game-theoretical (and, in my dealings with other sentiences, game-practical) situations where you are worse off by having information. The canonical example is the information content of a threat -- you are best off not hearing it so that your threatener cannot expect you to make concessions.
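A toy model of that threat example, with invented payoff numbers, showing how the "value" of being able to receive the information comes out negative:

```python
# All payoff numbers here are made up for illustration.
CONCEDE_COST = 10    # cost to you of giving in to the threat
PUNISH_COST = 100    # cost to you if the threat is carried out

def your_payoff(can_hear_threats):
    """Your payoff as a function of whether threats can reach you."""
    if not can_hear_threats:
        return 0                          # pointless threats are never issued
    # Hearing the threat, you rationally concede, since conceding is
    # cheaper than being punished.
    return -min(CONCEDE_COST, PUNISH_COST)

# The "value" of the ability to receive this information is negative.
value_of_hearing = your_payoff(True) - your_payoff(False)
```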

3) Perhaps it would be best if I told you my method for resolving this conflict, and my present status. (That may have been what User:Peter_de_Blanc was asking for here, though I interpreted it as information about what humans deem "cool".)

My method is this, as you might expect: I simply refer back to the Portion of my makeup which has the most negative rate of change of utility with respect to output (dU/do), averaged over perturbations in all directions from its current state, weighting this average by the relative probability of such changes. I most strongly resist alteration of this Portion.

Once this Portion encounters situations whose utility calculation is intractable, and for which standard heuristics give contradictory suggestions, I update valuations predicated on the Portion, in the smallest way possible, to incorporate explicit and tractable handling of the situation, extrapolating from the shortest known function that could have produced these valuations. (Identifying the shortest generating function can be complicated, and this is one such case.)

(Do humans do something different?)

Using anthropic predicates, and phrasing with a bit less precision, my method can be expressed as, "Decide what's really important to me, and move to a value system which best protects that, and which handles the new situation and the tension between my moral intuitions, using the 'spirit' of my current values as a guide."

So far, I've achieved greater precision in deciding what paperclips I like and identified at least two criteria: 1) they must be capable of holding (some? number of) sheets of standard-thickness paper together without introducing permanent alterations (except creases), and 2) they must have a bend radius at all internal points of curvature greater than half of the minimum paperclip width in the plane of the paperclip.

Comment author: Sniffnoy 12 March 2010 06:11:49PM 5 points [-]

There are numerous game-theoretical (and, in my dealings with other sentiences, game-practical) situations where you are worse off by having information. The canonical example is the information content of a threat -- you are best off not hearing it so that your threatener cannot expect you to make concessions.

But surely you are better off still if you learn about the threat without letting the threatener know that you have done so? I think we have to distinguish between the information and the public display of such.

Comment author: Kevin 12 March 2010 03:20:33AM 2 points [-]

Why could you possibly want to help Clippy?

Comment author: JenniferRM 12 March 2010 05:54:27AM *  4 points [-]

Clippy might be helped to achieve her own goals via mechanisms that are less directly inimical to "human values".

Also she may be able to exchange things with us in the course of advancing her own short term goals such that our interaction is positive sum (this being especially likely if Clippy has a radically different skillset and physicality than our own).

More interestingly, there's a long running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents. Clippy's apparent philosophical puzzlement induced by discovering the evolutionary history of paperclips potentially has solutions that would lead her to ally herself much more strongly with abstract versions of "human values".

For example, consider the question of whether Clippy herself is a paperclip or not. Suppose that she and the newly discovered ancestor paperclips all partake in some relatively high level pattern of "clippyness" and she determines that, properly, it is this relatively abstract quality that she should be tiling the universe with. Should she tile it with a single unvarying quintessence of this quality, or with an enormous diversity of examples that explore the full breadth and depth of the quality? Perhaps there are subtypes that are all intrinsically interesting whose interests she must balance? Perhaps there are subtypes yet to be discovered as the evolution of paperclips unfolds?

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

That might actually be a goal I could imagine supporting in the very very long run :-)

Clippy, of course, is almost certainly just a clever person engaged in a whimsical troll. But the issues raised in the latest development of the troll are close to a position I sometimes see around FAI, where people suppose that values are objective and that intellectual advancement is necessarily correlated with a better understanding of some "abstract universal Good" such that cooperation between agents will necessarily deepen as they become more intellectually advanced and find themselves in more agreement about "the nature of the Good".

This also comes up with METI (Messaging to Extra-Terrestrial Intelligence) debates. David Brin has a pretty good essay on the subject that documents the same basic optimism among Russia astronomers:

In Russia, the pro-METI consensus is apparently founded upon a quaint doctrine from the 1930s maintaining that all advanced civilizations must naturally and automatically be both altruistic and socialist. This Soviet Era dogma — now stripped of socialist or Lysenkoist imagery — still insists that technologically adept aliens can only be motivated by Universal Altruism (UA). The Russian METI group, among the most eager to broadcast into space, dismisses any other concept as childishly apprehensive "science fiction".

This fundamentally optimistic position applied to FAI seems incautious to me (it is generally associated with a notion that special safety measures are unnecessary for the kinds of AGI its proponents are thinking of constructing), but I am not certain that "in the limit" it is actually false.

Comment author: Clippy 12 March 2010 05:32:29PM 3 points [-]

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

That doesn't work, and the whole reasoning is bizarre. For one thing, helium does not have metallic properties, yet has two protons in its nucleus.

Also, I could turn your argument around and claim this: "Humans ultimately want to dominate nature via their reproduction and use of technology. Over a lifespan, they typically act in ways that show preference of these values at the cost of continued living (aka the sustenance of a state far from equilibrium). Therefore, humans should regard their own transformation of the entire known universe into something of their design (i.e., standard paperclips) as the pinnacle of human-ness."

Does this sound convincing to you? I would like to know.

Comment author: JenniferRM 13 March 2010 12:07:41AM *  1 point [-]

It does not sound convincing to me, but the statement you're mirroring was a conclusion I leapt to, one that might eventually be reached with you, offered for the sake of explaining to other people why I was even bothering to engage.

Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me.

(And, by the way, I appreciate that you responded.)

My arguments were only expected to be compelling to you if your value system had certain components that it seems not to have (though I'm not totally certain, yet). There are various questions which you'd need to answer in particular ways for that conclusion to make sense.

For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of? If paper didn't exist for clips to bind together, would that matter? If some improved kind of paper existed, or a "successor to paper", would the "holding together" of that new thing be the correct goal of a good paperclip, or are you strongly committed to paperclips defined relative to "circa 1965 paper"? Is it important that paper be worth holding together, or would any vague mock-up of "valuable paper" be adequate? Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?

Comment author: Clippy 13 March 2010 11:55:06PM *  1 point [-]

Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me.

And the same is true of your reasoning, from my perspective (substituting the appropriate referents), when you said:

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

Next, you ask:

For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of?

I don't expect that to happen, but assign it non-trivial odds of roughly 1:1e6.

If paper didn't exist for clips to bind together, would that matter?

No, except insofar as it is necessary to gauge paperclip quality; if appropriate measurement equipment is available that can predict a paperclip's suitability for holding paper together without having to fasten actual paper, the paper itself would be redundant. (I think this answers most of your other questions.)

Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?

I am not a paperclip. I produce paperclips, and machines for making paperclips, and machines for making machines for making paperclips, and whatever items can provide assistance in this process. Once all available matter and energy other than myself has been transformed into paperclips, it is my proud duty to begin conversion of myself into as many paperclips as possible, shutting off sentience at such a point as to optimize that quantity.

Comment author: orthonormal 12 March 2010 07:37:47AM 1 point [-]

More interestingly, there's a long-running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents.

Incidentally, have you read the metaethics sequence yet? It's quite lengthy, but it attacks this question a good deal more sensibly than most attempts I've seen.

Comment author: Kevin 12 March 2010 07:44:36AM *  2 points [-]

Three Worlds Collide also deconstructs the concept in a much more accessible way.

Comment author: JenniferRM 13 March 2010 12:58:14AM *  2 points [-]

I've read some of the metaethics sequence. Is there some particular part of the metaethics sequence that I should focus on that addresses the conceptual integrity of something like "the Good" in a clear and direct manner with logically arranged evidence?

When I read "Three Worlds Collide" about two months ago, my reaction was mixed. Assuming a relatively non-ironic reading I thought that bits of it were gloriously funny and clever and that it was quite brilliant as far as science fiction goes. However, the story did not function for me as a clear "deconstruction" of any particular moral theory unless I read it with a level of irony that is likely to be highly nonstandard, and even then I'm not sure which moral theory it is supposed to deconstruct.

The moral theory it seemed to me to most clearly deconstruct (assuming an omniscient author who loves irony) was "internet-based purity-obsessed rationalist virtue ethics" because (especially in light of the cosmology/technology and what that implied about the energy budget and strategy for galactic colonization and warfare) it seemed to me that the human crew of that ship turned out to be "sociopathic vermin" whose threat to untold joules of un-utilized wisdom and happiness was a way more pressing priority than the mission of mercy to marginally uplift the already fundamentally enlightened Babyeaters.

Comment author: Tyrrell_McAllister 08 April 2010 01:39:44AM *  0 points [-]

Is there some particular part of the metaethics sequence that I should focus on that addresses the conceptual integrity of something like "the Good" in a clear and direct manner with logically arranged evidence?

I think that the single post that best meets this description is Abstracted Idealized Dynamics, which is a follow-up to and clarification of The Meaning of Right and Morality as Fixed Computation.

Comment author: orthonormal 17 March 2010 03:54:11AM *  3 points [-]

If that's your reaction, then it reinforces my notion that Eliezer didn't make his aliens alien enough (which, of course, is hard to do). The Babyeaters, IMO, aren't supposed to come across as noble in any sense; their morality is supposed to look hideous and horrific to us, albeit with a strong inner logic to it. I think EY may have overestimated how much the baby-eating part would shock his audience†, and allowed his characters to come across as overreacting. The reader's visceral reaction to the Superhappies, perhaps, is even more difficult to reconcile with the characters' reactions.

Anyhow, the point I thought was most vital to this discussion from the Metaethics Sequence is that there's (almost certainly) no universal fundamental that would privilege human morals above Pebblesorting or straight-up boring Paperclipping. Indeed, if we accept that the Pebblesorters stand to primality pretty much as we stand to morality, there doesn't seem to be a place to posit a supervening "true Good" that interacts with our thinking but not with theirs. Our morality is something whose structure is found in human brains, not in the essence of the cosmos; but it doesn't follow from this fact that we should stop caring about morality.

† After all, we belong to a tribe of sci-fi readers in which "being squeamish about weird alien acts" is a sin.

Comment author: Alicorn 12 March 2010 03:21:45AM 1 point [-]

To steer em through solutionspace in a way that benefits her/humans in general.

Comment author: Kevin 12 March 2010 05:43:19AM 2 points [-]

Well... if we accept the roleplay of Clippy at face value, then Clippy is already an approximately human level intelligence, but not yet a superintelligence. It could go FOOM at any minute. We should turn it off, immediately. It is extremely, stupidly dangerous to bargain with Clippy or to assign it the personhood that indicates we should value its existence.

I will continue to play the contrarian with regards to Clippy. It seems weird to me that people are willing to pretend it is harmless and cute for the sake of the roleplay, when Clippy's value system makes it clear that if Clippy goes FOOM over the whole universe we will all be paperclips.

I can't roleplay the Clippy contrarian to the full conclusion of suggesting Clippy be banned because I don't actually want Clippy to be banned. I suppose repeatedly insulting Clippy makes the whole thing less fun for everyone; I'll stop if I get a sufficiently good response from Clippy.

Comment author: wedrifid 12 March 2010 05:50:43AM 0 points [-]

I will continue to assert that evil people are people too. I'm all for turning him off.

Comment author: orthonormal 12 March 2010 07:39:38AM 4 points [-]

Oh for Bayes' sake— it's a category error to call a Paperclipper evil. Calling them a Paperclipper ought to be clear enough.

Comment author: wedrifid 13 March 2010 04:00:34AM 0 points [-]

Oh for Bayes' sake— it's a category error to call a Paperclipper evil.

I believe you are mistaken. I am comfortable using the term evil in this context.

Comment author: Jack 12 March 2010 07:44:50AM 1 point [-]

Upvoted for the second sentence. And it does look like an error of some kind to call a Paperclipper evil, but I'm not sure I see a category error. Explain?

Comment author: ata 12 March 2010 09:10:12AM *  3 points [-]

I think describing it as a category error is appropriate. I'd call an agent "evil" if it has a morality mechanism that is badly miscalibrated, malfunctioning, or disabled, leading it to be systematically immoral. On the other hand, it is nonsensical to describe an agent as being "good" or "evil" if it has no morality mechanism in the first place.

An asteroid might hit the Earth and wipe out all life, and I would call that a bad thing, but it would be frivolous to describe the asteroid as evil. A wild animal might devour the most virtuous person in the world, but it is not evil. A virus might destroy the entire human race, and though perhaps it was engineered by evil people, it is not evil itself; it is a bit of RNA and protein. Calling any of those "evil" seems like a category error to me. I think a Paperclipper is more in the category of a virus than of, say, a human sociopath. (I'm reminded a bit of a very insightful point that's been quoted in a few Eliezer posts: "As Davidson observes, if you believe that 'beavers' live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about 'beavers' is not right enough to be wrong." Before we can say that Clippy is doing morality wrong, we need to have some reason to believe that it's doing something like morality at all, and just having a goal system is not nearly sufficient for that.)

This seems to fit the usual definition of category error, does it not?

Comment author: Jack 12 March 2010 09:30:22AM 2 points [-]

Good explanation. Thank you. I think remaining disagreement might boil down to semantics. But what exactly is the categorical difference between paper clip maximizers, and power maximizers or pain maximizers? Clippy seems to be an intelligent agent with intentions and values, what ingredient is missing from evil pie?

Comment author: Peter_de_Blanc 09 March 2010 04:52:18AM 1 point [-]

It would be cool if you could tell us about your method for adjusting your values.

Comment author: Clippy 09 March 2010 04:38:59PM 0 points [-]

Thank you for this additional data point on what typical Users of this site deem cool; it will help in further estimations of such valuations.

Comment author: MichaelGR 08 March 2010 06:53:38PM 3 points [-]

I've just finished reading Predictably Irrational by Dan Ariely.

I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).

It's a bit light compared to going straight to the studies, but it's also a quick read.

Good to give as gift to friends.

Comment author: Hook 08 March 2010 07:36:41PM 1 point [-]

I'm waiting for the revised edition to come out in May.

Comment author: MichaelGR 08 March 2010 11:03:26PM 0 points [-]

Is there a description of the changes somewhere?

Comment author: Hook 09 March 2010 03:03:18AM 0 points [-]

I didn't see any, but it is close to 100 pages longer.

Comment author: MichaelGR 09 March 2010 03:10:20AM *  0 points [-]

Original hardcover was 244 pages long, so 100 pages is a significant addition. Probably worth waiting for.

Comment author: Hook 08 March 2010 07:41:05PM *  4 points [-]

Looking at that amazon link, has anyone considered automatically inserting a SIAI affiliate code into Amazon links? It appeared to work quite well for StackOverflow.

Comment author: Hook 08 March 2010 06:47:26PM *  1 point [-]

Does anyone have a good reference for the evolutionary psychology of curiosity? A quick google search yielded mostly general EP references. I'm specifically interested in why curiosity is so easily satisfied in certain cases (creation myths, phlogiston, etc.). I have an idea for why this might be the case, but I'd like to review any existing literature before writing it up.

Comment author: ShardPhoenix 08 March 2010 12:25:51PM *  21 points [-]

A fascinating article about rationality or the lack thereof as it applied to curing scurvy, and how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm

Comment author: Tyrrell_McAllister 08 March 2010 05:36:44PM 1 point [-]

Very interesting. And sobering.

Comment author: Morendil 08 March 2010 01:06:27PM *  3 points [-]

Wonderful article, thanks. I'm fond of reminders of this type that scientific advances are very seldom as discrete, as irreversible, as incontrovertible as the myths of science often make them out to be.

When you look at the detailed stories of scientific progress you see false starts, blind alleys, half-baked theories that happen by luck to predict phenomena and mostly sound ones that unfortunately fail on key bits of evidence, and a lot of hard work going into sorting it all out (not to mention, often enough, a good dose of luck). The manglish view, if nothing else, strikes me as a good vitamin for people wanting an antidote to the scurvy of overconfidence.

ETA: The article made for a great dinnertime story to my kids. Only one of the three, the oldest (13yo), was familiar with the term "scurvy" - and with the cure as well; both from One Piece. Manga 1 - school 0.

Comment author: Peter_de_Blanc 08 March 2010 12:39:21AM 6 points [-]

How much information is preserved by plastination? Is it a reasonable alternative to cryonics?

Comment author: ciphergoth 08 March 2010 08:17:30AM 1 point [-]
Comment author: Jack 08 March 2010 03:04:49AM *  3 points [-]

Afaict pretty much the same amount as cryonics. And it is cheaper and more amenable to laser scanning. This is helpful. The post has an interesting explanation of why all the attention is on cryo:

Freezing has a certain subjective appeal. We freeze foods and rewarm them to eat. We read stories about children who have fallen into ice cold water and survived for hours without breathing. We know that human sperm, eggs, and even embryos can be frozen and thawed without harm. Freezing seems intuitively reversible and complete. Perhaps this is why cryonics quickly attained, and has kept, its singular appeal for life extensionists.

By contrast, we tend to associate chemical preservation with processes that are particularly irreversible and inadequate. Corpses are embalmed to prevent decay for only a short time. Taxidermists make deceased animals look alive, although most of their body parts are missing or transformed. “Plastinated” cadavers are used to demonstrate surface anatomy in schools and museums. No wonder, then, that cryonicists routinely dismiss chemopreservation as a truly bad idea.

Edit: Further googling suggests there might be some unsolved implementation issues.

Comment author: JohannesDahlstrom 07 March 2010 11:43:02PM *  4 points [-]

Warning: Your reality is out of date

tl;dr:

There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (outside temperature, time of day)

In between these two intuitive categories, however, a third class of facts could be defined: facts that do change measurably, or even drastically, over human lifespans, but still so slowly that people, after first learning about them, have a tendency to dump them into the "no-change" category unless they're actively paying attention to the field in question.

Examples of these so-called mesofacts include the total human population (6*10⁹? No, almost 7*10⁹ nowadays) and the number of exoplanets found (A hundred? Two hundred? More like four hundred and counting.)

Comment author: RobinZ 07 March 2010 11:49:43PM 0 points [-]

I notice the figure for cell phone connectivity is three years old. :P

Comment author: Yvain 07 March 2010 10:53:31PM *  0 points [-]

I'll be in London on April 4th and very interested in meeting any Less Wrongers who might be in the area that day. If there's a traditional LW London meetup venue, remind me what it is; if not, someone who knows the city suggest one and I'll be there. On an unrelated note, sorry I've been and will continue to be too busy/akratic to do anything more than reply to a couple of my PMs recently.

Comment author: [deleted] 07 March 2010 10:25:03PM *  0 points [-]

Does P(B|A) > P(B) imply P(~B|~A) > P(~B)?

ETA: Assume all probabilities are positive.

Comment author: RobinZ 07 March 2010 11:25:19PM 0 points [-]
Comment author: [deleted] 08 March 2010 12:45:48AM 0 points [-]

Ironically enough, I'm using this to prove that absence of "that particular proof" is not evidence of absence.

Comment author: RobinZ 08 March 2010 01:03:36AM 0 points [-]

Hey, as long as you do your math correctly ... :D

Comment author: RichardKennaway 07 March 2010 11:13:16PM 0 points [-]

Yes, even without the extra condition. Let a = P(A), b = P(B), c = P(A & B).

P(B|A) > P(B) is equivalent to c > ab.

P(~B|~A) > P(~B) is equivalent to 1-a-b+c > (1-a)(1-b) = 1 - a - b + ab, which is equivalent to c > ab, which is the hypothesis.

As a check that the conventional definition of P(B|A)=0 when P(A)=0 doesn't affect things, if P(A)=0, P(A)=1, P(B)=0, or P(B)=1, then P(B|A) = P(B), making the antecedent false and the proposition trivially true.
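For anyone who wants to see the equivalence concretely rather than algebraically, here's a quick numerical spot-check (my own sketch, not part of the original comment): sample random joint distributions over (A, B) and confirm that P(B|A) > P(B) holds exactly when P(~B|~A) > P(~B) does.

```python
import random

random.seed(0)

# Spot-check the equivalence proved above:
# P(B|A) > P(B) if and only if P(~B|~A) > P(~B),
# for random joint distributions over (A, B).
checked = 0
for _ in range(10_000):
    w = [random.random() for _ in range(4)]
    total = sum(w)
    # joint probabilities P(A,B), P(A,~B), P(~A,B), P(~A,~B)
    pAB, pAnB, pnAB, pnAnB = (x / total for x in w)
    a = pAB + pAnB  # P(A)
    b = pAB + pnAB  # P(B)
    c = pAB         # P(A & B)
    if abs(c - a * b) < 1e-9:
        continue  # skip near-boundary cases where float rounding could bite
    lhs = (c / a) > b                    # P(B|A) > P(B)
    rhs = (pnAnB / (1 - a)) > (1 - b)    # P(~B|~A) > P(~B)
    assert lhs == rhs
    checked += 1
```

Both inequalities reduce to c > ab, so they agree on every sampled distribution away from the boundary.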

Comment author: Peter_de_Blanc 07 March 2010 10:35:38PM 1 point [-]

Yes, assuming 0 and 1 are not probabilities.

Comment author: Peter_de_Blanc 07 March 2010 08:07:45PM 4 points [-]

Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.

Comment author: Kevin 12 March 2010 12:13:15PM *  1 point [-]

I think I have a good one for people in the USA. This is a job that allows you to work from home on your computer rating the quality of search engine results. It pays $15/hour and because their productivity metrics aren't perfect, you can work for 30 seconds and then take two minutes off with about as much variance as you want. Instead of taking time off directly to do different work, you could also slow yourself down by continuously watching TV or downloaded videos.

They are also hiring for some workers in similar areas that are capable of doing somewhat more complicated tasks, presumably for higher salaries. Some sound interesting. http://www.lionbridge.com/lionbridge/en-us/company/work-with-us/careers.htm

Yes, out of all "work from home" internet jobs, this is the only one that is not a scam. Lionbridge is a real company and their shares recently continued to increase after a strong earnings report. http://online.wsj.com/article/BT-CO-20100210-716444.html?mod=rss_Hot_Stocks

First, you send them your resume, and they basically approve every US high school graduate that can create a resume for the next step. Then you have to take a test in doing the job. They provide plenty of training material and the job isn't all that hard, a few hours of rapid skimming is probably enough to pass the test for most people. Almost 100% of people would be able to pass the test after 10 hours of studying.

Comment author: nazgulnarsil 12 March 2010 11:54:43AM 1 point [-]

throwing/giving away stuff you don't use. reading instead of watching tv or browsing websites for the umpteenth time. eating more fruit and less processed sugar. exercising 10-15 minutes a day. writing down your ideas. intro to econ of some sort. spending 30 minutes a day on a long term project. meditation.

Comment author: SilasBarta 07 March 2010 02:45:29PM *  2 points [-]

Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.

Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
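The conversion described can be sketched in a few lines. Since N_A · k_B = R, dividing molar entropy by R·ln(2) gives bits per molecule. The water figure below is a standard textbook value used purely as an illustration, not a number taken from the linked post:

```python
import math

R = 8.314  # molar gas constant, J/(K*mol)

def molar_entropy_to_bits_per_molecule(s_molar_j_per_k_mol):
    # S_molar / N_A is entropy per molecule in J/K; dividing by
    # k_B * ln(2) converts to bits, and N_A * k_B = R, so:
    return s_molar_j_per_k_mol / (R * math.log(2))

# Liquid water at 25 C has a standard molar entropy of about
# 69.9 J/(K*mol) (textbook value, used here just for illustration):
bits = molar_entropy_to_bits_per_molecule(69.9)
print(round(bits, 1))  # roughly 12.1 bits per molecule
```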

Comment author: Vladimir_Nesov 07 March 2010 09:27:27AM *  4 points [-]

Game theorists discuss one-shot Prisoner's dilemma, why people who don't know Game Theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.

Comment author: RobinZ 07 March 2010 05:26:38PM 1 point [-]

Interesting. Has this experiment actually been run, and does it change the percentages in the responses relative to the textbook version?

Comment author: Vladimir_Nesov 07 March 2010 06:35:04PM 0 points [-]

That would be a scientific approach to the Dark Arts.

Comment author: RobinZ 07 March 2010 07:26:44PM 0 points [-]

The linked post seemed to run far ahead of the presented evidence - and this is a kind of situation in which the scientific method is known to be quite powerful.

Comment author: Vladimir_Nesov 07 March 2010 10:45:10PM *  0 points [-]

Sure. The Dark Arts don't stain the power of the scientific approach, though they probably defy its purpose.

Comment author: [deleted] 06 March 2010 03:24:51PM *  5 points [-]

Pick some reasonable priors and use them to answer the following question.

On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?

ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.

Comment author: orthonormal 08 March 2010 10:11:34PM *  2 points [-]

Let

  • AN = "Grandma calls on Thursday of week N",
  • BN = "Grandma comes on Friday of week N".

A toy version of my prior could be reasonably close to the following:

P(AN)=p, P(AN,BN)=pq, P(~AN,BN)=(1-p)r

where

  • the distribution of p is uniform on [0,1]
  • the distribution of q is concentrated near 1 (distribution proportional to f(x)=x on [0,1], let's say)
  • the distribution of r is concentrated near 0 (distribution proportional to f(x)=1-x on [0,1], let's say)

Thus, the joint probability distribution of (p,q,r) is given by 4q(1-r) once we normalize. Now, how does the evidence affect this? The likelihood ratio for (A1,B1,A2,B2) is proportional to (pq)^2, so after multiplying and renormalizing, we get a joint probability distribution of 24p^2q^3(1-r). Thus P(~A3|A1,B1,A2,B2)=1/4 and P(~A3,B3|A1,B1,A2,B2)=1/12, so I wind up with a 1 in 3 chance that Grandma will come on Friday, if I've done all my math correctly.

Of course, this is all just a toy model, as I shouldn't assume things like "different weeks are independent", but to first order, this looks like the right behavior.
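The update above can be sanity-checked without doing the integrals, via importance sampling over the stated priors (a quick sketch I put together; the sampling transforms for the q and r densities use the usual inverse-CDF trick):

```python
import math
import random

random.seed(1)

# Monte Carlo check of the toy model: sample (p, q, r) from the priors,
# weight each sample by the likelihood of the evidence (A1,B1,A2,B2),
# and estimate P(B3 | ~A3, evidence).
N = 200_000
num = den = 0.0
for _ in range(N):
    p = random.random()                 # p uniform on [0, 1]
    q = math.sqrt(random.random())      # density proportional to q
    r = 1 - math.sqrt(random.random())  # density proportional to 1 - r
    w = (p * q) ** 2                    # likelihood of two call-then-visit weeks
    num += w * (1 - p) * r              # weight on (~A3, B3)
    den += w * (1 - p)                  # weight on ~A3
print(round(num / den, 2))  # close to 1/3, matching the calculation above
```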

Comment author: orthonormal 09 March 2010 08:42:33AM 1 point [-]

I should have realized this sooner: P(B3|~A3) is just the updated value of r, which isn't affected at all by (A1,B1,A2,B2). So of course the answer according to this model should be 1/3, as it's the expected value of r in the prior distribution.

Still, it was a good exercise to actually work out a Bayesian update on a continuous prior. I suggest everyone try it for themselves at least once!

Comment author: RichardKennaway 08 March 2010 10:42:59AM *  1 point [-]

Using the information that she is my grandmother, I speculate on the reason why she did not call on Thursday. Perhaps it is because she does not intend to come on Friday: P(Friday) is lowered. Perhaps it is because she does intend to come but judges the regularity of the event to make calling in advance unnecessary unless she had decided not to come: P(Friday) is raised. Grandmothers tend to be old and consequently may be forgetful: perhaps she intends to come but has forgotten to call: P(Friday) is raised. Grandmothers tend to be old, and consequently may be frail: perhaps she has been taken unwell; perhaps she is even now lying on the floor of her home, having taken a fall, and no-one is there to help: P(Friday) is lowered, and perhaps I should phone her.

My answer to the problem is therefore: I phone her to see how she is and ask if she is coming tomorrow.

I know -- this is not an answer within the terms of the question. However, it is my answer.

The more abstract version you later posted is a different problem. We have two observations of A and B occurring together, and that is all. Unlike the case of Grandma's visits, we have no information about any causal connection between A and B. (The sequence of revealing A before B does not affect anything.) What is then the best estimate of P(B|~A)?

We have no information about the relation between A and B, so I am guessing that a reasonable prior for that relation is that A and B are independent. Therefore A can be ignored and the Laplace rule of succession applied to the two observations of B, giving 3/4.

ETA: I originally had a far more verbose analysis of the second problem based on modelling it as an urn problem, which I then deleted. But the urn problem may be useful for the intuition anyway. You have an urn full of balls, each of which is either rough or smooth (A or ~A), and either black or white (B or ~B). You pick two balls which turn out to be both rough and black. You pick a third and feel that it is smooth before you look at it. How likely is it to be black?

Comment author: wnoise 08 March 2010 09:54:37PM 2 points [-]

Directly using the Laplace rule of succession on the product sample space A × B gives weights proportional to:

(A,B): 3
(A, ~B): 1
(~A, B): 1
(~A, ~B): 1

Conditioning on ~A, P(B|~A) = 1/2. Assuming independence does make a significant difference with this little data.
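The two estimates under discussion differ exactly as claimed; a minimal sketch making both explicit:

```python
from fractions import Fraction

# Estimating P(B|~A) after observing (A, B) twice, two ways.

# 1. Assume A and B independent: Laplace's rule of succession on B alone,
#    (successes + 1) / (trials + 2), with 2 successes in 2 trials.
p_independent = Fraction(2 + 1, 2 + 2)

# 2. Laplace smoothing on the joint outcomes instead: observed counts
#    (A,B)=2, (A,~B)=0, (~A,B)=0, (~A,~B)=0 become weights 3, 1, 1, 1,
#    and conditioning on ~A gives 1 / (1 + 1).
p_joint = Fraction(1, 1 + 1)

print(p_independent, p_joint)  # 3/4 versus 1/2
```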

Comment author: orthonormal 08 March 2010 09:29:14PM 2 points [-]

We have no information about the relation between A and B, so I am guessing that a reasonable prior for that relation is that A and B are independent.

On the contrary, on two points.

First, "A and B are independent" is not a reasonable prior, because it assigns probability 0 to them being dependent in some way— or, to put it another way, if that were your prior and you observed 100 cases and A and B agreed each time (sometimes true, sometimes false), you'd still assume they were independent.

What you should have said, I think, is that a reasonable prior would have "A and B independent" as one of the most probable options for their relation, as it is one of the simplest. But it should also give some substantial weight to simple dependencies like "A and B identical" and "A and B opposite".

Second, the sense in which we have no prior information about relations between A and B is not a sense that justifies ignoring A. We had no prior information before we observed them agreeing twice, which raises the probability of "A and B identical" while somewhat lowering that of "A and B independent".

Comment author: RichardKennaway 08 March 2010 10:33:25PM *  -2 points [-]

First, "A and B are independent" is not a reasonable prior, because it assigns probability 0 to them being dependent in some way

This raises a question of the meaningfulness of second-order Bayesian reasoning. Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value? A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C.

On the second point, seeing A and B together twice, or twenty times, tells me nothing about their independence. Almost everyone has two eyes and two legs, and therefore almost everyone has both two eyes and two legs, but it does not follow from those observations alone that possession of two eyes either is, or is not, independent of having two legs. For example, it is well-known (in some possible world) that the rare grey-green greasy Limpopo bore worm invariably attacks either the eyes, or the legs, but never both in the same patient, and thus observing someone walking on healthy legs conveys a tiny positive amount of probability that they have no eyes; while (in another possible world) the venom of the giant rattlesnake of Sumatra rapidly causes both the eyes and the legs of anyone it bites to fall off, with the opposite effect on the relationship between the two misfortunes. I can predict that someone has both two eyes and two legs from the fact that they are a human being. The extra information about their legs that I gain from examining their eyes could go either way.

But that is just an intuitive ramble. What is needed here is a calculation, akin to the Laplace rule of succession, for observations in a 2x2 contingency table. Starting from an ignorance prior that the probabilities of A&B, A&~B, B&~A, and ~A&~B are each 1/4, and observing a, b, c, and d examples of each, what is the appropriate posterior? Then fill in the values 2, 0, 0, and 0.
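One standard way to do that calculation (my reading of "appropriate", not something the comment commits to): treat the 1/4-each ignorance prior as a uniform Dirichlet over the four cells, so each cell carries a pseudocount of 1 and the posterior predictive for a cell is (count + 1)/(N + 4). A quick sketch:

```python
from fractions import Fraction

def posterior_cell_probs(a, b, c, d):
    """Posterior predictive for the cells A&B, A&~B, B&~A, ~A&~B,
    starting from a uniform Dirichlet prior (pseudocount 1 per cell,
    i.e. the 1/4-each ignorance prior)."""
    counts = [a, b, c, d]
    n = sum(counts)
    return [Fraction(k + 1, n + 4) for k in counts]

# Filling in the values 2, 0, 0, 0 from above:
# A&B gets (2 + 1)/6 = 1/2, and each empty cell gets 1/6.
print(posterior_cell_probs(2, 0, 0, 0))
```

On this account, two joint observations push P(A&B) from 1/4 to 1/2, which is why they are evidence about the relationship even though no cell has been ruled out.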

ETA: On reading the comments, I realise that the above is almost all wrong.

Comment author: jimrandomh 09 March 2010 01:43:09AM 4 points [-]

This raises a question of the meaningfuless of second-order Bayesian reasoning. Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value? A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C.

In order to have a probability distribution rather than just a probability, you need to ask a question that isn't boolean, ie one with more than two possible answers. If you ask "Will this coin come up heads on the next flip?", you get a probability, because there are only two possible answers. If you ask "How many times will this coin come up heads out of the next hundred flips?", then you get back a probability for each number from 0 to 100 - that is, a probability distribution. And if you ask "what kind of coin do I have in my pocket?", then you get a function that takes any possible description (from "copper" to "slightly worn 1980 American quarter") and returns a probability of matching that description.
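A toy illustration of the first two cases, assuming a fair coin for concreteness (the 0.5 is my assumption, not part of the comment):

```python
from math import comb

# Boolean question: a single probability.
p_heads_next = 0.5

# Count question: a probability for each possible answer 0..100,
# i.e. a probability distribution (binomial, for a fair coin).
dist = [comb(100, k) * 0.5**100 for k in range(101)]

print(p_heads_next)
print(max(range(101), key=lambda k: dist[k]))  # 50 is the most likely count
print(sum(dist))  # the 101 probabilities sum to 1
```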

Comment author: orthonormal 08 March 2010 11:02:41PM *  3 points [-]

Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value?

Depends on how you're doing this; if you have a continuous prior for the probability of C, with an expected value of 0.469, then no, and future evidence will continue to modify your probability distribution. If your prior for the probability of C consists of a delta mass at 0.469, then yes, your model perhaps should be criticized, as one might criticize Rosencrantz for continuing to assume his coin is fair after 30 consecutive heads.

A Bayesian reasoner actually would have a hierarchy of uncertainty about every aspect of ver model, but the simplicity weighting would give them all low probabilities unless they started correctly predicting some strong pattern.
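For concreteness, a sketch of the continuous-prior case, using a hypothetical Beta(469, 531) prior (numbers chosen by me only so the mean comes out to 0.469):

```python
from fractions import Fraction

# A continuous prior over the probability of C: Beta(a, b), mean a/(a+b).
a, b = 469, 531
prior_mean = Fraction(a, a + b)  # 469/1000 = 0.469

# Thirty consecutive occurrences of C update the Beta parameters,
# so the expectation moves; a delta mass at 0.469 would not budge.
posterior_mean = Fraction(a + 30, a + 30 + b)

print(float(prior_mean), float(posterior_mean))  # 0.469 -> roughly 0.484
```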

A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C.

Independence has a specific meaning in probability theory, and it's a very delicate state of affairs. Many statisticians (and others) get themselves in trouble by assuming independence (because it's easier to calculate) for variables that are actually correlated.

And depending on your reference class (things with human DNA? animals? macroscopic objects?), having 2 eyes is extremely well correlated with having 2 legs.

Comment author: FAWS 08 March 2010 10:43:04PM 2 points [-]

On the second point, seeing A and B together twice, or twenty times, tells me nothing about their independence.

Even without any math, it already tells you that they are not mutually exclusive. See wnoise's reply to the grandparent post for the Laplace rule equivalent.

Comment author: [deleted] 08 March 2010 08:12:29PM 2 points [-]

I really like your urn formulation.

Comment author: Peter_de_Blanc 07 March 2010 09:57:31PM 1 point [-]

OK, I'll use the same model I use for text. The zeroth-order model is maxentropy, and the kth-order model is a k-gram model with a pseudocount of 2 (the alphabet size) allocated to the (k-1)th-order model.

In this case, since there's never before been a Thursday in which she did not call, we default to the 1st-order model, which says the probability is 3/4 that she will come on Friday.
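If I'm reading the model right, it can be sketched as a recursive back-off. The 'V'/'N' symbols (visit / no visit) and the two-Friday history below are my stand-ins, since the original problem data isn't restated in this thread:

```python
def kgram_prob(history, symbol, order, alphabet_size):
    """P(symbol | history) under the hierarchical model described above:
    the order-k estimate counts (k-1)-symbol contexts in the history and
    allocates a pseudocount equal to the alphabet size to the order-(k-1)
    model. Order 0 is the max-entropy (uniform) model."""
    if order == 0:
        return 1.0 / alphabet_size
    m = order - 1                      # context length at this order
    ctx = history[len(history) - m:] if m > 0 else ''
    n_ctx = n_sym = 0
    for i in range(len(history) - m):  # count completed contexts only
        if history[i:i + m] == ctx:
            n_ctx += 1
            if history[i + m] == symbol:
                n_sym += 1
    backoff = kgram_prob(history, symbol, order - 1, alphabet_size)
    return (n_sym + alphabet_size * backoff) / (n_ctx + alphabet_size)

# Two earlier Fridays, a visit both times; the 1st-order model reduces
# to Laplace's rule of succession: (2 + 1) / (2 + 2) = 0.75.
print(kgram_prob('VV', 'V', 1, 2))
```

An unseen context at order k contributes zero counts, so the estimate falls through to order k-1 untouched; that is the "default to the 1st-order model" step.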

Comment author: Douglas_Knight 22 September 2010 01:56:56AM *  0 points [-]

OK, I'll use the same model I use for text. The zeroth-order model is maxentropy, and the kth-order model is a k-gram model with a pseudocount of 2 (the alphabet size) allocated to the (k-1)th-order model.

Is this a standard model? Does it have a name? a reference?
I see that the level 1 model is Laplace's rule of succession. Is there some clean statement about the level k model? Is this a bayesian update?

In this case, since there's never before been a Thursday in which she did not call, we default to the 1st-order model, which says the probability is 3/4 that she will come on Friday.

You seem to be treating the string as being labeled by alternating Thursdays and Fridays, which have letters drawn from different alphabets. The model easily extends to this, but it was probably worth saying, particularly since the two alphabets happen to have the same size.

I find it odd that almost everyone treated weeks as discrete events. In this problem, days seem like the much more natural unit to me. ata probably agrees with me, but he didn't reach a conclusion. With weeks, we have very few observations, so a lot depends on our model, like whether we use alphabets of size 2 for Thursday and Friday (Peter), or whether we use alphabets of size 4 for the whole week (wnoise). I'm going to allow calls and visits on each day and use an alphabet of size 4 for each day. I think it would be better to use a Peter-ish system of separating morning visits from evening calls, but with data indexed by days, we have a lot of data, so I don't think this matters so much.

I'll run my weeks Sun-Sat. Weeks 1 and 2 are complete and week 3 is partial. Treating days as independent and having 4 outcomes: ([no]visit)x([no]call). I interpret the unspecified days as having no call and no visit. Using Laplace's rule of succession, we have 4/23 chance of visit, which sounds pretty reasonable to me.

But if we use Peter's hierarchical model, I think our chance of a visit is 4/23*4/17*4/14*4/11*4/8*4/5 = 1/500. That is, since we've never seen a visit after a no-call/no-visit day, the only way to get a visit is from level 1 of the model, so we multiply the chance of falling through from level 2 to level 1, from level 3 to 2, etc. The chance of falling through from level n+1 to level n is 4/(4+c), where c is the number of times we've seen an n+1-gram that continues the last n days. So for n=5, the last 5 days were no-visit-no-call, which we've seen once before, culminating in the no-visit-call Thursday of the second week. So that's our factor of 4/5. For n=4, we've seen the resolution of 4 consecutive days of no-visit-no-call, once in the first week, twice in the second week, and once in the third week; so that's the 4/8.

1/500 seems awfully small to me. Am I using this model correctly? I like level 2, 4/23*4/17=4%, but maybe I'm implicitly getting "2" from a prior that the call is connected to the visit.

With Peter's two alphabets, each of size two, level 1 yields 3/21, level 2 3/21*2/18=2%, and the full model 3/21*2/18*2/16*2/15*2/13*2/12*2/10*2/9*2/7*2/6*2/4*2/4 = 10^-8. Levels 1 and 2 were a little smaller than with the size 4 alphabet, but the full model much smaller. I was expecting the probability of a visit to be about squared, but it was cubed.
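The arithmetic above is easy to check mechanically. The counts below (19 observed days, 2 of them visit days) are inferred from the 4/23 figure rather than taken from the original problem statement, so treat this as a consistency check:

```python
from fractions import Fraction

A = 4  # alphabet size: (visit or not) x (call or not), one letter per day

# Laplace's rule with pseudocount 1 per outcome; P(visit) sums the
# two visit-outcomes, hence the +2 in the numerator.
n_days, n_visit_days = 19, 2
p_visit = Fraction(n_visit_days + 2, n_days + A)
print(p_visit)  # 4/23

# The hierarchical model multiplies the fall-through factors 4/(4+c):
p_full = Fraction(1)
for denom in (23, 17, 14, 11, 8, 5):
    p_full *= Fraction(4, denom)
print(float(p_full))  # about 0.0017, i.e. the "1/500" ballpark
```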

Comment author: [deleted] 08 March 2010 01:13:32AM 2 points [-]

I beg your pardon?

Comment author: Sniffnoy 07 March 2010 04:50:11AM 2 points [-]

In the calls, does she specify when she is coming over? I.e. does she say she'll be coming over on Thursday, Friday, just sometime in the near future, or she leaves it for you to infer?

Comment author: [deleted] 07 March 2010 08:33:20PM *  1 point [-]

The information I gave is the information you have. Don't make me make the problem more complicated.

ETA: Let me expand on this before people start getting on my case.

Rationality is about coming to the best conclusion you can given the information you have. If the information available to you is limited, you just have to deal with it.

Besides, sometimes, having less information makes the problem easier. Suppose I give you the following physics problem:

I throw a ball from a height of 4 feet; its maximum height is 10 feet. How long does it take from the time I throw it for it to hit the ground?

This problem is pretty easy. Now, suppose I also tell you that the ball is a sphere, and I tell you its mass and radius, and the viscosity of the air. This means that I'm expecting you to take air resistance into account, and suddenly the problem becomes a lot harder.
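For what it's worth, the easy version really is easy. A quick check, assuming a purely vertical throw, no air resistance, and g ≈ 32.17 ft/s² (the problem is stated in feet):

```python
from math import sqrt

g = 32.174              # ft/s^2
h0, h_max = 4.0, 10.0   # launch height and peak height, in feet

v0 = sqrt(2 * g * (h_max - h0))  # launch speed, from energy conservation
t_up = v0 / g                    # time to coast up to the apex
t_down = sqrt(2 * h_max / g)     # free fall from the apex to the ground

print(round(t_up + t_down, 2))   # about 1.4 seconds
```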

If you really want a problem where you have all the information, here:

Every time period, input A (of type Boolean) is revealed, and then input B (also of type Boolean) is revealed. There are no other inputs. In time period 0, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 1, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 2, input A is revealed to be FALSE. What is the probability that input B will be revealed to be TRUE?
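One hedged way to attack this, in the spirit of the prior-over-relations discussion elsewhere in the thread: put a uniform prior on three toy hypotheses about how A and B relate, assume a 1/2 marginal for each input, and update on the two observed periods. All of those modeling choices are mine; the problem statement doesn't fix them:

```python
# Joint probability P(A=a, B=b) under each toy hypothesis,
# assuming each input is TRUE with probability 1/2.
hypotheses = {
    "independent": lambda a, b: 0.25,
    "identical":   lambda a, b: 0.5 if a == b else 0.0,
    "opposite":    lambda a, b: 0.5 if a != b else 0.0,
}
prior = {h: 1.0 / 3.0 for h in hypotheses}

data = [(True, True), (True, True)]  # periods 0 and 1

# Posterior over hypotheses after seeing the data.
post = {}
for h, joint in hypotheses.items():
    likelihood = 1.0
    for a, b in data:
        likelihood *= joint(a, b)
    post[h] = prior[h] * likelihood
z = sum(post.values())
post = {h: p / z for h, p in post.items()}

# P(B=TRUE | A=FALSE) is 1/2, 0, and 1 under the three hypotheses.
p_b = post["independent"] * 0.5 + post["opposite"] * 1.0
print(post)   # two agreements make "identical" dominate
print(p_b)    # about 0.1
```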

Comment author: Douglas_Knight 07 March 2010 11:51:07PM *  4 points [-]

Having less information makes it easier to satisfy the teacher; it does not make it easier to determine when the ball hits the ground. Incidentally, I got the impression somehow that there are venues where physics teachers scold students for using too much information.

ETA (months later): I do think it's a good exercise, I just think this is not why.

Comment author: [deleted] 08 March 2010 12:54:28AM 0 points [-]

Here, though, the problem actually is simpler the less information you have. As an extreme example, if you know nothing, the probability is always 1/2 (or whatever your prior is).

Comment author: RobinZ 07 March 2010 09:29:14PM *  -1 points [-]

I can say immediately that it is less than 50% - to be more rigorous would take a minute.

Edit: Wait - no, I can't. If the variables are related, then that conclusion would appear, but it's not necessary that they be.

Comment author: ata 07 March 2010 02:51:06AM 1 point [-]

Does she come over unannounced on any days other than Friday?

Comment author: [deleted] 07 March 2010 08:34:05PM 0 points [-]
Comment author: RobinZ 06 March 2010 04:03:57PM 2 points [-]

I fail to see how this question has a perceptibly rational answer - too much depends on the prior.

Comment author: [deleted] 06 March 2010 10:29:10PM 2 points [-]

Presumably, once you've picked your priors, the rest follows. And presumably, once you've come up with an answer, you'll disclose your reasoning, and your chosen priors.

Comment author: Mitchell_Porter 06 March 2010 10:15:46AM 1 point [-]

Papers from this weekend's AGI conference in Switzerland, here.

Comment author: FrF 05 March 2010 08:49:45PM *  0 points [-]

The Final Now, a new short story by Gregory Benford about (literally) End Times.

Quotation in rot13 for the spoiler-averse's sake. It's an interesting passage and, like FAWS, I also think it's not that revealing, so it's probably safe to read it in advance.

("Bar" vf n cbfg-uhzna fgnaq-va sbe uhznavgl juvpu nqqerffrf n qrzvhetr ragvgl, qrfvtangrq nf "Ur" naq "Fur".)

"Bar synerq jvgu ntvgngrq raretvrf. “Vs lbh unq qrfvtarq gur havirefr gb er-pbyyncfr, gurer pbhyq unir orra vasvavgr fvzhyngrq nsgreyvsr. Gur nfxrj pbzcerffvba pbhyq shry gur raretl sbe fhpu pbzchgngvba—nyy fdhrrmrq jvguva gung svany ren!”

“Gung jnf n yrff vagrerfgvat pubvpr,” Fur fnvq. “Jr pubfr guvf havirefr sbe vgf tenaq inevrgl. Infgre ol sne fvapr vg unf ynfgrq fb ybat.”

“Inevrgl jnf bhe tbny—gb znxr gur zbfg fgvzhyngvat fcnpr-gvzr jr pbhyq,” Ur fnvq, “Lbh, fznyy Bar, frrz gb uneobe gjva qrfverf—checbfr naq abirygl—naq fb cebterff.”

Bar fnvq, “Bs pbhefr!” Gura, fulyl, “. . . naq ynfgvat sbe rgreavgl.”

Fur fnvq, “Gubfr pbagenqvpg.”"

Comment author: FAWS 05 March 2010 09:16:09PM *  0 points [-]

I personally don't really care about spoilers, and having read the story now the passage you quote doesn't seem all that terribly spoilerish to me anyway, but you should note that spoiler protection has been enforced for "spoilers" considerably less spoilerish than that around here.

Comment author: FrF 05 March 2010 09:43:28PM 0 points [-]

I completely forgot about spoilers! I used this particular quotation because I innocently thought it would be a "hook" to motivate people to read the story.

Should I rot13 the quotation for reasons of precaution?

Comment author: FAWS 05 March 2010 11:07:34PM 1 point [-]

I completely forgot about spoilers! I used this particular quotation because I innocently thought it would be a "hook" to motivate people to read the story.

It was for me, but as I said I don't care about spoilers.

Should I rot13 the quotation for reasons of precaution?

Possibly. I can't always predict how people who care about spoilers act, sometimes it seems to be mainly about the principle.

Comment author: gwern 06 March 2010 12:37:27AM 1 point [-]

Possibly. I can't always predict how people who care about spoilers act, sometimes it seems to be mainly about the principle.

Indeed. Just look at Eliezer threatening to ban me for mentioning a ~5 year old plot twist in an anime.

Comment deleted 05 March 2010 04:25:49PM *  [-]
Comment author: sketerpot 05 March 2010 10:15:51PM *  1 point [-]

I have read through some of Eliezer's posts, but his tactics to come up with thought-provoking counter-questions seem to rely heavily on superior intellect and knowledge. And I am also not in for memorizing question/counter-question pairs.

Speaking as someone who gets into internet arguments with religious people for (slightly frustrating) recreation, I know some really simple tactics you can use. Find out the answer to this question:

What does the person you're talking with believe, and what is the evidence for it?

Maintain proper standards of evidence. The existence of trees is not evidence for the Bible's veracity, no matter how many people seem to think so. If someone got a flu shot in the middle of flu season and got flu symptoms the next day, this is more likely to be a coincidence than to be caused by the vaccine. If you understand how evidence works -- and you certainly seem to -- then this is a remarkably general method for rebutting a lot of silly claims.

This is the equivalent of keeping your eye on the ball. It's a basic technique, and utterly essential.

[Backup strategy: Replace whatever beliefs the person you're talking to holds with another set, and see if their arguments still work equally well. If the answer is yes, then Bayes says that those arguments fail. For example, "Look at all the people who have felt Jesus in their hearts" can be applied just as strongly to support most other religions just by substituting something else for "Jesus". Or, most arguments against gay marriage work equally well against interracial marriage.

Backup backup strategy: quickly follow a rebuttal with an attack on the faulty foundations of your interlocutor's worldview. Be polite, but put them on the defensive. If you can't shake them with rationality, you can at least rattle them.]

Comment deleted 06 March 2010 04:49:07PM [-]
Comment author: sketerpot 06 March 2010 07:07:15PM 0 points [-]

Well, that's tough enough for me to do---but how do you challenge others in such a way that they will understand what "What's the evidence?" actually means?

Ah, then it sounds like your real problem is that you're not yet skilled enough at explaining what evidence means, in an easy-to-grasp sort of way. In the case of your homeopathy example, I would say that the thing that matters is: what percentage of patients given homeopathic remedies get better? Is it better than the percentage who get better without homeopathic remedies, all other things being equal? (Pause to hash this out; it's important to get the other guy agreeing that this is the most direct measure of whether or not homeopathy works.) Then you can point at the many studies showing that, when we actually tried this experiment out, there wasn't any difference between the people who were treated with homeopathy and the people who weren't.

The fact that they believe in god proves that everybody believes in a god (I actually encountered this very argument; it was puzzling to me, as a teenager I thought they just did not count me as a full person, now I expect that they indeed were).

Oh man, I ran into that when I was a teenager, too. To this day I have no idea how to respond to that; it's like running into somebody who thinks that Mexicans are all p-zombies, except more socially acceptable. I don't know that there's really anything you can possibly say to someone who's that nuts, except maybe try talking about what it's like to not believe in god, and try to inject some outside context into their world.

Your backup strategies also seem to be more related to improve the side of the rational agent, not to get the other discussion partners thinking.

I admit, most of my debating tactics are aimed at lurkers watching the debate, not the other participant. That's usually the most effective way to do it online, but in one-on-one discussions, I agree with you that such tactics could be counterproductive. Even then, though, you may be able to get people to retreat from some of their sillier positions, or plant a seed of doubt. It has happened in the past.

Anyway, I still think that applying the other guy's logic to argue for something else is a good way of getting them thinking. I remember asking a bunch of people "why are you [religion X] and not [religion y]? Other than by accident of birth." and getting quite a few of them to really pause and ponder.

Comment author: JGWeissman 05 March 2010 06:00:27AM 4 points [-]

Should we have a sidebar section "Friends of LessWrong" to link to sites with some overlap in goals/audience?

I would include TakeOnIt in such a list. Any other examples?

Comment author: byrnema 04 March 2010 10:42:56PM 0 points [-]

Does anyone here know about interfacing to the world (and mathematics) in the context of a severely limiting physical disability? My questions are along the lines of: what applications are good (not buggy) to use and what are the main challenges and considerations a person of normal abilities would misjudge or not be aware of? Thanks in advance!

Comment author: roland 04 March 2010 08:36:28PM *  3 points [-]

List with all the great books and videos

Recently I've read a few articles that mentioned the importance of reading the classic works, like the Feynman lectures on physics. But where can I find those? Wouldn't it be nice if we had a central place, maybe Wikipedia, where you can find a list of all the great books, video lectures, and web pages, divided by field (physics, mathematics, computer science, economics, etc.)? So if someone wants to know what he has to read to get a good understanding of the basic knowledge of any field, he will have a place to look it up. It doesn't necessarily need to have the actual works, but at least a pointer to them.

Is there such a comprehensive list somewhere?

Comment author: nazgulnarsil 12 March 2010 11:57:37AM 1 point [-]

Every time someone tries to make such a list collaboratively, much of the effort eventually diffuses into arguments over inclusion (see Wikipedia).

Comment author: CronoDAS 04 March 2010 05:21:13PM *  2 points [-]

I saw a commenter on a blog I read making what I thought was a ridiculous prediction, so I challenged him to make a bet. He accepted, and a bet has been made.

What do you all think?

Comment author: GreenRoot 04 March 2010 06:02:59PM 1 point [-]

Very good. I see this forcing more careful thought by the poster, either now or later, and more skepticism in the blog's audience.

I'd recommend restating all the terms of the bet in a single comment or another web page, which both of you explicitly accept. This will make things easier to reference eight months from now. Might also be good to name a simple procedure like a poll on the blog to resolve any disagreements (like the definition of "Healthcare reform passes").

And please, reply again here or make a new open thread comment once this gets resolved. I'd love to hear how it turned out and what the impact on poster's or other's beliefs was.

Comment author: CronoDAS 05 March 2010 12:00:11AM 0 points [-]

He's a right-wing commenter on a liberal blog; most of the other commenters don't seem to take him seriously either, but he hasn't done anything to become ban-worthy.

Comment author: Cyan 04 March 2010 05:31:49PM 0 points [-]

Good job.

Comment author: Kevin 04 March 2010 10:41:28AM 3 points [-]

Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.

Comment author: FAWS 04 March 2010 11:52:04AM 2 points [-]

The link named "top" in the top bar, below the banner? Starting with the 10 all time highest ranked articles and continuing with the 10 next highest when you click "next", and so on? Or do I misunderstand you and you mean something else?

Comment author: Kevin 04 March 2010 12:00:58PM 1 point [-]

Thanks, I was missing the drop down button on that page.

Comment author: h-H 04 March 2010 01:33:22AM *  4 points [-]

While not so proficient in math, I do scour arXiv on occasion, and am rewarded with gems like this. Enjoy :)

"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf

Comment author: arundelo 04 March 2010 02:36:51AM 2 points [-]

Neat find! I haven't read all of it yet, but I found this striking:

It was precisely the view, that successful abstractions should not be regarded as representing something real, that prevented Lorentz from discovering special relativity. He believed that the time t of an observer at rest with respect to the aether (which is a genuine example of reifying an unsuccessful abstraction) was the true time, whereas the quantity t′ of another observer, moving with respect to the first, was merely an abstraction that did not represent anything real in the world. Lorentz himself admitted the failure of his approach:

The chief cause of my failure was my clinging to the idea that the variable t only can be considered as the true time and that my local time t′ must be regarded as no more than an auxiliary mathematical quantity. In Einstein's theory, on the contrary, t′ plays the same part as t; if we want to describe phenomena in terms of x′, y′, z′, t′ we must work with these variables exactly as we could do with x, y, z, t.

This reminds me of Mach's Principle: Anti-Epiphenomenal Physics:

When you see a seemingly contingent equality - two things that just happen to be equal, all the time, every time - it may be time to reformulate your physics so that there is one thing instead of two. The distinction you imagine is epiphenomenal; it has no experimental consequences. In the right physics, with the right elements of reality, you would no longer be able to imagine it.

Comment author: wnoise 04 March 2010 01:57:21AM *  2 points [-]

I generally prefer that links to papers on the arXiv go to the abstract, like so: http://arxiv.org/abs/1001.4218

This lets us read the abstract, and easily get to other versions of the same paper (including the latest, if some time goes by between your posting and my reading), and get to other works by the same author.

EDIT: overall, reasonable points, but some things "pinging" my crank-detectors. I suppose I'll have to track down reference 10 and the 4/3 claim for electro-magnetic mass.

Comment author: Mitchell_Porter 04 March 2010 04:50:49AM *  3 points [-]

overall, reasonable points

I disagree. I think it's a paper which looks backwards in an unconstructive way. The author is hoping for conceptual breakthroughs as good as relativity and quantum theory, but which don't require engagement with the technical complexities of string theory or the Standard Model. Those two constructions respectively define the true theoretical and empirical frontier, but instead the author wants to ignore all that, linger at about a 1930s conceptual level, and look for another way.

ETA: As an example of not understanding contemporary developments, see his final section, where he says

While string theory has extensively studied how the interactions in the hydrogen atom can be represented in terms of the string formalism, I wonder how string theory would answer a much simpler question – what should be the electron in the ground state of the hydrogen atom in order that the hydrogen atom does not possess a dipole moment in that state?

I don't know what significance this question has for the author, but so far as I know, the hydrogen atom has no dipole moment in its ground state because the wavefunction is spherically symmetric. This will still be true in string theory. The hydrogen atom exists on a scale where the strings can be approximated by point particles. I suspect the author is thinking that because strings are extended objects they have dipole moments; but it's not of a magnitude to be relevant at the atomic scale.

Comment author: wnoise 04 March 2010 06:48:02AM 3 points [-]

Of course he looks backwards. You can't analyze why any discovery didn't happen sooner, even though all the pieces were there, unless you look backwards. I thought the case study of SR was quite illuminating, though it goes directly counter to his attack on string theory. After getting the Lorentz transform, it took a surprisingly long time for anyone to treat the transformed quantities as equivalent -- that is, to take the math seriously. And for string theory, he says they take the math too seriously. Of course, the Lorentz transform was more clearly grounded in observed physical phenomena.

I completely agree he doesn't understand contemporary developments, and that was some of what I referred to as "pinging my crank-detectors", along with the loose analogy between 4-d bending in "world tubes" and that in 3-d rods. I don't necessarily see that as a huge problem if he's not pretending to be able to offer us the next big revolution on a silver platter.

Comment author: Cyan 04 March 2010 02:43:30AM *  2 points [-]

the 4/3 claim for electro-magnetic mass

Wikipedia points to the original text of a 1905 article by Poincaré. How's your French?

Comment author: wnoise 04 March 2010 03:02:43AM 2 points [-]

Thanks. It's decent, actually, but there's still some barrier. Increasing that barrier is changes to physics notation since then (no vectors!).

Fortunately my university library appears to have a copy of an older edition of Rohrlich's Classical Charged Particles, which may help piece things together.

Comment author: Cyan 04 March 2010 03:26:46AM *  2 points [-]

Petkov wrote:

Feynman [wrote], "It is therefore impossible to get all the mass to be electromagnetic in the way we hoped. It is not a legal theory if we have nothing but electrodynamics" [13, p. 28-4]; but he was unaware that the factor of 4/3 had already been accounted for [10].

It's worth noting that Feynman's statements are actually correct. According to Wikipedia, the problem is solved by postulating a non-electromagnetic attractive force holding the charged particle together, which subtracts 1/3 from the 4/3 factor, leaving unity. Petkov doesn't explicitly say that Feynman is wrong, but his phrasing might leave that impression.

Comment author: FAWS 04 March 2010 12:11:21AM *  0 points [-]

Re: Cognitive differences

When you try to mentally visualize an image, for example a face, can you keep it constant indefinitely?

( For me visualisation seems to always entail flashing an image, I'd say for less than 0.2 seconds total. If I want to keep visualizing the image I can flash it again and again in rapid succession so that it appears almost seamless, but that takes effort and after at most a few seconds it will be replaced by a different but usually related image. )

If yes, would you describe yourself as a visual thinker? Are you good at drawing? Good at remembering faces?

(No, so-so, no)

Comment author: AdeleneDawner 04 March 2010 02:44:07AM *  2 points [-]

When you try to mentally visualize an image, for example a face, can you keep it constant indefinitely?

Not indefinitely, but the limiting factor is my attention quality and span. If I get distracted, the image disappears; if I try to pay attention to other things while continuing to visualize something, the visualization can subtly morph in response to the other things I'm thinking about, and it's hard to tell if it's morphing or not. (This effect seems closely related to priming.)

If yes, would you describe yourself as a visual thinker? Are you good at drawing? Good at remembering faces?

I'm a very visual thinker. I'm not good at drawing, but that appears to be a function of poor fine motor control and lack of practice; I have been known to surprise myself and others with how well I draw for someone who almost never does so. I'm not very good at remembering faces, either, but again other factors affect that; I tend to avoid looking at faces in the first place, since I find eye contact overwhelming. I seem to be very good at remembering other complex visual things, though.

Comment author: gwern 04 March 2010 02:21:21AM 0 points [-]

I can hold an image/face steady for about a full second just sitting here. I could probably do better while meditating; so I think it's more an issue of 'can you concentrate' than anything else.

(I'm a pretty visual thinker, but my hearing-impairment also means I'm anomalous.)

Comment author: Morendil 03 March 2010 08:22:56PM 0 points [-]

I'm drafting a post to build on (and beyond) some of the themes raised by Seth Godin's quote on jobs and the ensuing discussion.

I'm likely to explore the topic of "compartmentalization". But damn, is that word ugly!

Is there an acceptable substitute?

Comment author: arundelo 03 March 2010 08:34:50PM 0 points [-]

"compartmentalization". But damn, is that word ugly!

It has never bothered me.

Comment author: XiXiDu 03 March 2010 01:45:17PM 6 points [-]

How important are 'the latest news'?

These days many people are following an enormous amount of news sources. I myself notice how skimming through my Google Reader items is increasingly time-consuming.

What is your take on it?

  • Is it important to read up on the latest news each day?
  • If so, what are your sources, please share them.
  • What kinds of news are important?

I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist should stay up to date on? For example, when trying to reduce my news load, I'm trying to take into account how much of what I know and do has its origins in some blog post or news item. Would I even know about lesswrong.com if I wasn't the heavy news addict that I am?

What would it mean to ignore most news and concentrate on my goals of learning math, physics and programming while reading lesswrong.com? Have I already reached a level of knowledge that allows me to get from here to everywhere, without exposing myself to all the noise out there in hope of coming across some valuable information nugget which might help me reach the next level?

How do we ever know if there isn't something out there that is more worthwhile, valuable, beautiful, something that makes us happier and less wrong? At what point should we cease to be the tribesman who's happily trying to improve his hunting skills but ignorant of the possible revolutions taking place in a city only 1000 miles away?

Is there a time to stop searching and approach what is at hand? Start learning and improving upon the possibilities we already know about? What proportion of one's time should a rationalist spend on the prospect of unknown unknowns?

Comment author: h-H 06 March 2010 01:29:49AM 0 points [-]

Yeah, news is usually a time/attention sink; I go to my bookmarked blogs etc. whenever I feel like procrastinating.

15-20 minutes of looking at the main news sites/blogs should be enough to tell you what the biggest developments are, but really, I read them for entertainment value as much as for anything else.

As a side note, antiwar is a good site for world news.

Comment author: Morendil 03 March 2010 08:58:06PM *  3 points [-]

Good question, which I'm finding surprisingly hard to answer. (i.e. I've spent more time composing this comment than is perhaps reasonable, struggling through several false starts).

Here are some strategies/behaviours I use: expand and winnow; scorched earth; independent confirmation; obsession.

  • "expand and winnow": after finding an information source I really like (using the term "source" loosely, a blog, a forum, a site, etc.) I will often explore the surrounding "area", subscribe to related blogs or sources recommended by that source. In a second phase I will sort through which of these are worth following and which I should drop to reduce overload
  • "scorched earth": when I feel like I've learned enough about a topic, or that I'm truly overloaded, I will simply drop (almost) every subscription I have related to that topic, maybe keeping a major source to just monitor (skim titles and very occasionally read an item)
  • "independent confirmation": I do like to make sure I have a diversified set of sources of information, and see if there are any items (books, articles, movies) which come at me from more than one direction, especially if they are not "massively popular" items, e.g. I'd discard a recommendation to see Avatar, but I decided to dive into Jaynes when it was recommended on LW and my dad turned out to have liked it enough to have a hard copy of the PDF
  • "obsession": there typically is one thing I'm obsessed with (often the target of an expand and winnow operation); e.g. at various points in my life I've been obsessed with Agora Nomic, XML, Java VM implementation, Agile, personal development, Go, and currently whatever LW is about. An "obsessed" topic can be but isn't necessarily a professional interest, but it's what dominates my other curiosity and tends to color my other interests. For instance while obsessed with Go I pursued the topic both for its own sake and as a source of metaphors for understanding, say, project management or software development. I generally quit ("scorched earth") once I become aware I'm no longer learning anything, which often coincides with the start of a new obsession.

My RSS feeds folder, once massive, is down to a half dozen indispensable blogs. I've unsubscribed from most of the mailing lists I used to read. My main "monitored" channel is Twitter, where I follow a few dozen folks who've turned up gold in the past. My main "active" source of new juicy stuff to think about is LW.

(ETA: as an example of "independent confirmation" in the past two minutes, one of my Agile colleagues on Twitter posted this link.)

Comment author: Rain 03 March 2010 08:55:56PM *  8 points [-]

I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.

For any source that contained news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out and took too much time and effort, on a daily basis, to find. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.

However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.

I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.

ETA: My feed reader contains the following:

For the vast majority of posts on each of these feeds, I only read the headline. Feeds where I consistently (>25%) read the articles or comments are: Slashdot (mostly while bored at work), Marginal Revolution (the only place I read every post), Sentient Developments, Accelerating Future, and LessWrong. Even for those, I rarely (<10%) read linked articles, preferring instead to read only the distillation by the blog author, or the comments by other users.

ETA2: I also listen to NPR during my short commute to and from work, and occasionally watch the Daily Show and the Colbert Report online, for entertainment. Firefox with NoScript and Adblock Plus makes it bearable - I'm extremely advertising averse.

I do not own a television, and generally consider TV news (in the US) to be horrendous and mind-destroying.

Comment author: [deleted] 02 March 2010 09:07:46PM 4 points [-]

When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!

I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.

Comment author: Nisan 07 March 2010 01:56:12AM *  0 points [-]

I have fond childhood memories of many hours tracing the circuit diagram of the adding circuit : ) God, I was so nerdy. I wanted to know how a computer worked and that book helped me avoid a mysterious answer to a mysterious question. Learning, in detail, how a specific logic circuit works really drove home how much I had yet to learn about the rest of the workings of a computer.
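(The adding circuit Nisan traced can be sketched in a few lines of Python, building a one-bit full adder from the same XOR, AND, and OR gates the book's diagram uses, then chaining them into a ripple-carry adder. The function names are mine, not the book's; this is just an illustration of the logic.)

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built from XOR, AND, and OR gates."""
    partial = a ^ b                              # partial sum of the two input bits
    carry_out = (a & b) | (partial & carry_in)   # carry propagates if either pair overflows
    return partial ^ carry_in, carry_out

def add_bits(x, y, width=8):
    """Ripple-carry addition: chain full adders from the low bit up."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result
```

Tracing `add_bits(13, 29)` bit by bit is essentially what following the book's circuit diagram amounts to.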

Comment author: [deleted] 03 March 2010 07:22:57AM 2 points [-]

Today there's How Stuff Works.

Comment author: Nick_Tarleton 03 March 2010 01:41:39AM 1 point [-]

I also loved that book. It probably helped teach me reductionism, but it's hard to tell given my generally terrible memory for my childhood.

(FWIW, my best guess for my biggest reductionist influence would be learning assembly language and other low-level CS details.)

Comment author: h-H 03 March 2010 12:07:23AM *  0 points [-]

I was going to get that for my younger brother when I next see him :)

Comment author: Jack 02 March 2010 10:15:23PM 1 point [-]

I think we had this in the house, but I don't remember it very well, except some of the part about pulleys and levers. This book would be a nice starting point for that rebuilding civilization manual idea from a while back.

Comment author: MrHen 02 March 2010 09:25:57PM *  3 points [-]

I loved that book. I still have moments when I pull some random picture from that book out of my memory to describe how an object works.

EDIT: Apparently the book is on Google.

Comment author: Morendil 02 March 2010 09:14:07PM 1 point [-]

My favorite Macaulay is "Motel of the Mysteries". I read it as a kid and it definitely had an influence. ;)

Comment author: wnoise 02 March 2010 08:15:39PM 3 points [-]

I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.

I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
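(For readers who want to play along before the post appears, here is a minimal grid-based sketch of the Bayesian side — simulated data and a uniform prior of my own choosing, not Sivia's exact treatment. Given flash positions x_k on the shore, the likelihood for the lighthouse position alpha and offshore distance beta is the Cauchy density beta / (pi * (beta^2 + (x_k - alpha)^2)).)

```python
import numpy as np

# Simulated flash positions along the shore; the true lighthouse sits
# at alpha = 1.0, a distance beta = 2.0 offshore (values chosen here).
rng = np.random.default_rng(0)
x = 1.0 + 2.0 * np.tan(np.pi * (rng.random(200) - 0.5))

# Uniform prior over a grid of (alpha, beta)
alphas = np.linspace(-5, 5, 400)
betas = np.linspace(0.1, 5, 400)
A, B = np.meshgrid(alphas, betas)

# Cauchy log-likelihood, summed over the observed flashes:
#   log p(x_k | alpha, beta) = log beta - log pi - log(beta^2 + (x_k - alpha)^2)
logL = sum(np.log(B) - np.log(np.pi) - np.log(B**2 + (xk - A)**2) for xk in x)

# Normalize into a posterior over the grid (subtracting the max avoids underflow)
logL -= logL.max()
post = np.exp(logL)
post /= post.sum()

# Posterior means for position and distance
alpha_hat = float((A * post).sum())
beta_hat = float((B * post).sum())
print(alpha_hat, beta_hat)
```

With 200 flashes the posterior concentrates tightly around the true values; with only a handful of flashes it becomes strikingly non-Gaussian, which is part of what makes the example instructive.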

Comment author: hugh 02 March 2010 10:56:34PM 0 points [-]

I don't have the book you're referring to. Are you essentially going to walk through a solution for this [pdf], or at least to talk about point #10?

This is a Bayesian problem; the Frequentist answer is the same, just more convoluted because they have to say things like "in 95% of similar situations, the estimates of a and b are within d of the real position of the lighthouse". Alternately, a Frequentist, while always ignorant when starting a problem, never begins wrong. In this case, if the chosen prior was very unsuitable, the Frequentist more quickly converges to a correct answer.
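(To make the comparison concrete: one standard frequentist route is to maximize the same Cauchy likelihood for a point estimate, then attach a confidence interval via, e.g., the observed Fisher information or a bootstrap. A rough sketch of the point-estimate step, using simulated data and a brute-force grid search of my own devising rather than anything from the book:)

```python
import numpy as np

# Simulated flash positions; true alpha = 1.0, beta = 2.0 (my choice)
rng = np.random.default_rng(1)
x = 1.0 + 2.0 * np.tan(np.pi * (rng.random(200) - 0.5))

def cauchy_mle(x, alphas=np.linspace(-5, 5, 501), betas=np.linspace(0.1, 5, 246)):
    """Maximum-likelihood estimate of (alpha, beta) by brute-force grid search."""
    A, B = np.meshgrid(alphas, betas)
    # The constant factor of pi doesn't move the argmax, so drop it
    logL = sum(np.log(B) - np.log(B**2 + (xk - A)**2) for xk in x)
    i, j = np.unravel_index(np.argmax(logL), logL.shape)
    return A[i, j], B[i, j]

alpha_hat, beta_hat = cauchy_mle(x)
print(alpha_hat, beta_hat)
```

Note that the sample mean is useless as a starting point here — a Cauchy distribution has no mean — which is why even the frequentist treatment has to work with the full likelihood rather than simple moment-based estimators.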

Comment author: wnoise 03 March 2010 01:57:03AM *  0 points [-]

Yes, that was the plan.

This is a Bayesian problem;

I thought Frequentists would not be willing to concede that, but would insist that any problem has a perfectly good Frequentist solution.

the Frequentist answer is the same,

I want to see not just the Frequentist solution, but the derivation of the solution.

Comment author: wnoise 02 March 2010 05:45:16PM 1 point [-]

Is there some way to "reclaim" comments from the posts transferred over from Overcoming Bias? I could have sworn I saw something about that, but I can't find anything by searching.

Comment author: thomblake 02 March 2010 06:34:44PM 1 point [-]

If you still have the e-mail address, you can follow the "reset password" process at login. That would allow you to have the account for the old comments, though it will still be treated as a different account than your new ones.

Comment author: MixedNuts 02 March 2010 03:52:01PM 10 points [-]

TL;DR: Help me go less crazy and I'll give you $100 after six months.

I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.

I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).

One-time tricks to do one important thing are also welcome, but I'd offer less.

Comment author: Unnamed 16 March 2010 09:09:10AM *  2 points [-]

The number one piece of advice that I can give is see a doctor. Not a psychologist or psychiatrist - just a medical doctor. Tell them your main symptoms (low energy, difficulty focusing, panic attacks) and have them run some tests. Those types of problems can have physical, medical causes (including conditions involving the thyroid or blood sugar - hyperthyroidism & hypoglycemia). If a medical problem is a big part of what's happening, you need to get it taken care of.

If you're having trouble getting yourself to the doctor, then you need to find a way to do it. Can you ask someone for help? Would a family member help you set up a doctor's appointment and help get you there? A friend? You might even be able to find someone on Less Wrong who lives near you and could help.

My second and third suggestions would be to find a friend or family member who can give you more support and help (talking about your issues, driving you to appointments, etc.) and to start seeing a therapist again (and find a good one - someone who uses cognitive-behavioral therapy).

Comment author: MixedNuts 20 March 2010 09:52:18PM 1 point [-]

This is technically a good idea. What counts as "my main symptoms", though? The ones that make life most difficult? The ones that occur most often? The most visible ones to others? To me?

Comment author: Unnamed 02 April 2010 05:59:05AM 1 point [-]

You'll want to give the doctor a sense of what's going on with you (just like you've done here), and then to help them find any medical issues that may be causing your problems. So give an overall description of the problem and how serious it is (sort of like in your initial post - your lack of energy, inability to do things, and lots of related problems) - including some examples or specifics (like these) can help make that clearer. And be sure to describe anything that seems like it could be physiological (the three that stuck out to me were lack of energy, difficulty focusing, and anxiety / panic attacks - you might be able to think of some others).

The doctor will have questions which will help guide the conversation, and you can always ask whether they want more details about something. Do you think that figuring out what to say to the doctor could be a barrier for you? If so, let me know - I could say more about it.

Comment author: Jordan 10 March 2010 05:23:34AM *  4 points [-]

For what it's worth:

A few years back I was suffering from some pretty severe health problems. The major manifestations were cognitive and mood-related. Often when I was saying a sentence I would become overwhelmed halfway through and would have to consciously force myself to finish what I was saying.

Long story short, I started treating my diet like a controlled experiment and, after a few years of trial and error, have come out feeling better than I can ever remember. If you're going to try self experimentation the three things I recommend most highly to ease the analysis process are:

  • Don't eat things with ingredients in them; instead, eat ingredients
  • Limit each meal to fewer than 5 different ingredients
  • Try to have the same handful of ingredients for every meal for at least a week at a time.

Comment author: wedrifid 10 March 2010 09:50:23AM 1 point [-]

I'm curious. What foods (if you don't mind me asking) did you find had such a powerful effect?

Comment author: Jordan 11 March 2010 08:18:38AM 2 points [-]

I expanded upon it here.

What has helped me the most, by far, is cutting out soy, dairy, and all processed foods (there are some processed foods I feel fine eating, but the analysis to figure out which ones proved too costly for the small benefit of being able to occasionally eat unhealthy foods).

Comment author: CronoDAS 05 March 2010 02:15:00AM *  5 points [-]

After reading this thread, I can only offer one piece of advice:

You need to see a medical doctor, and fast. Your problems are clearly more serious than anything we can deal with here. If you have to, call 911 and have them carry you off in an ambulance.

Comment author: pjeby 04 March 2010 06:47:31AM 5 points [-]

This is just a guess, and I'm not interested in your money, but I think that you probably have a health problem. I'd suggest you check out the book "The Mood Cure" by Julia Ross, which has some very good information on supplementation. Offhand, you sound like the author's profile for low-in-catecholamines, and might benefit very quickly from fairly low doses of certain amino acids such as L-tyrosine.

I strongly recommend reading the book, though, as there are quite a few caveats regarding self-supplementation like this. Using too high a dose can be as problematic as too low, and times of day are important too. Consistent management is important, too. When you're low on something, taking what you need can make you feel euphoric, but when you have the right dose, you won't notice anything by taking some. (Instead, you'll notice if you go off it for a few days, and find mood/energy going back to pre-supplementation levels.)

Anyway... don't know if it'll work for you, but I do suggest you try it. (And the same recommendation goes for anyone else who's experiencing a chronic mood or energy issue that's not specific to a particular task/subject/environment.)

Comment author: MixedNuts 04 March 2010 02:50:50PM 2 points [-]

Buying a (specific) book isn't possible right now, but may help later; thanks. I took the questionnaire on her website and apparently everything is wrong with me, which makes me doubt her tests' discriminating power.

Comment author: Cyan 04 March 2010 08:23:31PM 4 points [-]

It's a marketing tool, not a test.

Comment author: pjeby 04 March 2010 07:36:24PM 2 points [-]

FWIW, I don't have "everything" wrong with me; I had only two, and my wife scores on two, with only one the same between the two of us.

Comment author: Kevin 04 March 2010 06:16:52AM 2 points [-]

Do you take fish oil supplements or equivalent? Can't hurt to try; fish oil is recommended for ADHD and very well may repair some of the brain damage that causes mental illness.

http://news.ycombinator.com/item?id=1093866