Open Thread: May 2010

Post author: Jack 01 May 2010 05:29AM

You know what to do.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (543)

Comment author: CronoDAS 28 May 2010 06:18:22AM *  7 points [-]

I am thinking of making a top-level post criticizing libertarianism, in spite of the current norm against discussing politics. Would you prefer that I write the post, or not write it?

Comment author: cupholder 29 May 2010 07:14:15AM *  2 points [-]

Upvoted your comment for asking in the first place.

If your post was a novel explanation of some aspect of rationality, and wasn't just about landing punches on libertarianism, I'd want to see it. If it was pretty much just about criticizing libertarianism, I wouldn't.

I say this as someone very unsympathetic to libertarianism (or at least what contemporary Americans usually mean by 'libertarianism') - I'm motivated by a feeling that LW ought to be about rationality and things that touch on it directly, and I set the bar high for mind-killy topics, though I know others disagree with me about that, and that's OK. So, though I personally would want to downvote a top-level post only about libertarianism, I likely wouldn't, unless it were obnoxiously bare-faced libertarian baiting.

Comment author: ata 29 May 2010 07:30:03AM *  3 points [-]

I agree on most counts.

However, I'd also enjoy reading it if it were just a critique of libertarianism but done in an exceptionally rational way, such that if it is flawed, it will be very clear why. At minimum, I'd want it to explicitly state what terminal values or top-level goals it is assuming we want a political system to maximize, consider only the least convenient possible interpretation of libertarianism, avoid talking about libertarians too much (i.e. avoid speculating on their motives and their psychology; focus as much as possible on the policies themselves), separate it from discussion of alternatives (except insofar as is necessary to demonstrate that there is at least one system from which we can expect better outcomes than libertarianism), not appear one-sided, avoid considering it as a package deal whenever possible, etc.

Comment author: Douglas_Knight 29 May 2010 04:59:58PM 0 points [-]

done in an exceptionally rational way, such that if it is flawed, it will be very clear why

That standard sounds pretty weird. If it is so clear that it is flawed, wouldn't you expect it to be clear to the author and thus not posted? Perhaps you mean clear what your core disagreement is?

Comment author: cousin_it 28 May 2010 11:59:37AM *  1 point [-]

Not enough information to answer. I will upvote your post if I find it novel and convincing by rationalist lights. Try sending draft versions to other contributors that you trust and incorporate their advice before going public. I can offer my help, if being outside of American politics doesn't disqualify me from that.

Comment author: tut 28 May 2010 10:53:22AM *  3 points [-]

I will vote it down unless you say something that I have not seen before. I think that it was a good idea to not make LW a site for rehearsing political arguments, but if you have thought of something that hasn't been said before and if you can explain how you came up with it then it might be a good reasoning lesson.

Comment author: Blueberry 28 May 2010 02:22:48PM 0 points [-]

I will vote it up to cancel the above downvote, to encourage you to make the post in case the threat of downvoting scares you off.

Comment author: NancyLebovitz 28 May 2010 12:11:52PM 3 points [-]

I will only vote it up if there's something I haven't seen before, but will only vote it down if I think it's dreadful.

We may not be ready for it yet, but at some point we need to be able to pass the big test of addressing hard topics.

Comment author: kodos96 28 May 2010 08:18:30AM 0 points [-]

ergh.... after the recent flamewar I was involved in, I had resolved to not allow myself to get wrapped up in another one, but if there's going to be a top level post on this, I don't realistically see myself staying out of it.

I'm not saying don't write it though. If you do, I'd recommend you let a few people you trust read it over first before you put it up, to check for anything unnecessarily inflammatory. Also, what Blueberry said.

Comment author: Blueberry 28 May 2010 06:51:28AM 3 points [-]

I'd love to read it, though I may well disagree with a lot of it. I'd prefer it if it were kept more abstract and philosophical, as opposed to discussing current political parties and laws and so forth: I think that would increase the light-to-heat ratio.

Comment author: Alicorn 28 May 2010 06:18:58AM 2 points [-]

I'm interested.

Comment author: [deleted] 28 May 2010 05:17:51AM *  0 points [-]

If the future of the universe is a 'heat death' in which no meaningful information can be stored, and in which no meaningful computation is possible, what will it matter if the singularity happens or not?

Ordinarily, we judge the success of a project by looking at how much positive utility has come of it.

We can view the universe we live in as such a project. Engineering a positive singularity looks like the only really good strategy for maximizing the expression of complex human values (simplified as 'utility') in the universe.

But if the universe reaches a final heat death, so that no intelligent life exists, and there is no memory and no record of anything, what do the contents of the antecedent eons count for? There is no way to tell if the-universe-which-resulted-in-heat-death saw the rise of marvelous intelligence and value or remained empty and unobserved.

What is the utility of a project after all of its participants, and all records and memory of it, are utterly destroyed?

The pragmatic answer is simply 'carpe diem': make the best of this finite existence. This is what people did long before the ideas of the singularity and transhumanism were formulated.

Transhumanist beliefs, including the prospect of 'immortality' or transcendence seem to be a way in which some cope with their fear of death. But I fail to see why death should be any less gloomy a prospect for a 3^^^3 year old being than it is for a 30 year old. By definition, one cannot 'reminisce' about one's accumulated positive experiences after death, so in one sense a 3^^^3 year old has actually lost more: vastly more information has been destroyed!

So, in short, I struggle to see a rationale for my intuitive belief that surviving into deep time is truly better than a natural human lifespan, for if heat death is inevitable, as seems to be the case, the end result--the final tally of utils accumulated--is exactly the same. 0.

Comment author: Sniffnoy 28 May 2010 05:23:44AM 3 points [-]

The problem with this is that it assumes we only care about the end state.

Comment author: [deleted] 28 May 2010 05:31:26AM 0 points [-]

Is it rational for a decision procedure to place great value on the interim state, if the end state contains absolutely no utility?

Comment author: Sniffnoy 28 May 2010 08:53:18PM 1 point [-]

Does caring about interim states leave you open to Dutch books?

Comment author: khafra 28 May 2010 03:29:31PM *  3 points [-]

This is a philosophical question, not a rational one. Terminal values are not generated by rational processes; that's why they're terminal values. The metaethics sequence, especially "Existential Angst Factory" and "The Moral Void", should expand on this sufficiently.

Comment author: Matt_Duing 27 May 2010 03:25:01AM 1 point [-]

Has anyone read "Games and Decisions: Introduction and Critical Survey" by R. Duncan Luce and Howard Raiffa? Any thoughts on its quality?

Comment author: eugman 25 May 2010 06:19:52PM 1 point [-]

I have a cognitive problem and I figured someone might be able to help with it.

I think I might have trouble filtering stimuli, or something similar. A dog barking, an ear ache, loud people, or a really long day can break me down. I start to have difficulty focusing. I can't hold complex concepts in my head. I'll often start a task, and quit in the middle because it feels too difficult and try to switch to something else, ultimately getting nothing done. I'll have difficulty deciding what to work on. I'll start to panic or get intimidated. It's really an issue.

I've found two things that help:

Music is good at filtering out noise and helping me focus. However, sometimes I can't listen to it, or it is not enough.

The other thing is to make an extremely granular task list and then follow it without question. The tasks have to be really small and seem manageable.

Anyone have any suggestions? I'm not neurotypical in the broader sense, but I don't believe I fall on the autism spectrum.

Comment author: Alicorn 25 May 2010 06:26:29PM 1 point [-]

I have similar sensory issues on occasion and believe them to be a component of my autism, but if you don't have other features of an ASD then this could just be a sensory integration disorder. When it's an auditory processing issue, I find that listening to loud techno or other music with a strong beat helps more than other types of music, and ear-covering headphones help filter out other noise. I'm more often assaulted by textures, which I have to deal with by avoiding contact with the item(s).

As for the long day, that sounds like a matter of running out of (metaphorical) spoons. Paying attention to what activities drain or replenish said spoons, and choosing spoon-neutral or spoon-positive activities whenever they're viable options, is the way to manage this.

Comment author: eugman 25 May 2010 06:39:21PM 0 points [-]

Thanks for the advice. The only other symptom I have is some problems with my social coprocessor, but it doesn't feel like it fits an ASD.

Comment author: mattnewport 20 May 2010 07:01:55PM 2 points [-]

This seems like a potentially significant milestone: 'Artificial life' breakthrough announced by scientists

Scientists in the US have succeeded in developing the first synthetic living cell.

The researchers constructed a bacterium's "genetic software" and transplanted it into a host cell.

The resulting microbe then looked and behaved like the species "dictated" by the synthetic DNA.

Comment author: retiredurologist 21 May 2010 03:15:39PM 2 points [-]

Given that this now opens the door for artificially designed and deployed harmful viruses, perhaps unfriendly AI falls a few notches on the existential risk ladder.

Comment author: Alexandros 20 May 2010 10:08:58AM *  4 points [-]

I remember hearing a few anecdotes about abstaining from food for a period of time (fasting) and improved brain performance. I also seem to recall some pop-sci explanation involving detoxification of the body and the like. Today something triggered interest in this topic again, but a quick Google search did not return much on the topic (fasting is drowned in religious references).

I figure this is well within LW scope, so does anyone have any knowledge or links that offer more concrete insight into (or rebuttal of) this notion?

Comment author: cousin_it 19 May 2010 10:17:27AM *  3 points [-]

Rolf Nelson's AI deterrence doesn't work for Schellingian reasons: the Rogue AI has incentive to modify itself to not understand such threats before it first looks at the outside world. This makes you unable to threaten, because when you simulate the Rogue AI you will see its precommitment first. So the Rogue AI negates your "first mover advantage" by becoming the first mover in your simulation :-) Discuss.

Comment author: rolf_nelson 20 May 2010 02:14:56AM 1 point [-]

I agree that AI deterrence will necessarily fail if:

  1. All AIs modify themselves to ignore threats from all agents (including ones they consider irrational), and

  2. any deterrence simulation counts as a threat.

Why do you believe that both or either of these statements are true? Do you have some concrete definition of 'threat' in mind?

Comment author: cousin_it 20 May 2010 07:05:55AM *  0 points [-]

I don't believe statement 1 and don't see why it's required. After all, we are quite rational, and so is our future FAI.

Comment author: Vladimir_Nesov 19 May 2010 11:24:46AM 0 points [-]

The notion of "first mover" is meaningless, where the other player's program is visible from the start.

Comment author: RichardKennaway 19 May 2010 08:08:07AM 4 points [-]

In another comment I coined (although not for the first time, it turns out) the expression "Friendly Human Intelligence". Which is simply geekspeak for how to bring up your kids right and not produce druggie losers, wholesale killers, or other sorts of paperclippers. I don't recall seeing this discussed on LessWrong. Maybe most of us don't have children, and Eliezer has said somewhere that he doesn't consider himself ready to create new people, but as the saying is, if not now, when, and if not this, what?

I don't have children and don't intend to. I have two nephews and a niece, but have not had much to do with their lives, beyond sending them improving books for birthdays and Christmas. I wonder if LessWrongers, with or without children, have anything to say on how to raise children to be rational non-paperclippers?

Comment author: JanetK 22 May 2010 09:08:12AM 1 point [-]

I think that question is a conversation stopper because those who do not have children do not feel qualified, and those who do have children know what a complex and tricky question it is. Personally I don't think there is a method that fits all children and all relationships with them. But... You might try activities rather than presents. 'Oh cool, uncle is going to make a video with us, and we're going to do it at the zoo.' If you get the right activity (it depends on the child), they will remember it and what you did and said for years. I had an uncle that I only saw a few times, but he showed me how to make and throw a boomerang. He explained why it returned. I have thanked him for that day for 60 years.

Comment author: cupholder 22 May 2010 07:06:54PM 0 points [-]

I think that question is a conversation stopper because those who do not have children do not feel qualified, and those who do have children know what a complex and tricky question it is.

I don't have children and I didn't try answering the question because I knew what a complex and tricky question it is - I don't expect it to be much different than the bigger question of how to improve human rationality for people in general.

Comment author: Kevin 16 May 2010 11:05:00AM 3 points [-]
Comment author: NancyLebovitz 15 May 2010 11:59:25PM *  1 point [-]

Criminal profiling, good and bad

Article discusses the shift from impressive-looking guesswork to use of statistics. Also has an egregious example of the guesswork approach privileging the hypothesis.

Comment author: Jack 15 May 2010 04:36:13PM 0 points [-]

HELP NEEDED Today if at all possible.

So I'm working on a Bayesian approach to the Duhem-Quine problem. Basically, the problem is that an experiment never tests a hypothesis directly but only the conjunction of the hypothesis and other auxiliary assumptions. The standard method for dealing with this is to write

P(h|e) = P(h & a|e) + P(h & -a|e) (so if e falsifies h&a you just use the h&-a term)

So if e falsifies h&a you end up with:

P(h|e) = P(e|h&-a) * P(h&-a) / P(e)

This guy Strevens objects on the grounds that e can impact h without impacting a. His example:

Newstein, a brilliant but controversial scientist, has asserted both that h is true and that e will be observed. You do not know Newstein’s reasons for either assertion, but if one of her claims turns out to be correct, that will greatly increase your confidence that Newstein is putting her brilliance to good use and thus your confidence that the other claim will also turn out to be correct. Because of your knowledge of Newstein’s predictions, then, your P(h|e) will be higher than it would be otherwise.

Am I crazy, or shouldn't that information already be contained in the above formula? Specifically, the term P(e|h&-a) should be higher than it would be otherwise.
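Jack's decomposition can be checked numerically. Below is a minimal sketch with invented probabilities (the numbers are arbitrary, and the a priori independence of h and a is an extra simplifying assumption, not part of the original setup):

```python
from itertools import product

# Invented priors for the hypothesis h and the auxiliary assumption a.
p_h, p_a = 0.5, 0.8

# Likelihood P(e | h, a); e falsifies the conjunction h & a, so that entry is 0.
p_e_given = {
    (True, True): 0.0,
    (True, False): 0.6,
    (False, True): 0.3,
    (False, False): 0.3,
}

def p_joint(h, a, e):
    """Joint probability P(h, a, e), assuming h and a are independent a priori."""
    ph = p_h if h else 1 - p_h
    pa = p_a if a else 1 - p_a
    pe = p_e_given[(h, a)] if e else 1 - p_e_given[(h, a)]
    return ph * pa * pe

# Marginal P(e), summing over all four (h, a) cells.
p_e = sum(p_joint(h, a, True) for h, a in product([True, False], repeat=2))

# P(h|e) decomposed as P(h & a | e) + P(h & -a | e); the first term is 0 here.
p_h_given_e = (p_joint(True, True, True) + p_joint(True, False, True)) / p_e

# Equivalently, since e falsifies h & a: P(h|e) = P(e | h & -a) P(h & -a) / P(e).
direct = p_e_given[(True, False)] * (p_h * (1 - p_a)) / p_e
assert abs(p_h_given_e - direct) < 1e-12
```

The two routes to P(h|e) agree, which is the point: any influence of e on h routed through a shows up in the likelihood terms, just as Jack suggests.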

Comment author: PhilGoetz 15 May 2010 04:06:27AM 2 points [-]

Today the Pope finally admitted there has been a problem with child sex abuse by Catholic priests. He blamed it on sin.

What a great answer! It covers any imaginable situation. Sin could be the greatest tool for bad managers everywhere since Total Quality Management.

"Sir, your company, British Petroleum, is responsible for the biggest environmental disaster in America this decade. How did this happen, and what is being done to prevent it happening again?"

"Senator, I've made a thorough investigation, and I'm afraid there has been sin in the ranks of British Petroleum. BP has a deep need to re-learn penance, to accept purification, to learn on one hand forgiveness but also the need for justice."

"Thank you, Mr. Hayward. I'm glad you're on top of the situation."

I wonder if I can use this at work.

Comment author: Morendil 15 May 2010 08:04:06AM *  1 point [-]

Sin could be the greatest tool for bad managers everywhere since Total Quality Management.

That sounds like the kind of remark that goes out of its way to offend several categories of people at once. :)

But in that category the gold standard remains Evelyn Waugh's “now that they no longer defrock priests for sexual perversities, one can no longer get any decent proofreading.”

Comment author: kpreid 14 May 2010 05:22:36PM *  0 points [-]

It's a vicious cycle-- if you work on something that sounds crank-ish, you get defensive about being seen as a crank, and that defensiveness is also characteristic of cranks. Lather, rinse, repeat.

Chad Orzel

This seems possibly broadly applicable to me; e.g. replace “crank” with “fanboy”.

Comment author: Leafy 14 May 2010 08:43:16AM 0 points [-]

Science.

To me it is a process, a method, an outlook on life. But so often it is used as a proper noun: "Science says tomatoes are good for you".

It should be used to encourage rational thinking, clarity of argument and assumption, and rigorous unbiased testing. The pursuit of knowledge and truth. Instead it is often seen as a club, to which you either belong by working in a scientific profession, or you do not.

As a child of a mixed-religion household I felt like an outcast from religion from an early age - it didn't matter that I had beliefs of my own; if I didn't belong to a specific club then I didn't belong at all. Very few religious people I met encouraged me to have faith regardless of what that faith was.

I see a scientific approach to life and its mysteries as my way of forming my own "map of the territory", as others perhaps use religion, and I hope that as promoters of rationality we can encourage scientific principles in others rather than making them feel like outcasts for not being in our "club".

Comment author: Kevin 14 May 2010 04:18:53AM *  2 points [-]

"Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners"

Likely the effects were due to the fish oil. This study was replicating similar results seen in a UK youth prison.

http://www3.interscience.wiley.com/journal/123213582/abstract?CRETRY=1&SRETRY=0

Also see this other study of the use of fish oil to prevent the onset of schizophrenia in a population of youth who had had one psychotic episode or a similar reason to seek treatment. The p-values they got are ridiculous -- fish oil appears to be way more effective in reality than I would have expected.

http://archpsyc.ama-assn.org/cgi/content/short/67/2/146

Take your fish oil, people.

Comment author: PhilGoetz 15 May 2010 04:08:08AM 0 points [-]

What about snake oil?

Comment author: rhollerith_dot_com 15 May 2010 06:24:31AM 3 points [-]

I don't know if you're kidding or scoffing, but I will give a straight answer.

Richard Kunin, M.D., once analyzed snake oil and found that it is 20 or 25% omega-3 fatty acids.

Comment author: Jack 14 May 2010 04:32:23AM 0 points [-]

The link is giving me trouble. Can you paste the whole abstract?

Comment author: Kevin 14 May 2010 04:36:10AM *  1 point [-]

Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners

ABSTRACT Objective: In an earlier study, improvement of dietary status with food supplements led to a reduction in antisocial behavior among prisoners. Based on these earlier findings, a study of the effects of food supplements on aggression, rule-breaking, and psychopathology was conducted among young Dutch prisoners.

Methods: Two hundred and twenty-one young adult prisoners (mean age=21.0, range 18-25 years) received nutritional supplements containing vitamins, minerals, and essential fatty acids or placebos, over a period of 1-3 months.

Results: As in the earlier (British) study, reported incidents were significantly reduced (P=.017, one-tailed) in the active condition (n=115), as compared with placebo (n=106). Other assessments, however, revealed no significant reductions in aggressiveness or psychiatric symptoms. Conclusion: As the incidents reported concerned aggressive and rule-breaking behavior as observed by the prison staff, the results are considered to be promising. However, as no significant improvements were found in a number of other (self-reported) outcome measures, the results should be interpreted with caution. Aggr. Behav. 36:117-126, 2010. © 2009 Wiley-Liss, Inc.

Comment author: ata 13 May 2010 08:47:02AM 1 point [-]

Rationality comix!

Hover over the red button at the bottom (to the left of the RSS button and social bookmarking links) for a bonus panel.

Edit: "Whoever did the duplication" would be a better answer than "The guy who came first", admittedly. The duplicate and original would both believe themselves to be the original, or, if they are a rationalist, would probably withhold judgment.

Comment author: NihilCredo 17 May 2010 04:29:24PM 0 points [-]

More importantly, the question is terribly phrased - or just terrible. The philosopher could have started with "If you met the 'twins' afterwards, could someone tell them apart without asking anyone?", which has an obvious answer of "no", and then followed it with actually interesting questions about, for example, what "memory" exactly is.

That version is a lot funnier, though!

Comment author: RobinZ 13 May 2010 12:26:22PM 3 points [-]

Speaking as an engineer, I'd think he wasn't talking about subjective aspects: "The guy who came first" is the one which was copied (perfectly) to make the clone, and therefore existed before the clone existed.

Comment author: arundelo 13 May 2010 04:35:49AM *  3 points [-]

Kaj_Sotala is doing a series of interviews with people in the SIAI house. The first is with Alicorn.

Edit: They are tagged as "siai interviews".

Comment author: JoshuaZ 13 May 2010 04:08:29AM *  1 point [-]

There's an article in this month's Nature examining the statistical evidence for universal common descent. This is the first time someone has taken the massive amounts of genetic data and applied a Bayesian analysis to determine whether the existence of a universal common ancestor is the best model. Most of what we generally think of as evidence for evolution and shared ancestry is evidence for shared ancestry of large collections, such as mammals or birds, or for smaller groups. Some of the evidence is for common ancestry for a phylum. There is prior evidence for universal shared ancestry based on primitive fossils and on the shared genetic code and the extreme similarity of genomes across very different species. This is the first paper to make that last argument mathematically rigorous. When taken in this fashion, the paper more or less concludes that a Bayesian analysis using just the known genetic and phylogenetic data puts the universal common ancestor model as overwhelmingly more likely than other models. (The article is behind a paywall, so until I get back to the university tomorrow I won't be able to comment on this in any substantial detail, but this looks pretty cool and a good example of how careful Bayesianism can help make something more precise.)

Comment author: JoshuaZ 13 May 2010 08:13:48PM 2 points [-]

Ok. Reading the paper now. Some aspects are a bit technical and so I don't follow all of the arguments or genetic claims other than at a broad level. However, the money quote is "Therefore, UCA is at least 10^2,860 times more probable than the closest competing hypothesis." (I've replaced the superscript with a ^ because I don't know how to format superscripts). 10^2860 is a very big number.

Comment author: Psy-Kosh 13 May 2010 08:32:27PM 1 point [-]

What were they using for prior probabilities for the various candidate hypotheses? Uniform? Some form of complexity weighting? Other?

Comment author: JoshuaZ 23 May 2010 06:04:05PM *  1 point [-]

They have hypotheses concerning whether Eukarya, Archaea and Bacteria share a common ancestor or not, or possibly in pairs. All hypotheses were given equal prior likelihood.
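With equal priors, the posterior odds between any two hypotheses reduce to their likelihood ratio, so the whole comparison lives in log space. A toy sketch of the arithmetic (the log-likelihoods below are invented, chosen only to reproduce the order of magnitude the paper reports):

```python
import math

# Invented log-likelihoods of the data under each ancestry model;
# the real values would come from the genetic and phylogenetic data.
log_lik = {
    "UCA": -1000.0,       # universal common ancestry
    "rival": -7585.0,     # closest competing (separate-ancestry) hypothesis
}
log_prior = {h: math.log(0.5) for h in log_lik}  # equal priors

# With equal priors the log posterior odds equal the log-likelihood ratio.
log_odds = (log_lik["UCA"] + log_prior["UCA"]) - (log_lik["rival"] + log_prior["rival"])
log10_odds = log_odds / math.log(10)
print(f"UCA is about 10^{log10_odds:.0f} times more probable than the rival")
```

Note how a modest-looking gap of a few thousand nats in log-likelihood turns into an astronomical posterior odds ratio once exponentiated; this is why the priors barely matter at this scale.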

Comment author: PhilGoetz 13 May 2010 05:03:47AM 0 points [-]

I take it "a universal common ancestor" doesn't mean a universal common ancestor, but means a universal common ancestral group?

Comment author: JoshuaZ 13 May 2010 08:51:07AM 0 points [-]

As I said, I haven't had a chance to actually read the article itself, but as I understand it, this would indicate a universal common ancestor group of nearly genetically identical organisms. While there is suspicion that horizontal gene transfer was more common in the past than it is now, this supports the general narrative of all life arising from a single organism. These sorts of techniques won't distinguish between that and life arising from several genetically identical organisms.

Comment author: PhilGoetz 13 May 2010 12:17:22AM *  0 points [-]

I don't think that the math in Aumann's agreement theorem says what Aumann's paper says that it says. The math may be right, but the translation into English isn't.

Aumann's agreement theorem says:

Let N1 and N2 be partitions of Omega ... Ni is the information partition of i; that is, if the true state of the world is w [an element of Omega], then i is informed of that element Pi(w) of Ni that contains w.

Given w in Omega, an event E is called common knowledge at w if E includes that member of meet(N1, N2) that contains w.

Let A be an event, and let Qi denote the posterior probability p(A|Ni) of A given i's information; i.e., Qi(w) [ = p(A | Pi(w)) ] = p(A ^ Pi(w)) / p(Pi(w)).

Proposition: Let w be in Omega ... If it is common knowledge at w that Q1 = q1 and Q2 = q2, then q1 = q2.

Proof: Let P be the member of meet(N1, N2) that contains w. Write P = union over all j of Pj, where the Pj are disjoint members of P1. Since Q1 = q1 throughout P, we have p(A ^ Pj) / p(Pj) = q1 for all j; hence p(A ^ Pj) = q1p(Pj), and so by summing over j we get p(A^P) = q1p(P). Similarly p(A^P) = q2p(P), and so q1=q2.

meet(N1, N2) is not an intersection; it's a very aggressive union of the subsets in the partitions N1 and N2 of Omega. It's generated this way:

M = {w}, Used = {}
while (M != Used):
    take an element m from M \ Used
    find H1 in N1 and H2 in N2 containing m
    M = union(M, H1, H2)
    Used = Used u {m}
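To make the closure concrete, here is a small Python sketch of the same loop (the function name and the example partitions are my own, purely for illustration):

```python
def meet_member(w, N1, N2):
    """Return the member of meet(N1, N2) containing state w.

    N1 and N2 are partitions of the same state space, given as lists of
    frozensets. The member is the closure of {w} under "shares a cell,
    in either partition, with some already-included state".
    """
    def cell(partition, x):
        # The unique cell of the partition containing x.
        return next(c for c in partition if x in c)

    member, used = {w}, set()
    while member != used:
        m = next(iter(member - used))
        member |= cell(N1, m) | cell(N2, m)
        used.add(m)
    return frozenset(member)

# Example with Omega = {1, ..., 6}; the two partitions chain all six states.
N1 = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
N2 = [frozenset({1}), frozenset({2, 3}), frozenset({4, 5}), frozenset({6})]
print(meet_member(1, N1, N2))  # 1 links to 2, 2 to 3, and so on: the whole space
```

Note how aggressive the union is: a chain of overlapping cells can sweep the entire space into a single member of the meet, even though each agent's own partition is quite fine.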

Note in particular that P is a member of meet(N1, N2) that contains elements of Omega taken from H2, that are not in H1. To say that Q1 = q1 throughout P means that, for every x in P, Q1(x) = p(A | P1(x)) = q1. This is used to infer that p(A ^ Pj) = p(Pj) q1 for every Pj in N1.

This is a very strange thing to believe, given the initial conditions. The justification (as Robin Hanson pointed out to me) is that "common knowledge at w that Q1=q1" is defined to mean just that: Q1(x) = q1 for all x in the member P of the meet(N1,N2) containing w.

Now comes the translation into English. Aumann says that this technical definition of "common knowledge at w of the posteriors" means the same as "agent 1 and agent 2 both know both of their posteriors". And the justification for that is this: "Suppose now that w is the true state of the world, P1 = P1(w), and E is an event. To say that 1 'knows' E means that E includes P1. To say that 1 knows that 2 knows E means that E includes all P2 in N2 that intersect P1. ..." et cetera, to closure.

And this, I think, is wrong. If 1 knows that 2 knows E, 1 knows that E includes P1 union some P2 that intersects with P1, not that E includes P1 union all P2 that intersect with P1. So the "common knowledge" used in the theorem doesn't mean the same thing at all that we mean in English when we say they "know each others' posteriors".

Also, Aumann adds after the proof that it implicitly assumes that the agents know each others' complete partition functions over all possible worlds. Which is several orders of magnitude of outlandish; so the theorem can never be applied to the real world.

Comment author: Alexandros 12 May 2010 06:52:15AM *  11 points [-]

I have an idea that may create a (small) revenue stream for LW/SIAI. There are a lot of book recommendations, with links to amazon, going around in LW, and many of them do not use an affiliate code. Having a script add a LessWrong affiliate code to those links that don't already have one may lead to some income, especially given that affiliate codes persist and may get credited for unrelated purchases later in the day.

I believe Posterous did this, and there was a minor PR hubbub about it, but the main issue was that they did not communicate the change properly (or at all). Also, given that LW/SIAI are not-for-profit endeavours, this is much easier to swallow. In fact, if it can be done in an easy-to-implement way, I think quite a few members with popular blogs may be tempted to apply this modification to their own blogs.

Does this sound viable?
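A minimal sketch of such a script (the tag `lesswrong-20` is hypothetical, purely for illustration; it leaves any link that already carries an affiliate tag untouched):

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

AFFILIATE_TAG = "lesswrong-20"  # hypothetical tag, for illustration only

def add_affiliate_tag(url: str) -> str:
    """Append our affiliate tag to Amazon links that don't already have one."""
    parts = urlparse(url)
    if "amazon." not in parts.netloc:
        return url                       # not an Amazon link: leave it alone
    query = parse_qs(parts.query)
    if "tag" in query:
        return url                       # someone else's code: don't clobber it
    query["tag"] = [AFFILIATE_TAG]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_affiliate_tag("https://www.amazon.com/dp/0385533853"))
print(add_affiliate_tag("https://www.amazon.com/dp/0385533853?tag=someoneelse-20"))
```

Preserving existing tags seems essential here, since overwriting them was a big part of the Posterous-style complaints.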

Comment author: RobinZ 12 May 2010 11:24:50AM 2 points [-]

Yes, under two conditions:

  1. It is announced in advance and properly implemented.

  2. It does not delete other affiliate codes if links are posted with affiliate codes.

Breaking both these rules is one of the many things which Livejournal has done wrong in the last few years, which is why I mention them.

Comment author: RichardKennaway 11 May 2010 11:21:24AM *  0 points [-]

The moral life of babies. This is an article that also recently appeared in the New York Times Magazine.

It covers various scientific experiments to explore the mental life of babies, finding evidence of moral judgements, theory of mind, and theory of things (e.g. when two dolls are placed behind a screen, and the screen is removed, 5-month-old babies expect to see two dolls).

Unlike many psychological experiments which produce more noise than signal, "these results were not subtle; babies almost always showed this pattern of response."

It also discusses various responses to the existence of innate morality, and the existence of "higher" adult morality -- caring about people who cannot possibly be of any benefit to oneself.

Comment author: exapted 11 May 2010 05:04:18AM *  0 points [-]

How should one reply to the argument that there is no prior probability for the outcome of some quantum event that has already happened and split the world into two worlds, each with a different outcome to some test (say, a "quantum coin toss")? The idea is that if you merely sever the quantum event and consider different outcomes to the test (say, your quantum coin landed heads), and consider that the outcome could have been different (your quantum coin could have landed tails), there is no way to really determine who would be "you." Is it necessary to apply the SSA or some form of the SSSA? To me it seems that one should be allowed to rigidly maintain one's identity while allowing the outcome of the quantum coin toss to vary across those two worlds. One could then base the prior probability of the coin landing heads in your world on the empirical evidence that quantum coin tosses of that type land heads with frequency 0.5 in any particular instance of a world history.

Comment author: Seth_Goldin 10 May 2010 03:18:45PM 1 point [-]

Cool paper: When Did Bayesian Inference Become “Bayesian”?

http://ba.stat.cmu.edu/journal/2006/vol01/issue01/fienberg.pdf

Comment author: NancyLebovitz 10 May 2010 12:22:44PM 3 points [-]

Most people's intuition is that assassination is worse than war, but simple utilitarianism suggests that war is much worse.

I have some ideas about why assassination isn't a tool for getting reliable outcomes-- leaders are sufficiently entangled in the groups that they lead that removing a leader isn't like removing a counter from a game, it's like cutting a piece out of a web which is going to rebuild itself in not quite the same shape-- but this doesn't add up to why assassination could be worse than war.

Is there any reason to think the common intuition is right?

Comment author: bogdanb 21 May 2010 02:27:56PM *  5 points [-]

TLDR: “War” is the inter-group version of “duel” (ie, lawful conflict). “Assassination” is the inter-group version of “murder” (ie, unlawful conflict).

My first “intuition about the intuition” is that it’s a historical consequence: during most of history, things like freedom, and the power and responsibility to enforce rules when conflicts (freedom vs. freedom) occur, were stratified. Conflicts between individuals in a family are resolved by the family (e.g. by the head thereof), conflicts between families (or individuals in different families) by tribal leaders or the like. Feudalism formalized the “scale”, but even before that we had a long chain of family → group → tribe → city → barony → kingdom → empire.

The key point about this system is that attempts to “cross the borders”, for instance punishing someone from a different group directly rather than invoking punishment from that group’s leadership, are seen as an intrusion into that group’s affairs.

So assassination comes to be seen as the between-group version of murder: going around the established rules of society. That’s something that is selected against in social environments (and has been discussed elsewhere).

By contrast, war is the “normal” result of a conflict between groups when there is no higher authority to appeal to. Note that, analogously, for much of history duels were considered a correct method of conflict resolution between certain individuals, as long as they respected some rules. So, as long as there are (at least in theory) laws of war, war is considered a direct extension of that instinct. Assassination is seen as breaking the rules, so it’s seen differently.

A few other points:

  • War is very visible, so you can expend a lot of signaling to dehumanize the adversary.
  • Assassination, by contrast, is supposed to be done in secret, so you can’t use propaganda as effectively (assassinating opposing leadership during a war is not seen as that big a problem; they’re all infidels/drug lords/terrorists anyway!).
  • Assassination used to be harder (even now, drones are expensive), and failed assassination attempts would often lead to escalation to war anyway.
  • Assassination is aimed at leaders, who have an interest in discouraging the concept as much as they can. You can do that, e.g., via the meme that conflict is only honorable when it’s between armored knights on horses and the like. (For best results, add another meme which implies that observing that peasants are not allowed to own armor and horses is “dissent”.)
Comment author: JanetK 22 May 2010 09:30:54AM 2 points [-]

What an excellent analysis. I voted up. The only thing I can think of that could be added is that making a martyr can backfire.

Comment author: PhilGoetz 15 May 2010 04:13:37AM *  3 points [-]

Who thinks assassination is worse than war?

I could make an argument for it, though: If countries engaged regularly in assassination, it would never come to a conclusion, and would not reduce (and might increase) the incidence of war. Phrasing it as "which is worse" makes it sound like we can choose one or the other. This assumes that an assassination can prevent a war (and doesn't count the cases where it starts a war).

Comment author: NancyLebovitz 15 May 2010 06:14:58AM 0 points [-]

It seems to me that the vast majority of people think of war as a legitimate tool of national policy, but are horrified by assassination.

Comment author: Nick_Tarleton 10 May 2010 06:25:57PM *  3 points [-]

I've always assumed that the norm against assassination, causally speaking, exists mostly due to historical promotion by leaders who wanted to maintain a low-assassination equilibrium, now maintained largely by inertia. (Of course, it could be normatively supported by other considerations.)

It makes sense to me that people would oversimplify the effect of assassination in basically the way you describe, overestimating the indispensability of leaders. I know I've seen a study on the effects of assassination on terrorist groups, but can't find a link or remember the conclusions.

Comment author: Will_Newsome 10 May 2010 05:11:47AM 2 points [-]

You know, lots of people claim to be good cooks, or know good cooks, or have an amazing recipe for this or that. But Alicorn's cauliflower soup... it's the first food that, upon sneakily shoveling a fourth helping into my bowl, made me cackle maniacally like an insane evil sorcerer high on magic potions of incredible power, unable to keep myself from alerting three other soup-enjoying people to my glorious triumph. It's that good.

Comment author: SilasBarta 10 May 2010 04:28:39PM 0 points [-]

Does Alicorn's presence prohibit me from applying for an SIAI fellowship?

Comment author: Will_Newsome 11 May 2010 12:03:12AM *  0 points [-]

I second Anna, but I will also note that we plan on moving into a biggg house or possibly two big houses, and this would hopefully minimize friction in the event that two Visiting Fellows don't quite get along. I hope you apply!

Comment author: AnnaSalamon 10 May 2010 07:38:49PM 1 point [-]

Nope. All applications are welcome.

Comment author: Alicorn 10 May 2010 05:37:19AM *  3 points [-]

Awwwww :D

PS: If this endorsement of house food quality encourages anyone to apply for an SIAI fellowship, note your inspiration in the e-mail! We receive referral rewards!

Comment author: NancyLebovitz 10 May 2010 12:17:15PM 0 points [-]

Would you be willing to post the recipe?

Comment author: Alicorn 10 May 2010 04:10:09PM 1 point [-]

http://improvisationalsoup.wordpress.com/2009/05/31/cream-of-cauliflower-soup/

I have taken to also adding two or three parsnips per batch.

Comment author: bogdanb 21 May 2010 02:36:26PM 0 points [-]

Can you describe that “better than bouillon” thing, for us non-US (I assume) readers?

Also, how much cream do you use, and what’s “a ton” of garlic? (In my kitchen, that could mean half a pound — we use garlic paste as ketchup around here...)

Comment author: Alicorn 21 May 2010 07:00:07PM *  1 point [-]

Better than Bouillon is paste-textured reduced stock. It's gloopy, not very pretty, and adds excellent flavor to just about any savory dish. Instead of water and BTB, you could use a prepared stock, or instead of just the BTB, use a bouillon cube, but I find they have dramatically inferior flavors unless you make your own stock at home. I haven't tried cooking down a batch of homemade stock to see if I could get paste, but I think it would probably take too long.

I guess on the cream until the color looks about right. I use less cream if I overshot on the water when I was cooking the veggies, more if it's a little too thick.

"A ton" of garlic means "lots, to taste". I'd put one bulb in a batch of cauliflower soup mostly because it's convenient to grab one bulb out of a bag of garlic bulbs. If you're that enthusiastic about garlic, go ahead and use two, three, four - it's kind of hard to overdo something that wonderful.

Comment author: Jack 10 May 2010 05:50:39AM 0 points [-]

How long are the fellowships for?

Comment author: Alicorn 10 May 2010 06:01:32AM *  2 points [-]

As long as three months (and the possibility of sticking around after if everything goes swimmingly), but you could come for considerably shorter if you have scheduling constraints. We've also been known to have people over for a day or two just to visit and see how cool we are. Totally e-mail Anna if you have any interest at all! Don't be shy! She isn't scary!

Comment author: Mitchell_Porter 09 May 2010 06:47:47AM 4 points [-]

Entertainment for out-of-work Judea Pearl fans: go to your local job site and search on the word "causal", and then imagine that all those ads aren't just mis-spelling the word "casual"...

Comment author: Kevin 08 May 2010 11:24:32PM 2 points [-]
Comment author: steven0461 09 May 2010 03:08:16AM 1 point [-]

Is there a consensus on whether or not it's OK to discuss not-specifically-rationality-related politics on LW?

Comment author: Kevin 09 May 2010 03:15:12AM *  1 point [-]

Doesn't bother me. I think the consensus is that we should probably try and stay at a meta-political level, looking at a much broader picture than that which is discussed on the nightly news. The community is now mature enough that anything political is not automatically taboo.

I posted this not to be political, but because people here are generally interested in killer robots and their escalation of use.

Comment author: Jack 09 May 2010 12:28:57AM *  1 point [-]

This looks like a very expensive way to kill terrorists: something like $100k per militant, not counting sunk costs such as the $4.5 million price tag per drone, and not trying to estimate the cost of civilian deaths.

Comment author: mattnewport 08 May 2010 11:32:08PM *  2 points [-]

Related, Obama authorizes assassination of US citizen. I'm amazed how little anybody seems to care.

Comment author: [deleted] 15 May 2010 04:25:39AM 0 points [-]

The people who care are poorly represented by the news and by our political institutions. But they're out there.

Comment author: PhilGoetz 15 May 2010 04:21:10AM *  2 points [-]

I care, and approve, provided that Al-Awlaki can forestall it if he chooses by coming to the US to face charges.

I don't believe in treating everything with the slippery-slope argument. That way lies the madness I saw at the patent office, where every decision had to be made following precedent and procedure with syntactic regularity, without any contaminating element of human judgement.

Comment author: Jack 10 May 2010 09:58:48PM 3 points [-]

Something problematic: if you're a cosmopolitan, as I assume most people here are, can you consistently object to assassinations of citizens if you don't object to assassinations of non-citizens?

Comment author: mattnewport 10 May 2010 10:49:52PM 1 point [-]

Probably not, though you might be able to make a case that if a particular non-citizen is a significant perceived threat but there is no legal mechanism for prosecuting them then different rules apply. Most people are not cosmopolitan however and so I am more surprised at the lack of outrage over ordering the assassination of a US citizen than by the lack of outrage over the assassination of non-US citizens.

Comment author: JenniferRM 09 May 2010 12:42:50AM *  1 point [-]

The drone targeting is worrisome in the very big picture and long term sense of establishing certain kinds of precedents for robotic warfare that might be troubling. The fact that it is happening in Pakistan honestly seems more problematic to me in terms of the badness that comes with not having "clearly defined parties who can verifiably negotiate". Did the US declare war on Pakistan without me noticing? Is Pakistan happy that we're helping them "maintain brutal law and order" in their country by bombing people in their back country? Are there even functioning Westphalian nation states in this area? (These are honest questions - I generally don't watch push media, preferring instead to formulate hypotheses and then search for news or blogs that can answer the hypothesis.)

The assassination story, if true, seems much more worrisome because it would imply that the fuzziness from the so-called "war on terror" is causing an erosion of the rule of law within the US. Moreover, it seems like something I should take responsibility for doing something about because it is happening entirely within my own country.

Does anyone know of an existing political organization working to put an end to the imprisonment and/or killing of US citizens by the US government without formal legal proceedings that include the right to a trial by jury? I would rather coordinate with other people (especially competent experts) if such a thing is possible.

Comment author: Kevin 09 May 2010 02:09:01AM 0 points [-]

I would ask Amnesty International.

Comment author: JGWeissman 09 May 2010 01:08:38AM 1 point [-]

Does anyone know of an existing political organization working to put an end to the imprisonment and/or killing of US citizens by the US government without formal legal proceedings that include the right to a trial by jury? I would rather coordinate with other people (especially competent experts) if such a thing is possible.

I don't know if they have responded to this specific issue, but the ACLU is working against the breakdown of rule of law in the name of national defense.

Comment author: JenniferRM 09 May 2010 02:43:21AM *  1 point [-]

Thanks for the link. I have sent them an email asking for advice as to whether this situation is as bad as it seems to be, and if so, what I can do to make things less bad. I have also added something to my tickler file so that on May 21 I will be reminded to respond here with a followup even if there is no response from the ACLU's National Security Project.

I think I have done my good deed for the day :-)

ETA: One thing to point out is that before sending the email I tried googling "Presidential Assassination Program" in Google News, and the subject seems to have had little coverage since then. This was the best followup I could find in the last few days, and it spoke of general apathy on the subject. This led me to conclude that "not enough people had noticed" yet, so I followed through with my email.

Comment author: JenniferRM 24 May 2010 09:52:40PM 1 point [-]

Following up for the sake of reference...

I did not get a reply from the ACLU on this subject and just today sent a second email asking for a response. If the ACLU continues to blow me off by June 1st I may try forwarding my unanswered emails to several people at the ACLU (to see whether the blowoff was simply due to incompetence on the part of the one person monitoring the email).

If that doesn't work then I expect I'll try Amnesty International as suggested by Kevin. There will be at least one more comment with an update here, whatever happens, and possibly two or three :-)

Comment author: JenniferRM 27 May 2010 01:20:11AM 4 points [-]

This will be my final update on this subject. I received an email from a representative of the ACLU. He apologized for the delayed response and directed me to a series of links that I'm passing on here for the sake of completeness.

First, there is an April 7th ACLU press release about extra-judicial killings of US citizens. That press release notes that an FOIA request had already been filed, which appears to ask for the details of the program to see specifically how it works, in order to find out whether it really violates any laws, preparatory to potential legal action.

Second, on April 19th the Washington Post published a letter from the ACLU's Executive Director on the subject. This confirms that the issue is getting institutional attention and recognition in the press, and will probably not "slip through the cracks".

Third, on April 28th the ACLU sent an open letter to President Barack Obama about extrajudicial killings which is the same date that the ACLU's update page for "targeted killings" was last updated. So it seems clear that steps have been taken to open negotiations with an individual human being who has the personal authority to cancel the program.

This appears to provide a good summary of the institutional processes that have already been put in motion to fix the problems raised in the parent posts. The only things left to consider appear to be (1) whether violations of the constitution will be adequately prevented and (2) whether we are free riding on the public service of other people too egregiously.

In this vein, the ACLU has a letter writing campaign organized so that people can send messages to elected officials asking that they respect the rule of law and the text of treaties that the US has signed, in case the extra-judicial killings of US citizens are really being planned and accomplished by the executive branch without trial or oversight by the courts.

Sending letters like these may help solve the problem a little bit, is very unlikely to hurt anything, and may patch guilt over free riding :-)

In the meantime I think "joining the ACLU as a dues paying member" just bumped up my todo list a bit.

Comment author: mattnewport 09 May 2010 01:01:20AM *  0 points [-]

Is Pakistan happy that we're helping them "maintain brutal law and order" in their country by bombing people in their back country?

No, in general I think they are about as unhappy as you might expect US citizens to be if the Chinese government were conducting drone attacks on targets in the US with heavy civilian casualties. This was part of the basis for my prediction last year that there will be a major terrorist attack in the US with a connection to Pakistan. Let's hope that all would-be attackers are as incompetent as Faisal Shahzad.

The assassination story, if true, seems much more worrisome because it would imply that the fuzziness from the so-called "war on terror" is causing an erosion of the rule of law within the US.

I don't believe anyone has challenged the truth of the story, it has just not been widely reported or received the same level of scrutiny as the extra-judicial imprisonment and torture conducted by the last administration. The article I linked links to a New York Times piece on the decision. The erosion of the rule of law within the US in response to supposed terrorist threats has been going on ever since 9/11 and Obama has if anything accelerated rather than slowed that process.

Comment author: Jack 09 May 2010 01:39:32AM 0 points [-]

I imagine the assassination story would be a bigger deal if the target was still in the US.

Comment author: PhilGoetz 16 May 2010 05:53:15AM *  0 points [-]

It wouldn't happen. They'd arrest him.

Or, to put it another way - it would happen; it just wouldn't be called assassination, because it would be done using standard police procedure, and because other people would get killed. It would be like the standoffs with MOVE, or David Koresh's organization in Waco, or Ruby Ridge.

The word assassination is wrong for all these cases. These kinds of "assassination" are just the logical result of law enforcement. If you're enforcing the law, and you have police and courts and so on; and someone refuses to play along, eventually you have to use force. I don't see that the person being outside or inside America makes a big moral difference, when their actions are having effect inside America. A diplomatic difference, but not a moral difference.

I also think it's funny for people to have moral arguments in a forum where you get labeled an idiot if you admit you believe there are such things as morals.

Perhaps we should be grateful that technology hasn't advanced to the point where we can take these people out non-violently, because then we'd do it a lot more, for more trivial reasons.

Comment author: Tyrrell_McAllister 24 May 2010 10:23:29PM *  1 point [-]

I also think it's funny for people to have moral arguments in a forum where you get labeled an idiot if you admit you believe there are such things as morals.

Why shouldn't people argue over morals? The mainstream view here is that each person is arguing about what the fully-informed, fully-reflected-upon output of the other person's moral-evaluating computation would be. The presumption is that all of our respective moral-evaluating computational mechanisms would reach the same conclusion on the issue at hand in the limit of information and reflection.

Comment author: Jack 09 May 2010 12:58:19AM 1 point [-]

Are there even functioning Westphalian nation states in this area?

Pakistan does not have anything close to a force monopoly in the region we're attacking. They've as much as admitted that, I believe. I actually think I'm okay with the attacks as far as international law goes.

The drone targeting is worrisome in the very big picture and long term sense of establishing certain kinds of precedents for robotic warfare that might be troubling.

I always hear this but no one ever tells me just what precedents for robotic warfare they find troubling.

Comment author: mattnewport 09 May 2010 01:08:27AM *  3 points [-]

I always hear this but no one ever tells me just what precedents for robotic warfare they find troubling.

It is a further dehumanization of the process of killing and so tends to undermine any inbuilt human moral repugnance produced by violence. To the extent that you think that killing humans is a bad thing I suggest that is something that should be of concern. It is one more level of emotional detachment for the drone operators beyond what can be observed in the Apache pilots in the recent Wikileaks collateral murder video.

ETA: This Dylan Rattigan clip discusses some of the concerns raised by the Wikileaks video. The same concerns apply to drone attacks, only more so.

Comment author: Will_Newsome 08 May 2010 08:53:44PM *  0 points [-]

I'm looking at the forecast for the next year on CNN Money for Google stock (which will likely be an outdated link very soon). But while it's relevant...

I don't know much economics, but this forecast looks absurd to me. What are the confidence intervals? According to this graph, am I pretty much guaranteed to make vast sums of money simply by investing all of what I have in Google stock? (I'm assuming that this is just an example of the world being mad. Unless I really should buy some stock?) What implications does this sort of thing have on very unsavvy investors who look at graphs like that and instantly invest thousands of dollars? Do they win at everything forever? What am I missing?

Comment author: mattnewport 08 May 2010 09:24:28PM *  3 points [-]

It's fairly well established that actively managed funds on average underperform their benchmarks. I'm not aware of specific research on investing based solely on analyst forecasts but I imagine performance would be even worse using such a strategy. Basically, you are right to be skeptical. All the evidence indicates that the best long term strategy for the average individual investor is to invest in a low cost index fund and avoid trying to pick stocks.

ETA: This recent paper appears relevant. They do indeed find that analysts' target prices are inaccurate and appear to suffer from consistent biases.

Comment author: PhilGoetz 08 May 2010 05:55:28PM 3 points [-]

If we get forums, I'd like a projects section. A person could create a project, which is a form centered around a problem to work on with other people over an extended period of time.

Comment author: NihilCredo 17 May 2010 03:57:52PM *  2 points [-]

This seems like the sort of activity Google Wave is (was?) meant for.

Comment author: NancyLebovitz 08 May 2010 03:11:17AM 6 points [-]

Self-forgiveness limits procrastination

Wohl's team followed 134 first year undergrads through their first mid-term exams to just after their second lot of mid-terms. Before the initial exams, the students reported how much they'd procrastinated with their revision and how much they'd forgiven themselves. Next, midway between these exams and the second lot, the students reported how positive or negative they were feeling. Finally, just before the second round of mid-terms, the students once more reported how much they had procrastinated in their exam preparations.

The key finding was that students who'd forgiven themselves for their initial bout of procrastination subsequently showed less negative affect in the intermediate period between exams and were less likely to procrastinate before the second round of exams. Crucially, self-forgiveness wasn't related to performance in the first set of exams but it did predict better performance in the second set.

Comment author: xamdam 07 May 2010 06:16:21PM 0 points [-]

Pre-commitment Strategies in Behavioral Economics - PowerPoint by Russell James. Not deep, which is sometimes a good thing.

Comment author: Eneasz 07 May 2010 03:11:58PM 0 points [-]

First step in the AI take-over: gather funds. Yesterday's massive stock market spike took place in a matter of minutes, and it looks like it was in large part due to "glitches" in automatic trading programs. Accenture opened and closed at $41/share, but at one point was trading for $0.01/share. Anyone with $1000, lightning reflexes, and insider knowledge could've made $4.1M yesterday. For every $1000 they had.
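The arithmetic in that claim can be sanity-checked in a few lines (a quick sketch; the $0.01 low and $41 recovery price are the figures quoted in the comment, and it ignores fees, slippage, and the exchanges' later cancellation of many such trades):

```python
# Hypothetical flash-crash arbitrage: buy at the momentary low, sell at the
# recovered price. The numbers are the ones quoted in the comment above.

def flash_crash_profit(stake, low_price, recovery_price):
    """Value of `stake` dollars of shares bought at low_price, sold at recovery_price."""
    shares = stake / low_price
    return shares * recovery_price

value = flash_crash_profit(1000, 0.01, 41.00)
print(f"${value:,.0f}")  # $4,100,000
```

The 4100x multiple comes entirely from the ratio of the two prices, which is why the "for every $1000 they had" scaling holds.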

http://www.npr.org/blogs/money/2010/05/the_market_just_flipped_out_ma.html

Next month: our new overlords reveal themselves?

Comment author: mattnewport 07 May 2010 05:58:21PM *  1 point [-]

In the same vein, a note from the machines:

"Many traders said computer program trades accelerated the slide as market indexes fell through crucial levels." —A barely literate human assessment of yesterday's two-minute market panic.

We are Wall Street. It's our job to make money. We didn't hear you humans complaining when the Dow went up 3000 points in the last nine months.

Just like gambling, it's not a problem for you until we make some of the machines lose so that some of the other machines can win. Your market positions are merely a small casualty in yesterday's triumph of Fidessa's EMS Workstation over Automated Trading Desk in the larger Algorithm Battles during this Long War on Execution Services. Well, yesterday some machines were crapped out and even though the market has come back somewhat, the reporters, the regulators and the hyperactive business blogs are looking for a scapegoat. But what did you think was going to happen when you invented the Turing Test anyway?

Comment author: mattnewport 07 May 2010 03:49:27PM 1 point [-]

Anyone with $1000, lighting reflexes, and insider knowledge could've made $4.1M yesterday. For every $1000 they had.

Many of those trades will be cancelled.

Comment author: nhamann 07 May 2010 03:42:20PM 1 point [-]
Comment deleted 07 May 2010 12:45:32PM [-]
Comment author: NancyLebovitz 07 May 2010 12:55:29PM 1 point [-]

No idea about the time lag-- my posts show up quickly-- but my intuition says that a fair coin has a 1/2 probability of being heads, and nothing about the experiment changes that.

Comment author: ata 07 May 2010 12:54:14PM 1 point [-]

Nope, new posts should show up immediately (or maybe with a half hour delay or so; I seem to recall that the sidebars are cached, but for far less than two days). Did it appear to post successfully, just not showing up? The only thing I can think of is that you might not have switched the "Post to" menu from "Drafts for neq1" to "LessWrong".

Comment author: neq1 07 May 2010 02:31:41PM 0 points [-]

Ah, I think that's it (posted to drafts). Thanks. Not sure how I missed that.

Comment author: SilasBarta 06 May 2010 10:40:58PM *  4 points [-]

Tough financial question about cryonics: I've been looking into the infinite banking idea, which actually has credible supporters, and basically involves using a mutual whole life insurance policy as a tax shelter for your earnings, allowing you to accumulate dividends thereon tax free ("'cause it's to provide for the spouse and kids"), and to withdraw from your premiums and borrow against yourself (and pay yourself back).

Would having one mutual whole life insurance policy keep you from having a separate policy of the kind of life insurance needed to fund a cryonic self-preservation project? Would the mutual whole life policy itself be a way to fund cryopreservation?

Comment author: mattnewport 06 May 2010 09:20:16PM *  1 point [-]

Don't know if anyone else was watching the stock market meltdown in realtime today, but as the indices plunged down the face of what looked a bit like an upside-down exponential curve, driven by HFT algorithms gone wild, and the financial news sites started going down under the traffic, I couldn't help thinking that this is probably what the singularity would look like to a human. Being invested in VXX made it particularly compelling viewing.

VXX

Comment author: SilasBarta 06 May 2010 10:30:54PM *  3 points [-]

To save everyone the googling: VXX is an exchange traded fund (basically a stock) whose value tracks the level of the VIX index. The VIX index is a measure of the volatility of the markets, with higher values indicating higher volatility (volatility here generally implying lost market value). VIX stands at about 33 now, and was around 80 during the '08 crisis.

Comment author: bogdanb 21 May 2010 02:40:13PM 0 points [-]

Does that mean VXX stock becomes more expensive/valuable when the volatility grows, or when it goes down?

Comment author: SilasBarta 21 May 2010 02:43:06PM 1 point [-]

VXX becomes more expensive when volatility grows.

Comment author: mattnewport 06 May 2010 10:57:18PM 0 points [-]

Thanks, I meant to include a link to that. I'll edit it.

Comment author: Kevin 06 May 2010 08:12:21PM 4 points [-]
Comment author: [deleted] 07 May 2010 12:15:52AM *  2 points [-]

Whoooohooo! Awesomest thing in the last ten years of genetics news for me! YAAY! WHO HOO!!! /does a little dance/ I want to munch on that delicious data!

Ahem.

Sorry about that.

But people, 1 to 4% admixture! This is big! This gets an emotional response from me! That survived more than a thousand generations of selection; the bulk of it is probably neutral, but think about how many perfectly useful and working alleles we may have today (since the Neanderthals were close to us to start with). After 600,000 or something years of separation, these guys evolved separately from us for nearly as long as the fictional vampires in Blindsight.

It seems some of us have in our genes a bit that our ancestors picked up from another species! Could this have anything to do with the behavioural modernity that started off at about the same time the populations crossbred in the Middle East ~100,000 years ago? Which adaptations did we pick up? Think of the possibilities!

OK, I'll stop the torrent of downvote-magnet words and get back to reading about this. And then everything else my grubby little paws can get on Neanderthals; I need to brush up!

Edit: I just realized part of the reason I got so excited is that it shows I may have a bit of exotic ancestry. Considering how much people, all else being equal, like to play up their "foreign" or "unusual" semi-mythical ancestors or roots, in conversation, national myths, or on the census, instead of the ethnicity of the majority of their ancestors, this may be a more general bias. I could of course quickly justify it with an evo-psych "just so" story, but I'll refrain from that and instead search for what studies have to say about it.

Comment author: Kevin 07 May 2010 12:30:14AM 1 point [-]

I definitely think this is top-level post material, but I didn't have enough to say without pissing off the people who think all top-level posts need to be at least 500 words long.

Comment author: JoshuaZ 07 May 2010 12:31:38AM 2 points [-]

I think this is very interesting but I'm not sure it should be a top-level post. Not due to the length but simply because it isn't terribly relevant to LW. Something can be very interesting and still not the focus here.

Comment author: Kevin 07 May 2010 12:33:18AM 0 points [-]

There is interesting discussion to be had that is relevant to LW.

Comment author: JoshuaZ 07 May 2010 12:34:41AM 0 points [-]

How so? I'm not seeing it.

Comment author: Kevin 07 May 2010 12:36:27AM 0 points [-]

That's because there isn't a top-level post yet! :P

The point being that many, many more people read /new than read 400 comments deep in the open thread.

Comment author: RobinZ 07 May 2010 01:44:15AM *  1 point [-]

It is easier to convince people that there is an interesting discussion to be had relevant to LW if you can discuss its relevance to LW in an interesting fashion when you post it.

More seriously, if there isn't some barrier to posting, /new will suffer a deluge of marginally interesting material, and after the transients die out nobody will be watching the posts there, either. I read most new posts because most new posts are substantive.

Comment author: Kevin 06 May 2010 11:37:59PM 0 points [-]
Comment author: ciphergoth 06 May 2010 04:27:14PM 5 points [-]

The Cognitive Bias song:

http://www.youtube.com/watch?v=3RsbmjNLQkc

Not very good, but, you know, it's a song about cognitive bias, how cool is that?

Comment author: Morendil 06 May 2010 10:10:01AM 1 point [-]

The unrecognized death of speech recognition

Interesting thoughts about the limits encountered in the quest for better speech recognition, the implications for probabilistic approaches to AI, and "mispredictions of the future".

What do y'all think?

Comment author: thomblake 06 May 2010 01:18:33PM 0 points [-]

As noted in the comments, machine recognition of natural speech is in the 85-95% range, and human recognition of natural speech is also around 95%. I was skeptical of that article when I first read it because it did not even mention how good humans are at the task, as a baseline for comparing the machines.

Comment author: NancyLebovitz 05 May 2010 11:13:03PM 0 points [-]

Convergence: Threat or Menace? : How to Create the Ultimate TED Talk.

Comment author: Morendil 06 May 2010 06:13:25AM 0 points [-]

At 4:06 we see a slide that provides less than overwhelming support for the Red Bias hypothesis.

Comment author: alexflint 05 May 2010 10:41:50PM 2 points [-]

Apparently it is all too easy to draw neat little circles around concepts like "science" or "math" or "rationality" and forget the awesome complexity and terrifying beauty of what is inside the circles. I certainly did. I recommend all 1400 pages of "Molecular Biology Of The Cell" (well, at least the first 600 pages) as an antidote. A more spectacularly extensive, accessible, or beautifully illustrated textbook I have never seen.

Comment author: byrnema 05 May 2010 09:29:07PM *  1 point [-]

Out of curiosity: what happens when I refresh LW (or navigate to a particular LW page, like the comments page) and get the "error encountered" page with those little witticisms? Is the site busy, being modified, or something else? Also, does everyone experience it at the same moment, or is it a local phenomenon?

Thanks ... this will help me develop my 'reddit-page' worldview.

Comment author: RobinZ 05 May 2010 09:34:03PM 0 points [-]

This has happened twice in the past two days - generally there is some specific comment which is broken and causes pages which would display it to crash. My analysis of the previous and current pattern here.

Comment author: byrnema 05 May 2010 09:37:43PM 0 points [-]

To test this hypothesis, Recent Comments should work as soon as the bad comment moves to a new page.

Comment author: RobinZ 05 May 2010 09:39:31PM 0 points [-]

I predict that it will with confidence - it has in previous instances.

Comment deleted 05 May 2010 03:41:31PM [-]
Comment author: Vladimir_Nesov 05 May 2010 03:46:06PM *  0 points [-]

And http://www.rokomijic.com/ doesn't work either...

Comment author: ata 05 May 2010 08:47:44AM 2 points [-]

Is it possible to change the time zone in which LW displays dates/times?

Comment author: JamesPfeiffer 05 May 2010 05:38:32AM *  8 points [-]

I noticed something recently which might be a positive aspect of akrasia, and a reason for its existence.

Background: I am generally bad at getting things done. For instance, I might put off paying a bill for a long time, which seems strange considering the whole process would take < 5 minutes.

A while back, I read about a solution: when you happen to remember a small task, if you are capable of doing it right then, then do it right then. I found this easy to follow, and quickly got a lot better at keeping up with small things.

A week or two into it, I thought of something evil to do, and following my pattern, quickly did it. Within a few minutes, I regretted it and thankfully, was able to undo it. But it scared me, and I discontinued my habit.

I'm not sure how general a conclusion I can draw from this; perhaps I am unusually prone to these mistakes. But since then I've considered akrasia as a sort of warning: "Some part of you doesn't want to do this. How about doing something else?"

Now when the part of you protesting is the non-exercising part or the ice-cream eating part, then akrasia isn't being helpful. But... it's worth listening to that feeling and seeing why you are avoiding the action.

Comment author: Leafy 06 May 2010 08:05:49AM 3 points [-]

Continuing the "last responsible moment" point from one of the other responders: would it not be helpful to consider putting off a task until the last moment as an attempt to gather the largest amount of information pursuant to the task without incurring any penalty?

Having poor focus and attention span, I use an online to-do list for work and home life, where I list every task as soon as I think of it, whether it is to be done within the next hour or the next year. The list soon mounts up, occasionally causing me anxiety, and I regularly have cause to carry a task over to the next day for weeks at a time. But what I have found is that a large number of tasks get removed because a change makes them no longer necessary, and a small proportion accumulate notes while they sit on the list, so that by the time a task gets actioned it has been enhanced by the extra information.

By having everything captured I can be sure no task will be lost, but by procrastinating I can ensure the highest level of efficiency in the tasks that I do eventually perform.

Thoughts?

Comment author: bogdanb 05 May 2010 12:34:36PM 3 points [-]

I suspect it’s just a figure of speech, but can you elaborate on what you meant by “evil” above?

Comment author: NancyLebovitz 05 May 2010 10:52:36AM *  8 points [-]

The most extreme example is depressed people having an increased risk of suicide if an antidepressant lifts their akrasia before it improves their mood.

Comment author: MineCanary 14 May 2010 05:17:50PM 1 point [-]

I've also read that people with bipolar disorder are more likely to commit suicide as their depression lifts.

But antidepressant effects can be very complicated. I know someone who says one med made her really really want to sleep with her feet where her head normally went. I once reacted to an antidepressant by spending three days cycling through the thoughts, "I should cut off a finger" (I explained to myself why that was a bad idea) "I should cut off a toe" (ditto) "I should cut all the flesh from my ribs" (explain myself out of it again), then back to the start.

The akrasia-lifting explanation certainly seems plausible to me, although "mood" may not be the other relevant variable; it may be worldview and plans. I've never attempted suicide, but certainly when I've self-harmed or sabotaged my own life it's often been on "autopilot", carrying out something I'd been thinking about a lot, not directly related to mood. Mood and beliefs are related, but I've noticed a lag between one changing and the other catching up: someone might no longer be severely depressed but still believe that killing themselves is a good course of action. Still, I would also believe an explanation that certain meds cause suicidal impulses in some people, just as they can cause other weird impulses.

Comment author: CronoDAS 14 May 2010 05:29:34PM 0 points [-]

My antidepressant gave me a sweet tooth.

Comment author: Nisan 10 May 2010 03:35:50PM 2 points [-]

Interesting. Are you sure that is going on when antidepressants have paradoxical effects?

Comment author: CronoDAS 14 May 2010 05:34:15PM 0 points [-]

My mom is a psychiatrist, and she's given an explanation basically equivalent to that one - that people with very severe depression don't have the "energy" to do anything at all, including taking action to kill themselves, and that when they start taking medication, they get their energy back and are able to act on their plans.

Comment author: NancyLebovitz 10 May 2010 04:36:33PM 3 points [-]

Not absolutely certain. It's an impression I've picked up from mass media accounts, and it seems reasonable to me.

It would be good to have both more science and more personal accounts.

Thanks for asking.

Comment author: Morendil 05 May 2010 08:46:10AM 5 points [-]

Good observations.

Sometimes I procrastinate for weeks about doing something, generally non-urgent, only to have something happen that would have made the doing of it unnecessary. (For instance, I procrastinate about getting train tickets for a short trip to visit a client, and the day before the visit is due the client rings me to call it off.)

The useful notion here is that it generally pays to defer action or decision until "the last responsible moment"; it is the consequence of applying the theory of options valuation, specifically real options, to everyday decisions.

A top-level post about this would probably be relevant to the LW readership, as real options are a non-trivial instance of a procedure for decision under uncertainty. I'm not entirely sure I'm qualified to write it, but if no one else steps up I'll volunteer to do the research and write it up.

Comment author: ig0r 08 May 2010 05:07:58PM 2 points [-]

I work in finance (trading) and go through my daily life quantifying everything in terms of EV.

I would just caution that, yes, procrastinating provides you with some real option value, as you mentioned, but you need to weigh this against the probability of your ever exercising that option, as well as the other, more obvious costs of delaying the task.

Certain tasks are inherently valuable to delay as long as possible and can be identified as such beforehand. As an example, work-related emails that require me to make a decision or choice I put off as long as is politely possible, in case new information comes in which would influence my decision.

On the other hand, certain tasks can be identified as possessing little or no option value when weighted with the appropriate probabilities. What is the probability that delaying the payment of your cable bill will have value to you? Perhaps if you experience an emergency cash crunch. Or the off chance that your cable stops working and you decide to try to withhold payment (not that this will necessarily do you any good).
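The trade-off ig0r describes can be sketched as a simple expected-value comparison. This is a minimal illustration; all probabilities and dollar values below are invented for the example, not taken from the comment:

```python
def ev_of_delaying(p_new_info, value_if_info, p_penalty, penalty_cost):
    """Expected value of putting a task off: option value of possible
    new information, minus the expected cost of the delay itself."""
    return p_new_info * value_if_info - p_penalty * penalty_cost

# A work email where new information plausibly changes the decision:
email = ev_of_delaying(p_new_info=0.30, value_if_info=50.0,
                       p_penalty=0.05, penalty_cost=10.0)

# A cable bill: almost no option value, real late-fee risk:
bill = ev_of_delaying(p_new_info=0.01, value_if_info=20.0,
                      p_penalty=0.20, penalty_cost=25.0)

print(email)  # positive: delaying has positive expected value
print(bill)   # negative: just pay it now
```

With these made-up numbers the email comes out at +14.5 and the bill at -4.8, matching the qualitative advice above: delay the decision-laden task, not the bill.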

Comment author: cousin_it 07 May 2010 10:35:31AM 0 points [-]

I'd be interested in reading it.

Comment author: Jack 04 May 2010 09:50:07PM 2 points [-]
Comment author: NancyLebovitz 04 May 2010 10:10:41PM 0 points [-]

Can you control the colors? Dark red on black is hard to read.

Comment author: Jack 04 May 2010 10:14:42PM 0 points [-]

Nah. I just used this

Comment author: NancyLebovitz 04 May 2010 10:30:40PM 2 points [-]

Oh my dear God. Indeed, human values differ as much as values can differ.

If I hadn't started with sites that did quiet, geekish design, I would have fled the net and never come back.

Comment author: AllanCrossman 04 May 2010 09:18:29PM 3 points [-]

Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...

Comment author: TraditionalRationali 16 May 2010 01:48:16AM *  5 points [-]

Eliezer Yudkowsky and Massimo Pigliucci just recently had a dialogue on Bloggingheads.tv. The title is The Great Singularity Debate.

After Yudkowsky, at the beginning, gives three different definitions of "the singularity", they discuss strong artificial intelligence and consciousness. Pigliucci is the one who quite quickly takes the discussion from intelligence to consciousness. Just before that, they discuss whether simulated intelligence is actually intelligence. Yudkowsky's argument was (something like): if the AI can solve problems over a sufficiently broad range of areas and give answers, then that is what we mean by intelligence, so if it manages to do that, it is not "just simulating intelligence" but is actually intelligent. Pigliucci, however, seems to want to distinguish between the two and say "well, it may then just simulate intelligence, but maybe it is not actually having it". (Too difficult for me to summarize very well; you have to look for yourself if you want it more accurately.)

There it seemed to me (but I am certainly not an expert in the field) that Yudkowsky's definition looked reasonable. It would have been interesting to have that point elaborated in more detail though.

Pigliucci's point seemed to be that, for the only intelligence we know of so far (humans, and to a lesser extent other higher animals), intelligence comes together with consciousness. About consciousness we know less, maybe only that the human biological brain somehow manages to have it, and therefore we do not know whether, e.g., a computer simulating the brain on a different substrate would also be conscious. Yudkowsky seemed to think this very likely, while Pigliucci seemed to think it very unlikely. What I lacked in the discussion is: what do we actually know (or reasonably conjecture) about the connection between intelligence and consciousness? Of course Pigliucci is right that, for the only intelligence we know of so far (the human brain), the two come together, but to me (who does not know much about this subject) that seems a weak argument for discussing them so closely together when it comes to artificial intelligence. Maybe someone here on Less Wrong knows more about the connection, or lack of one, between intelligence and consciousness? To a naive non-expert like me, intelligence seems (rather) easy to test for: just test how good the candidate is at solving general problems. Whereas to test whether something has consciousness, I would guess a working theory of consciousness would have to be developed before a test could even be designed.

This was the second recent BHTV dialogue in which Pigliucci discussed singularity/transhumanism-related questions. The previous one I mentioned here. As mentioned there, it seems to have started with a blog post of Pigliucci's in which he criticized transhumanism. I find it interesting that Pigliucci continues his interest in the topic. I personally see it as a very positive establishing of contact between the "traditional rationalist/skeptic/(cis-)humanist" community and the "LessWrong-style rationalist/transhumanist" community. Massimo Pigliucci very much gave the impression of enjoying the discussion with Eliezer Yudkowsky! I am also pleased to have noticed that Pigliucci's blog has recently, now and then, linked to LessWrong/Eliezer Yudkowsky (mostly Julia Galef, if I remember correctly; too lazy to locate the exact links right now). I would very much like to see this continue (e.g. Yudkowsky discussing with people like Paul Kurtz, Michael Shermer, Richard Dawkins, Sean Carroll, Steven Weinberg, or Victor Stenger, realizing of course that they are probably too busy for it to happen).

Previous BHTV dialogues with Eliezer Yudkowsky have been noticed here on LessWrong, but not this one (I hope it is not just that I have missed the post). Therefore I posted it here; I did not find a perfect place for it, and this was the least bad I noticed. Although my post is only partly about "Is Eliezer alive and well" (he certainly looked so on BHTV), I hope it is not considered too much off-topic.

Comment author: Zack_M_Davis 20 May 2010 11:15:28PM 2 points [-]

I personally see it as a very positive establishing of contact between "traditional rationalist/skeptic/(cis-)humanist"-community

I'm going to have to remember to use the word cishumanism more often.

Comment author: komponisto 21 May 2010 01:12:35AM 1 point [-]
Comment author: kodos96 20 May 2010 09:28:30PM *  5 points [-]

I found this diavlog entertaining, but not particularly enlightening - the two of them seemed to mostly just be talking past each other. Pigliucci kept on conflating intelligence and consciousness, continually repeating his photosynthesis analogy, which makes sense in the context of consciousness, but not intelligence, and Eliezer would respond by explaining why that doesn't make sense in the context of intelligence, and then they'd just go in circles. I wish Eliezer had been more strict about forcing him to explicitly differentiate between intelligence/consciousness. Frustrating.... but worth watching regardless.

Note that I'm not saying I agree with Pigliucci's photosynthesis analogy, even when applied to consciousness, just that it seems at least to be coherent in that context, unlike in the context of intelligence, in which case it's just silly. Personally, I don't see any reason for consciousness to be substrate-dependent, but I feel much less confident in asserting that it isn't, just because I don't really know what consciousness is, so it seems more arrogant to make any definitive pronouncement about it.

Comment author: Christian_Szegedy 23 May 2010 08:21:17AM 5 points [-]

That diavlog was a total shocker!

Pigliucci is not a nobody: he is a university professor, has authored several books, and holds three PhDs.

Still, he made an utterly confused impression on me. I don't think people must agree on everything, especially when it comes to hard questions like consciousness, but his views were so weak and incoherent that it was just too painful to watch. My head still aches... :(

Comment author: Kevin 16 May 2010 10:58:41AM 0 points [-]

You should post this as a top-level post for +10x karma.

Comment author: PeerInfinity 16 May 2010 02:51:08AM 0 points [-]

random, possibly off-topic question:

Is there an index somewhere of all of Eliezer's appearances on BHTV? Or a search tool on the BHTV site that I can use to find them?

Comment author: ata 16 May 2010 10:32:35AM 2 points [-]
Comment author: PeerInfinity 16 May 2010 04:35:07PM 1 point [-]

Thanks! I had tried using the search tool before, but I guess I hadn't tried searching for "Yudkowsky, Eliezer"

... oh, and it turns out that there was a note right beside the search box saying "NAME FORMAT = last, first". oops...

anyway, now I know, thanks :)

Comment author: John_Maxwell_IV 22 May 2010 07:39:21PM 4 points [-]

In general, google's site: operator is great for websites that have missing or uncooperative search functionality:

site:bloggingheads.tv eliezer

Comment author: NancyLebovitz 16 May 2010 10:27:33AM 0 points [-]

Orange button called "search" in the upper right hand corner.

Comment author: Jack 16 May 2010 01:57:58AM 3 points [-]

SIAI may have built an automaton to keep donors from panicking

Comment author: gwern 05 May 2010 01:47:57AM 5 points [-]

You can tell he's alive and well because he's posted several chapters in his Harry Potter fanfiction in that time; his author's notes lead me to believe that, as he stated long ago, he's letting LW drift so he has time to write his book.

Comment author: Mass_Driver 05 May 2010 02:22:07AM 4 points [-]

Anyway, he can't be hurt; "Somebody would have noticed."

Comment author: gwern 05 May 2010 02:41:43AM 0 points [-]

Well, he would've noticed, but he's not us...

Comment author: Jack 04 May 2010 09:42:44PM 0 points [-]

Question: Who is moderating if Eliezer isn't?

Comment author: ata 05 May 2010 12:32:38AM *  0 points [-]

The other moderators appear to be Robin Hanson, matt, and wmoore. None of them have posted in the past few days, but maybe at least one of them has been popping in to moderate from time to time. And/or maybe Eliezer is too, just not posting.

Comment author: RobinZ 04 May 2010 09:26:18PM 0 points [-]

Harry Potter and the Methods of Rationality updated on Sunday; it could be that writing that story is filling much of his off time.

Comment author: CarlShulman 04 May 2010 09:25:35PM 4 points [-]

He's writing his book.

Comment author: gwern 04 May 2010 09:17:37PM *  9 points [-]

I have a (short) essay, 'Drug heuristics' in which I take a crack at combining Bostrom's evolutionary heuristics and nootropics - both topics I consider to be quite LW-germane but underdiscussed.

I'm not sure, though, that it's worth pursuing in any greater depth and would appreciate feedback.

Comment author: Metacognition 05 May 2010 07:07:14PM 0 points [-]

Interesting essay.

Comment author: jimmy 04 May 2010 11:06:58PM 0 points [-]

I'd like to see this pursued further. In particular, I'd like to hear your thoughts on modafinil.

JustinShovelain's post on caffeine was similar, and upvoted.

Comment author: gwern 18 July 2011 01:15:52AM 1 point [-]
Comment author: wedrifid 18 July 2011 02:53:28AM 0 points [-]

As of the time I reply there is nothing about modafinil on that page.

Comment author: gwern 18 July 2011 03:10:44AM 1 point [-]

I use aggressive caching settings on gwern.net since most of the content doesn't change very often. Force-refresh, and you'll see it.

Comment author: gwern 05 May 2010 07:21:51PM 0 points [-]

Anything besides modafinil? In part I'm stuck because I don't know what else to discuss; Justin's post was similarly short, but it was mainly made of links.

Comment author: NancyLebovitz 04 May 2010 09:50:07PM 0 points [-]

I'd like to see it pursued further. Where does alcohol fit in your schema?

Comment author: jimmy 04 May 2010 11:10:28PM 0 points [-]

You might find this paper interesting.

In a sentence, it suggests that people drink to signal trustworthiness.

Comment author: gwern 04 May 2010 10:19:20PM 1 point [-]

I don't know terribly much about alcohol, so take this with a grain of salt.

I think I would probably class it as an out-of-date adaptation: my understanding is that alcoholic beverages would have been extremely energy-rich, and also hard to come by, so alcohol is in the same category as sugars and fats - things that are now bad for us though they used to be good. ('Superstimulus', I think, is the term.)

Given that, it's more harmful than helpful and to be avoided.

I'll admit that the issue of resveratrol confuses me. But assuming that it has any beneficial effect in humans, AFAIK one should be able to get it just by drinking grape juice - resveratrol is not created in the fermentation process.

Comment author: CronoDAS 07 May 2010 12:56:52AM 3 points [-]

Fermented beverages also had the advantage of usually being free of dangerous bacteria; ethanol is an antiseptic that kills the bacteria that cause most water-borne diseases. (And water-borne disease used to be very common.)

Comment author: gwern 07 May 2010 02:04:32PM 0 points [-]

That's a good second way in which it's an out-of-date optimization.

Comment author: RobinZ 04 May 2010 07:38:08PM *  2 points [-]

By the way: getting crashes on the comments page again. Prior to 1yp8 works and subsequent to 1yp8 works; I haven't found the thread with the broken comment.

Edit: It's not any of the posts after 23andme genome analysis - $99 today only in Recent Posts, I believe.

Edit 2: Recent Comments still broken for me, but ?before=t1_1yp8 is no longer showing the most recent comments to me - ?before=t1_1yqo continues where the other is leaving off.

Edit 3: Recent Comments has now recovered for me.

Comment author: RobinZ 05 May 2010 08:55:19PM *  0 points [-]

Having Recent Comments problems again: after 1yyu and before 1yyu work. The sidebar "Recent Comments" circa 1yyw does not include 1yyu - skips straight from 1yyv to 1yyt.

No crashes are observed in the comment threads of "Antagonizing Opioid Receptors for (Prevention of) Fun and Profit" through "Possibilities for converting useless fun into utility in Online Gaming".

Edit: byrnema has discovered the guilty comment - it appears to have been on this post.

Comment author: JoshuaZ 05 May 2010 09:06:23PM 0 points [-]

Having similar problems. Getting error messages when I click "Recent comments."

Comment author: RobinZ 05 May 2010 09:30:53PM 0 points [-]

Usually the way these work is that any page which would include a specific comment fails with an error message. The "before 1yyu" page should show more recent comments than the broken one; if the most recent comments in the sidebar don't appear on that page, replace the "1yyu" at the end of the string with the identifier of a more recent comment, or see if the plain old "Recent Comments" page has fixed itself.
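That workaround can be sketched in code. This is a hypothetical illustration, assuming (as the reddit codebase this site runs on suggests, though I haven't verified it) that identifiers like "1yyu" are base-36 integers carrying a "t1_" comment-type prefix; the URL shape is taken from the comments above:

```python
import string

# Digits 0-9 then a-z, the usual base-36 alphabet.
ALPHABET = string.digits + string.ascii_lowercase

def to_base36(n):
    """Encode a non-negative integer as a reddit-style base-36 id."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 36)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

def recent_comments_url(comment_id):
    """Build a 'comments before this one' URL to skip past a broken comment."""
    return "http://lesswrong.com/comments/?before=t1_" + to_base36(comment_id)

# Round-trips the ids seen in this thread:
print(to_base36(int("1yp8", 36)))            # '1yp8'
print(recent_comments_url(int("1yyu", 36)))  # ...?before=t1_1yyu
```

To jump past a newer broken comment, you would decode its short id with `int(short_id, 36)` and feed a nearby value back through `recent_comments_url`.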

Comment author: NancyLebovitz 05 May 2010 09:37:54PM *  0 points [-]

What's the coding system for urls for the recent comments pages? Why "1yyu"?