All of Jess_Riedel's Comments + Replies

Many people within the Boston EA community seem to have come to it post college and through in-person discussions.

Hmm. I haven't spent much time in the area, but I went to the Cambridge, MA LessWrong/Rationality "MegaMeetup" and it was almost exclusively students. Is there a Boston EA community substantially disjoint from this LW/Rationality group that you're talking about?

More generally, are there many historical examples of movements that experience rapid growth on college campuses but then were able to grow strongly elsewhere? Civil righ... (read more)

1jefftk
Weird; that's not my memory of it or my perception of the group. The meetup was at Harvard, which meant we had a couple more students than usual, but I think 80%+ of the local people at the meetup were out of school. At the meetup last night, which I remember better, there were about 15 of us and I think only one student (a late-30s statistics grad student).

There's a lot of overlap, but it's a separate group. Looking over the RSVPs at our most recent dinner I count 8 people who also go to LessWrong, and 15 who don't. On the same list I count three students.

The history of movements is something I'd like to know more about, but haven't really looked into much. (One thing I found frustrating when I did is that there's a huge amount of survivorship bias.)

Facebook did this, though it's not a movement.

I agree, and am similarly pessimistic. But $100k is still a lot of money, and we don't yet have that much experience trying to figure out how to spend it.

There are very few Bobs who are supported by EA funding, but I can think of several people who switched to EA after lots of talking with existing EAs. Right now we have relatively little personal outreach and relatively more digital/idea-based outreach, so we should expect to meet more people who were receptive to the arguments when they heard them remotely.

I'm not sure the church was strategic or flexible enough to do this, and even then I doubt kids were anywhere near as expensive then as they are now. Specifically, I think the age at which a kid went from net-consumer to net-producer was something like 9, compared to today's 22. (But I'm not very informed on this.)

Yes!

I mostly disagree with both parts of the sentence "Except that it's much cheaper to convince other people's kids to be generous, and our influence on the adult behavior of our children is not that big." I would argue that

(1) Almost all new EA recruits are converted in college by friends and/or by reading a very small number of writers (e.g. Singer). This is something that cannot be replicated by most adults, who are bad writers and who are not friends with college students. We still need good data on the ability of typical humans to convert... (read more)

2jefftk
Maybe currently, but it doesn't have to be. Many people within the Boston EA community seem to have come to it post college and through in-person discussions. Do college EAs need more support? Would better versions of things like ThINK's modules help? Funding for free food for meetings? Would subsidizing TLYCS distribution or some upcoming EA book do much to increase the spread of ideas? If you can convince one new person to be an EA for $100k you're more efficient than successfully raising your kid to be one, and that's ignoring time-discounting. I think religions mostly expand at first through conversion and then once they start getting diminishing returns switch to expanding through reproduction. EA isn't to this changeover point yet, and isn't likely to be for a while. But I also don't know that much about it.
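The $100k-per-convert comparison in the reply above can be made explicit with a quick back-of-the-envelope calculation. A minimal sketch, where every number except the $100k figure is an assumed placeholder rather than a claim from the thread:

```python
# Compare the present cost of raising a child (hoping they become an EA)
# against a $100k-per-convert outreach budget. The annual cost, time
# horizon, and discount rate below are assumptions for illustration.
def present_cost_of_raising(annual_cost, years, discount_rate):
    return sum(annual_cost / (1 + discount_rate) ** t for t in range(years))

outreach_cost_per_convert = 100_000                   # figure from the comment
kid_cost = present_cost_of_raising(15_000, 22, 0.03)  # assumed numbers

# Even with discounting, raising a child costs more per (hoped-for)
# convert than the $100k outreach figure under these assumptions.
print(round(kid_cost))
print(kid_cost > outreach_cost_per_convert)  # True
```

The point is only directional: unless raising a child is far cheaper than these placeholder numbers, or conversion probability is near-certain, outreach at $100k per convert wins on cost.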

The invention of nuclear weapons seems like the overwhelmingly best case study.

  1. New threat/power comes from fundamental new scientific insight.
  2. Existential risks (nuclear winter, runaway nitrogen fusion in the atmosphere).
  3. Massive potential effects, both positive and negative (nuclear power for everything, medical treatments, dam building and other manipulation of Earth's crust, space exploration, elimination of war, nuclear war, increased asymmetric warfare, reactor meltdowns, increased stability of dictatorships). Some were realized.
  4. Very large first-move
... (read more)

With respect, I've always found the dynamic inconsistency explanation silly. Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents. In other words, you look at a person's behavior, realize that it doesn't follow a time-invariant utility function, and say "Aha! Their utility function just varies with time, in a manner leading to a temporal conflict of interests!" But given sufficient flexibility in utility function, you can model any behavior as that of a utility-maximizing ... (read more)

4Academian
Utility theory is a normative theory of rationality; it's not taken seriously as a descriptive theory anymore. Rationality is about how we should behave, not how we do. This is a common confusion about what dynamic inconsistency really means, although I'm now noticing that Wikipedia doesn't explain it so clearly, so I should give an example:

Monday-self says: I should clean my room on Thursday, even if it will be extremely annoying to do so (within the usual range of how annoying the task can be), because of the real-world benefits of being able to have guests over on the weekend.

Thursday-self says: Oh, but now that it's Thursday and I'm annoyed, I don't think it's worth it anymore.

This is a disagreement between what your Monday-self and your Thursday-self think you should do on Thursday. It's a straight-up contradiction of preferences among outcomes. There's no need to think about utility theory at all, although preferences among outcomes, and not items, is exactly what it's designed to normatively govern.

ETA: The OP now links to a lesswrongwiki article on dynamic inconsistency.
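The Monday/Thursday reversal above falls out of hyperbolic discounting, where value shrinks as reward / (1 + k × delay). A minimal sketch; the discount constant k and the payoff numbers are made-up illustrations, not anything from the thread:

```python
# Hyperbolic discounting: value = reward / (1 + k * delay_in_days).
# k and the payoff magnitudes below are assumptions for illustration.
def value(reward, delay_days, k=1.0):
    return reward / (1 + k * delay_days)

benefit_of_clean_room = 10.0  # enjoyed on Saturday
annoyance_of_cleaning = 6.0   # paid on Thursday

# From Monday: cleaning is 3 days away, the payoff 5 days away.
monday_view = value(benefit_of_clean_room, 5) - value(annoyance_of_cleaning, 3)

# From Thursday: the annoyance is immediate, the payoff 2 days away.
thursday_view = value(benefit_of_clean_room, 2) - value(annoyance_of_cleaning, 0)

print(monday_view > 0)    # True: Monday-self endorses cleaning
print(thursday_view > 0)  # False: Thursday-self refuses
```

The same two outcomes get opposite rankings depending on when they are evaluated, which is exactly the "straight-up contradiction of preferences" described above.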

I agree with you in general, and would especially like to hear from some LW psychologists. I think this field is pretty new, though, and not heavily dependent on any canon.

I've never heard of willpower depletion... Surely willpower is a long-term stat like CON, not a diminishable resource like HP.

In fact, previous research has shown that it is a lot like HP in many situations. See the citations near the beginning of the article.

3TobyBartels
Yeah, I see that now, but it's still very weird to me. And the new article seems to explain why: I think of willpower as like CON, so for me it's like CON. Others think of it as HP, so for them it's like HP. I just didn't realise that there was anybody like those others before!

Sure, on average it's negative sum. But I have to guess that society as a whole suffers greatly from having many (most?) of its technically skilled citizens at the low end of the social-ability spectrum. The question would be whether you could design a set of institutions in this area which could have a net positive benefit on society. (Probably not something I'll solve on a Saturday afternoon...)

I'm pretty sure this varies state-to-state.

6Airedale
"Common law" is the court-made law historically developed in England and exported to most (all?) English colonies. These courts came up with the principle of mitigating certain homicides to manslaughter, including in the case of the murderous husband. It’s possible that the reasoning behind the very first use of the principle may have been something like the ad hoc lowering of the sentence in Yvain’s example. (It’s also possible that some of the early judges may have stated their reasoning differently and in a way that is in conflict with modern values; one thread that runs through some of the early murderous husband authority is that the husband is partially justified in the killing because he is protecting his “property” from “trespass.”) As courts continued to apply the reasoning, this principle became a well-established part of the “common law.” As both of my comments suggested, there are likely variations in the current state of the law among jurisdictions. This is true even among common law jurisdictions, that is, among other countries with a common law background and among the states with such background (Louisiana has a civil law background). I believe that no states currently rely on the common law for homicide law, but instead all states have enacted statutes defining the various degrees of homicide, that is, defining murder in various degrees, manslaughter (voluntary/involuntary), negligent homicide, etc. (States have taken somewhat different approaches here; per the same book I quoted previously, “reform of the common law has taken three separate paths,” including a version dividing homicide into three offenses, murder, manslaughter, and negligent homicide, which I believe is similar to what jimrandomh was describing. But further discussion of those paths in this comment seems like too much of a detour.) At any rate, in most states, the definitions in the penal code draw strongly from the original common law definitions as well as from later adaptation

Well, there are three kinds of meetups I can imagine.

(1) You go for the intellectual content of the meeting. This is what I was hoping for in Santa Barbara. For the reasons I mentioned above, I now think it's unlikely that the intellectual content will ever be worthwhile unless somebody does some serious planning/preparation.

(2) You go for the social enjoyment of the meeting. I confirmed my suspicion in SB that I personally wouldn't socially mesh with the LW crowd, although maybe this was a small sample size thing.

(3) You go to meet interesting people. ... (read more)

2Zachary_Kurtz
The NYC group, and olimay in particular, has certainly challenged my thinking. I might be coming from a very different place than you, however.

I suffer from exactly the same thing, but I don't think this is what Roko is worrying about, is it? He seems to worry about "ugh fields" around important life decisions (or "serious personal problems"), whereas you and I experience them around normal tasks (e.g. responding to emails, tackling stuck work, etc.). The latter may be important tasks -- making this an important motivation/akrasia/efficiency issue -- but it's not a catastrophic/black-swan type risk.

For example, if one had an ugh field around their own death and this prevented th... (read more)

3Roko
It could be either, as far as I can see, though I expect that an Ugh Field around responding to emails is not really about email, but rather about some other, deeper threat that it became associated with. Merely receiving an email doesn't have the power to condition you, but your boss' power over you might well do.

Could you suggest a source for further reading on this?

I attended a meetup in Santa Barbara which I found largely to be a waste of time. The problem there--and I think, frankly, with LW in general--is that there just aren't that many of us with something insightful to say. (I certainly don't have much.) While it's great, I guess, that the participants acknowledge the importance behind some of the ideas championed by Yudkowsky and Hanson, most of us don't have anything to add. Some of us may be experts in other fields, but not in rationality.

Here's the perfect analogy: it's like listening to a bunch of coll... (read more)

0olimay
I always find it worthwhile, but maybe it's not what you are expecting or looking for. It's become a social group, with a slightly intellectual bent. It's not an attempt to recreate LessWrong in-person. The core group really has become a community, as in: make connections, understand each other, communicate, and in certain ways, offer mutual support. I find the discussion almost always stimulating, even though I only go up once a month. Q: Generally, what kinds of meetups would you enjoy attending?

What happens at the meetups?

1olimay
Discussion, mostly ad hoc. On some occasions the discussion has been more focused; it was assumed participants had read certain LW-related things.
2Roko
Orgies.

In most books, insurance fraud is morally equivalent to stealing. A deontological moral philosophy might commit you to donating all your disposable income to GiveWell-certified charities while not permitting you to kill yourself for the insurance money. But, yeah, utilitarians will have a hard time explaining why they don't do this.

Exactly. If a parent doesn't think cryonics makes sense, then they wouldn't get it for their kids anyways. Eliezer's statement can only criticize parents who get cryonics for themselves but not their children. This is a small group, and I assume it is not the one he was targeting.

Yes, of course it is weak evidence. But I can come up with a dozen examples off the top of my head where powerful organizations did realize important things, so your examples are very weak evidence that this behavior is the norm. So weak that it can be regarded as negligible.

3alyssavance
Important things that weren't recognized by the wider populace as important things? Do you have citations? Even for much more mundane things, governments routinely fail either to notice them or to act once they have noticed. E.g., Chamberlain didn't notice that Hitler wanted total control of Europe, even though he said so in his publicly-available book Mein Kampf. Stalin didn't notice that Hitler was about to invade, even though he had numerous warnings from his subordinates.

The existence of historical examples where people in powerful organizations failed to realize important things is not evidence that it is the norm or that it can be counted on with strong confidence.

4alyssavance
Yes, it is. How could examples of X not be evidence that the "norm is X"? It may not be sufficiently strong evidence, but if this one example is not sufficiently damning, there are certainly plenty more.

It's hard to think of a policy which would have a smaller impact on a smaller fraction of the wealthiest population on earth. And it faces extremely dedicated opposition.

2CronoDAS
Well, I mean "low-hanging fruit" in that it doesn't really cost any money to implement. Symbolism is cheap; providing material benefits is more expensive, especially in developed countries. I don't know much about the political situation in Scotland; I know about a few miscellaneous stupidities in the U.S. federal government that I'd like to get rid of (abstinence-only sex education, "alternative" medicine research) but I suspect that Scotland and the rest of the U.K. is stupid in different ways than the U.S. is.

I still think that Caplan's position is dumb. It's not so much a question of whether his explanation fits the data (although I think Psychohistorian has shown that in this case it does not), it's that it's just plain weird to characterize the obsessive behavior done by people with OCD as a "preference". I mean, suppose that you were able to modify the explanation you've offered (that OCD people just have high preferences for certainty) in a way that escapes Psychohistorian's criticism. Suppose, for instance, you simply say "OCD people jus... (read more)

I agree that everything you do, you genuinely want to do, in the sense that you're not doing it under duress.

I really think this is a bad way to think about it. Please see my comment elsewhere on this page.

EDIT: Unless of course you just define "genuinely wanting to do something" as anything one does while not under duress. But in that case, what counts as duress?

This is one place where Caplan seems to go off the deep end. I think it illustrates what happens if you take the Cynic's view to the logical conclusion. In his "gun to the head" analogy, Caplan suggests that OCD isn't really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it's better to just say that the person just really likes doing those repetitive behaviors.

As one commenter points out, this is equivalent to saying a person with a broke... (read more)

4SilasBarta
I agree with your point here -- strongly. But I also think you're being unfair to Caplan. While his position is (I now realize) ridiculous, the example you gave is not. His position would not be that they like doing those behaviors per se, but rather, that they have a very strange preference that makes those behaviors seem optimal. Caplan would probably call it "a preference for an unusually high level of certainty about something". For example, someone with OCD needs to perceive 1 million:1 odds that their hands are now clean, while normal people need only 100:1 odds. So the preference is for cleanliness-certainty, not the act of hand-washing. To get that higher level of certainty requires that they wash their hands much more often. Likewise, an OCD victim who has to lock their door 10 times before leaving has an unusually high preference for "certainty that the door is locked", not for locking doors. Again, I don't agree with this position, but its handling of OCD isn't that stupid.
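Under that reading, the difference between the 1 million:1 and 100:1 thresholds shows up as extra washes needed to push the odds past the bar. A toy sketch; the 10:1 per-wash likelihood ratio and the even starting odds are assumptions, not part of the comment:

```python
# Toy model of the "preference for certainty" reading: each wash
# multiplies the odds that one's hands are clean by an assumed
# likelihood ratio of 10:1, starting from even (1:1) odds.
def washes_needed(target_odds, ratio_per_wash=10.0):
    odds, washes = 1.0, 0
    while odds < target_odds:
        odds *= ratio_per_wash
        washes += 1
    return washes

print(washes_needed(100))        # 100:1 threshold -> 2 washes
print(washes_needed(1_000_000))  # 1,000,000:1 threshold -> 6 washes
```

So in this model the behaviors differ only in degree: the same evidence-gathering process run to a much higher certainty threshold, which is exactly Caplan's framing.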

Careful. The term "graph theory" is usually used to refer to a specific branch of mathematics which I don't think you're referring to.

1Fetterkey
My mistake, I was referring to the Edward Tufte stuff. Thank you for correcting me.

I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with a non-infinitesimal confidence that progress will be made. Certainly not amateur rationalists doing philosophy in their spare time (not that this isn't healthy). I don't mean to say that rationalists should give up, but we have to choose how to act in the meantime.

Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is ve... (read more)

I've read it before. Though I have much respect for Eliezer, I think his excursions into moral philosophy are very poor. They show a lack of awareness that all the issues he raises have been hashed out decades or centuries ago at a much higher level by philosophers, both moral realists and otherwise. I'm sure he believes that he brings some new insights, but I would disagree.

Moral skepticism is not particularly impressive as it's the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe.

The problem is that we still must choose how to act. Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And...that's it. We can make no criticism whatsoever about the actions of others, not even that they should act rationally. We cannot say that striving ... (read more)

0Ziphead
My point is that people striving to be rational should bite this bullet. As you point out, this might cause some problems - which is the challenge I propose that rationalists should take on. You may wish to think of your actions as non-arbitrary (that is, justified in some special way, cf. the link Nick Tarleton provided), and you may wish to (non-arbitrarily) criticize the actions of others etc. But wishing doesn't make it so. You may find it disturbing that you can't "non-arbitrarily" say that "striving for truth is better than killing babies". This kind of thing prompts most people to shy away from moral skepticism, but if you are concerned with rationality, you should hold yourself to a higher standard than that.
0Nick_Tarleton
OB: "Arbitrary" (Wait, Eliezer's OB posts have been imported to LW? Win!)

The quotation refers to punitive damages in civil cases. What evidence is there that this phenomenon exists with criminal penalties? (I don't deny that it exists, but it is probably suppressed. That is, criminal penalties are more likely to reflect probability of detection than punitive damages).

For instance, there are road signs in northern Virginia warning of a $10,000 fine for littering. The severity of the fine is surely due to the difficulty in catching someone in the act.
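The reasoning here is just the expected-penalty identity: if enforcement targets a given expected cost per offense, the fine must scale inversely with the probability of getting caught. A quick sketch, where the detection probabilities and the $100 target are assumptions for illustration:

```python
# Expected penalty per offense = fine * P(caught). Holding the expected
# penalty fixed, the fine must scale as 1 / P(caught). The detection
# probabilities and the $100 target below are assumed for illustration.
def required_fine(target_expected_penalty, p_caught):
    return target_expected_penalty / p_caught

print(required_fine(100, 0.01))  # ~1% detection -> roughly a $10,000 fine
print(required_fine(100, 0.50))  # 50% detection -> a $200 fine
```

On this account the $10,000 littering fine is consistent with a modest expected penalty combined with a roughly 1% chance of being caught in the act.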

Should we be worried that people will vote stuff up just because it is already popular? There is currently no penalty for voting against the crowd, so wouldn't people (rightly) want to do this?

(Of course, we assume people are voting based on their personal impressions. It's clear that votes based on Bayesian beliefs are not useful here.)

Exactly. It seems unlikely that prestigious researchers will be unable to publish their brilliant but unconventional idea because they can't fully utilize their fame to sway editors. In fact, prestigious researchers have exactly what is needed to ensure their idea will take hold if it has merit: job security. They have plenty of time to nurture and develop their idea until it is accepted.

2Erik
The title of the post is "Does blind review slow down science?", not "Does blind review stop science?". The prestigious researchers may have the time, but there are plenty of members of humanity that don't. Science is slow enough as it is. We would be well advised to consider any factors that may speed up progress.

That's exactly the point: voting is supposed to put comments in order according to quality, so that you can read the worthwhile comments in a reasonable time. My claim is that the current voting system will not do this well at all and that a dual voting system will be better. (That second bit is just a guess). The opinion poll information is just a nice side effect.

3steven0461
OK, so according to you and Benja the point is to have the agree/disagree buttons there mainly as a lightning rod to prevent agreement from affecting quality votes. That's a good point, but I wonder if it's worth it and if there are better ways to accomplish the same thing. I also wonder if there should be a button labeled "malevolent cantaloupe" so the unserious people will click on that instead of voting.

Yep, what I wrote is just based on my best guess. A usability study would be great.

Also, I am going with the crowd and changing to a user name with an underscore.