I haven't read the other comments here and I know this post is >10yrs old, but…
For me, (what I'll now call) effective-altruism-like values are mostly second-order, in the sense that much of my revealed behavior shows that, a lot of the time, I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don't really...
Some other literature OTOH:
On Foretell moving to ARLIS… There's no way you could've known this, but as it happens Foretell is moving from one Open Phil grantee (CSET) to another (UMD ARLIS). TBC I wasn't involved in the decision for Foretell to make that transition, but it seems fine to me, and Foretell is essentially becoming another part of the project I funded at ARLIS.
I don't think I had seen that, and wow, it definitely covers basically all of what I was thinking about trying to say in this post, and a bit more.
I do think there is something useful to say about how reference class combinations work, and about using causal models versus correlational ones for model combination given heterogeneous data - but that will require formulating it more clearly than I have in my head right now. (I'm working on two different projects where I'm getting it straighter in my head, which led to this post, as a quick explanation...)
Nice write-up!
A few thoughts re: Scott Alexander & Rob Wiblin on prediction.
If the U.S. had kept racing in its military capacity after WW2, the U.S. may have been able to use its negotiating leverage to stop the Soviet Union from becoming a nuclear power: halting proliferation and preventing the buildup of world-threatening numbers of high-yield weapons.
BTW, the most thorough published examination I've seen of whether the U.S. could've done this is Quester (2000). I've been digging into the question in more detail and I'm still not sure whether it's true or not (but "may" seems reasonable).
Interesting historical footnote from Louis Francini:
This issue of differing "capacities for happiness" was discussed by the classical utilitarian Francis Edgeworth in his 1881 Mathematical Psychics (pp. 57-58, and especially 130-131). He doesn't go into much detail at all, but this is the earliest discussion of which I am aware. Well, there's also the Bentham-Mill debate about higher and lower pleasures ("It is better to be a human being dissatisfied than a pig satisfied"), but I think that may be a slightly different issue.
Cases where scientific knowledge was in fact lost and then rediscovered provide especially strong evidence about the discovery counterfactuals, e.g. Hero's aeolipile and al-Kindi's development of relative frequency analysis for decoding messages. Probably we underestimate how common such cases are, because the knowledge of the lost discovery is itself lost — e.g. we might easily have simply not rediscovered the Antikythera mechanism.
This scoring rule has some downsides from a usability standpoint. See Greenberg 2018, a whitepaper prepared as background material for a (forthcoming) calibration training app.
Some other people at Open Phil have spent more time thinking about two-envelope effects than I have, and fwiw some of their thinking on the issue is in this post (e.g. see section 1.1.1.1).
Yes, I meant to be describing ranges conditional on each species being moral patients at all. I previously gave my own (very made-up) probabilities for that here. Another worry to consider, though, is that many biological/cognitive and behavioral features of a species are simultaneously (1) evidence about their likelihood of moral patienthood (via consciousness), and (2) evidence about features that might affect their moral weight *given* consciousness/patienthood. So, depending on how you use that evidence, you may end up double-counting it, which is worth watching out for.
I'll skip responding to #2 for now.
For anyone who is curious, I cite much of the literature arguing over criteria for moral patienthood/weight in the footnotes of this section of my original moral patienthood report. My brief comments on why I've focused on consciousness thus far are here.
Probably not suitable for launch, but given that the epistemic seriousness of the users is the most important "feature" for me and some other people I've spoken to, I wonder if some kind of "user badges" thing might be helpful, especially if it influences the weight that upvotes and downvotes from those users have. E.g. one badge could be "has read >60% of the sequences, as 'verified' by one of the 150 people the LW admins trust to verify such a thing about someone," and another could be "verified superforecaster."
Today I encountered a real-life account of the chain story — involving a cow rather than an elephant — around 24:10 into the "Best of BackStory, Vol. 1" episode of the podcast BackStory.
Source. But the non-cached page says "The details of this job cannot be viewed at this time," so maybe the job opening is no longer available.
FWIW, I'm a bit familiar with Dafoe's thinking on the issues, and I think it would be a good use of time for the right person to work with him.
I guess subjective logic is also trying to handle this kind of thing. From Jøsang's book draft:
...Subjective logic is a type of probabilistic logic that allows probability values to be expressed with degrees of uncertainty. The idea of probabilistic logic is to combine the strengths of logic and probability calculus, meaning that it has binary logic’s capacity to express structured argument models, and it has the power of probabilities to express degrees of truth of those arguments. The idea of subjective logic is to extend probabilistic logic by also expressing...
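To make the "probability values with degrees of uncertainty" idea concrete, here's a minimal sketch of a binomial opinion in Jøsang's framework (belief, disbelief, uncertainty, and base rate), with the projected probability b + a·u. This is just my own illustration of the idea, not code from the book draft:

```python
# Minimal sketch of a binomial opinion in subjective logic (my own illustration).
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # b: evidence-based support for the proposition
    disbelief: float    # d: evidence-based support against it
    uncertainty: float  # u: uncommitted mass, with b + d + u = 1
    base_rate: float    # a: prior probability of the proposition absent any evidence

    def projected_probability(self) -> float:
        # Collapse the opinion to an ordinary probability: the uncertain
        # mass is allocated according to the base rate.
        return self.belief + self.base_rate * self.uncertainty

# Same projected probability (0.5), very different epistemic states --
# the distinction a bare probability value can't express.
well_studied = Opinion(belief=0.5, disbelief=0.5, uncertainty=0.0, base_rate=0.5)
no_evidence  = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0, base_rate=0.5)
print(well_studied.projected_probability(), no_evidence.projected_probability())
```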
Are you able to report the median AGI timeline for ~all METR employees? Or are you just saying that the "more than half" is how many responded to the survey question?