Alternatively, if the Covid-19 deaths in NY state go above 3,333 in the first week of April, that seems like it would also falsify the hypothesis. (NY state has fewer than one third the population of the UK.) Unfortunately I think there is a >80% chance this happens.
On April 4, the death toll in NY state surpassed 3,333. As of April 10, there are 7,844 deaths.
The "Rationalist prepper thread" was actually posted on January 28, not January 20.
This is indeed what I meant. Also I was thinking about once-the-dust-settles IFR, not "crude IFR".
If the IFR is indeed 0.003% (the upper end of your range), then even in the worst-case scenario where 100% of the UK population eventually gets infected, only 0.003% × 66.4 million ≈ 2,000 people will die in total.
Would you consider the theory falsified if the death toll in the UK surpasses 2000?
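For anyone who wants to check the arithmetic, here is a minimal sketch of the worst-case calculation, using the figures quoted above (an assumed IFR of 0.003% and a UK population of about 66.4 million):

```python
# Sanity check of the worst-case UK death estimate discussed above.
# Assumed inputs: IFR = 0.003% (upper end of the quoted range),
# UK population ~66.4 million, and 100% of the population infected.
ifr = 0.003 / 100          # 0.003% expressed as a fraction
uk_population = 66.4e6     # ~66.4 million people

expected_deaths = ifr * uk_population
print(round(expected_deaths))  # prints 1992, i.e. roughly 2,000
```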
I'm confused why you assume that 36-68% of the population in the UK is infected. I thought, based on comments here, that those numbers were the output of a model that made highly optimistic assumptions about IFR, not an attempt at estimating the actual proportion of infections.
Do you think this is a realistic range for the proportion already infected in the UK?
What is your personal point estimate or credible interval for IFR?
Epidemiologist Behind Highly-Cited Coronavirus Model Admits He Was Wrong, Drastically Revises Model (archive)
Epidemiologist Neil Ferguson, who created the highly-cited Imperial College London coronavirus model, which has been cited by organizations like The New York Times and has been instrumental in governmental policy decision-making, offered a massive revision to his model on Wednesday....
Ferguson’s model projected 2.2 million dead people in the United States and 500,000 in the U.K. from COVID-19 if no action were taken to slow the virus and blunt...
I'd greatly appreciate it if you could respond here:
Does anyone have thoughts on the recent Oxford study that claims that only a very small minority of infections lead to hospitalization or death, and that >50% of the UK population is already infected?
https://www.dropbox.com/s/oxmu2rwsnhi9j9c/Draft-COVID-19-Model%20%2813%29.pdf
Questions about buying chloroquine:
1. Is it better to buy hydroxychloroquine or regular chloroquine? The studies I've found suggest hydroxychloroquine is safer and more potent, but it is a bit more expensive.
2. How many days worth of the drug is it reasonable to buy per person?
3. How much should someone take per day and how should the dosage be timed?
4. Can someone confirm that the products you can find on reliablerxpharmacy.com when searching for "Lariago" (500 mg chloroquine as phos) and "OXCQ" (200 mg Hydroxychloroquine Sulfate) are the right things to buy? If not, is there any other reputable or semi-reputable source that sells the right product?
Maybe birth rates will increase if there are massive quarantines, for the same reason birth rates are said to increase during natural disasters (???). Very uncertain. Just throwing this idea out there, since I've seen little discussion of it.
Is the 5-10% global mortality prediction conditional on COVID-19 infecting >10% of the world, or unconditional?
What do you think of the prospects for antivirals like remdesivir to be tested and mass-produced? How much could they lower CFR?
Why do you think other predictions, such as those given by Metaculus 1, 2, 3 are much less pessimistic?
Do you think shorting the market is still a good idea?
It confuses me that I seem to be the first person to talk much about this on either LW or EA Forum, given that there must be people who have been exposed to the current political environment earlier or to a greater extent than me.
This isn't an answer to your historical question, but I would like to point out that an EA recently wrote up his thoughts on speech policing here on the EA Forum, and I recall some previous relevant discussions as well (example).
Do any AI safety researchers have little things they would like to get done, but don't have the time for?
I'm willing to help out for no pay.
I have a background in computer science and mathematics, and I have basic familiarity with AI alignment concepts. I can write code to help with ML experiments, and can help you summarize research or do literature reviews.
Email me at buck@intelligence.org with some more info about you and I might be able to give you some ideas (and we can maybe talk about things you could do for ai alignment more generally)
If you're interested in this idea, you may want to join the "Reducing pain in farm animals" Facebook group. (It's currently very small.)
I thought you were a negative utilitarian, in which case disaster recovery seems plausibly net-negative. Am I wrong about your values?
Could you please try to keep discussion on topic and avoid making everything about politics? Your comment does not contribute to the discussion in any way.
Here's the latest working link (all three above are dead).
Also, here's an archive in case that one ever breaks!
I believe I already told you that I don't consider "spreading wild animal suffering" to be absurd; it's a plausible scenario. What may be intuitively absurd is the claim that "destroying nature is a good thing" -- which is not necessarily the same as the claim that "spreading wild animal suffering to new realms is bad, or ought to be minimized". (And there are possible interventions to reduce non-human suffering conditional on spreading non-human life. E.g. "value spreading" is often discussed in the EA community.)
Anyway, I'm done with this conversation for now as I believe other activities have higher EV.
Yes, I think it does because it's a plausible scenario and most plausible (IMO) ethical views say that causing non-human suffering is bad. Further exploration of the probability of such scenarios could influence my EA cause priorities, donation targets, and/or general worldview of the future.
seems like you'll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...
Those have very low prior probabilities and low decision-relevance to me.
I don't see much in the way of empirical claims here (these would require a hard definition of "suffering" and falsifiability to start with), so I guess I'm talking about counterintuitive normative claims.
Fair point. This is one problem I have had with moral realist utilitarianism. Although I think it may still be the case that sentience and suffering are objective, just not (currently) measurable. Regardless, I don't think the claim of net-suffering in nature is all that absurd.
...The claim is a bit different: that we should not spread (non-hu
Are you referring to empirical or normative claims? I don't consider the idea that wild animals experience net suffering absurd, although the idea that habitat destruction is morally beneficial is counterintuitive to most people. I think the idea that we should reduce the chance of spreading extreme involuntary suffering, including wild-animal suffering, throughout the universe is much less counterintuitive, and is consistent with a wide range of moral views.
Since I give significant (but not 100%) weight to "the overwhelming importance of the far futu...
Someone once proposed a possible s-risk:
If the suffering of hypothetical entities is morally relevant, then Brian Tomasik’s electron thought experiment was a crime of unimaginable proportions. In fact, it may well be that Tomasiks spontaneously forming in empty space outweigh every “conventional” source of suffering in the Universe. I call this the Boltzmann Brian problem.
No, it doesn't necessarily imply that. Suppose wild animals have net-positive aggregate welfare, but a subset of these lives contain extreme involuntary suffering. Spreading this throughout the universe would still be considered an s-risk according to FRI's definition: "Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event leading to a future containing 10^35 happy individuals and 10^25 unhappy ones, would constitute an
FRI has focused on a few s-risks that you didn't mention (perhaps because they are not "colossal" enough):
Spread of wild animals (Related to your #2, "Normal Level") - "Humans may colonize other planets, spreading suffering-filled animal life via terraforming. Some humans may use their resources to seed life throughout the galaxy, which some sadly consider a moral imperative."
A possible compromise between the pro-panspermia and suffering-focused groups would be directed panspermia based on gradients of bliss (if Pearce's aboli...
Some people in the EA community have already written a bit about this.
I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:
Effective Altruism, and building a better QALY
The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subje...
What would count as "LessWrong-esque"?
And the concept is much older than that. The 2011 Felicifia post "A few dystopic future scenarios" by Brian Tomasik outlined many of the same considerations that FRI works on today (suffering simulations, etc.), and of course Brian has been blogging about risks of astronomical suffering since then. FRI itself was founded in 2013.
Oh, in those cases, the considerations I mentioned don't apply. But I still thought they were worth mentioning.
In Star Trek, the Federation has a "Prime Directive" against interfering with the development of alien civilizations.
The flip side of this idea is "cosmic rescue missions" (term coined by David Pearce), which refers to the hypothetical scenario in which human civilization helps to reduce the suffering of sentient extraterrestrials (in the original context, it referred to the use of technology to abolish suffering). Of course, this is more relevant for simple animal-like aliens and less so for advanced civilizations, which would presumably have already either implemented a similar technology or decided to reject such technology. Brian Tomasik argues that cosmic r...
Want to improve the wiki page on s-risk? I started it a few months ago but it could use some work.
Thank you very much!
I don't know specifically. Where would be the best place to start?
What are good introductory books on chemistry and biology that do not require any background knowledge? I'm ashamed to say it, but I don't really even have a high-school level knowledge of either subject, and what little I knew is now forgotten. My background in basic (classical) physics is much better, but I have forgotten some of that too.
Has anyone here had success with the method of loci (memory palace)? I've seen it mentioned a few times on LW but I'm not sure where to start, or whether it's worth investing time into.
You need at least 10 karma points to vote (you currently have 2 points, according to your profile). Once you have 10 points you should be able to see the voting buttons. Incidentally, after a troll downvoted me from 12 to 4, I lost the ability to vote, and now I can no longer see the buttons.
It might be that downvote troll everyone keeps talking about. Eugine?
yep,
The UK death toll currently stands at 10,612 according to:
https://www.worldometers.info/coronavirus/country/uk/