Statements of (purported) empirical fact are often strong Bayesian evidence of the speaker's morality and politics (i.e., what values one holds, what political coalition one supports), and this is a root cause of most group-level bad epistemics. For example, someone who thinks eugenics is immoral is less likely (than someone who doesn't think that) to find it instrumentally useful to say something like (paraphrasing) "eugenics may be immoral, but is likely to work in the sense that selective breeding works for animals". So when someone does say that, it is evidence that they don't think eugenics is immoral, and therefore don't belong to a political coalition that holds "eugenics is immoral" as part of its ideology.
I think for many people this has been trained into intuition/emotion (you automatically think that someone is bad/evil or hate them if they express a statement of fact that your political enemies are much more likely to make than your political allies) or even reflexive policy (automatically attack anyone who makes such statements).
This seems pretty obvious to me, but some people do not seem to be aware of it (e.g., Richard Dawkins seemed surprised by people's reactions to his making a statement along these lines).
...what needs explaining is how good epistemic norms like “free inquiry” and “civil discourse” ever became a thing.
An idea for explaining this: some group happened to adopt such norms due to a historical accident, and there happened to be enough low hanging epistemic fruit that could be picked up by a group operating under such norms that the group became successful enough for the norms to spread by conquest and emulation. This also suggests that one reason for the decay of these norms is that we are running out of low hanging fruit.
Anyone else look at the coronavirus outbreak and think this is how a future existential accident will play out, with policy always one step behind what's necessary to control it, because governments don't want to take the risk of "overreacting" and triggering large or very large political and economic costs "for no good reason"? So they wait until absolutely clear evidence emerges, by which time it will be too late.
Why will governments be more afraid of overreacting than underreacting? (Parallel: We don't seem to see governments doing anything in this outbreak that could be interpreted as overreacting.) Well, every alarm up to that point will have been a false alarm. (Parallel: SARS, MERS, Ebola, every flu pandemic in recent history.)
See these news stories about the WHO being blamed for being too aggressive about swine flu, which probably caused it to learn a wrong lesson:
Global equity markets may have underestimated the economic effects of a potential COVID-19 pandemic because the only historical parallel to it is the 1918 flu pandemic (which was likely worse than COVID-19 due to a higher fatality rate) and stock markets didn't drop that much. But maybe traders haven't taken into account (and I just realized this) that there was war-time censorship in effect which strongly downplayed the pandemic and kept workers going to factories, which is a big disanalogy between the two cases, so markets could drop a lot more this time around. The upshot is that maybe it's not too late to short the markets.
I bought some S&P 500 put options (SPXW Apr 17 2020 3000 Put (PM) to be specific) a couple of weeks ago. They were 40% underwater at some point because the market kept going up (which confused me a lot), but are up 125% as of today. (Note that it's very easy to lose 100% of your investment when trading options. In my case, all it would take is holding the options until April 17 without the S&P 500 ever dropping to 3000.) I had to open a brokerage account (most have no minimum, I think), then apply to trade options, then wait a day to be approved. You can also sell stocks short. You can also bet against foreign markets and specific stocks this way.
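To make that parenthetical concrete, here is a minimal sketch of a put's profit and loss when held to expiration (the strike, spot, and premium numbers below are hypothetical, and fees are ignored):

```python
def put_pl_at_expiry(strike, spot_at_expiry, premium_paid):
    """Per-unit P&L of a put held to expiration:
    payoff is max(strike - spot, 0) minus the premium paid."""
    return max(strike - spot_at_expiry, 0.0) - premium_paid

# If the index never falls to the 3000 strike, the put expires
# worthless and the entire premium is lost:
print(put_pl_at_expiry(3000, 3200, 50.0))  # -50.0, i.e. a 100% loss
# If it falls well below the strike, the put pays off:
print(put_pl_at_expiry(3000, 2700, 50.0))  # 250.0
```

This is why holding a put to expiration with the underlying above the strike loses everything, no matter how close the index came to the strike along the way.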
The above is for information purposes only. It is not intended to be investment advice. Seek a duly licensed professional for investment advice.
The options I bought are up 700% since I bought them, implying that as of 2/10/2020 the market thought there was less than a 1/8 chance things would be as bad as they are today. At least for me this puts a final nail in the coffin of the EMH.
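The 1/8 figure follows from a crude expected-value argument: if the earlier option price roughly reflected probability-weighted payoffs, then an 8x gain bounds the probability the market assigned to a scenario this bad. A sketch of that reasoning (a back-of-the-envelope bound, not a real pricing model):

```python
def implied_prob_upper_bound(price_then, price_now):
    """If price_then >= p * price_now (ignoring every other scenario's
    contribution to the earlier price), then p <= price_then / price_now."""
    return price_then / price_now

# "Up 700%" means the option is now worth 8x what it cost:
print(implied_prob_upper_bound(1.0, 8.0))  # 0.125, i.e. at most a 1/8 chance
```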
Added on Mar 24: Just in case this thread goes viral at some point, to prevent a potential backlash against me or LW (due to being perceived as caring more about making money than saving lives), let me note that on Feb 8 I thought of and collected a number of ideas for preventing or mitigating the pandemic that I foresaw and subsequently sent them to several people working in pandemic preparedness, and followed up with several other ideas as I came across them.
From someone replying to you on Twitter:
Someone made a profitable trade ergo markets aren’t efficient?
This is why I said "at least for me". You'd be right to discount the evidence and he would be right to discount the evidence even more, because of more room for selection bias.
ETA: Hmm, intuitively this makes sense, but I'm not sure how it squares with Aumann agreement. Maybe someone can try to work out the actual math?
An update on this trade in case anyone is interested. The position is now up 1500%. I also have another position which is up 2300% (it's a deeper out-of-the-money put, which I realized would be an even better idea after seeing a Facebook post by Danielle Fong). For proper calibration I should mention that a significant part of these returns is due to chance rather than skill:
Another reason for attributing part of the gains (from betting on the coronavirus market crash) to luck, from Rob Henderson's newsletter which BTW I highly recommend:
The geneticist Razib Khan has said that the reason the U.S. took so long to respond to the virus is that Americans do not consider China to be a real place. For people in the U.S., “Wuhan is a different planet, mentally.” From my view, it didn’t seem “real” to Americans (or Brits) until Italy happened.
Not only have I lived in China, my father was born in Wuhan and I've visited there multiple times.
I did sell some of the puts, but not enough of them and not near enough to the bottom to not leave regrets. I definitely underestimated how fast and strong the monetary and fiscal responses were, and paid too much attention to epidemiological discussions relative to developments on those policy fronts. (The general lesson here seems to be that governments can learn to react fast on something they have direct experience with, e.g., Asian countries with SARS, the US with the 2008 financial crisis.) I sold 1/3 of remaining puts this morning at a big loss (relative to paper profits at the market bottom) and am holding the rest since it seems like the market has priced in the policy response but is being too optimistic about the epidemiology. The main reason I sold this morning is that the Fed might just "print" as much money as needed to keep the market at its current level, no matter how bad the real economy gets.
Epistemic status: I am not a financial advisor. Please double-check anything I say before taking me seriously. But I do have a little experience trading options. I am also not telling you what to do, just suggesting some (heh) options to consider.
Your "system 1" does not know how to trade (unless you are very experienced, and maybe not even then). Traders who know what they are doing make contingency plans in advance to avoid dangerous irrational/emotional trading. They have a trading system with rules to get them in and out. Whatever you do, don't decide it on a whim. But doing nothing is also a choice.
Options are derivatives, which makes their pricing more complex than the underlying stock. Options have intrinsic value, which is what they're worth if exercised immediately, and the rest is extrinsic value, which is their perceived potential to have more intrinsic value before they expire. Options with no intrinsic value are called out of the money. Extrinsic value is affected by time remaining and the implied volatility (IV), or the market-estimated future variance of the underlying. When the market has a big selloff like this, IV increases, which inflates the extrinsic value of options.
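The decomposition just described can be sketched in code (simplified; real option prices also reflect interest rates and dividends, and the numbers here are hypothetical):

```python
def put_intrinsic_value(strike, spot):
    """What a put is worth if exercised immediately (zero if out of the money)."""
    return max(strike - spot, 0.0)

def put_extrinsic_value(quoted_price, strike, spot):
    """The rest of the quoted price: time value plus implied-volatility premium."""
    return quoted_price - put_intrinsic_value(strike, spot)

# A 3000-strike put with the index at 3100 is out of the money, so a
# quoted price of 60 is all extrinsic value:
print(put_intrinsic_value(3000, 3100))        # 0.0
print(put_extrinsic_value(60.0, 3000, 3100))  # 60.0
# With the index at 2900, the same put has 100 of intrinsic value:
print(put_intrinsic_value(3000, 2900))        # 100.0
```

A big selloff raises IV, which shows up as a larger extrinsic component on top of whatever intrinsic value the move itself created.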
...the absolutely important part that people seem to miss with a basic 101 understanding of EMH is "hard" in no way means "impossible"
People do hard things all the time! It takes work and time and IQ and learning from experience but they do it.
I rolled a lot of the puts into later expirations, which have become almost worthless. I did cash out or convert into long positions some of them, and made about 10x my initial bet as a result. (In other words I lost about 80% of my paper profits.) It seems like I have a tendency to get out of my bets too late (same thing happened with Bitcoin), which I'll have to keep in mind in the future. BTW I wrote about some of my other investments/bets recently at https://ea.greaterwrong.com/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use/comment/RBXqgYshRhCJsCvWG, in case you're interested.
Offering 100-300h of technical work on an AI Safety project
I am a deep learning engineer (2y exp); I currently develop vision models to be used on satellite images (I also do some software engineering around that) (LinkedIn profile https://www.linkedin.com/in/maxime-riche-73696182/). In my spare time, I am organizing an EA local group in Toulouse (France), learning RL, doing a research project on RL for computer vision (only expecting indirect utility from this), and developing an EAA (Effective Animal Advocacy) tool. I have been in the French EA community for 4 years. In 2020, I chose to work part time so I can dedicate 2 to 3 days of work per week to EA-aligned projects. Thus, for the next 8 months, I have ~10h/week that I want to dedicate to assisting an AI safety project. For myself, I am not looking for funds, nor to publish a paper or blog post myself. To me the ideal project would be:
I think I figured out a stunningly simple way to modulate interstellar radio signals so that they contain 3-D spatial information on the point of origin of an arbitrarily short non-repeating signal. I applied this method to well-known one-off galactic radio transients and got sane results. I would love to write this up for ARXIV.
Anybody got a background in astronomy that can help make sure I write this up for ARXIV in a way that uses the proper terminology and software?
A descriptive model of moral change, virtue signaling, and cause allocation that I thought of in part as a response to Paul Christiano's Moral public goods. (It was previously posted deep in a subthread and I'm reposting it here to get more attention and feedback before possibly writing it up as a top-level post.)
After the hospitals fill up, the COVID-19 death rate is going to get a lot higher. How much higher? What's the fatality rate from untreated COVID-19?
This article may be an answer: it lumps together ICU, mechanical ventilation, and death into a "primary composite end point". That seems like an OK proxy for "death without treatment", right?
If so, Table 1 suggests fatality rate of 6% overall, 0% ages 0-14, 2% ages 15-49, 7% ages 50-64, 21% ages 65+. There's more in the table about pre-existing conditions and so on.
(ETA one more: 2.5% for people of all ages with no preexisting condition.)
Thoughts? Any other data?
(ETA: This should be viewed as a lower bound on fatality rate, see comments.)
Is there something you think we can all do on LessWrong to help with the coronavirus?
We have a justified practical advice thread and some solid posts about quarantine preparations, not acting out of social fears, and a draft model of risks from using delivery services.
We also have a few other questions:
Finally, here's the advice that my house and some friends put together.
I'm interested if people have ideas for better ways we could organise info on LessWrong or something.
Steven Pinker is running a general-education course on rationality at Harvard University. There are some interesting people booked as guest lecturers. Details on Pinker's website, including links that will get you to video of all the lectures (there have been three so far).
I've watched only the first, which suggests unsurprisingly that a lot of the material will be familiar to LW regulars.
The syllabus also includes (either as required or optional reading) https://www.lesswrong.com/posts/ujTE9FLWveYz9WTxZ/what-cost-for-irrationality , https://www.lesswrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-s-theorem ,
https://www.lesswrong.com/posts/QxZs5Za4qXBegXCgu/introduction-to-game-theory-sequence-guide , https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/ , https://80000hours.org/key-ideas/ , and https://arbital.com/p/bayes_rule/?l=1zq ; its "other resources" sections also include the following mentions:
LessWrong.com is a forum for the “Rationality community,” an informal network of bloggers who seek to call attention to biases and fallacies and apply reason more rigorously (sometimes to what may seem like extreme lengths).
Slate Star Codex https://slatestarcodex.com/ is an anagram of “Scott Alexander,” the author of the tutorial recommended above and a prominent member of the “rationality community.” This deep and witty blog covers diverse topics in social science, medicine, events, and everyday life.
80,000 Hours, https://80000hours.org/, an allusion to the number of hours in your career, is a...
An observation on natural language being illogical: I've noticed that at least some native Chinese speakers use 不一定 (literally "not certain") to mean "I disagree", including when I say "I think there's 50% chance that X." At first I was really annoyed with the person doing that ("I never said I was certain!") but then I noticed another person doing it so now I think it's just a standard figure of speech at this point, and I'm just generally annoyed at ... cultural evolution, I guess.
Google's AI folks have made a new chatbot using a transformer-based architecture (but a network substantially bigger than full-size GPT2). Blog post; paper on arXiv. They claim it does much better than the state of the art (though I think everyone would agree that the state of the art is rather unimpressive) according to a human-evaluated metric they made up called "sensibleness and specificity average", which means pretty much what you think it does, and apparently correlates with perplexity in the right sort of way.
I'd be curious how people relate to this Open Thread compared to their personal ShortForm posts. I'm trying to get more into LessWrong posting and don't really understand the differences between these.
This has probably already been discussed, and if so please link me to that discussion if it's easy.
I noticed that even though I may not be as optimized in the matter of investments as others (hi, Wei Dai!), the basic rationality principles still help a lot. This morning, when I went to invest my usual chunk of my paycheck, I reflected on my actions and realized that the following principles were helping me (and had helped me in the past) buy assets that were likely undervalued:
Further alarming evidence of humanity's inability to coordinate (especially in an emergency), and also relevant to recent discussions around terminology: ‘A bit chaotic.’ Christening of new coronavirus and its disease name create confusion
Hi! I have been reading lesswrong for some years but have never posted, and I'm looking for advice about the best path towards moving permanently to the US to work as a software engineer.
I'm 24, single, currently living in Brazil and making 13k a year as a full stack developer in a tiny company. This probably sounds miserable to a US citizen but it's actually a decent salary here. However, I feel completely disconnected from the people around me; the rationalist community is almost nonexistent in Brazil, specially in a small town like the on...
ETA: It's out of stock again just a couple of hours later, but you can sign up to be notified when it's back in stock.
Possible source of medicine for COVID-19. Someone in the FB group Viral Exploration suggested inhousepharmacy.vu as an online pharmacy to buy medicine without a prescription. I don't know them personally but they seem trustworthy enough. (ETA: Another seemingly trustworthy person has also vouched for it.) Hydroxychloroquine had been out of stock until a few minutes ago. I bought some myself in case the medical system gets overwhelmed. Relevan
...As of right now, I think that if business-as-usual continues in AI/ML, most unskilled labor in the transportation/warehousing of goods will be automatable by 2040.
Scott Anderson, Amazon's director of Robotics, puts it at over 10 years: https://www.theverge.com/2019/5/1/18526092/amazon-warehouse-robotics-automation-ai-10-years-away.
I don’t think it requires any fundamental new insights to happen by 2040, only engineering effort and currently available techniques.
I believe the economic incentives will align with this automation once it becomes achievable.
Tran
...
If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)
And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.
The Open Thread sequence is here.