All of Scott Alexander's Comments + Replies

You might want to try adapting some of the ones from http://slatestarcodex.com/2018/02/06/predictions-for-2018/ and the lists linked at the bottom.

1jbeshir
Sounds good. I've looked over them and I could definitely use a fair few of those.

Agreed that some people were awful, but I still think this problem applies.

If somebody says "There's an 80% chance of rain today, you idiot, and everyone who thinks otherwise deserves to die", then it's still not clear that a sunny day has proven them wrong. Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn't do much to prove they were wronger than we already believed.

Vaniver150
Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn't do much to prove they were wronger than we already believed.

To be clear, I agree with this. Furthermore, while I don't remember people giving probability distributions, I think it's fair to guess that the critics as a whole (and likely even the irrational critics) put higher probability on the coarse description of what actually happened than Duncan or those of us who tried the experiment did, and that makes an "I told you so!" hollow when it's about having assigned lower probability to something that didn't happen.

4Duncan Sabien (Deactivated)
I agree with this. Perhaps a better expression of the thing (if I had felt like it was the right spot in the piece to spend this many words) would've been: I suspect that coming out of the gate with that many words would've pattern-matched to whining, though, and that my specific parenthetical was still stronger once you take into account social reality. I'm curious if you a) agree or disagree or something-else with the quote above, and b) agree or disagree or something-else with my prediction that the above would've garnered a worse response.

The weatherman who predicts a 20% chance of rain on a sunny day isn't necessarily wrong. Even the weatherman who predicts an 80% chance of rain on a sunny day isn't *necessarily* wrong.

If there's a norm of shaming critics who predict very bad outcomes, of the sort "20% chance this leads to disaster", then after shaming them the first four times their prediction fails to come true, they're not going to mention it the fifth time, and then nobody will be ready for the disaster.

I don't know exactly how to square this with the g...
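To make the weatherman point concrete, here is a minimal sketch (my own illustration, not from the original comments) using the Brier score, one standard way to grade probabilistic forecasts: a single sunny day does penalize the 80%-rain forecast more than the 20%-rain forecast, but it takes many forecasts before the scores say much about who is actually calibrated.

```python
# Illustrative only: grading probabilistic rain forecasts with the Brier score.
# The forecasters and numbers are hypothetical, not from the discussion above.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; a constant 50% forecast always scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# One sunny day (outcome = 0) after two different forecasts:
print(brier_score([0.2], [0]))  # 0.04 -- the "20% chance of rain" forecaster
print(brier_score([0.8], [0]))  # 0.64 -- the "80% chance of rain" forecaster

# Over many forecasts, a well-calibrated 80% forecaster (rain on 8 days out of 10)
# still beats the constant 50% baseline, despite the occasional sunny-day miss:
print(brier_score([0.8] * 10, [1] * 8 + [0] * 2))  # 0.16
```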

Vaniver120
If there's a norm of shaming critics who predict very bad outcomes

I think it is hugely important to point out that this is not the norm Duncan is operating under or proposing. I understand Duncan as saying "hey, remember those people who were nasty and uncharitable and disgusted by me and my plans? Their predictions failed to come true."

Like, quoting from you during the original discussion of the charter:

I would never participate in the linked concept and I think it will probably fail, maybe disastrously.
But I also have a (only partially en...

The problem is absolutely not that people were predicting very bad outcomes. People on Tumblr were doing things like (I'm working from memory here) openly speculating about how incredibly evil and sick and twisted Duncan must be to even want to do anything like this, up to something like (again, working from memory here) talking about conspiring to take Duncan down somehow to prevent him from starting Dragon Army.

Re: the "when friends and colleagues first come across this conclusion..." quote:

A world where everybody's true desire is to rest in bed as much as possible, but where they grudgingly take the actions needed to stay alive and maintain homeostasis, seems both very imaginable, and also very different from what we observe.

1jamii
Agreed. 'Rest in bed as much as possible but grudgingly take the actions needed to stay alive' sounds a lot like depression, but there exist non-depressed people who need explaining.

I wonder if the conversion from mathematics to language is causing problems somewhere. The prose description you are working with is 'take actions that minimize prediction error', but the actual model is 'take actions that minimize a complicated construct called free energy'. Sitting in a dark room certainly works for the former, but I don't know how to calculate it for the latter.

In the paper I linked, the free-energy-minimizing trolley car does not sit in the valley and do nothing to minimize prediction error. It moves to keep itself on the dynamic escape trajectory that it was trained with and so predicts itself achieving. So if we understood why that happens we might unravel the confusion.
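A toy sketch of the gap jamii points at (my own illustration with made-up numbers, not from the linked paper): an agent that only minimizes prediction error prefers the "dark room", because a constant observation is perfectly predictable, while an agent whose generative model already predicts the escape trajectory can only reduce error by moving.

```python
# Toy contrast (hypothetical, not from the paper): "minimize prediction error"
# versus acting to fulfill what the generative model already predicts.
import random

def observe(action):
    """'sit' yields a constant 0 (the dark room); 'move' yields roughly 1 (progress)."""
    return 0.0 if action == "sit" else 1.0 + random.gauss(0, 0.1)

def mean_prediction_error(predicted, action, n=1000):
    """Average squared error between the agent's prediction and what it observes."""
    return sum((observe(action) - predicted) ** 2 for _ in range(n)) / n

# Agent A is free to predict whatever fits each action best: sitting in the
# dark room gives exactly zero error, so pure error minimization favors sitting.
print(mean_prediction_error(0.0, "sit"))   # ~0.00
print(mean_prediction_error(1.0, "move"))  # ~0.01 (just observation noise)

# Agent B's model was trained to expect the escape trajectory, so it predicts ~1
# regardless of action; now only moving keeps prediction error low.
print(mean_prediction_error(1.0, "sit"))   # 1.00
print(mean_prediction_error(1.0, "move"))  # ~0.01
```

This doesn't answer jamii's question about how the full free-energy quantity is computed, but it shows why the two prose summaries can recommend different actions.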

I think it says something good about our community that whoever implemented this feature assumed people would be more likely to want to write mathematics than to discuss amounts of money.

6Said Achmiz
That is a nice thought, but it seems more likely that they just didn’t think of it… (also, I don’t think that particular bit was custom-written for LW, though the dev team can correct me on that if I’m mistaken)

I can't click your link, but I disagree. MIRI got most of its money from Vitalik, who I think was into crypto first and then found rationality/LW. We don't get any credit for that.

Also, in 2014 MIRI got a donation of 500,000 dollars' (why can't I make the dollar sign on this site?) worth of Ripple. If they had kept it as Ripple, it would be worth 50 million now. Instead they sold it for 500,000 dollars (I'm not blaming them, this made sense at the time).

So although MIRI and CFAR lucked out into getting some money from crypto, I don't think it was primarily because of their (or our) great decisions. And if people had made great decisions they could have gotten much more.

ryjm190

Taking my place in history - one of my first tasks as an intern at MIRI was to write some ruby scripts that dealt with some aspects of that donation.

Not only did that experience land me my first programming job, but I'm just realizing now that it was also the impetus that led me to grab more bitcoin (I had sold mine at the first peak in 2013) AND look into Stellar. Probably the most lucrative internship ever.

(Shoutout to Malo/Alex if you guys are still lurking LW)

ESRogs150
MIRI got most of its money from Vitalik

While not technically part of the winter fundraiser, don't forget that MIRI also got a million dollar ETH donation in the spring. For the year, it's more than half crypto, even after accounting for the 1.25M from Open Phil.

cata130

Maybe this is nitpicking, but per their post MIRI got a plurality of its cryptocurrency from Vitalik but not a majority. If the website is accurate, then of the 66% of funds raised that came in cryptocurrency ($1.656m), Vitalik contributed $763k, with the other $893k coming from other donors.

1tristanm
(Re-writing this comment from the original to make my point a little more clear.) I think it is probably quite difficult to map someone's decisions onto a continuum from really bad to really good if you can't simulate the outcomes of many different possible actions. There's reason to suspect that the "optimal" outcome in any situation looks vastly better than the outcomes of even very good but slightly sub-optimal decisions, and vice versa for the least optimal outcome.

In this case we observed a few people who took massive risks (by devoting their time and energy to understanding or developing a particular technology which could very well have turned out to be a boondoggle) receive massive rewards from its success, although it could have turned out differently, based on what everyone knew at the time. I think the arguments for cryptocurrency becoming successful that existed in the past were very compelling, but they weren't exactly airtight logical proofs (and still aren't even now). Not winning hugely because a legitimately large risk wasn't taken isn't exactly "losing" (and while buying bitcoins when they were cheap wasn't a large risk, investing the time and energy to become knowledgeable enough about crypto to know it was worth taking the chance may have been; a lot of the biggest winners were people who were close to the development of cryptocurrencies).

But even so, a few of these winners are close to the LW community and have invested in its development or some of its projects. Doesn't that count for something? Can they be considered part of the community too? I see no reason to keep the definition so strict.
gjm100

You can't write a dollar sign because it's interpreted as "start writing mathematics". But if you type a backslash first it gets escaped and you get the dollar sign you hoped for: $.
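For instance (an illustrative input/output pair, assuming the comment box treats `$` as a MathJax delimiter the way gjm describes):

```latex
% What you type in the comment box       % What readers see
MIRI sold it for \$500,000.              % MIRI sold it for $500,000.
MIRI sold it for $500,000.               % the unescaped $ is read as "start writing mathematics"
```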