All of Dylan Richardson's Comments + Replies

Some people here seem to think that motivated reasoning is only something that people who want a particular outcome do, meaning that people concerned about doom and catastrophe can't possibly be susceptible. This is a mistake. Everyone desires vindication. No one wants to be the guy who was so cautious that he failed to be praised for his insight. This drives people toward favoring extreme outcomes, because extreme views are much more attention-grabbing, and a chance to be seen as right feels a lot better than being wrong feels bad (it's easy to avoid fault for false pre...

Seth Herd
Definitely. Excellent point. See my short bit on motivated reasoning, in lieu of the full post I have on the stack that will address its effects on alignment research.

I frequently check my timelines and takes for potential motivated reasoning effects on my own thinking. The result is usually to broaden my estimates and add uncertainty, because it's difficult to identify which direction MR might have been pushing me during all of the mini-decisions that led to forming my beliefs and models. My motivations are many, and which of them happened to be contextually relevant at key decision points is hard to guess.

On the whole, I'd have to guess that MR effects are on average larger for long timelines and low p(doom)s. Both allow us to imagine a sunny near future and to work on our preferred projects, instead of panicking and having to shift to work that can help with alignment if AGI happens soon.

Sorry, this is worth a much more careful discussion; that's just my guess in the absence of pushback.
Daniel Kokotajlo
Not only is that just one possible bias, it's a less common bias than its opposite. Generally speaking, more people are afraid to stick their necks out and say something extreme than are actively biased toward doing so. Generally speaking, being wrong feels worse than being right feels good. There are exceptions; some people are contrarians, for example (and it's plausible I'm one of them), but, talking about people in general, the bias goes in the opposite direction from what you say.

This isn't "cheating", neither is it at all illegal. Essentially it entails nothing more than a conversation about politics. 

Terence Coelho
At first glance, this looked really sketchy to me, and I think with politics people need to be really careful to avoid being misinterpreted. I don't really blame the above comments for misunderstanding how this works. To make it clearer:

* There are 3 people involved in a trade: 1 swing-state voter and 2 non-swing-state voters.
* All 3 would prefer Kamala to Trump but do not want to vote for Kamala for some reason (probably related to Gaza).
* The 3 agree to cast only one collective vote for Kamala, in the state where it matters.

The reason they have to word it in a funny way is to convince themselves that the two in non-swing states would really have voted for Kamala without the trade, and the one in a swing state would really have voted third party without the trade.

Since this comment got linked to, and we are throwing around anecdotal evidence, I'll add mine: the animal-rights vegan club at my uni had at least one member quite keen on supplementing (not in a wacky way, mostly commonsensical), and I didn't hear any pushback from the other members. None of them had ever heard of EA. And my very leftist vegan roommate kept B12 and creatine (I assume they took them). And I assume EA is at an equal, likely higher, epistemic standard.

Sentient is wrong, correct. "Capable of language" would be more accurate, though, with the implication being that they are intelligent. Only humans are capable of language (as opposed to mere communication), and it is thought by some to be either the cause or the consequence of our unique human intelligence.

Do we know that the audience understood the pro side's proposition during the first poll? I noticed that they didn't actually explain what an x-risk is until partway into the debate. And it seems to me that some of the public imagine it as general pessimism about AI, not an actual belief in a chance of extinction within 30+ years.