Comments

On reflection, I endorse the conclusion and arguments in this post. I also like that it's short and direct. Stylistically, it argues for a behavior change among LessWrong readers who sometimes make surveys, rather than being targeted at general LessWrong readers. In particular, the post doesn't spend much time or space building interest about surveys or taking a circumspect view of them. For this reason, I might suggest adding something to the top of the original post like "Target audience: LessWrong readers who often or occasionally make formal or informal surveys about the future of tech; Epistemic status: action-oriented; recommends behavior changes." It might be nice to have a longer version of the post that takes a more circumspect view of surveys and coordination surveys, that is more optimized for interestingness to general LessWrong readers, and that is less focused on recommending a change of behavior to a specific subset of readers. I wouldn't want this shorter, more direct version to be fully replaced by the longer, more broadly interesting one, though, because I'm still glad to have a short and sweet statement somewhere that directly and publicly explains the recommended behavior change.

I've been trying to get MIRI to stop calling this blackmail (extortion for information) and start calling it extortion (because it fits the definition of extortion). Can we use this opportunity to just make the switch?

I support this, whole-heartedly :) CFAR has already created a great deal of value without focusing specifically on AI x-risk, and I think it's high time to start trading the breadth of perspective CFAR has gained from being fairly generalist for some more direct impact on saving the world.

"Brier scoring" is not a very natural scoring rule (log scoring is better; Jonah and Eliezer already covered the main reasons, and it's what I used when designing the Credence Game for similar reasons). It also sets off a negative reaction in me when I see someone naming their world-changing strategy after it. It makes me think the people naming their strategy don't have enough mathematician friends to advise them otherwise... which, as evidenced by these comments, is not the case for CFAR ;) Possible re-naming options that contrast well with "signal boosting":

  • Score boosting
  • Signal filtering
  • Signal vetting
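For concreteness, here is a minimal sketch of the two scoring rules being compared, for a binary forecast (the function names are mine, not from any of the posts under discussion):

```python
import math

def brier_score(p, outcome):
    # Quadratic (Brier) score: squared error between the stated
    # probability and the 0/1 outcome. Bounded between 0 and 1.
    return (p - outcome) ** 2

def log_score(p, outcome):
    # Logarithmic score: negative log-likelihood of the outcome.
    # Unlike the Brier score, it penalizes confident wrong forecasts
    # without bound, which is one reason it is often preferred for
    # eliciting calibrated credences.
    return -math.log(p if outcome == 1 else 1.0 - p)

# A confident wrong forecast (p = 0.99, outcome = 0):
# the Brier score stays bounded near 1, while the log score blows up.
print(brier_score(0.99, 0))  # 0.9801
print(log_score(0.99, 0))    # ~4.605
```

Both are proper scoring rules (honest reporting maximizes expected score), but the unbounded penalty of the log score is what makes it the more natural fit for credence training.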

This is a cryonics-fails story, not a cryonics-works-and-is-bad story.

Seems not much worse than actual-death, given that in this scenario you could still choose to actually-die if you didn't like your post-cryonics life.

Seems not much worse than actual-death, given that in this scenario you (or the person who replaces you) could still choose to actually-die if you didn't like your post-cryonics life.

This is an example where cryonics fails, and so not the kind of example I'm looking for in this thread. Sorry if that wasn't clear from the OP! I'm leaving this comment to hopefully prevent more such examples from distracting potential posters.

Hmm, this seems like it's not a cryonics-works-for-you scenario, and I did mean to exclude this type of example, though maybe not super clearly:

OP: There's a separate question of whether the outcome is positive enough to be worth the money, which I'd rather discuss in a different thread.