I feel like Thomas was trying to contribute to this conversation by making an intellectually substantive, on-topic remark, and then you kind of trampled over that with vacuous, content-free tone-policing.
I could be described as supporting the book in the sense that I preordered it, and I bought two extra copies to give as gifts. I'm planning to give one of them tomorrow to an LLM-obsessed mathematics professor at San Francisco State University. But the reason I'm giving him the book is that I want him to read it and think carefully about the arguments on the merits, because I think that mitigating the risk of extinction from AI should be a global priority. It's about the issue, not about supporting MIRI or any particular book.
Thank you for clarifying.
I think this was fairly obvious.
No, it was not obvious!
You replied to a comment that said, verbatim, "what we should indeed sacrifice is our commitment to being anal-retentive about practices that we think associate with getting the precise truth, over and beyond saying true stuff and contradicting false stuff", with "This paragraph feels righter-to-me".
That response does prompt the reader to wonder whether you believe the quoted statement by Malcolm McLeod, which was a prominent thesis sentence of the comment that you were endorsing as feeling righter-to-you! I understand that "This feels righter-to-me" does not mean the same thing as "This is right." That's why I asked you to clarify!
In your clarification, you have now disavowed the quoted statement with your own statement that "We absolutely should have more practices that drive at the precise truth than saying true stuff and contradicting false stuff."
I emphatically agree with your statement for the reasons I explained at length in such posts as "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think" and "Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists", but I don't think the matter is "fairly obvious." If it were, I wouldn't have had to write thousands of words about it.
One is "We must never abandon this relentless commitment to precise truth. All we say, whether to each other or to the outside world, must be thoroughly vetted for its precise truthfulness." To which my reply is: how's that been working out for us so far?
[...]
We can win without sacrificing style and integrity.
But you just did propose sacrificing our integrity: specifically, the integrity of our relentless commitment to precise truth. It was two paragraphs ago. The text is right there. We can see it. Do you expect us not to notice?
To be clear, in this comment, I'm not even arguing that you're wrong. Given the situation, maybe sacrificing the integrity of our relentless commitment to precise truth is exactly what's needed!
But you can't seriously expect people not to notice, right? You are including the costs of people noticing as part of your consequentialist decision calculus, right?
How do you think norm enforcement works, other than by threatening people who don't comply with the norm?
Like feeling the rain on your skin, no one else can feel it for you.
This is a deliberate reference to the lyrics of Natasha Bedingfield's thematically-relevant song "Unwritten", right? (Seems much more likely than coincidence or cryptomnesia.) I can empathize with it feeling too cute not to use, but it seems like a bad (self-undermining) choice in the context of an essay about the importance of struggling to find original words?
(Fixed; thanks for your patience.)
There has been some progress in robust ML since the days of DeepDream (2015).