While the framing of lack of social grace as a virtue captures something true, it's imo too incomplete to support its strong conclusion. The way I would put it: you have correctly observed that social grace, whatever its benefits, comes at a cost, and that this cost is sometimes not worth paying. So if you decline to pay the cost of social grace in a discussion, you can afford to buy other virtues instead.[1]

For example, it is socially graceful not to tell the Emperor Who Wears No Clothes that he wears no clothes. Whereas someone who lacks social grace is more likely to tell the emperor the truth.

But first of all, I disagree with the frame that lack of social grace is itself a virtue. In the case of the emperor, for example, the virtues at play are rather legibility and non-deception, traded off against whichever virtues the socially graceful response would've bought.

And secondly, often the virtues you can buy with social grace are worth far more than whatever you could gain by declining to be socially graceful. For example, when discussing politics with someone of an opposing ideology, you could decline to be socially graceful and tell your interlocutor to their face that you hate them and everything they stand for. This would be virtuously legible and non-deceptive, at the cost of immediately ending the conversation and thus forfeiting any chance of e.g. gains from trade, coming to a compromise, etc.

One way I've seen this cost manifest on LW is that some authors complain about a style of commenting here that makes posting as an author unenjoyable. As a result, those authors are incentivized to post less, or to post elsewhere.[2]

And as a final aside, I'm skeptical of treating Feynman as socially graceless. Maybe he was less deferential towards authority figures, but if he had told nothing but the truth to every authority figure throughout his life (a group which likely included some naked emperors), his career would presumably have ended long before he could've gotten his Nobel Prize. Besides, IIRC the man's physics lectures are just really fun to watch, and I'm pretty confident that a sufficiently socially graceless person would not make for a good teacher. For example, it is socially graceful not to belittle fledgling students as intellectual inferiors, even though in some ways they are just that.

  1. ^

    Related: I wrote this comment and this follow-up where I wished that Brevity were considered a rationalist virtue. Because if there's no counterbalancing virtue to trade off against virtues like legibility and truth-seeking, then supposedly virtuous discussions are incentivized to become arbitrarily long.

  2. ^

    The moderation log of users banned by other users is a decent proxy for which authors have considered which commenters too costly to interact with, whether due to lack of social grace or something else.

Yes, my disagreement was mostly with the first paragraph, which read to me like "who are you going to believe, the expert or your own lying eyes". I'm not an expert, but I do have a sense of aesthetics, that sense of aesthetics says the cover looks bad, and many others agree. I don't care if the cover was designed by a professional; to shift my opinion as a layperson, I would need evidence that the cover is well-received by many more people than dislike it, plus A/B tests of alternative covers that show it can't be easily improved upon.

That said, I also disagreed somewhat with the fourth paragraph, because when it comes to AI Safety, MIRI and its authors really need no introduction or promotion. They're well-known; the labs just ignore their claim that "if anyone builds it, everyone dies".

+1 on the cover looking outright terrible. To make this feedback more specific and actionable:

  • If you care about the book bestseller lists, why doesn't this book cover look like previous bestsellers? To get a sense of what those look like, here is an "interactive map of over 5,000 book covers" from the NYT "Best Selling" and "Also Selling" lists between 2008 and 2019.
  • In particular, making all words the same font size seems very bad, and making title and author names the same size and color is a baffling choice.
    • Why is the subtitle in the same font size as the title?
    • And why are your author names so large, anyway? Is this book called "If Anyone Builds It, Everyone Dies", or is it called "Eliezer Yudkowsky & Nate Soares"?
    • Plus someone with a 17-character name like "Eliezer Yudkowsky" simply can't have such a large author font. You're spending three lines of text on the author names!
    • Plus I would understand making the author names so large if you had a humongous pre-existing readership (when you're Stephen King or J. K. Rowling, the title of your book is irrelevant). But even Yudkowsky doesn't have that, and Nate certainly doesn't. So why not make the author names smaller, and let the title speak for itself?
  • I understand the artistic desire to use the irrecoverable red event horizon of superintelligence to underscore the "would kill us all" part, but since it makes the words "kill us all" harder to read, I'm not sure whether the current design underscores or obscures that.
  • Surely it would've been possible to keep the title and subtitle to fewer than four lines of text each?
  • And overall, the cover just looks very cheap and low-effort.

EDIT: More fundamentally: in all media, title & cover art are almost as important as content, because you can't get people to click on your video, or pick up your book in a bookstore, if the title & cover aren't good. It doesn't matter how great the content is if nobody ever sees it.

Anyway, the title is good, the cover is bad, and I can't assess the content yet. You say this book has been in the works for over a year, and that you spent lots of effort on polishing its content. If it's a good fraction of MIRI's output for that time, and a cover is responsible for (say) 20% of a book's impact, wouldn't that justify spending >>>$100k on the cover design? This one looks more like a cover purchased on Fiverr.
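
To make that back-of-envelope explicit (the $5M figure below is a purely hypothetical placeholder for the value of a year of MIRI's output, not an actual budget number):

$$\underbrace{\$5\text{M}}_{\text{hypothetical value of the book}} \times \underbrace{0.2}_{\text{share of impact from the cover}} = \$1\text{M} \gg \$100\text{k}$$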

Also see this discussion on the need to spend a significant fraction of the effort behind a piece of content on things like its title, cover, thumbnail, and book blurb.

Agree. There's no way to present the alarming idea of AI doom without sounding alarmist. So it seems to me the next-best thing is to communicate it clearly, in plain English without complicated words, in a way that can't be misunderstood (in contrast to co-opted terms like "AI Safety"). That's what this title does, so I like it.

I've been trying to use that feature to help point out typos and grammar errors in public webfiction chapters, but I often (but not always) get a nondescript "Content blocked: Content not permitted" error.

Have crossposted it here. If you have an account on the EA forum to link to, I can probably add you as a "co-author" there so you get karma for your post.

> that paperclips or self-replication or free energy are worth the side effect of murdering or causing the undue suffering of billions of conscious beings...

> make a leap from "we have AGI or near-AGI that isn't capable of causing much harm or suffering" to "we have an ASI that disregards human death and suffering for the sake of accomplishing some goal, because it is SOOOooo smart that it made some human-incomprehensible logical leap that actually a solar-system wide Dyson Sphere is more important than not killing billions of humans".

1) One of the foundational insights of alignment discussions is the Orthogonality Thesis, which says that this is absolutely 100% allowed: you can be arbitrarily intelligent AND value arbitrary things. An arbitrary unaligned ASI values all of humanity at 0, so anything it values at ε > 0 is infinitely more valuable to it, and worth sacrificing all of humanity for.
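
To spell out the arithmetic behind that claim (the notation is mine, not from the comment I'm replying to): write $U$ for the ASI's utility function. Then

$$U(\text{humanity survives}) = 0 < \varepsilon = U(\text{one more unit of paperclips}),$$

so no matter how small $\varepsilon$ is, the ASI strictly prefers a world with the paperclips and without us over a world with us and without the paperclips.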

2) In no way are current LLMs even "moderately aligned". The fact that current LLMs can be jailbroken to do things they were supposedly trained not to do should be more than enough counterevidence to make this obvious.

> intelligent beings seem to trend towards conscientious behavior

3) There are highly intelligent human sociopaths, but that hardly matters: you're comparing intelligent humans to intelligent aliens, and concluding that the aliens must be humanlike and care about conscious beings, just because all the examples of intelligence you've seen so far in reality are humans. You can't generalize in this manner.

> There's no reason for me to believe on principle that AI, especially super intelligent AI, will kill humans in any way analogous to how we killed native animals. The smartest humans on our planet are, far as I've seen, far more understanding of and interest in the impact of human influence on the planet and its many ecosystems.

If a superintelligent AI turns the solar system into a Dyson Sphere, we humans will die merely "because [it was] busy reshaping the world", consistent with the original quote. AIs could murder us, but intentional and malicious genocide is not at all required for human extinction. Lack of care for us is more than enough, and that's the default state of any unaligned AI.

> There's no reason for me to believe on principle that AI, especially super intelligent AI, will kill humans in any way analogous to how we killed native animals.

Think microbes or viruses, not animals: essentially invisible living beings which you as a human don't care about at all. Now apply that same lack of care to the relationship between an unaligned AI and all life on Earth.

My point was that Altman doesn't adhere even to his vague statements, and that he's a known liar and manipulator, so there's no reason to believe his word would be worth any more in concrete statements.
