Evan_Gaensbauer

Do you mean Evan Hubinger, Evan R. Murphy, or a different Evan? (I would be surprised and humbled if it were me, though my priors on that are low.)

How do you square encouraging others to weigh in on EA fundraising, and the implied assumption that anyone in the EA community can trust you as a collaborator of any sort, with your stated intention, as you put it in July, to probably seek to shut down at some point in the future?

Thanks for making this comment. I had a similar comment in mind. You're right that nobody should assume any statements in this document represent the viewpoint of Google, any of its subsidiaries like DeepMind, or any department therein. Nor should it be assumed that the researcher(s) who authored or leaked this document are department or project leads. The Substack post only mentions that a researcher leaked the document, not that any researcher authored it. The document could've been written up by one or more Google staffers who aren't directly doing the research themselves, like a project manager or a research assistant.

On the other hand, there isn't enough information to assume it was only one or more "random" staffers at Google. Again, nothing in the document should necessarily be taken as representative of Google or any particular department, though the value of any insights drawn from the document could vary based on which AI research project(s) or department the authors work on or in. The document is scant evidence in any direction of how representative its statements are of Google and its leadership, or of the teams or leaders of any particular projects or departments at Google focused on the relevant approaches to AI research.

That might not be a useful question to puzzle over much, since we could easily never find out who the anonymous author(s) of the document is/are. Yet the chance that the authors aren't purely "random" researchers should still be kept in mind.

Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it. 

The thoughts I have in response are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared, and incorporate them into my later response.

Please also send me a private message if you'd like to set up continuing the conversation over email, or over a call sometime.

I've edited the post, changing "resentment from rationalists elsewhere to the Bay Area community" to "resentment from rationalists elsewhere toward the Bay Area community," because that seems to reduce the ambiguity some. My use of the word 'resentment' was intentional.

Thanks for catching those. The word 'is' was missing. The word "idea" was meant to be "ideal." I've made the changes. 

I'm thinking of asking this as another question post, or at least as a post seeking feedback more than trying to stake a strong claim. Provoking debate for its own sake would hinder that goal, so I'd try to write any post in a way that avoids it. Applying those filters to any post I might write wouldn't hinder the kind of feedback I'd seek. Still, the social barriers to posting raised by others with the concerns you've expressed seem high enough that I'm unsure I'll post it after all.

This is a concern I take seriously. While it's possible that increasing awareness of the problem of AI risk will make things worse overall, I think the more likely outcome is that it will be neutral to good.

Another consideration is that it may itself be a risk for longtermists not to pursue new ways of conveying the importance and difficulty of ensuring human control of transformative AI. There is a general principle of caution in EA, yet we don't self-reflect enough to notice when being cautious by default is irrational on the margin.

Recognizing the risks of acts of omission is a habit William MacAskill has been trying to encourage and cultivate in the EA community over the last year, yet it's a principle we've acknowledged since the beginning. Consequentialism doesn't distinguish between action and inaction when the failure to take an appropriate, crucial, or necessary action leads to a negative outcome. Risk aversion receives more attention in the LessWrong Sequences than most other cognitive biases.

It's now evident that past attempts at public communication about existential risks (x-risks) from AI have proven inadequate. The problem may not be one of drawing more attention to the matter so much as drawing more of the right kind of attention. In other words, it's necessary to carefully bring about changes in how AI x-risks are perceived by various sections of the public.

The way we as a community can help ensure the book you write strikes the right balance may be to keep doing what MacAskill recommends:

  • Stay in constant communication about our plans with others, inside and outside of the EA community, who have similar aims to do the most good they can
  • Remember that, in the standard solution to the unilateralist’s dilemma, it’s the median view that’s right (rather than the most optimistic or most pessimistic view)
  • Be highly willing to course-correct in response to feedback