One final thing: I typically didn't emphasize loss of control / superintelligence / recursive self-improvement. I didn't hide it, but I included it in a longer list of threat models.
I'd be very interested to see that longer threat model list!
[Cross-commenting from the EA Forum.]
[Disclaimers: My wife Deena works with Kat as a business coach. I briefly met Kat and Emerson while visiting in Puerto Rico and had positive interactions with them. My personality is such that I have a very strong inclination to try to see the good in others, which I am aware can bias my views.]
A few random thoughts related to this post:
1. I appreciate the concerns over potential for personal retaliation, and the other factors mentioned by @Habryka and others for why it might be good to not delay this kind of post. I th...
Kat, Emerson, and Drew's reputation is not my concern.
One of their friends called me yesterday saying that me publishing it would probably be the end for Nonlinear, so I should delay and give them time to prepare a response. I assured them that I was not considering that when choosing to share this information.
Any chance we can get an Android app version?
The more I think about this post, the more I think it captures my frustrations with a large percentage of the public discourse on AI x-risks, and not just this one debate event.
I think so too, and I'm going to keep these points in mind when talking to risk skeptics. I wish others would too.
I couldn't bear to watch this particular debate, since the summary made it clear that little communication happened. Debates are a terrible way to arrive at the truth, since they put people into a soldier mindset. But even many conversations that don't start as debates turn into them, and investigating these cruxes could help turn them back into honest discussions in search of understanding.
You should make this a top level post so it gets visibility. I think it's important for people to know the caveats attached to your results and the limits on their implications in real-world dynamics.
When you say that you'd give different probability estimates on different days, do you think you can represent that as sampling on different days from a probability distribution over your "true" latent credence? If yes, do you think it would be useful to try to estimate what that distribution looks like, and then report the mean or perhaps the 90% CI or something like that? So for example, if your estimate typically ranges between 33% and 66% depending on the day with a mean of say 50%, then instead of reporting what you think today (the equivalent of taking a single random sample from the distribution), maybe you could report 50% because that's your mean, and/or report that your estimate typically ranges from 33% to 66%.
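For concreteness, here's a minimal sketch of the kind of summary I have in mind, with made-up numbers standing in for the day-to-day estimates (the Beta distribution and sample size here are just placeholders, not a claim about what your actual credences look like):

```python
# Minimal sketch (hypothetical numbers): treat day-to-day probability estimates
# as samples from a latent distribution, then report the mean and a central 90%
# interval instead of whichever single sample today happens to produce.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the estimates you would have given on 30 different days.
daily_estimates = rng.beta(a=5, b=5, size=30)  # clustered around ~0.5

mean_credence = daily_estimates.mean()
low, high = np.percentile(daily_estimates, [5, 95])  # central 90% interval

print(f"mean: {mean_credence:.2f}, 90% interval: [{low:.2f}, {high:.2f}]")
```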
From a Facebook discussion with Scott Aaronson yesterday:
Yann: I think neither Yoshua nor Geoff believe that AI is going to kill us all with any significant probability.
Scott: Well, Yoshua signed the pause letter, and wrote an accompanying statement about what he sees as the risk to civilization (I agree that there are many civilizational risks short of extinction). In his words: “No one, not even the leading AI experts, including those who developed these giant AI models, can be absolutely certain that such powerful tools now or in the future cannot be used...
The conversation took place in the comments section to something I posted on Facebook: https://m.facebook.com/story.php?story_fbid=pfbid0qE1PYd3ijhUXVFc9omdjnfEKBX4VNqj528eDULzoYSj34keUbUk624UwbeM4nMyNl&id=100010608396052&mibextid=Nif5oz
Sometimes it's better in the long run to take a good chunk of time off to do things for fun and write or work less. Sometimes less is more. But this is very much a YMMV thing.
This is actually another related area of my research: To the extent that we cannot get people to sit down and agree on double cruxes, can we still assign some reasonable likelihoods and/or uncertainty estimates for those likelihoods? After all, we do ultimately need to make decisions here! Or if it turns out that we literally cannot use any numbers here, how do we best make decisions anyway?
I have now posted a "Half-baked AI safety ideas thread" (LW version, EA Forum version) - let me know if that's more or less what you had in mind.
Just putting in my vote for doing both broader and deeper explorations of these topics!
My impression - which I kind of hope is wrong - has been that it is much easier to get an EA grant the more you are an "EA insider" or have EA insider connections. The only EA connection that my professor has is me. On the other hand, I understand the reluctance to some degree in the case of AI safety because funders are concerned that researchers will take the money and go do capabilities research instead.
Honestly I suspect this is going to be the single largest benefit from paying Scott to work on the problem. Similarly, when I suggested in an earlier comment that we should pay other academics in a similar manner, in my mind the largest benefit of doing so is that it will help normalize this kind of research in the wider academic community. The more respected researchers there are working on the problem, the more other researchers start thinking about it as well, resulting (hopefully) in a snowball effect. Also, researchers often bring along their grad students!
Hopefully. I have a feeling it won't be so easy, but we'll see.
Yes! I actually just discussed this with one of my advisors (an expert on machine learning), and he told me that if he could get funding to do it he would definitely be interested in dedicating a good chunk of his time to researching AGI safety. (For any funders who might read this and might be interested in providing that funding, please reach out to me by email at Aryeh.Englander@jhuapl.edu. I'm going to try to reach out to some potential funders next week.)
I think that there are a lot of researchers who are sympathetic to AI risk concerns, but they either ...
There's been discussion about there being a surplus of funding in EA and not enough people who want to get funded to do important work. If that is true, shouldn't it be relatively easy for your presumably competent advisor to get such funding to work on AI safety?
It also depends on your target audience. (Which is basically what you said, just in slightly different words.) If you want to get Serious Researchers to listen to you and they aren't already within the sub-sub-culture that is the rationality community and its immediate neighbors, then in many (most?) cases ranting and freaking out is probably going to be actively counterproductive to your cause. Same if you're trying to build a reputation as a Serious Researcher, with a chance that decision makers who listen to Serious Researchers might listen to you. On t...
I'm pretty sure that's the whole purpose of having province governors and sub-kingdoms, and various systems in place to ensure loyalty. Every empire in history did this, to my knowledge. The threat of an imperial army showing up on your doorstep if you fail to comply has historically been sufficient to ensure loyalty, at least while the empire is strong.
We have a points system in our family to incentivize the kids to do their chores. But we have to regularly update the rules because it turns out that there are ways to optimize for the points that we didn't anticipate and that don't really reflect what we actually want the kids to be incentivized to do. Every time this happens I think - ha, alignment failure!
Alexey Turchin and David Denkenberger describe several scenarios here: https://philpapers.org/rec/TURCOG-2 (additional recent discussion in this comment thread)
Eliezer's go-to scenario (from his recent post):
...The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA
https://www.gwern.net/fiction/Clippy (very detailed but also very long and very full of technical jargon; on the other hand, I think it's mostly understandable even if you have to gloss over most of the jargon)
Please describe or provide links to descriptions of concrete AGI takeover scenarios that are at least semi-plausible, and especially takeover scenarios that result in human extermination and/or eternal suffering (s-risk). Yes, I know that the arguments don't necessarily require that we can describe particular takeover scenarios, but I still find it extremely useful to have concrete scenarios available, both for thinking purposes and for explaining things to others.
One of the most common proposals I see people raise (once they understand the core issues) is some form of, "can't we just use some form of slightly-weaker safe AI to augment human capabilities and allow us to bootstrap to / monitor / understand the more advanced versions?" And in fact lots of AI safety agendas do propose something along these lines. How would you best explain to a newcomer why Eliezer and others think this will not work? How would you explain the key cruxes that make Eliezer et al think nothing along these lines will work, while others think it's more promising?
[Note that two-axis voting is now enabled for this post. Thanks to the mods for allowing that!]
This website looks pretty cool! I didn't know about this before.
I haven't even read the post yet, but I'm giving a strong upvote in favor of promoting the norm of posting unpopular critical opinions.
Such a policy invites moral hazard, though. If many people followed it, you could farm karma by simply beginning each post with the trite "this is going to get downvoted" thing.
I forgot about downvotes. I'm going to add this to the guidelines.
Background material recommendations (more in depth): Please recommend your favorite AGI safety background reading / videos / lectures / etc. For this sub-thread more in-depth recommendations are allowed, including material that requires technical expertise of some sort. (Please specify what kind of background knowledge / expertise is required to understand the material you're recommending.) This is also the place to recommend general resources people can look at if they want to start doing a deeper dive into AGI safety and related topics.
Background material recommendations (popular-level audience, several hours time commitment): Please recommend your favorite basic AGI safety background reading / videos / lectures / etc. For this sub-thread please only recommend background material suitable for a popular level audience. Time commitment is allowed to be up to several hours, so for example a popular-level book or sequence of posts would work. Extra bonus for explaining why you particularly like your suggestion over other potential suggestions, and/or for elaborating on which audiences might benefit most from different suggestions.
Background material recommendations (popular-level audience, very short time commitment): Please recommend your favorite basic AGI safety background reading / videos / lectures / etc. For this sub-thread please only recommend background material suitable for complete newcomers to the field, with a time commitment of at most 1-2 hours. Extra bonus for explaining why you particularly like your suggestion over other potential suggestions, and/or for elaborating on which audiences might benefit most from different suggestions.
Quick thought: What counts as a "company" and what counts as "one year of effort"? If Alphabet's board and directors decided for some reason to divert 99% of the company's resources toward buying up coal companies and thereby became a world leader in the coal industry, does that count? What if Alphabet doesn't buy the companies outright but instead headhunts all of their employees and buys all the necessary hardware and infrastructure?
Similarly, you specified that it needs to be a "tech company", but what exactly differentiates a tech company from a regu...
A friend pointed out on Facebook that Gato uses TPU-v3s. Not sure why - I thought Google already had v4s available for internal use a while ago? In any case, the TPU-v4 might potentially help a lot with the latency issue.
"More specifically, says my Inner Eliezer, it is less helpful to reason from or about one's priors about really smart, careful-thinking people making or not making mistakes, and much more helpful to think directly about the object-level arguments, and whether they seem true."
When you say it's much more helpful, do you mean it's helpful for (a) forming accurate credences about which side is in fact correct, or do you just mean it's helpful for (b) getting a much deeper understanding of the issues? If (b) then I totally agree. If (a) though, why would I expe...
Heh, no problem. At least I changed my LessWrong username from Iarwain to my real name a while back.
Darn, there goes my ability to use Iarwain as a really unusual pseudonym. I've used it off and on for almost 20 years, ever since my brother made me a new email address right after having read the LOTR appendixes.
...sincere apologies.
Thanks, looks useful!
How about, "the words "hello world!" written on a piece of paper"? Or you could substitute "on a compute screen" instead of a piece of paper, or you could just leave out the writing medium entirely. I'm curious if it can handle simple words if asked specifically for them.
Yes, I'm aware of that. But that's a yearly list, and I'm asking for all-time favorites.
I keep having kind of off-the-cuff questions I would love to ask the community, but I don't know where the right place is to post those questions. I don't usually have the time to go polish up the questions so that they are high quality, cite appropriate sources and previous discussions, etc., but I would still like them answered! Typically these are the types of questions I might post on Facebook, but I think I would get higher quality answers here.
Do questions of this sort belong as question posts, shortform posts, or comments on the monthly open threads...
My general impression based on numerous interactions is that many EA orgs are specifically looking to hire and work with other EAs, many longtermist orgs are looking to specifically work with longtermists, and many AI safety orgs are specifically looking to hire people who are passionate about existential risks from AI. I get this to a certain extent, but I strongly suspect that ultimately this may be very counterproductive if we are really truly playing to win.
And it's not just in terms of who gets hired. Maybe I'm wrong about this, but my impression is t...
There is a precedent for doing secret work of high strategic importance, which is every intelligence agency and defense contractor ever.
in-group bias
I'm shocked, shocked, to find gambling in this establishment.
Also note that Percy Liang's Stanford Center for Research on Foundation Models seems to have a strong focus on potential risks as well as potential benefits. At least that's how it seemed to me based on their inaugural paper and on a lot of the talks at the associated workshop last year.
I think part of what I was reacting to is a kind of half-formed argument that goes something like:
Meta-comment:
I noticed that I found it very difficult to read through this post, even though I felt the content was important, because of the (deliberately) condescending style. I also noticed that I'm finding it difficult to take the ideas as seriously as I think I should, again due to the style. I did manage to read through it in the end, because I do think it's important, and I think I am mostly able to avoid letting the style influence my judgments. But I find it fascinating to watch my own reaction to the post, and I'm wondering if others have any (co...
When I try to mentally simulate negative reader-reactions to the dialogue, I usually get a complicated feeling that's some combination of:
I had a pretty strong negative reaction to it. I got the feeling that the post derives much of its rhetorical force from setting up an intentionally stupid character who can be condescended to, and that this is used to sneak in a conclusion that would seem much weaker without that device.
Things I instinctively observed slash that my model believes that I got while reading that seem relevant, not attempting to justify them at this time:
I find it concerning that you felt the need to write "This is not at all a criticism of the way this post was written. I am simply curious about my own reaction to it" (and still got downvoted?).
For my part, I both believe that this post contains valuable content and good arguments, and that it was annoying / rude / bothersome in certain sections.
I've gotten one private message expressing more or less the same thing about this post, so I don't think this is a super unusual reaction.
I agree that the first can be framed as a meta-crux, but actually I think the way you framed it is more of an object-level forecasting question, or perhaps a strong prior on the forecasted effects of technological progress. If on the other hand you framed it more as conflict theory vs. mistake theory, then I'd say that's more on the meta level.
For the second, I agree that's true for some people, but I'm skeptical of how prevalent the cosmopolitan view is, which is why I didn't include it in the post.