I think your first point basically covers why-- people are worried about alignment difficulties in superhuman systems in particular (because those are the dangerous systems that can cause existential failures). I think a lot of current RLHF work is focused on providing reward signals to current systems in ways that don't directly address the problem of "how do we reward systems for behaviors whose consequences are too complicated for humans to understand".
Chris Olah wrote this topic prompt (with some feedback from me (Asya) and Nick Beckstead). We didn’t want to commit him to being responsible for this post or responding to comments on it, so we submitted this on his behalf. (I've changed the by-line to be more explicit about this.)
Thanks for writing this! Would "fine-tune on some downstream task and measure the accuracy on that task before and after fine-tuning" count as measuring misalignment as you're imagining it? My sense is that there might be a bunch of existing work like that.
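To make sure I understand what would count, here's a toy sketch of the kind of before/after measurement I have in mind (the model, data, and training loop are all stand-ins I made up, not any specific existing setup):

```python
# Rough sketch: measure downstream-task accuracy before and after fine-tuning.
# In practice you'd use a real pretrained model and a real downstream task.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained model and a downstream binary classification task.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
X = torch.randn(512, 32)
y = (X[:, 0] > 0).long()
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

def accuracy(m):
    with torch.no_grad():
        return (m(X_test).argmax(dim=1) == y_test).float().mean().item()

acc_before = accuracy(model)

# "Fine-tune" on the downstream task.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

acc_after = accuracy(model)
print(f"accuracy before: {acc_before:.2f}, after: {acc_after:.2f}")
```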
This RFP is an experiment for us, and we don't yet know if we'll be doing more of them in the future. I think we'd be open to including research directions that we think are promising and that apply equally well to both DL and non-DL systems-- I'd be interested in hearing any particular suggestions you have.
(We'd also be happy to fund particular proposals in the research directions we've already listed that apply to both DL and non-DL systems, though we will be evaluating them on how well they address the DL-focused challenges we've presented.)
Getting feedback in the next week would be ideal; September 15th will probably be too late.
Different request for proposals!
Thank you so much for writing this! I've been confused about this terminology for a while and I really like your reframing.
An additional terminological point that I think it would be good to solidify is what people mean when they refer to "inner alignment" failures. As you allude to, my impression is that some people use it to refer to objective robustness failures broadly, whereas others (e.g. Evan) use it to refer to failures that involve mesa optimization. There is then additional confusion around whether we should think "inner alignment" failures that ...
I feel pretty bad about both of your current top two choices (Bellingham or Peekskill) because they seem too far from major cities. I worry this distance will seriously hamper your ability to hire good people, which is arguably the most important thing MIRI needs to be able to do. [Speaking personally, not on behalf of Open Philanthropy.]
I think moving to the country could possibly be justified despite harms to recruitment and the rationality community, but in the official MIRI explanations, the downsides are quite underdiscussed.
To expand on this a bit, I think that people with working partners would be the group most likely to be deterred from working at MIRI if it was in either Bellingham or Peekskill. The two-body problem can be a serious constraint, and large metro areas tend to be much easier to find two jobs in. That may be getting better with the rise of remote work, but I do think it's worth keeping in mind.
Announcement: "How much hardware will we need to create AGI?" was actually inspired by a conversation I had with Ronny Fernandez and then forgot about-- credit goes to him for the original idea of using 'the weights of random objects' as a reference class.
https://i.imgflip.com/1xvnfi.jpg
Planned summary for the Alignment Newsletter:
...This post describes the author’s insights from extrapolating the performance of GPT on the benchmarks presented in the <@GPT-3 paper@>(@Language Models are Few-Shot Learners@). The author compares cross-entropy loss (which measures how good a model is at predicting the next token) with benchmark performance normalized relative to the gap between random performance and the maximum possible performance (so 0 corresponds to random performance and 1 to the maximum). Since <@previous work@>(@Scaling Laws for Neural Language Models@) has shown that cross-entropy loss scales...
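For concreteness, here's how I'm reading the normalization (just my own sketch of the formula, not code from the post):

```python
def normalized_performance(score, random_score, max_score):
    """Map a raw benchmark score to [0, 1], where 0 = random performance and 1 = maximum."""
    return (score - random_score) / (max_score - random_score)

# e.g. a 4-way multiple-choice benchmark: random guessing = 25% accuracy, max = 100%
print(normalized_performance(0.55, 0.25, 1.0))  # -> 0.4
```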
AI Impacts now has a 2020 review page so it's easier to tell what we've done this year-- this should be more complete / representative than the posts listed above. (I appreciate how annoying the continuously updating wiki model is.)
From Part 4 of the report:
Nonetheless, this cursory examination makes me believe that it’s fairly unlikely that my current estimates are off by several orders of magnitude. If the amount of computation required to train a transformative model were (say) ~10 OOM larger than my estimates, that would imply that current ML models should be nowhere near the abilities of even small insects such as fruit flies (whose brains are 100 times smaller than bee brains). On the other hand, if the amount of computation required to train a transformative model were...
So exciting that this is finally out!!!
I haven't gotten a chance to play with the models yet, but thought it might be worth noting the ways I would change the inputs (though I haven't thought about it very carefully):
I'm a bit confused about this as a piece of evidence-- naively, it seems to me like not carrying the 1 would be a mistake that you would make if you had memorized the pattern for single-digit arithmetic and were just repeating it across the number. I'm not sure if this counts as "memorizing a table" or not.
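To make the failure mode concrete, here's a toy sketch of what "repeating the single-digit pattern without carrying" would look like (my own illustration, not anything from the paper):

```python
def add_without_carry(a: int, b: int) -> int:
    """Add two numbers column by column, dropping carries (each column is taken mod 10)."""
    a_digits, b_digits = str(a), str(b)
    width = max(len(a_digits), len(b_digits))
    a_digits, b_digits = a_digits.zfill(width), b_digits.zfill(width)
    result = "".join(str((int(x) + int(y)) % 10) for x, y in zip(a_digits, b_digits))
    return int(result)

print(add_without_carry(48, 35))  # -> 73 (correct answer is 83)
print(add_without_carry(27, 15))  # -> 32 (correct answer is 42)
```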
This recent post by OpenAI is trying to shed some light on this question: https://openai.com/blog/ai-and-efficiency/
I really like this post.
Self-driving cars are currently illegal, I assume largely because of these unresolved tail risks. But setting aside legality, I'm not sure their economic value is zero-- I could imagine cases where people would use self-driving cars if they wouldn't be caught doing it. Does this seem right to people?
Intuitively it doesn't seem like economic value tails and risk tails should necessarily go together, which makes me concerned about cases that are similar to self-driving cars but harder to regulate legally.
What's the corresponding story here for trading bots? Are they designed in a sufficiently high-assurance way that new tail problems don't come up, or do they not operate in the tails?
I rewrote the question-- I think I meant 'counterfactual' in that this isn't a super promising idea if in fact we are just taking medical supplies from one group of people and transferring them to another.
I don't know anything about maintenance/cleaning-- I was thinking it would be particularly useful if we straight up run out of ICU space, i.e., if going to an ICU isn't an option. (Maybe this is a super unlikely class of scenarios?)
You're totally not obligated to do this, but I think it might be cool if you generated a 3D picture of hills representing your loss function-- I think it would make the intuition for what's going on clearer.
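For concreteness, I'm imagining something like this rough matplotlib sketch (the two-parameter "loss" here is completely made up, just to show the kind of picture I mean):

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up two-parameter loss surface: some hills plus a shallow bowl.
w1, w2 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
loss = np.sin(w1) * np.cos(w2) + 0.1 * (w1**2 + w2**2)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(w1, w2, loss, cmap="viridis")
ax.set_xlabel("parameter 1")
ax.set_ylabel("parameter 2")
ax.set_zlabel("loss")
plt.show()
```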
We're not going to do this because we weren't planning on making these public when we conducted the conversations, so we want to give people a chance to make edits to transcripts before we send them out (which we can't do with audio).
The claim is a personal impression that I have from conversations, largely with people concerned about AI risk in the Bay Area. (I also don't like information cascades, and may edit the post to reflect this qualification.) I'd be interested in data on this.
Thanks for writing this up-- speaking just for myself, I think I agree with the majority of this, and it articulates some important parts of how I live my life that I hadn't previously made explicit.