By bad I mean dishonest, and by 'we' I mean the speaker (in this case, MIRI).
I take myself to have two central claims across this thread:
I do not see where your most recent comment has any surface area with either of these claims.
I do want to offer some reassurance, though:
I do not take "One guy who's thought about this for a long time and some other people he recruited think it's definitely going to fail" to be descriptive of the MIRI comms strategy.
Oh, I feel fine about saying ‘draft artifacts currently under production by the comms team do cite people other than Eliezer, including experts with a lower p(doom)’, which, based on this comment, is what I take to be the goalpost. This is just regular coalition signaling, though, and not positioning yourself as, terminally, a neutral observer of consensus.
“You haven’t really disagreed that [claiming to speak for scientific consensus] would be more effective.”
That’s right! I’m really not sure about this. My experience has been that ~every take someone offers to normies in policy is preceded by ‘the science says…’, so maybe the market is kind of saturated here. I’d also worry that precommitting to only argue in line with the consensus might bind you to act against your beliefs (and I think EY et al have valuable inside-view takes that shouldn’t be stymied by the trends of an increasingly-confused and poisonous discourse). That something is a local credibility win (I’m not sure if it is, actually) doesn’t mean it’s got the best nth order effects among all options long-term (including on the dimension of credibility).
I believe that Seth would find messaging that did this more credible. I think ‘we’re really not sure’ is a bad strategy if you really are sure, which MIRI leadership, famously, is.
I do mean ASI, not AGI. I know Pope + Belrose also mean to include ASI in their analysis, but it’s still helpful to me if we just use ASI here, so I’m not constantly wondering if you’ve switched to thinking about AGI.
Obligatory ‘no really, I am not speaking for MIRI here.’
My impression is that MIRI is not trying to speak for anyone else. Representing the complete scientific consensus is an undue burden to place on an org that has not made that claim about itself. MIRI represents MIRI, and is one component voice of the ‘broad view guiding public policy’, not its totality. No one person or org is in the chair with the lever; we’re all just shouting what we think in the directions we expect the diffuse network of decision-makers to be sitting in, with more or less success. It’s true that ‘claiming to represent the consensus’ is a tack one can take to appear authoritative, and not (always) a dishonest move. To my knowledge, this is not MIRI’s strategy. This is the strategy of, e.g., the CAIS letter (although not of CAIS as a whole!), and occasionally AIS orgs cite expert consensus or specific, otherwise-disagreeing experts as having directional agreement with the org (for an extreme case, see Yann LeCun shortening his timelines). This is not the same as attempting to draw authority from the impression that one’s entire aim is simply ‘sharing consensus.’
And then my model of Seth says ‘Well, we should have an org whose entire strategy is gathering and sharing expert consensus, and I’m disappointed that this isn’t MIRI, because this is a better strategy,’ or else cites a bunch of recent instances of MIRI claiming to represent scientific consensus (afaik these don’t exist, but it would be nice to know if they do). It is fair for you to think MIRI should be doing a different thing. Imo MIRI’s history points away from it being a good fit to take representing scientific consensus as its primary charge (and this is, afaict, part of why AI Impacts was a separate project).
I think MIRI comms are by and large well signposted to indicate ‘MIRI thinks x’ or ‘Mitch thinks y’ or ‘Bengio said z.’ If you think a single org should build influence and advocate for a consensus view, then help found one, or encourage someone else to do so. This just isn’t what MIRI is doing.
Good point - what I said isn’t true in the case of alignment by default.
Edited my initial comment to reflect this.
(I work at MIRI but views are my own)
I don't think 'if we build it we all die' requires that alignment be hard [edit: although it is incompatible with alignment by default]. It just requires that our default trajectory involves building ASI before solving alignment (and, looking at our present-day resource allocation, this seems very likely to be the world we are in, conditional on building ASI at all).
[I want to note that I'm being very intentional when I say "ASI" and "solving alignment" and not "AGI" and "improving the safety situation"]
Does it seem likely to you that, conditional on ‘slow bumpy period soon’, a lot of the funding we see at frontier labs dries up (so there’s kind of a double slowdown effect of ‘the science got hard, and also now we don’t have nearly the money we had to push global infrastructure and attract top talent’), or do you expect that frontier labs will stay well funded (either by leveraging low hanging fruit in mundane utility, or because some subset of their funders are true believers, or a secret third thing)?
Only the first few sections of the comment were directed at you; the last bit was a broader point re other commenters in the thread, the fooming shoggoths, and various in-person conversations I’ve had with people in the bay.
That rationalists and EAs tend toward aesthetic bankruptcy is one of my chronic bones to pick, because I do think it indicates the presence of some bias that doesn’t exist in the general population, which results in various blind spots.
Sorry for not signposting and/or limiting myself to a direct reply; that was definitely confusing.
I think you should give 1 or 2 a try, and would volunteer my time (although if you’d find a betting structure more enticing, we could say my time is free iff I turn out to be wrong, and otherwise you’d pay me).
If this is representative of the kind of music you like, I think you’re wildly overestimating how difficult it is to make that music.
The hard parts are basically infrastructural (knowing how to record a sound, how to make different sounds play well together in a virtual space). Suno is actually pretty bad at that, though, so if you give yourself the affordance to be bad at it, too, then you can just ignore the most time-intensive part of music making.
Pasting things together (as you did here) is largely The Way Music Is Made in the digital age, anyway.
I think, in ~one hour, you could:
The actual arrangement of the Suno piece is somewhat ambitious (not in that it does anything hard, just in that it has many sections), but this was the part you had to hack together yourself anyway, and getting those features in a human-made song is more about spending the time to do it than it is about having the skill (there is a skill to doing an awesome job of it, but Suno doesn’t have that skill either).
Suno’s outputs are detectably bad to me and all of my music friends, even the e/acc or ai-indifferent ones, and it’s a significant negative update for me on the broader perceptual capacities of our community that so many folks here prefer Suno to music made by humans.
Your version of events requires a change of heart (for 'them to get a whole lot more serious'). I'm just looking at the default outcome. Whether alignment is hard or easy (although not if it's totally trivial), it appears to be progressing substantially more slowly than capabilities (and the parts of it that are advancing are the most capabilities-synergizing, so it's unclear what the oft-lauded 'differential advancement of safety' really looks like).