I’m saying he’s projecting his biases onto others. He clearly does think PauseAI rhymes with Unabomber somehow, even if he personally knows better. The weird pro-tech vs anti-tech dichotomy, and especially thinking that others are blanketly anti-tech, is very rationalist.
Do you think those causes never had organizing before the big protest?
I think the relevant question is how often social movements begin with huge protests, and that’s exceedingly rare. It’s effective to create the impression that the people just rose up, but there’s basically always organizing groundwork for that to take off.
Do you guys seriously think that big protests just materialize?
Yeah, the SF protests have held about constant in attendance (25-40), but we have more locations now and have put a lot more infrastructure in place
The thing is there isn’t a great dataset— even with historical case studies where the primary results have been achieved, there are a million uncontrolled variables, and we don’t, and never will, have experimentally established causation. But, yes, I’m confident in my model of social change.
What leapt out to me about your model was that it was very focused on how an observer of the protests would react with a rationalist worldview. You didn’t seem to have given much thought to the breadth of social movements and how a diverse public would have experienced them. Like, most people aren’t gonna think PauseAI is anti-tech in general and therefore similar to the Unabomber. Rationalists think that way, and few others.
Sounds like you are saying that you have those associations and I still see no evidence to justify your level of concern.
Small protests are the only way to get to big protests, and I don’t think there’s a significant risk of backfire or cringe reaction making trying worse than not trying. It’s the backfire supposition that is baseless.
The point that "small protests are the only way to get big protests" may be directionally accurate, but I want to note that there have been large protests that happened without that. Here's a shoggoth listing a bunch, including the 1989 Tiananmen Square Protests, the 2019 Hong Kong Anti-Extradition Protests, the 2020 George Floyd Protests, and more.
The shoggoth says spontaneous large protests tend to be in response to triggering events and do rely on pre-existing movements that are ready to mobilize, the latter of which your work is helping build.
Appreciate your conclusion tho— that reaching the public is our best shot. Fortunately, different approaches are generally multiplicative and complementary.
People usually say this when they personally don’t want to be associated with small protests.
- As-is, this is mostly going to make people's first exposure to AI X-risk be "those crazy fringe protestors". See my initial summary regarding effective persuasion: that would be lethal, gravely sabotaging our subsequent persuasion efforts.
Pretty strong conclusion with no evidence.
Yeah, this is the first time I’ve commented on lesswrong in months and I would prefer to just be out of here. But OP was such nasty meangirl bullying that, when someone showed it to me, I wanted to push back.
Come on, William. "But they said their criticism of this person's reputation wasn't personal" is not good enough. It's like calling to "no take backs" or something.
I have a history in animal activism (both EA and mainstream) and I think PETA has been massively positive by pushing the Overton window. People think PETA isn't working bc they feel angry at PETA when they feel judged or accused, but they update on how it's okay to treat animals, and that's the point. More moderate groups like the Humane Society get the credit, but it takes an ecosystem. You don't have to be popular and well-liked to push the Overton window. You also don't have to be a group that people want to identify with.
But I don't think PETA's ...
Yeah, actually the employees of Lightcone have led the charge in trying to tear down Kat. It’s you who has the better standards, Maxwell, not this site.
Getting a strong current of “being smart and having interesting and current tastes is more important than trying to combat AI Danger, and I want all my online spaces to reflect this” from this. You even seem upset that Kat is contaminating subreddits that used to not be about Safety with Safety content… Like you’re mad about progress in the embrace of AI Safety. You critique her for making millennial memes as if millennials don’t exist anymore (lesswrong skews millennial and older) and as if content should only be for you.
You seem kinda self-aware of this at one point,...
Meritorious!
I have! Multiple times at different stages of the bill (the first time like a month ago to Scott Wiener), as well as sent an email and asked like 3-4 other people to call.
The bill is in danger of not passing Appropriations because of lobbying and misinformation. That's what calling helps address. Calling does not make SB 1047 cheaper, and therefore does not address the Suspense File aspects of what it's doing in Appropriations.
Why is "dishonesty" your choice of words here? Our mistake cut against our goal of getting people to call at an impactful time. It wasn't manipulative. It was merely mistaken. I understand holding sloppiness against us but not "dishonesty".
I think the lack of charity is probably related to "activism dumb".
It seemed like a pretty predictable direction in which to make errors. I don't think we have great language about this kind of stuff, but I think it makes sense to call mistakes which very systematically fall along certain political lines "dishonest".
Again, I think the language that people have here is a bit messy and confusing, but given people's potential for self-deception, and selective error-correction, I think it's important to have language for that kind of stuff, and most of what people usually call deception falls under this kind of selective error-correction and related biases.
It was corrected.
What kind of securities fraud could he have committed?
I'm just a guy but the impression I get from occasionally reading the Money Stuff newsletter is that basically anything bad you do at a public company is securities fraud, because if you do a bad thing and don't tell investors, then people who buy the securities you offer are doing so without full information because of you.
No, sacrificing truth is fundamentally an act of self-deception. It is making yourself a man who believes a falsehood, or has a disregard for the truth. It is Gandhi taking the murder-pill. That is what I consider irreversible.
This is what I was talking about, or the general thing I had in mind, and I think it is reversible. Not a good idea, but I think people who have ever self-deceived or wanted to believe something convenient have come back around to wanting to know the truth. I also think people can be truthseeking in some domains while self-deceivi...
I get the sense that "but Google and textbooks exist" is more of a deontological argument, like if the information is public at all "the cat's out of the bag" and it's unfair to penalize LLMs bc they didn't cross any new lines, just increased accessibility.
Does that really seem true to you? Do you have no memories of sacrificing truth for something else you wanted when you were a child, say? I'm not saying it's just fine to sacrifice truth but it seems false to me to say that people never return to seeking the truth after deceiving themselves, much less after trying on different communication styles or norms. If that were true I feel like no one could ever be rational at all.
That’s why I said “financially cheap”. They are expensive for the organizer in terms of convincing people to volunteer and to all attendees as far as their time and talents, and getting people to put in sweat equity is what makes it an effective demonstration. But per dollar invested they are very effective.
I would venture that the only person who was seriously prevented from doing something else by being involved in this protest was me. Of course there is some time and labor cost for everyone involved. I hope it was complementary to whatever else they do, and, as Ben said, perhaps even allowing them to flex different muscles in an enriching way.
I’m down for a followup!
It’s hard to say what the true impact of the events will be at this time, but they went well! I’m going to write a post-mortem for the SF PauseAI protest yesterday and the Meta protest in September and post it on EAF/LW that will cover the short-term outcomes.
Considering they are financially cheap to do (each around $2000 if you don’t count my salary), I’d call them pretty successful already. Meta protest got good media coverage, and it remains to be seen how this one will be covered since most of the coverage happened in the two following weeks last time.
You could share the events with your friends and family who may be near, and signal boost media coverage of the events after! If you want to donate to keep me organizing events, I have a GoFundMe (and if anyone wants to give a larger amount, I'm happy to talk about how to do that :D). If you want to organize future events yourself, please DM me. Even putting the pause emoji ⏸️ in your twitter name helps :)
Here are the participating cities and links:
October 21st (Saturday), in multiple countries
Personally, I'm interested in targeting hardware development and that will be among my future advocacy directions. I think it'll be a great issue for corporate campaigns pushing voluntary agreements and for pushing for external regulations simultaneously. This protest is aimed more at governments (attending the UK Summit) and their overall plans for regulating AI, so we're pushing compute governance as a way to most immediately address the creation of frontier models. Imo hardware tracking at the very least is going to have to be part of enforcing such...
If you found yourself interested in advocacy, the largest AI Safety protest ever is happening Saturday, October 21st!
https://www.lesswrong.com/posts/abBtKF857Ejsgg9ab/tomorrow-the-largest-ai-safety-protest-ever
Check out the LessWrong event here: https://www.lesswrong.com/events/ZoTkRYdqGuDCnojMW/global-pause-ai-protest-10-21
I think you’re correct that the paradigm has changed, Matthew, and that the problems that stood out to MIRI before as possibilities no longer quite fit the situation.
I still think the broader concern MIRI exhibited is correct: namely, that an AI could appear to be aligned but not actually be aligned, and that this may not come to light until it is behaving outside of the context of training/in which the command was written. Because of the greater capabilities of an AI, the problem may have to do with differences in superficially similar goals that wou...
Whether MIRI was confused about the main issues of alignment in the past, and whether LLMs should have been a point of update for them is one of the points of contention here.
(I think the answer is no, see all the comments about this above)
Change log: I removed the point about Meta inaccurately calling itself "open source" because it was confusing.
Particularly in the rationalist community it seems like protesting is seen as a very outgroup thing to do. But why should that be? Good on you for expanding your comfort zone-- hope to see you there :)
^ all good points, but I think the biggest thing here is the policy of sharing weights continuing into the future with more powerful models.
Yeah, I’ve been weighing a lot whether big tent approaches are something I can pull off at this stage or whether I should stick to “Pause AI”. The Meta protest is kind of an experiment in that regard and it has already been harder than I expected to get the message about irreversible proliferation across well. Pause is sort of automatically a big tent because it would address all AI harms. People can be very aligned on Pause as a policy without having the same motivations. Not releasing model weights is more of a one-off issue and requires a lot of inferential distance crossing even with knowledgeable people. So I’ll probably keep the next several events focused on Pause, a message much better suited to advocacy.
Yeah, I’m afraid of this happening with AI even as the danger becomes clearer. It’s one reason we’re in a really important window for setting policy.
Reducing the harm of irreversible proliferation potentially addresses almost all AI harms, but my motivating concern is x-risk.
This strikes me as the kind of political thinking I think you’re trying to avoid. Contempt is not good for thought. Advocacy is not the only way to be tempted to lower your epistemic standards. I think you’re doing it right now when you other me or this type of intervention.
I commend your introspection on this.
I agree with your assessment of the situation a lot, but I disagree that there is all that much controversy about this issue in the broader public. There is a lot of controversy on lesswrong, and in tech, but the public as a whole is in favor of slowing down and regulating AI developments. (Although other AI companies think sharing weights is really irresponsible, and there are anti-competitive issues with llama 2’s ToS, which is why it isn’t actually open source.) https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launche...
I actually did not realize they released the base model. There's research showing how easy it is to remove the safety fine-tuning, which is where I got the framing and probably Zvi too, but perhaps that was more of a proof of concept than the main concern in this case.
The concept of being able to remove fine-tuning is pretty important for safety, but I will change my wording where possible to also mention it being bad to release the base model without any safety fine-tuning. Just asked to download llama 2 so I'll see what options they give.
Yeah, it felt like Eliezer was rounding off all of the bad faith in the post to this one stylistic/etiquette breach, but he didn't properly formulate the one rule that was supposedly violated.
Sorry, what harmful thing would this proposal do? Require people to have licenses to fine-tune llama 2? Why is that so crazy?
Nora didn't say that this proposal is harmful. Nora said that if Zach's explanation for the disconnect between their rhetoric and their stated policy goals is correct (namely that they don't really know what they're talking about) then their existence is likely net-harmful.
That said, yes, requiring everyone who wants to finetune LLaMA 2 to get a license would be absurd and harmful. la3orn and gallabyres articulate some reasons why in this thread.
Another reason is that it's impossible to enforce, and passing laws or regulations and then not enforcing them is re...
A weakness I often observe in my numerous rationalist friends is "rationalizing and making excuses to feel like doing the intellectually cool thing is the useful or moral thing". Fwiw. If you want to do the cool thing, own it, own the consequences, and own the way that changes how you can honestly see yourself.
Say more?
Unless you’re endorsing illusionism or something I don’t understand how people disagreeing about the nature of consciousness means the hard problem is actually a values issue. There’s still the issue of qualia or why it is “like” anything to have experiences when all the same actions could be accomplished without that. I don’t see how people having different ideas of what consciousness refers to or what is morally valuable about that makes the Hard Problem any less hard.
Yeah, I suspect that these one-shot big protests are drawing on a history of organizing in those or preceding fields. The Women’s March coalition comes together all for one big event but draws on a far deeper history involving small demonstrations and deliberate organizing to make it to that point, is my point. Idk about Free Internet but I would bet it leaned on Free Speech organizing and advocacy.
I sure wish someone would put on a large AI Safety protest if they know a way to do this in one leap. If I got a sponsor for a concert or some other draw then...