All of Holly_Elmore's Comments + Replies

Yeah, I suspect that these one-shot big protests are drawing on a history of organizing in those or preceding fields. The Women’s March coalition comes together for one big event but draws on a far deeper history involving small demonstrations and deliberate organizing to make it to that point, is my point. Idk about Free Internet, but I would bet it leaned on Free Speech organizing and advocacy.

I sure wish someone would put on a large AI Safety protest if they know a way to do this in one leap. If I got a sponsor for a concert or some other draw then...

?

I’m saying he’s projecting his biases onto others. He clearly does think PauseAI rhymes with the Unabomber somehow, even if he personally knows better. The weird pro-tech vs anti-tech dichotomy, and especially thinking that others are blanketly anti-tech, is very rationalist.

Do you think those causes never had organizing before the big protest? 

habryka
The specific ones I was involved in? Pretty sure they didn't. They were SOPA-related, and related to what people thought was a corrupt construction of a train station in my hometown. I don't think there was much organizing for either of these before they took off. I knew some of the core organizers; they did not create many small protests before this.
Kabir Kumar
yup.
habryka
I don't understand, I don't think there was any ambiguity in what you said. Even not taking things literally, you implied that having big protests without having small protests is at least highly unusual. That also doesn't match my model. I think it's pretty normal. The thing that I think happens before big protests is big media coverage and social media discussion, not many months and years of small protests. I am not sure of this, but that's my current model. 

I think the relevant question is how often social movements begin with huge protests, and that’s exceedingly rare. It’s effective to create the impression that the people just rose up, but there’s basically always organizing groundwork for that to take off.

Do you guys seriously think that big protests just materialize?

Thane Ruthenis
My model is that big protests require (1) raising the public awareness, then (2) solving the coordination problems to organize. Small protests are one way to incrementally raise awareness, and one way to solve coordination problems/snowball into big protests (as I'd outlined in a footnote in the post). But small protests can't serve their role in (2) without (1) being done first. You can't snowball public sentiments without those sentiments existing. So prior to the awareness-raising groundwork being laid, the only role of protests is just (1): to incrementally raise public awareness, by physically existing in bystanders' fields of vision.

I agree that protests can be a useful activity, potentially uniquely useful, including small protests. I am very skeptical that small protests are uniquely useful at this stage of the game.
habryka
When I was involved with various forms of internet freedom activism, as well as various protests around government misspending in Germany, I do not remember a run-up of many months of small protests before the big ones. It seemed that people basically directly organized some quite big ones, and then they grew a bit bigger over the course of a month, and then became smaller again. I do not remember anything like the small PauseAI protests on those issues.  (This isn't to say it isn't a good thing in the case of AGI, I am just disputing that "small protests are the only way to get big protests")

Yeah, the SF protests have been roughly constant (25-40) in attendance, but we have more locations now and have put a lot more infrastructure in place.

The thing is, there isn’t a great dataset: even with historical case studies where the primary results have been achieved, there are a million uncontrolled variables, and we don’t and will never have experimentally established causation. But, yes, I’m confident in my model of social change.

What leapt out to me about your model was that it was very focused on how an observer of the protests would react with a rationalist worldview. You didn’t seem to have given much thought to the breadth of social movements and how a diverse public would have experienced them. Like, most people aren’t gonna think PauseAI is anti-tech in general and therefore similar to the Unabomber. Rationalists think that way, and few others.

Thane Ruthenis
My model of a normal person doesn't think PauseAI protests are anything in particular, yes. My model of a normal person also by default feels an instinctive wariness towards an organized group of people who have physically assembled to stand against something, especially if their cause is unknown to me or weird-at-a-glance, which "AGI omnicide" currently is. (Because weird means unpredictable, and an unpredictable physical thing means a possible threat.) This wariness will be easy to transform into outright negative feelings/instinctive dismissal by, say, some news article bankrolled by an AGI lab explicitly associating PauseAI with environmental-activism vandals and violence. Doubly so in the current political climate, with pro-AI-progress people running the government.

The difference between protests and other attempts at information proliferation is that (1) seeing a protest communicates little information about the cause (compared to, e.g., a flyer, which can point straight to an information-dense resource if it contains links), so you can't immediately tell that the people behind it are thoughtful and measured and have expert support rather than a chaotic extremist mob, and (2) it is a deliberately loud physical anti-something activity, meaning the people engaging in it are interested in imposing their will on other people. Like, look at how much mileage they got out of Eliezer's statements about being willing to enforce the international AGI ban even in the face of nuclear retaliation.

Obviously you can't protect against all possible misrepresentations, but I think some moves can be clearly seen to be exposing too much attack surface for the benefits they provide. Which, I'm not even saying this is necessarily the case for the protests PauseAI have been doing. But it seems like a reasonable concern to me. I would want to launch at least some organized inquiry to inform my cost-benefit analyses, in your place.
habryka
I am confused, did you somehow accidentally forget a negation here? You can argue that Thane is confused, but clearly Thane was arguing from what the public believes, and of course Thane himself doesn't think that PauseAI is similar to the Unabomber based on vague associations, and certainly almost nobody else on this site believes that (some might believe that non-rationalists believe that, but isn't that exactly the kind of thinking you are asking for?).

Sounds like you are saying that you have those associations and I still see no evidence to justify your level of concern.

Thane Ruthenis
My understanding is that I am far from the only person in the LW/EA spaces who has raised this genre of concern against the policy of protests. Plenty of people at least believe that other people have this association, which is almost equivalent to this association actually existing, and is certainly some evidence in that direction. Based on your responses, that hadn't prompted you to make any sort of inquiry – look up research literature, run polls, figure out any high-information-value empirical observations of bystanders' reactions you can collect – regarding whether those concerns are justified? That implies a very strong degree of confidence in your model. I'm only asking you to outline that model (or whatever inquiries you ran, if you did run them).

Small protests are the only way to get to big protests, and I don’t think there’s a significant risk of backfire or cringe reaction making trying worse than not trying. It’s the backfire supposition that is baseless.

Elizabeth
Can you share data on the size of PauseAI protests over time?
Thane Ruthenis
There is a potential instinctive association of protests with violence, riots, vandalism, obstruction of public property, crackpots/conspiracy theorists, et cetera. I don't think it's baseless to worry whether this association is strong enough, in a median person's mind, for any protest towards an unknown cause to be instinctively associated with said negative things, with this first impression then lingering. Anti-technology protests, in particular, might have an association with Unabomber-style terrorism, and certainly the AGI labs will be eager to reinforce this association.

Protests therefore make your cause/movement uniquely vulnerable to this type of attack (via corresponding biased newspaper coverage). The marginal increase in visibility does not necessarily offset it. It doesn't seem obvious to me whether the net effects are positive or negative. Do you have theoretical or empirical support for the effects being positive?

I don't think so, and I'm not even saying "small protests are bad." I'm saying small protests might be bad without the appropriate groundwork.
Ben Pace

The point that "small protests are the only way to get big protests" may be directionally accurate, but I want to note that there have been large protests that happened without that. Here's a shoggoth listing a bunch, including the 1989 Tiananmen Square Protests, the 2019 Hong Kong Anti-Extradition Protests, the 2020 George Floyd Protests, and more. 

The shoggoth says spontaneous large protests tend to be in response to triggering events and do rely on pre-existing movements that are ready to mobilize, the latter of which your work is helping build.

Appreciate your conclusion tho— that reaching the public is our best shot. Fortunately, different approaches are generally multiplicative and complementary. 

People usually say this when they personally don’t want to be associated with small protests. 

  • As-is, this is mostly going to make people's first exposure to AI X-risk be "those crazy fringe protestors". See my initial summary regarding effective persuasion: that would be lethal, gravely sabotaging our subsequent persuasion efforts.

Pretty strong conclusion with no evidence.

Thane Ruthenis
By all means, my intuitive model might be wrong. Do you have evidence that small protests in the reference class of protests PauseAI are doing tend to have positive effects on the causes being championed?

Yeah, this is the first time I’ve commented on lesswrong in months and I would prefer to just be out of here. But OP was such nasty meangirl bullying that, when someone showed it to me, I wanted to push back.

Come on, William. "But they said their criticism of this person's reputation wasn't personal" is not good enough. It's like calling "no take-backs" or something.

WilliamKiely
Thanks for the feedback, Holly. I really don't want to accuse the OP of making a personal attack if OP's intent was to not do that, and the reality is that I'm uncertain and can see a clear possibility that OP has no ill will toward Kat personally, so I'm not going to take the risk by making the accusation. Maybe my being on the autism spectrum is making me oblivious or something, in which case sorry I'm not able to see things as you see them, but this is how I'm viewing the situation.

I have a history in animal activism (both EA and mainstream) and I think PETA has been massively positive by pushing the Overton window. People think PETA isn't working bc they feel angry at PETA when they feel judged or accused, but they update on how it's okay to treat animals, and that's the point. More moderate groups like the Humane Society get the credit, but it takes an ecosystem. You don't have to be popular and well-liked to push the Overton window. You also don't have to be a group that people want to identify with. 

But I don't think PETA's ...

RussellThor
I have argued with this before - I have absolutely been through an open-minded process to discover the right approach, and I genuinely believe the likes of MIRI and the Pause AI movements are mistaken and harmful now, and increase P(doom). This is not gatekeeping or trying to look cool!  You need to accept that there are people who have followed the field for >10 years, have heard all the arguments, used to believe Yud et al. were mostly correct, and now agree with the positions of Pope/Belrose/Turntrout more. Do not belittle or insult them by assigning the wrong motives to them. If you want a crude overview of my position:

* Superintelligence is extremely dangerous even though at least some of the MIRI worldview is likely wrong.
* P(doom) is a feeling, it is too uncertain to be rational about; however, mine is about 20% if humanity develops TAI in the next <50 years. (This is probably more because of my personal psychology than a fact about the world, and I am not trying to strongly pretend otherwise.)
* P(doom) if superintelligence was impossible is also about 20% for me, because the current tech (LLMs etc.) can clearly enable "1984" or worse type societies for which there is no comeback and extinction is preferable. Our current society/tech/world politics is not proven to be stable.
* Because of this, it is not at all clear what the best path forward is, and people should have more humility about their proposed solutions. There is no obvious safe path forward given our current situation. (Yes, if things had gone differently 20-50 years ago there perhaps could be...)
WilliamKiely
Hey Holly, great points about PETA. I left one comment replying to a critical comment this post got saying that it wasn't being charitable (which turned into a series of replies) and now I find myself in a position (a habit?) of defending the OP from potentially-insufficiently-charitable criticisms.

Hence, when I read your sentence... ...my thought is: Are you sure? When I read the post I remember reading: This series of questions seems to me like it's wondering whether Kat's strategy is effective at AI safety, which is the thing you're saying it's not doing. (I just scrolled up on my phone and saw that OP actually quoted this herself in the comment you're replying to. (Oops. I had forgotten this as I had read that comment yesterday.))

Sure, the OP is also clearly venting about her personal distaste for Kat's posts, but it seems to me that she is also asking the question that you say she isn't interested in: are Kat's posts actually effective?

(Side note: I kind of regret leaving any comments on this post at all. It doesn't seem like the post did a good job encouraging a fruitful discussion. Maybe OP and anyone else who wants to discuss the topic should start fresh somewhere else with a different context. Just to put an idea out there: Maybe it'd be a more productive use of everyone's energy for e.g. OP, Kat, and you Holly to get on a call together and discuss what sort of content is best to create and promote to help the cause of AI safety, and then (if someone was interested in doing so) write up a summary of your key takeaways to share.)

Yeah actually the employees of Lightcone have led the charge in trying to tear down Kat. It's you who has the better standards, Maxwell, not this site.

Getting a strong current of “being smart and having interesting and current tastes is more important than trying to combat AI Danger, and I want all my online spaces to reflect this” from this. You even seem upset that Kat is contaminating subreddits that used to not be about Safety with Safety content… Like you’re mad about progress in the embrace of AI Safety. You critique her for making millennial memes as if millennials don’t exist anymore (lesswrong is millennial and older) and content should only be for you.

You seem kinda self-aware of this at one point...

just_browsing
Thanks for the thoughtful reply. The post is both venting for the fun of it (which, clearly, landed with absolutely nobody here) and earnestly questioning whether the content is net positive (which, clearly, very few interpreted as being earnest):

There is precedent for brands and/or causes making bad memes and suffering backlash. I mention PETA in the post. Another example is this Pepsi commercial. There is also specifically precedent for memes getting backlash because they are dated, e.g. this Wendy's commercial. You might say that for brands all press is good press, but this seems less true to me when it comes to causes.

I don't know a lot about PETA and whether their animal activism is considered net positive. On the one hand a cursory google seems to say they caused some vegetarian options at fast food restaurants to exist. On the other hand it wouldn't be surprising if they shifted public sentiment negatively towards vegetarianism or veganism. That's what most people think of when they think of PETA. Anyway, you could imagine something similar happening with AI safety, where sufficiently bad memes cause people to not take it seriously.
habryka

I have! Multiple times at different stages of the bill (the first time like a month ago to Scott Wiener), as well as sent an email and asked like 3-4 other people to call.

The bill is in danger of not passing Appropriations because of lobbying and misinformation. That's what calling helps address. Calling does not make SB 1047 cheaper, and therefore does not address the Suspense File aspects of what it's doing in Appropriations. 

Why is "dishonesty" your choice of words here? Our mistake cut against our goal of getting people to call at an impactful time. It wasn't manipulative. It was merely mistaken. I understand holding sloppiness against us but not "dishonesty". 

I think the lack of charity is probably related to "activism dumb".

[anonymous]
I feel some sort of "ugh, I don't want to be the Language Police" vibe, but here's my two cents:

* I think I would've called this "misleading" or "inaccurate", but I think "dishonest" should be reserved for stronger violations.
  * I also like Ben's "conveniently misleading", or maybe even something like "inaccurate in a way that serves the interests of the OP."
* I think we should probably reserve terms like "dishonest" for more egregious forms of lying/manipulation.
  * Outside of LW, I think "dishonest" often has a conscious/intentional/deliberate/premeditated connotation. In many circles, dishonesty is a "charged" term that implies a higher degree of wrongness than we usually associate with things like imprecision, carelessness, or self-deception.
* Separately, I do think it's important for those involved in advocacy to hold themselves to high standards of precision/accuracy and be "extra careful" to avoid systematically deceiving oneself or others. But I also think there are ways that the community could levy critiques in kinder and more productive ways.
* I think we would like to avoid worlds where advocacy people walk away with some sense of "ugh, LW people are mean and rude and call me dishonest and manipulative whenever I make minor mistakes" while still preserving the thoughtful/conscientious/precise/truth-seeking norms.
habryka

It seemed like a pretty predictable direction in which to make errors. I don't think we have great language about this kind of stuff, but I think it makes sense to call mistakes which very systematically fall along certain political lines "dishonest". 

Again, I think the language that people have here is a bit messy and confusing, but given people's potential for self-deception, and selective error-correction, I think it's important to have language for that kind of stuff, and most of what people usually call deception falls under this kind of selective error-correction and related biases.

What kind of securities fraud could he have committed? 

I'm just a guy but the impression I get from occasionally reading the Money Stuff newsletter is that basically anything bad you do at a public company is securities fraud, because if you do a bad thing and don't tell investors, then people who buy the securities you offer are doing so without full information because of you.

No, sacrificing truth is fundamentally an act of self-deception. It is making yourself a man who believes a falsehood, or has a disregard for the truth. It is Gandhi taking the murder-pill. That is what I consider irreversible.

This is what I was talking about, or the general thing I had in mind, and I think it is reversible. Not a good idea, but I think people who have ever self-deceived or wanted to believe something convenient have come back around to wanting to know the truth. I also think people can be truthseeking in some domains while self-deceivi...

I get the sense that "but Google and textbooks exist" is more of a deontological argument, like if the information is public at all "the cat's out of the bag" and it's unfair to penalize LLMs bc they didn't cross any new lines, just increased accessibility.

Does that really seem true to you? Do you have no memories of sacrificing truth for something else you wanted when you were a child, say? I'm not saying it's just fine to sacrifice truth but it seems false to me to say that people never return to seeking the truth after deceiving themselves, much less after trying on different communication styles or norms. If that were true I feel like no one could ever be rational at all. 

Shankar Sivarajan
I think that's a misunderstanding of what I mean by "sacrificing truth." Of course I have lied: I told my mom I didn't steal from the cookie jar. I have clicked checkboxes saying "I am over 18" when I wasn't. I enjoy a game of Mafia as much as the next guy. Contra Kant, I wholeheartedly endorse lying to your enemies to protect your friends.

No, sacrificing truth is fundamentally an act of self-deception. It is making yourself a man who believes a falsehood, or has a disregard for the truth. It is Gandhi taking the murder-pill. That is what I consider irreversible. It's not so easy that I worry I might do it to myself by accident, so I'm not paranoid about it or anything. (One way to go about doing this would be to manipulate your language, redefining words as convenient: "The sky is 'green.' My definition of the word 'green' includes that color. It has always included that color. Quid est veritas?" Doing such things for a while until it becomes habitual should do it.)

In this sense, no, I don't think I have ever done this. By the time I conceived of the possibility, I was old enough to resolve never to do it.

Of course, the obvious counter is that if you had scifi/magic brain surgery tech, you could erase and rewrite your mind and memories as you wished, and set it to a state where you still sincerely valued truth, so it's not technically irreversible. My response to that is that a man willing to rewrite his own brain to deceive himself is certainly not one who values truth, and the resultant amnesiac is essentially a different person. But okay, fair enough, if this tech existed, I would reconsider my position on the irreversibility of sacrificing truth via self-deception.

That’s why I said “financially cheap”. They are expensive for the organizer in terms of convincing people to volunteer, and expensive for all attendees in terms of their time and talents, and getting people to put in sweat equity is what makes it an effective demonstration. But per dollar invested they are very effective.

I would venture that the only person who was seriously prevented from doing something else by being involved in this protest was me. Of course there is some time and labor cost for everyone involved. I hope it was complementary to whatever else they do, and, as Ben said, perhaps even allowing them to flex different muscles in an enriching way.

Raemon
Fwiw, since we decided to delay a couple days in curating, a thing I think would be cool for this one is to have either a "highlights" section at the beginning, or maybe a somewhat gearsier "takeaways" at the end.  Maybe this is more useful for someone else to do since it may be harder for you guys to know what felt particularly valuable for other people.

It’s hard to say what the true impact of the events will be at this time, but they went well! I’m going to write a post-mortem for the SF PauseAI protest yesterday and the Meta protest in September and post it on EAF/LW; it will cover the short-term outcomes.

Considering they are financially cheap to do (each around $2000 if you don’t count my salary), I’d call them pretty successful already. The Meta protest got good media coverage, and it remains to be seen how this one will be covered, since most of the coverage happened in the two following weeks last time.

habryka
I mean, to be clear, most of the cost is borne by the protesters, so I don't think this argument goes through. I would value the time of many people attending that protest at $100/hr+, which moves the more realistic cost into the $10k+ range.
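(A rough back-of-the-envelope under assumed numbers, just to make the implicit step explicit: taking ~30 attendees from the 25-40 attendance figure mentioned above, an assumed ~3 hours of time each, and the ~$2,000 direct cost cited elsewhere in the thread,

$$30 \times 3\,\text{hr} \times \$100/\text{hr} \approx \$9{,}000, \qquad \$9{,}000 + \$2{,}000 \approx \$11{,}000,$$

so a $100/hr valuation of attendees' time is what pushes the all-in cost into the $10k+ range; the exact total depends on the attendance and duration assumptions.)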

You could share the events with your friends and family who may be near, and signal boost media coverage of the events after! If you want to donate to keep me organizing events, I have a GoFundMe (and if anyone wants to give a larger amount, I'm happy to talk about how to do that :D). If you want to organize future events yourself, please DM me. Even putting the pause emoji ⏸️ in your twitter name helps :)

Here are the participating cities and links:
October 21st (Saturday), in multiple countries

  • US, California, San Francisco (facebook)
  • US, Massachusetts, Boston
...

Personally, I'm interested in targeting hardware development and that will be among my future advocacy directions. I think it'll be a great issue for corporate campaigns pushing voluntary agreements and for pushing for external regulations simultaneously. This protest is aimed more at governments (attending the UK Summit) and their overall plans for regulating AI, so we're pushing compute governance as a way to most immediately address the creation of frontier models. Imo hardware tracking at the very least is going to have to be part of enforcing such...

If you found yourself interested in advocacy, the largest AI Safety protest ever is happening Saturday, October 21st! 

https://www.lesswrong.com/posts/abBtKF857Ejsgg9ab/tomorrow-the-largest-ai-safety-protest-ever 

Check out the LessWrong event here: https://www.lesswrong.com/events/ZoTkRYdqGuDCnojMW/global-pause-ai-protest-10-21

I think you’re correct that the paradigm has changed, Matthew, and that the problems that stood out to MIRI before as possibilities no longer quite fit the situation.

I still think the broader concern MIRI exhibited is correct: namely, that an AI could appear to be aligned but not actually be aligned, and that this may not come to light until it is behaving outside of the context of training/in which the command was written. Because of the greater capabilities of an AI, the problem may have to do with differences in superficially similar goals that wou...

Whether MIRI was confused about the main issues of alignment in the past, and whether LLMs should have been a point of update for them is one of the points of contention here.

(I think the answer is no, see all the comments about this above)

gallabytes
ML models in the current paradigm do not seem to behave coherently OOD, but I'd bet that for nearly any metric of "overall capability" and alignment, the capability metric decays faster than the alignment metric as we go further OOD. See https://arxiv.org/abs/2310.00873 for an example of the kinds of things you'd expect to see when taking a neural network OOD. It's not that the model does some insane path-dependent thing; it collapses to entropy. You end up seeing a max-entropy distribution over outputs, not goals. This is a good example of the kind of thing that's obvious to people who've done real work with ML but very counter to classic LessWrong intuitions, and isn't learnable by implementing mingpt.

Change log: I removed the point about Meta inaccurately calling itself "open source" because it was confusing. 

Particularly in the rationalist community it seems like protesting is seen as a very outgroup thing to do. But why should that be? Good on you for expanding your comfort zone-- hope to see you there :)

^ all good points, but I think the biggest thing here is the policy of sharing weights continuing into the future with more powerful models. 

Yeah, I’ve been weighing a lot whether big tent approaches are something I can pull off at this stage or whether I should stick to “Pause AI”. The Meta protest is kind of an experiment in that regard and it has already been harder than I expected to get the message about irreversible proliferation across well. Pause is sort of automatically a big tent because it would address all AI harms. People can be very aligned on Pause as a policy without having the same motivations. Not releasing model weights is more of a one-off issue and requires a lot of inferential distance crossing even with knowledgeable people. So I’ll probably keep the next several events focused on Pause, a message much better suited to advocacy.

Yeah, I’m afraid of this happening with AI even as the danger becomes clearer. It’s one reason we’re in a really important window for setting policy.

Reducing the harm of irreversible proliferation potentially addresses almost all AI harms, but my motivating concern is x-risk.

This strikes me as the kind of political thinking I think you’re trying to avoid. Contempt is not good for thought. Advocacy is not the only way to be tempted to lower your epistemic standards. I think you’re doing it right now when you other me or this type of intervention.

Quinn
This seems kinda fair, I'd like to clarify: I largely trust the first few dozen people, I just expect, depending on how growth/acquisition is done, that if there are more than a couple instances of protests you'll have to deal with all the values diversity underlying the different reasons for joining in. This subject seems unusually fraught in its potential to generate conflationary alliance (https://www.lesswrong.com/s/6YHHWqmQ7x6vf4s5C) sorta things.

Overall I didn't mean to other you; in fact, I never said this publicly, but a couple months ago there was a related post of yours that got me saying "yeah we're lucky holly is on this / she seems better suited than most would be to navigate this" cuz I've been consuming your essays for years. I also did not mean to insinuate that you hadn't thought it through; I meant to signal "here's a random guy who cares about this consideration" just as an outside vote of "hope this doesn't get triage'd out". I basically assumed you had threatmodeled interactions with different strains of populism.

I agree with your assessment of the situation a lot, but I disagree that there is all that much controversy about this issue in the broader public. There is a lot of controversy on lesswrong, and in tech, but the public as a whole is in favor of slowing down and regulating AI developments. (Although other AI companies think sharing weights is really irresponsible, and there are anti-competitive issues with llama 2’s ToS, which is why it isn’t actually open source.) https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launche...

I actually did not realize they released the base model. There's research showing how easy it is to remove the safety fine-tuning, which is where I got the framing and probably Zvi too, but perhaps that was more of a proof of concept than the main concern in this case. 

The concept of being able to remove fine-tuning is pretty important for safety, but I will change my wording where possible to also mention it being bad to release the base model without any safety fine-tuning. Just asked to download llama 2 so I'll see what options they give.

Vladimir_Nesov
Here's my comment with references where I attempted to correct Zvi's framing. He probably didn't notice it, since he used the framing again a couple of weeks later.

Yeah, it felt like Eliezer was rounding off all of the bad faith in the post to this one stylistic/etiquette breach, but he didn't properly formulate the one rule that was supposedly violated. 

Sorry, what harmful thing would this proposal do? Require people to have licenses to fine-tune llama 2? Why is that so crazy?

Nora didn't say that this proposal is harmful. Nora said that if Zach's explanation for the disconnect between their rhetoric and their stated policy goals is correct (namely that they don't really know what they're talking about) then their existence is likely net-harmful.

That said, yes, requiring everyone who wants to finetune LLaMA 2 to get a license would be absurd and harmful. 1a3orn and gallabytes articulate some reasons why in this thread.

Another reason is that it's impossible to enforce, and passing laws or regulations and then not enforcing them is re...

Nora Belrose
For one thing this is unenforceable without, ironically, superintelligence-powered universal surveillance. And I expect any vain attempt to enforce it would do more harm than good. See this post for some reasons for thinking it'd be net-negative.

A weakness I often observe in my numerous rationalist friends is "rationalizing and making excuses to feel like doing the intellectually cool thing is the useful or moral thing". Fwiw. If you want to do the cool thing, own it, own the consequences, and own the way that changes how you can honestly see yourself.

Say more?

Unless you’re endorsing illusionism or something I don’t understand how people disagreeing about the nature of consciousness means the hard problem is actually a values issue. There’s still the issue of qualia or why it is “like” anything to have experiences when all the same actions could be accomplished without that. I don’t see how people having different ideas of what consciousness refers to or what is morally valuable about that makes the Hard Problem any less hard.

Nora Belrose
I hate the term “illusionism” for a lot of reasons. I think qualia is an incoherent concept, but I would prefer to use the term “qualia quietist” rather than illusionist. This paper by Pete Mandik summarizes what I think, more or less: https://core.ac.uk/download/pdf/199235518.pdf

I think the question of “why it’s like something rather than not” is just like the question “why is there something rather than nothing” or “why am I me and not someone else?” These questions are unanswerable on their own terms.