What I believe about Sacks' views comes from regularly listening to the All-In Podcast, where he often talks about AI.
I haven't looked much into Sacks' particular stance here, but I think concerns around censorship are typically along the lines of "the state should not be involved in telling companies what their models can/can't say. This can be weaponized against certain viewpoints, especially conservative viewpoints. Some folks on the left are trying to do this under the guise of terms like misinformation, fairness, and bias."
Sacks is smarter and more sophisticated than that.
Also, things like "an AI model should not output bioweapons instructions or other things that threaten national security" count as "censorship" only under a very expansive definition of censorship, but IME this is not what people mean when they say they are worried about censorship.
In the real world, efforts of the Department of Homeland Security that started with censoring for reasons of national security ended up expanding the scope of what they censor. In the end the lab leak theory got censored, and if you asked the Department of Homeland Security for their justification, there's a good chance they would say "national security".
If you take Eliezer's early writing, the idea is that AI should be aligned with Coherent Extrapolated Volition. That's a different goal from aligning AI with the views of credentialed experts or the leadership of AI companies.
"How do you regulate AI companies so that they aren't enforcing Californian values on the rest of the United States and the world?" is an alignment question. If you have a good answer to that question, it becomes easier to convince someone who worries that those companies, having already enforced Californian values via the censorship industrial complex, will do the same thing with AI, that regulating AI companies is worthwhile.
If you ignore the alignment questions that people like David Sacks care about, it's hard to convince them that you are sincere about the other alignment questions.
With David Sacks as the AI/Crypto czar, we likely won't be getting any US regulation on AI in the coming years.
It seems to me that David Sacks' perspective on the issue is that AI regulation is just another aspect of the censorship industrial complex.
To convince him of AI regulation, you would likely need an idea of how to do AI regulation without furthering the censorship industrial complex. The current lack of criticism of the censorship industrial complex in the AI safety discourse is a big problem, because it means no policy proposals that address this concern are available.
From my conversations with Vassar, I think there's a sense of "there's a lot that's possible to do in the world if you just ignore social conventions" that's downstream of accepting what Vassar says. A person who previously didn't take any psychedelics because of social conventions might become more open to taking psychedelics and thinking about whether it makes sense to take them.
Michael Vassar has lots of different ideas and is someone who's willing to share his ideas in a relatively unfiltered way. Some of them are ideas for experiments that could be done.
Without knowing the concrete facts of what happened (I only talked to Michael when he was in Berlin):
Let's say Michael suggests that doing a certain "psychological technique" might be a valuable experiment. Alice does the experiment and it has an outcome. Michael thinks the outcome is bad. Alice, however, thinks the outcome is great and continues doing the technique.
If you conclude from that that Michael is bad because he proposed an experiment that had a bad outcome, you are judging people who experiment with the unknown for their love of experimenting with the unknown.
If you want to criticize Michael because he's too open to experimentation, do that more explicitly, because then you have to actually argue the core of the issue. Michael is a person who thinks that various Chesterton's fences are no reason to avoid experimentation.
Michael is also very open to talking to anyone, even if the person might be "bad", so you might also criticize him for speaking with Olivia in the first place instead of kicking her out of the conversations he had.
Given that Ziz was actually a student at CFAR, calling Ziz a CFARian and blaming CFAR for Ziz would make a lot more sense than blaming Michael for Olivia. Jessica suggests that Olivia was also trying to learn from Anna Salamon, so Olivia was probably at CFAR at some point and might also be called a CFARian.
How do you know that Michael Vassar or Jessica Taylor have been aggressive about asserting their point of view in the presence of people who take psychedelics?
What kind of student-teacher relationship did Vassar and Olivia have, and for how long did they have it?
Did you come to "conspiratorial interpretations" of the behavior of your family in that process?
But I have observed this all directly.
This post feels like it's written on an unnecessarily high level of abstraction. What are the actual events you observed directly? What did you see with your own eyes or hear with your own ears?
I don't have specific recommendations for what should have been done in the past. I would expect the next All-In Podcast episode in which David Sacks participates to include a section laying out his views a bit.
That's the question you would ask if you think the person who's drawing the line is aligned. If you think the people speaking about national security and using that to further different political and geopolitical ends are not aligned, it's not the most interesting question.
It sounds to me like you are treating this as an abstract policy issue while ignoring the real-world censorship industrial complex. It's like discussing union policy in 1970s and 1980s New York without taking into account that a lot of strikes happened because someone failed to pay the Mafia.
If you don't know what the censorship industrial complex is, Joe Rogan did a good interview with Mike Benz, a former official with the U.S. Department of State and current Executive Director of the Foundation For Freedom Online.