I read your comment as conflating 'talking about the culture war at all' and 'agreeing with / invoking Curtis Yarvin', which also conflates 'criticizing Yarvin' with 'silencing discussion of the culture war'.
This reinforces a false binary between totally mind-killed wokists and people (like Yarvin) who just literally believe that some folks deserve to suffer, because it's their genetic destiny.
This kind of tribalism is exactly what fuels the culture war, and not what successfully sidesteps, diffuses, or rectifies it. NRx, like the Cathedral, is a mind-killing apparatus, and one can cautiously mine individual ideas presented by either side, on the basis of the merits of that particular idea, while understanding that there is, in fact, very little in the way of a coherent model underlying those claims. Or, to the extent that there is such a model, it doesn't survive (much) contact with reality.
[it feels useful for me to point out that Yarvin has said things I agree with, and that I'm sympathetic to some of the mainline wokist positions, to avoid the impression that I'm merely a wokist cosplaying centrism; in fact, the critiques of wokism I find most compelling are the ones that come from the left, but it's also true that Yarvin has some views here that are more in contact with reality]
edit: I agree that people should say things they believe and be engaged with in good faith (conditional on them engaging in good faith themselves)
I think you're saying something here but I'm going to factor it a bit to be sure.
One and three I'm just going to call 'subjective' (and I think I would just agree with you if the Wikipedia article were actually representative of the contents of the book, which it is not).
Re 4: The book itself is actually largely about his experiences as a professor, being subjected to the forces of elite coordination and bureaucracy, and reads a lot like Yarvin's critiques of the Cathedral (although Fisher identifies these as representative of a pseudo-left).
Re 2: The novelty comes from the contemporaneity of the writing. Fisher is doing a very early-20th-century Marxist thing of actually talking about one's experience of the world, and relating that back to broader trends, in plain language. The world has changed enough that the work has become tragically dated, and I personally wouldn't recommend it to people who aren't already somewhat sympathetic to his views, since its strength around the time of its publication (that contemporaneity) has, predictably, become its weakness.
The work that does more of the thing testingthewaters is gesturing toward, imo, is Exiting the Vampire Castle. The views expressed in this work are directly upstream of his death: his firm (and early) rebuke of cancel culture and identity politics precipitated rejection and bullying from other leftists on twitter, deepening his depression. He later killed himself.
Important note if you actually read the essay: he's setting his aim at similar phenomena to Yarvin, but is identifying the cause differently // he is a leftist talking to other leftists, so is using terms like 'capital' in a valenced way. I think the utility of this work, for someone who is not part of the audience he is critiquing, is that it shows that the left has any answer at all to the phenomena Yarvin and Ngo are calling out; that they're not, wholesale, oblivious to these problems and, in fact, the principal divide in the contemporary left is between those who reject the Cathedral and those who seek to join it.
(obligatory "Nick Land was Mark Fisher's dissertation advisor.")
(I basically endorse Daniel and Habryka's comments, but wanted to expand the 'it's tricky' point about donation. Obviously, I don't know what they think, and they likely disagree on some of this stuff.)
There are a few direct-work projects that seem robustly good (METR, Redwood, some others) based on track record, but afaict they're not funding constrained.
Most incoming AI safety researchers are targeting working at the scaling labs, which doesn't feel especially counterfactual or robust against value drift, from my position. For this reason, I don't think prosaic AIS field-building should be a priority investment (and Open Phil is prioritizing this anyway, so marginal value per dollar is a good deal lower than it was a few years ago).
There are various governance things happening, but much of that work is pretty behind the scenes.
There are also comms efforts, but the community as a whole has only been spinning up capacity in this direction for ~a year, and hasn't really had any wild successes beyond a few well-placed op-eds (and the jury's out on whether / in which direction these moved the needle).
Comms is a devilishly difficult thing to do well, and many fledgling efforts I've encountered in this direction are not in the hands of folks whose strategic capacities I especially trust. I could talk at length about possible comms failure modes if anyone has questions.
I'm very excited about Palisade and Apollo, which are both, afaict, somewhat funding constrained in the sense that they have fewer people than they should, and the people currently working there are working for less money than they could get at another org, because they believe in the theory of change over other theories of change. I think they should be better supported than they are currently, on a raw dollars level (but this may change in the future, and I don't know how much money they need to receive in order for that to change).
I am not currently empowered to make a strong case for donating to MIRI using only publicly available information, but that should change by the end of this year, and the case to be made there may be quite strong. (I say this because you may click my profile and see I work at MIRI, and so it would seem a notable omission from my list if I didn't mention why it's omitted; reasons for donating to MIRI exist, but they're not public, and I wouldn't feel right trying to convince anyone of that, especially when I expect it to become pretty obvious later).
I don't know how much you know about AI safety and the associated ecosystem but, from my (somewhat pessimistic, non-central) perspective, many of the activities in the space are likely (or guaranteed, in some instances) to have the opposite of their stated intended impact. Many people will be happy to take your money and tell you it's doing good, but knowing that it is doing good by your own lights (as opposed to doing evil or, worse, doing nothing*) is the hard part. There is ~no consensus view here, and no single party that I would trust to make this call with my money without my personal oversight (which I would also aim to bolster through other means, in advance of making this kind of call).
*this was a joke. Don't Be Evil.
[errant thought pointing a direction, low-confidence musing, likely retreading old ground]
There’s a disagreement that crops up in conversations about changing people’s minds. Sides are roughly:
The first strategy invites framing your argument around the question “How did I come to change my mind?”, and the second invites framing your argument around the question “How might I change my audience’s mind?”. I am sometimes characterized as advocating for approach 2, and have never actually taken that to be my position.

I think there’s a third approach here, which will look to advocates of approach 1 as if it were approach 2, and look to advocates of approach 2 as if it were approach 1. That is, you should frame the strategy around the question “How might my audience come to change their mind?”, and then not even try to change it yourself.
This third strategy is about giving people handles and mechanisms that empower them to update based on evidence they will encounter in the natural course of their lives, rather than trying to do all of the work upfront. Don’t frame your own position as some competing argument in the marketplace of ideas; hand your interlocutor a tool, tell them what they might expect, and let their experience confirm your predictions.

I think this approach differs from the other two in a few major ways, from the perspective of its impact:
I think Eliezer has talked about some version of this in the past, and this is part of why people like predictions in general, but I think pasting a prediction at the end of an argument built around strategy 1 or 2 isn't actually Doing The Thing I mean here.
Friends report Logan's writing strongly has this property.
Do you think of rationality as a similar sort of 'object' or 'discipline' to philosophy? If not, what kind of object do you think of it as being?
(I am no great advocate for academic philosophy; I left that shit way behind ~a decade ago after going quite a ways down the path. I just want to better understand whether folks consider Rationality a replacement for philosophy, a replacement for some of philosophy, a subset of philosophical commitments, a series of cognitive practices, or something else entirely. I can model it, internally, as aiming to be any of these things, without other parts of my understanding changing very much, but they all have 'gaps': there are things I associate with Rationality that don't naturally fall out of the core concepts when construed as any of these types of category [I suppose this is the 'being a subculture' x-factor]).
Question for Ben:
Are you inviting us to engage with the object level argument, or are you drawing attention to the existence of this argument from a not-obviously-unreasonable-source as a phenomenon we are responsible for (and asking us to update on that basis)?
On my read, he’s not saying anything new (concerns around military application are why ‘we’ mostly didn’t start going to the government until ~2-3 years ago); the real tragedy is that he’s saying it while knowing enough to paint a reasonable-even-to-me picture of How This Thing Is Going.
I think the reason nobody will do anything useful-to-John as a result of the control critique post is that control is explicitly not aiming at the hard parts of the problem, and knows this about itself. In that way, control is an especially poorly selected target if the goal is getting people to do anything useful-to-John. I'd be interested in a similar post on the Alignment Faking paper (or model organisms more broadly), on RAT, on debate, on faithful CoT, on specific interpretability paradigms (circuits vs SAEs vs some coherentist approach vs shards vs ...), and would expect those to have higher odds of someone doing something useful-to-John. But useful-to-John isn't really the metric I think the field should be using, either....
I'm kind of picking on you here because you are least guilty of this failing relative to researchers in your reference class. You are actually saying anything at all, sometimes with detail, about how you feel about particular things. However, you wouldn't be my first-pick judge for what's useful; I'd rather live in a world where like half a dozen people in your reference class are spending non-zero time arguing about the details of the above agendas and how they interface with your broader models, so that the researchers working on those things can update based on those critiques (there may even be ways for people to apply the vector implied by y'all's collective input, and generate something new / abandon their doomed plans).
"there are plenty of cases where we can look at what people are doing and see pretty clearly that it is not progress toward the hard problem"
There are plenty of cases where John can glance at what people are doing and see pretty clearly that it is not progress toward the hard problem.
Importantly, people with the agent foundations class of anxieties (which I embrace; I think John is worried about the right things!) do not spend time engaging on a gears level with prominent prosaic paradigms and connecting the high level objection ("it ignores the hard part of the problem") with the details of the research.
"But Tsvi and John actually spend a lot of time doing this."
No, they don't! They paraphrase the core concern over and over again, often seemingly without reading the paper. I don't think reading the paper would change your minds (nor should it!), but I think there's a culture problem tied to this off-hand dismissal of prosaic work, one that disincentivizes potential agent foundations researchers (or researchers in whatever new thing shares agent foundations' core concerns) from engaging with, i.e., John.
Prosaic work is fraught, and much of it is doomed. New researchers over-index on tractability because short feedback loops are comforting ('street-lighting'). Why aren't we explaining why that is, on the terms of the research itself, rather than expecting people to be persuaded by the same high-level point getting hammered into them again and again?
I've watched this work in real-time. If you listen to someone talk about their work, or read their paper and follow up in person, they are often receptive to a conversation about worlds in which their work is ineffective, evidence that we're likely to be in such a world, and even to shifting the direction of their work in recognition of that evidence.
Instead, people with their eye on the ball are doing this tribalistic(-seeming) thing.
Yup, the deck is stacked against humanity solving the hard problems; for some reason, folks who know that are also committed to playing their hands poorly, and then blaming (only) the stacked deck!
John's recent post on control is a counter-example to the above claims and was, broadly, a big step in the right direction, but had some issues with it, as raised by Redwood in the comments, which are a natural consequence of it being ~a new thing John was doing. I look forward to more posts like that in the future, from John and others, that help new entrants to empirical work (which has a robust talent pipeline!) understand, integrate, and even pivot in response to, the hard parts of the problem.
[edit: I say 'gears level' a couple times, but mean 'more in the direction of gears-level than the critiques that have existed so far']
Ah, I think this just reads like you don't think of romantic relationships as having any value proposition beyond the sexual, other than those you listed (which are Things but not The Thing, where The Thing is some weird discursive milieu). Also the tone you used for describing the other Things is as though they are traps that convince one, incorrectly, to 'settle', rather than things that could actually plausibly outweigh sexual satisfaction.
Different people place different weight on sexual satisfaction (for a lot of different reasons, including age).
I'm mostly just trying to explain all the disagree votes. I think you'll get the most satisfying answer to your actual question by having a long chat with one of your asexual friends (as something like a control group: since the value of sex to them is always 0 anyway, whatever their reason for having romantic relationships is probably the kind of thing you're looking for here).