All of c.trout's Comments + Replies

c.trout

Right so you're worried about moral hazard generated by insurance (in the case where we have liability in place). For starters, the government arguably generates moral hazard for disasters of a certain size by default: it can't credibly commit ex ante to not bail out a critical economic sector or not provide relief to victims in the event of a major disaster: the government is always implicitly on the hook (see Moss, D. A. When All Else Fails: Government as the Ultimate Risk Manager. See the too-big-to-fail effect for an example). Charging a risk-pric... (read more)

My apologies if this is dumb, but if when a model linearly represents features a and b it automatically linearly represents a∧b and a∨b, then why wouldn't it automatically (i.e. without using up more model capacity) also linearly represent a⊕b? After all, a⊕b is equivalent to (a∨b)∧¬(a∧b), which is equivalent to …

In general, since { } is truth functionally complete, if a and b are represented linearly won't the model have a linear representation of every expression of first o... (read more)

ryan_greenblatt
I think the key thing is that a∧b and a∨b aren't separately linearly represented (or really even linearly represented in some strong sense). The model "represents" these by reading a+b against different thresholds (a low threshold for a∨b, a higher one for a∧b). So, if we try to compute (a∨b)∧¬(a∧b) linearly, then a∨b corresponds to (a+b), a∧b corresponds to (a+b), and ¬ corresponds to negation, so we get (a+b)−(a+b), which clearly doesn't do what we want! If we could apply a relu on one side, then we could get this: (a+b)−2relu((a+b)−1). And this could work (assuming a and b are booleans which are either 0 or 1). See also Fabien's comment, which shows that you naturally get xor just using random MLPs assuming some duplication.
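To make the arithmetic concrete, here is a minimal sketch (my own check, not from the thread) for boolean a, b ∈ {0, 1}: the thresholded a+b readout yields AND and OR, the purely linear "OR minus AND" combination collapses to zero, and a single relu recovers XOR:

```python
relu = lambda x: max(0, x)

for a in (0, 1):
    for b in (0, 1):
        s = a + b                       # the one linear feature the model actually has
        and_ab = int(s > 1.5)           # AND: read a+b against a high threshold
        or_ab = int(s > 0.5)            # OR: read a+b against a low threshold
        naive_xor = s - s               # "OR minus AND" done purely linearly: always 0
        relu_xor = s - 2 * relu(s - 1)  # one relu recovers XOR exactly on 0/1 inputs
        print(a, b, and_ab, or_ab, naive_xor, relu_xor)
```

Running the loop, the relu column gives XOR (0, 1, 1, 0) over the four input pairs, while the naive linear column is zero everywhere.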
c.trout

There is an equivocation going on in the post that bothers me. Mot is at first the deity of lack of technology, where "technology" is characterized with the usual examples of hardware (wheels, skyscrapers, phones) and wetware (vaccines, pesticides). Call this, for lack of a better term, "hard technology". Later however, "technology" is broadened to include what I'll call "social technologies" – LLCs, constitutions, markets etc. One could also put in here voting systems (not voting machines, but e.g. first-past-the-post vs approval), PR campaigns, myths. So... (read more)

For posterity, we discussed in person, and both (afaict) took the following to be clear predictive disagreements between the (paradigmatic) naturalist realists and anti-realists (condensed for brevity here, to the point of really being more of a mnemonic device):

Realists claim that:

  1. (No Special Semantics): Our uses of "right" and "wrong" are picking up, respectively, on what would be appropriately called the rightness and wrongness features in the world.
  2. (Non-subjectivism/non-relativism): These features are largely independent of any particular homo sapiens a
... (read more)

I appreciate the comment – kinda gives me closure! I knew my comments on rational basis review were very much a stretch, but thought the Anderson test was closer to strict scrutiny. Admittedly here I was strongly influenced by Derfner and Herbert (Voting Is Speech, 34 Yale L. & Pol’y Rev. 471 (2016)) who obviously want Anderson to be stricter than rational basis. They made it seem as though the original Anderson test was substantially tougher than (and therefore not equivalent to) rational basis, but admitted that in subsequent rulings Anderson seemed ... (read more)

c.trout

How come they disagree on all those apparently non-spooky questions about relevant patterns in the world?

tl;dr: I take meta-ethics, like psychology and economics ~200 years ago, to be asking questions we don't really have the tools or know-how to answer. And even if we did, there is just a lot of work to be done (e.g. solving meta-semantics, which no doubt involves solving language acquisition. Or e.g. doing some sort of evolutionary anthropology of moral language). And there are few to do the work, with little funding.

Long answer: I take one of philosophy... (read more)

c.trout
For posterity, we discussed in person, and both (afaict) took the following to be clear predictive disagreements between the (paradigmatic) naturalist realists and anti-realists (condensed for brevity here, to the point of really being more of a mnemonic device):

Realists claim that:

  1. (No Special Semantics): Our uses of "right" and "wrong" are picking up, respectively, on what would be appropriately called the rightness and wrongness features in the world.
  2. (Non-subjectivism/non-relativism): These features are largely independent of any particular Homo sapiens' attitudes and very stable over time.
  3. (Still Learning): We collectively haven't fully learned these features yet – the sparsity of the world does support and can guide further refinement of our collective usage of moral terms, should we collectively wish to generalize better at identifying the presence of said features. This is the claim that leads to claims of there being a "moral attractor."

Anti-realists may or may not disagree with (1) depending on how they cash out their semantics, but they almost certainly disagree with something like (2) and (3) (at least in their meta-ethical moments).
c.trout

(Re: The Tails Coming Apart As Metaphor For Life. I dunno, if most people, upon reflection, find that the extremes prescribed by all straightforward extrapolations of our moral intuitions look ugly, that sounds like convergence on... not following any extrapolation into the crazy scenarios and just avoiding putting yourself in the crazy scenarios. It might just be wrong for us to have such power over the world as to be directing us into any part of Extremistan. Maybe let's just not go to Extremistan – let's stay in Mediocristan (and rebrand it as Satisfici... (read more)

c.trout

(Sorry for delay! Was on vacation. Also, got a little too into digging up my old meta-ethics readings. Can't spend as much time on further responses...)

Although between Boyd and Blackburn, I'd point out that the question of realism falls by the wayside...

I mean fwiw, Boyd will say "goodness exists" while Blackburn is arguably committed to saying "goodness does not exist" since in his total theory of the world, nothing in the domain that his quantifiers range over corresponds to goodness – it's never taken as a value of any of his variables. But I'm pretty ... (read more)

Charlie Steiner
How come they disagree on all those apparently non-spooky questions about relevant patterns in the world? I'm curious how you reconcile these.

In science the data is always open to some degree of interpretation, but a combination of the ability to repeat experiments independent of the experimenter and the precision with which predictions can be tested tends to gradually weed out different interpretations that actually bear on real-world choices. If long-term disagreement is maintained, my usual diagnosis would be that the thing being disagreed about does not actually connect to observation in a way amenable to science. E.g. maybe even though it seems like "which patterns are important?" is a non-spooky question, actually it's very theory-laden in a way that's only tenuously connected to predictions about data (if at all), and so when comparing theories there isn't any repeatable experiment you could just stack up until you have enough data to answer the question. Alternately, maybe at least one of them is bad at science :P

In the strong sense that everyone's use of "morality" converges to precisely the same referent under some distribution of "normal dynamics" like interacting with the world and doing self-reflection? That sort of miracle doesn't occur for the same reason coffee and cream don't spontaneously un-mix. But that doesn't happen even for "tiger" - it's not necessary that everyone means precisely the same thing when they talk about tigers, as long as the amount of interpersonal noise doesn't overwhelm the natural sparsity of the world that allows us to have single-word handles for general categories of things. You could still call this an attractor, it's just not a pointlike attractor - there's space for different people to use "tiger" in different ways that are stable under normal dynamics.

If that's how it is for "morality" too ("if morality is as real as tigers" being a cheeky framing), then if we could somehow map where everyone is in concept sp
c.trout

Good models of moral language should be able to reproduce the semantics that normal people use every day.

Agreed. So much the worse for classic emotivism and error theory.

But semantics seems secondary to you (along with many meta-ethicists frankly – semantic ascent is often just used as a technique for avoiding talking past one another, allowing e.g. anti-realist views to be voiced without begging the question. I think many are happy to grab whatever machinery from symbolic logic they need to make the semantics fit the metaphysical/epistemological views they h... (read more)

Charlie Steiner
Can confirm. Although between Boyd and Blackburn, I'd point out that the question of realism falls by the wayside (they both seem to agree we're modeling the world and then pointing at some pattern we've noticed in the world, whether you call that realism or not is murky), and the actionable points of disagreement are things like "how much should we be willing to let complicated intuitions be overruled by simple intuitions?" If two people agree about how humans form concepts, and one says that certain abstract objects we've formed concepts for are "real," and another says they're "not real," they aren't necessarily disagreeing about anything substantive. Sometimes people disagree about concept formation, or (gasp) don't even give it any role in their story of morality. There's plenty of room for incoherence there. But along your Boyd-Blackburn axis, arguments about what to label "real" are more about where to put emphasis, and often smuggle in social/emotive arguments about how we should act or feel in certain situations.
c.trout

So the rules of chess are basically just a pattern out in the world that I can go look at. When I say I'm uncertain about the rules of chess, this is epistemic uncertainty that I manage the same as if I'm uncertain about anything else out there in the world.

The "rules of Morality" are not like this.

This and earlier comments are bald rejections of moral realism (including, maybe especially, naturalist realism). Can I get some evidence for this confident rejection?

I'm not sure what linking Yudkowsky's (sketch of a) semantics for moral terms is meant to tell ... (read more)

Charlie Steiner
Sure. Here are some bullet points of evidence:

* To all appearances, we're an evolved species on an otherwise fairly unremarkable planet in a universe that doesn't have any special rules for us.
* The causal history of us talking about morality as a species runs through evolution and culture.
* We learn to build models of the world, and can use language to communicate about parts of these models. Sometimes it is relevant that the map is not the territory, and the elements of our discourse are things on maps.

In terms of semantics of moral language, I think the people who have to argue about whether they're realists or anti-realists are doing a fine job. Having fancy semantics that differentiate you from everyone else was a mistake. Good models of moral language should be able to reproduce the semantics that normal people use every day. E.g. "It's true that in baseball, you're out after three strikes." is not a sentence that needs deep revision after considering that baseball is an invented, contingent game.

In terms of epistemology of morality, the average philosopher has completely dropped the ball. But since, on average, they think that as well, surely I'm only deferring to those who have thought longer on this when I say that.
c.trout

Harder, yes; extremely, I'm much less convinced. In any case, Chevron was already dealt a blow in 2022, so those lobbying Congress to create an AI agency of some sort should push for it to be explicitly given a broad mandate (e.g. the authority to settle various major economic or political questions concerning AI).

c.trout

Thanks for reading!

conflict theory with a degrowth/social justice perspective

Yea, I find myself interested in the topics LWers are interested in, but I'm disappointed certain perspectives are missing (despite them being prima facie as well-researched as the perspectives typical on LW). I suspect a bubble effect.

this is unfortunately where my willing suspension of disbelief collapsed

Yup, I suspected that last version would be the hardest to believe for LWers! I plan on writing much more in depth on the topic soon. You might be interested in Guive Assadi's r... (read more)

c.trout

The mechanism of the compound interest yields utility.

Depends on what you mean by "utility." If "happiness," the evidence is very much unclear: though Life Satisfaction (LS) is correlated with income/GDP when we make cross-sectional measurements, LS is not correlated with income/GDP when we make time-series measurements. This is the Easterlin Paradox. Good overview of a recent paper on it, presented by its author. Full paper here. Good discussion of the paper on the EA forum here (responses from the author as well as Michael Plant in the comments).
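To make the cross-sectional vs. time-series distinction concrete, here is a toy simulation (my own illustration with made-up numbers, not the Easterlin data): richer countries report higher life satisfaction at any given moment, yet within each country life satisfaction stays flat as income grows, so both findings can hold at once:

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 50, 30

base_income = rng.uniform(1, 10, n_countries)               # poorer to richer countries
ls_level = 4 + 0.3 * base_income                             # richer country -> higher LS (cross-sectional link)
income = base_income[:, None] * 1.02 ** np.arange(n_years)   # every country grows ~2% per year
ls = ls_level[:, None] + rng.normal(0, 0.1, (n_countries, n_years))  # LS stays flat over time

# Cross-sectional correlation in a single year: strongly positive
print(np.corrcoef(income[:, 0], ls[:, 0])[0, 1])

# Time-series correlation within one country: ~0 despite steady growth
print(np.corrcoef(income[0], ls[0])[0, 1])
```

This is only a sketch of how the two correlations can diverge, not a claim about what drives the actual paradox.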

c.trout

While I completely agree that care should be taken if we try to slow down AI capabilities, I think you might be overreacting in this particular case. In short: I think you're making strawmen of the people you are calling "neo-luddites" (more on that term below). I'm going to heavily cite a video that made the rounds and so I think decently reflects the views of many in the visual artist community. (FWIW, I don't agree with everything this artist says but I do think it's representative). Some details you seem to have missed:

  1. I haven't heard of visual artists
... (read more)
c.trout

That sounds about right. The person in the second case is less morally ugly than the first. This is spot on:

the important part is the internalized motivation vs reasoning out what to do from ethical principles.

 

What do you mean by this though?:

(although I notice my intuition has a hard time believing the premise in the 2nd case)

You find it hard to believe someone could internalize the trait of compassion through "loving kindness meditation"? (This last I assume is a placeholder term for whatever works for making oneself more virtuous). Also, any reaso... (read more)

npostavs
Yes, the other examples seemed to be about caring about people you are close to more than strangers, but I wanted to focus on the ethical reasoning vs internal motivation part. Thanks, that's helpful.

Sorry, my first reply to your comment wasn't very on point. Yes, you're getting at one of the central claims of my post.

what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling

First, I wouldn't say "mostly." I think in excessive amounts it interferes. Regarding your skepticism: we already know that calculation (a maximizer's mindset) in other contexts interferes with affective attachment and positive evaluations towards the choices made by said calculation (see references to the psych lit). Why shouldn't we expect th... (read more)

DirectedEvolution
We've all sat around with thoughts whirling around in our heads, perseverating about ethics. Sometimes, a little ethical thinking helps us make a big decision. Other times, it's not much different from having an annoying song stuck in your head. When we're itchy, have the sun in our eyes, or, yes, can't stop thinking about ethics, that discomfort shows in our face, in our bearing, and in our voice, and it makes it harder to connect with other people. You and I both see that, just like a great song can still be incredibly annoying when it's stuck in your head, a great ethical system can likewise give us a terrible headache when we can't stop perseverating about it. So, for a person who streams a lot of consequentialism on their moral Spotify, it seems like you're telling them that if they'd just start listening to some nice virtue ethics instead and give up that nasty noise, they'd find themselves in a much more pleasant state of mind after a while. Personally, as a person who's been conversant with all three ethical systems, interfaces with many moral communities, and has fielded a lot of complex ethical conversations with a lot of people, I don't really see any more basis for thinking consequentialism is unusually bad as a "moral earworm," any more than (as a musician) I think that any particular genre of music is more prone to distressing earworms. To me, perseveration/earworms feels more like a disorder of the audio loop, in which it latches on to thoughts, words, or sounds and cycles from one to another in a way you just can't control. It doesn't feel particularly governed by the content of those thoughts. Even if it is enhanced by specific types of mental content, it seems like it would require psychological methodologies that do not actually exist in order to reliably detect an effect of that kind. We'd have to see the thoughts in people's heads, find out how often they perseverate, and try and detect a causal association. I think it's unlikely that convinc

In my view, the neural-net type of processing has different strength and weaknesses from the explicit reasoning, and they are often complementary.

Agreed. As I say in the post:

Of course cold calculated reasoning has its place, and many situations call for it. But there are many more in which being calculating is wrong.

I also mention that faking it til you make it (which relies on explicit S2 type processing) is also justified sometimes, but something one ideally dispenses with.

"moral perception" or "virtues" ...is not magic, bit also just a computation runn

... (read more)

... consequentialism judges the act of visiting a friend in hospital to be (almost certainly) good since the outcome is (almost certainly) better than not doing it. That's it. No other considerations need apply. [...] whether there exist other possible acts that were also good are irrelevant.

I don't know of any consequentialist theory that looks like that. What is the general consequentialist principle you are deploying here? Your reasoning seems very one-off. Which is fine! That's exactly what I'm advocating for! But I think we're talking past each other ... (read more)

c.trout

If our motivation is just to make our friend feel better is that okay?

Absolutely. Generally being mindful of the consequences of one's actions is not the issue: ethicists of every stripe regularly reference consequences when judging an action. Consequentialism differentiates itself by taking the evaluation of consequences to be explanatorily fundamental – that which forms the underlying principle for its unifying account of all/a broad range of normative judgments. The point that Stocker is trying to make there is (roughly) that being motivated purely by... (read more)

npostavs
Okay, I think my main confusion is that all the examples have both the motivation-by-ethical-reasoning and lack-of-personal-caring/empathy on the moral disharmony/ugliness side. I'll try to modify the examples a bit to tease them apart:

* Visiting a stranger in the hospital in order to increase the sum of global utility is morally ugly
* Visiting a stranger in the hospital because you've successfully internalized compassion toward them via loving kindness meditation (or something like that) is morally good(?)

That is, the important part is the internalized motivation vs reasoning out what to do from ethical principles. (although I notice my intuition has a hard time believing the premise in the 2nd case)
c.trout

Here is my prediction:

I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or the object at the center of their decision) in such scenarios.

More specifically I predict that, above a certain threshold of engagement with the community, increased eng

... (read more)
c.trout

It’s better when we have our heart in it, and my point is that moral reasoning can help us do that.

My bad, I should have been clearer. I meant to say "isn't it better when we have our heart in it, and we can dispense with the reasoning or the rule consulting?"

I should note, you would be in good company if you answered "no." Kant believed that an action has no moral worth if it was not motivated by duty, a motivation that results from correctly reasoning about one's moral imperatives. He really did seem to think we should be reasoning about our duties all the time. I think he was mistaken.

Regarding moral deference:
I agree that moral deference as it currently stands is highly unreliable. But even if it were reliable, I actually don't think a world in which agents did a lot of moral deference would be ideal. The virtuous agent doesn't tell their friend "I deferred to the moral experts and they told me I should come see you."

I do emphasize the importance of having good moral authorities/exemplars help shape your character, especially when we're young and impressionable. That's not something we have much control over – when we're older, we can som... (read more)

DirectedEvolution
It sounds like what you really care about is promoting the experience of empathy and fellow-feeling. You don’t particularly care about moral calculation or deference, except insofar as they interfere with, or make room for, this psychological state. I understand the idea that moral deference can make room for positive affect, and what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling. It’s a hypothesis one could test, but it needs data.

Regarding feelings about disease far away:
I'm glad you have become concerned about these topics! I'm not sure virtue ethicists couldn't also motivate those concerns though. Random side-note: I absolutely think consequentialism is the way to go when judging public/corporate/non-profit policy. It makes no sense to judge the policy of those entities the same way we judge the actions of individual humans. The world would be a much better place if state departments, when determining where to send foreign aid, used consequentialist reasoning.

Regarding feelings t... (read more)

DirectedEvolution
It’s better when we have our heart in it, and my point is that moral reasoning can help us do that. From my point of view, almost all the moral gains that really matter come from action on the level of global initiatives and careers directed at steering outcomes on that level. There, as you say, consequentialism is the way to go. For the everyday human acts that make up our day to day lives, I don’t particularly care which moral system people use - whatever keeps us relating well with others and happy seems fine to me. I’d be fine with all three ethical systems advertising themselves and competing in the marketplace of ideas, as long as we can still come to a consensus that we should fund bed nets and find a way not to unleash a technological apocalypse on ourselves.

I agree that, among ethicists, being of one school or another probably isn't predictive of engaging more or less in "one thought too many." Ethicists are generally not moral paragons in that department. Overthinking ethical stuff is kind of their job though – maybe be thankful you don't have to do it?

That said, I do find that (at least in writing) virtue ethicists do a better job of highlighting this as something to avoid: they are better moral guides in this respect. I also think that they tend to muster a more coherent theoretical response to the problem of self-effacement: they more or less embrace it, while consequentialists try to dance around it.

DirectedEvolution
It sounds like you're arguing not so much for everybody doing less moral calculation, and more for delegating our moral calculus to experts. I think we meet even stronger limitations to moral deference than we do for epistemic deference: experts disagree, people pose as experts when they aren't, people ignore expertise where it exists, laypeople pick arguments with each other even when they'd both do better to defer, experts engage in interior moral disharmony, etc. When you can do it, I agree that deference is an attractive choice, as I feel I am able to do in the case of several EA institutions.

I strongly dislike characterizations of consequentialism as "dancing around" various abstract things. It is a strange dance floor populated with strange abstractions and I think it behooves critics to say exactly what they mean, so that consequentialists can make specific objections to those criticisms. Alternatively, we consequentialists can volley the same critiques back at the virtue ethicists: the Catholic church seems to do plenty of dancing around its own seedy history of global-scale conquest, theft, and abuse, while asking for unlimited deference to a moral hierarchy it claims is not only wise, but infallible. I don't want to be a cold-hearted calculator, but I also don't want to defer to, say, a church with a recent history of playing the ultimate pedophiliac shell game. If I have to accept a little extra dancing to vet my experts and fill in where ready expertise is lacking, I am happy for the exercise.
c.trout

Great question! Since I'm not a professional ethicist, I can't say: I don't follow this stuff closely enough. But if you want a concrete falsifiable claim from me, I proposed this to a commenter on the EA forum:

I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one'

... (read more)

I agree with both of you that the question for consequentialists is to determine when and where an act-consequentialist decision procedure (reasoning about consequences), a deontological decision procedure (reasoning about standing duties/rules), or the decision procedure of the virtuous agent (guided by both emotions and reasoning) are better outcome producers. 

But you're missing part of the overall point here: according to many philosophers (including sophisticated consequentialists) there is something wrong/ugly/harmful about relying too much on re... (read more)

Agreed. Roberts, Kavanaugh, and Barrett are generally considered center-right.

In addition, Chief Justice Roberts made it clear on multiple occasions he is concerned with public confidence in the court. This would give them a chance to prove they are non-partisan, on an issue that literally pits the people against incumbent major parties. And as I point out in my case, allowing greater freedom of expression on the ballot that translates into more representative elected officials should help with public trust in general.

c.trout

I think that's a bit reductionist. There are a number of ideologies/theories regarding how law should be interpreted and what role courts are meant to play etc. Parties certainly pick justices who have legal ideologies that favor the outcomes parties want, regarding current political issues. But I think those legal ideologies are more stable in the justices than their tendency to rule the way desired by the party which appointed them.

I am painfully aware of this. I've been doubting myself throughout, and for a while just left the idea in the drawer precisely out of fear of its naïveté.

Ultimately I did write it up and post, for three reasons: (1) to avoid getting instantly dismissed, to get my idea properly assessed by a legal expert in the first place, I needed to lay things out clearly; (2) I think it's at least possible that our voting system has largely become invisible, and that many high-powered legal experts are focused on other things (of course there are die-hard voting re... (read more)

Trying to! Any guidance would be welcome. So far I've only sent it to the First Amendment Lawyers Association because it seemed like they would be receptive to it. Should I try the ACLU? Was also thinking of the Institute for Free Speech, though they seem to lean conservative, which might make them less receptive. I wonder if there is a high-powered, libertarian-leaning firm that specializes in constitutional law... ideally we're looking for lawyers who are receptive to the case, but who also would not be looked upon by the Court as judicial activists... (read more)

Jiro
Let me rephrase: There's a long tendency of amateurs to come up with some sort of "clever idea" in a field that has been around a long time, and think they've got some sort of new insight that people in the field have somehow never managed to consider. And they're always wrong, because if an amateur can come up with some idea, so can a person in the field, and if the idea hasn't taken over the field, there's a reason why. This is true in the sciences (it happens a lot), but it's also true in fields such as law. If you have not contacted a lawyer and had the lawyer at least tell you "no, it's not obviously wrong, and no, there isn't an existing body of literature explaining why it's non-obviously wrong", chances are negligible that your idea will pan out. Finding a clever, new, legal argument for something that no lawyer has considered is about as likely as coming up with a clever, new, argument for why Einstein is wrong.
the gears to ascension
i wonder if this conversation would come up in court.

Agreed, but that doesn't make for a legal case today. The Originalism many on today's Court subscribe to does not take into consideration the intent of lawmakers (in this case the framers), but instead simply asks: what would reasonable persons living at the time of its adoption have understood the ordinary meaning of the text to be? This is original meaning theory, in contrast with original intent theory.

Very true! I should get feedback from legal experts though before I sink any more time into this.

Yes, such sentences are a thing. Kendall Walton calls them "principles of generation" because, according to his analysis, they generate fictional truths (see his Mimesis as Make-Believe). Pointing at the sand and shouting "There is lava there!" we have said something fictionally true, in virtue of the game rule pronounced earlier.  "Narrative syncing" sounds like a broader set of practices that generate and sustain such truths – I like it! (I must say "principles of generation" is a bit clunky anyway – but it's also more specific. Maybe "rule decreein... (read more)

c.trout

I don't follow the reasoning. How do you get from "most people's moral behaviour is explainable in terms of them 'playing' a status game" to "solving (some versions of) the alignment problem probably won't be enough to ensure a future that's free from astronomical waste or astronomical suffering"?

More details:
Regarding the quote from The Status Game: I have not read the book, so I'm not sure what the intended message is but this sounds like some sort of unwarranted pessimism about ppl's moral standing (something like a claim like "the vast majority of ppl ... (read more)

I'm not down or upvoting, but I will say, I hope you're not taking this exercise too seriously...

Are we really going to analyze one person's fiction (even if rationalist, it's still fiction), in an attempt to gain insight into this one person's attempt to model an entire society and its market predictions – and all of this in order to try and better judge the probability of certain futures under a number of counterfactual assumptions? Could be fun, but I wouldn't give its results much credence.

Don't forget Yudkowsky's own advice about not generalizing from... (read more)

Thomas Kwa
Yeah, I think the level of seriousness is basically the same as if someone asked Eliezer "what's a plausible world where humanity solves alignment?" to which the reply would be something like "none unless my assumptions about alignment are wrong, but here's an implausible world where alignment is solved despite my assumptions being right!" The implausible world is sketched out in way too much detail, but lots of usefulness points are lost by its being implausible. The useful kernel remaining is something like "with infinite coordination capacity we could probably solve alignment" plus a bit because Eliezer fiction is substantially better for your epistemics than other fiction. Maybe there's an argument for taking it even less seriously? That said, I've definitely updated down on the usefulness of this given the comments here.