Ah, I see now! Thanks for the clarifications!
My apologies if this is dumb, but if when a model linearly represents features a and b it automatically linearly represents a∧b and a∨b, then why wouldn't it automatically (i.e. without using up more model capacity) also linearly represent a⊕b? After all, a⊕b is equivalent to (a∨b)∧¬(a∧b), which is equivalent to (a∧¬b)∨(¬a∧b).
In general, since {¬, ∧} is truth functionally complete, if a and b are represented linearly won't the model have a linear representation of every expression of first o...
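For concreteness, here's a toy sketch of the picture I have in mind (my own construction, not from the post, assuming "linearly represents" means the feature can be read off a fixed direction with a linear threshold). With activations x = a·v_a + b·v_b, a linear probe recovers a∧b and a∨b, but no linear probe can recover a⊕b:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 16
v_a, v_b = rng.normal(size=(2, d))  # hypothetical feature directions

# Activations for the four settings of (a, b): x = a*v_a + b*v_b,
# so every activation lies in the 2-plane spanned by v_a and v_b.
settings = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
X = settings @ np.stack([v_a, v_b])

for name, y in [
    ("a AND b", settings[:, 0] & settings[:, 1]),
    ("a OR b",  settings[:, 0] | settings[:, 1]),
    ("a XOR b", settings[:, 0] ^ settings[:, 1]),
]:
    probe = LogisticRegression(C=1e6, max_iter=10_000).fit(X, y)
    print(name, "linear probe accuracy:", probe.score(X, y))
# AND and OR come out perfectly separable; XOR tops out at 3/4.
```

(The asymmetry: the four activations sit at the corners of a parallelogram, and XOR labels opposite corners alike, so no hyperplane separates the classes — which is presumably why XOR is the interesting case here.)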
There is an equivocation going on in the post that bothers me. Mot is at first the deity of lack of technology, where "technology" is characterized with the usual examples of hardware (wheels, skyscrapers, phones) and wetware (vaccines, pesticides). Call this, for lack of a better term, "hard technology". Later however, "technology" is broadened to include what I'll call "social technologies" – LLCs, constitutions, markets etc. One could also put in here voting systems (not voting machines, but e.g. first-past-the-post vs approval), PR campaigns, myths. So...
For posterity, we discussed in-person, and both (afaict) took the following to be clear predictive disagreements between the (paradigmatic) naturalist realists and anti-realists (condensed for brevity here, to the point of really being more of a mnemonic device):
Realists claim that:
I appreciate the comment – kinda gives me closure! I knew my comments on rational basis review were very much a stretch, but thought the Anderson test was closer to strict scrutiny. Admittedly here I was strongly influenced by Derfner and Herbert (Voting Is Speech, 34 Yale L. & Pol’y Rev. 471 (2016)) who obviously want Anderson to be stricter than rational basis. They made it seem as though the original Anderson test was substantially tougher than (and therefore not equivalent to) rational basis, but admitted that in subsequent rulings Anderson seemed ...
How come they disagree on all those apparently non-spooky questions about relevant patterns in the world?
tl;dr: I take meta-ethics, like psychology and economics ~200 years ago, to be asking questions we don't really have the tools or know-how to answer. And even if we did, there is just a lot of work to be done (e.g. solving meta-semantics, which no doubt involves solving language acquisition. Or e.g. doing some sort of evolutionary anthropology of moral language). And there are few people to do the work, and little funding.
Long answer: I take one of philosophy...
(Re: The Tails Coming Apart As Metaphor For Life. I dunno, if most people, upon reflection, find that the extremes prescribed by all straightforward extrapolations of our moral intuitions look ugly, that sounds like convergence on... not following any extrapolation into the crazy scenarios and just avoiding putting yourself in the crazy scenarios. It might just be wrong for us to have such power over the world as to be directing us into any part of Extremistan. Maybe let's just not go to Extremistan – let's stay in Mediocristan (and rebrand it as Satisfici...
(Sorry for delay! Was on vacation. Also, got a little too into digging up my old meta-ethics readings. Can't spend as much time on further responses...)
Although between Boyd and Blackburn, I'd point out that the question of realism falls by the wayside...
I mean fwiw, Boyd will say "goodness exists" while Blackburn is arguably committed to saying "goodness does not exist" since in his total theory of the world, nothing in the domain that his quantifiers range over corresponds to goodness – it's never taken as a value of any of his variables. But I'm pretty ...
Good models of moral language should be able to reproduce the semantics that normal people use every day.
Agreed. So much the worse for classic emotivism and error theory.
But semantics seems secondary to you (along with many meta-ethicists frankly – semantic ascent is often just used as a technique for avoiding talking past one another, allowing e.g. anti-realist views to be voiced without begging the question. I think many are happy to grab whatever machinery from symbolic logic they need to make the semantics fit the metaphysical/epistemological views they h...
So the rules of chess are basically just a pattern out in the world that I can go look at. When I say I'm uncertain about the rules of chess, this is epistemic uncertainty that I manage the same as if I'm uncertain about anything else out there in the world.
The "rules of Morality" are not like this.
This and earlier comments are bald rejections of moral realism (including, maybe especially, naturalist realism). Can I get some evidence for this confident rejection?
I'm not sure what linking Yudkowsky's (sketch of a) semantics for moral terms is meant to tell ...
Harder, yes; extremely hard, I'm much less convinced. In any case, Chevron was already dealt a blow in 2022, so those lobbying Congress to create an AI agency of some sort should be encouraged to explicitly give it a broad mandate (e.g. that it has the authority to settle various major economic or political questions concerning AI).
Thanks for reading!
conflict theory with a degrowth/social justice perspective
Yea, I find myself interested in the topics LWers are interested in, but I'm disappointed certain perspectives are missing (despite them being prima facie as well-researched as the perspectives typical on LW). I suspect a bubble effect.
this is unfortunately where my willing suspension of disbelief collapsed
Yup, I suspected that last version would be the hardest to believe for LWers! I plan on writing much more in depth on the topic soon. You might be interested in Guive Assadi's r...
The mechanism of the compound interest yields utility.
Depends on what you mean by "utility." If "happiness," the evidence is very much unclear: though Life Satisfaction (LS) is correlated with income/GDP in cross-sectional measurements, LS is not correlated with income/GDP in time-series measurements. This is the Easterlin Paradox. Good overview of a recent paper on it, presented by its author. Full paper here. Good discussion of the paper on the EA forum here (responses from the author as well as Michael Plant in the comments).
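If it helps, here's a toy simulation of how both correlational facts can hold at once. The mechanism (life satisfaction tracking income *rank* – a relative-income story, which is one candidate explanation) and all the numbers are purely illustrative assumptions of mine, not results from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_years = 1000, 40

# Everyone's income grows 2%/year; income ranks never change.
base = rng.lognormal(10, 0.5, n_people)
income = base[:, None] * 1.02 ** np.arange(n_years)[None, :]

# Toy mechanism: life satisfaction tracks income rank, not level.
rank = base.argsort().argsort() / n_people
ls = rank[:, None] + rng.normal(0, 0.1, (n_people, n_years))

# Cross-section (one year): strong income-LS correlation (~0.9).
print(np.corrcoef(np.log(income[:, 0]), ls[:, 0])[0, 1])
# Time series (national means): incomes climb, mean LS is flat,
# so the correlation is just noise around zero.
print(np.corrcoef(income.mean(axis=0), ls.mean(axis=0))[0, 1])
```

Run it and the cross-sectional correlation comes out strong while the time-series correlation of the national means carries no signal, even though every single income is rising throughout.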
While I completely agree that care should be taken if we try to slow down AI capabilities, I think you might be overreacting in this particular case. In short: I think you're making strawmen of the people you are calling "neo-luddites" (more on that term below). I'm going to heavily cite a video that made the rounds and so I think decently reflects the views of many in the visual artist community. (FWIW, I don't agree with everything this artist says but I do think it's representative). Some details you seem to have missed:
That sounds about right. The person in the second case is less morally ugly than the first. This is spot on:
the important part is the internalized motivation vs reasoning out what to do from ethical principles.
What do you mean by this though?:
(although I notice my intuition has a hard time believing the premise in the 2nd case)
You find it hard to believe someone could internalize the trait of compassion through "loving kindness meditation"? (This last I assume is a placeholder term for whatever works for making oneself more virtuous). Also, any reaso...
Sorry, my first reply to your comment wasn't very on point. Yes, you're getting at one of the central claims of my post.
what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling
First, I wouldn't say "mostly." I think in excessive amounts it interferes. Regarding your skepticism: we already know that calculation (a maximizer's mindset) in other contexts interferes with affective attachment and positive evaluations towards the choices made by said calculation (see references to psych lit). Why shouldn't we expect th...
In my view, the neural-net type of processing has different strength and weaknesses from the explicit reasoning, and they are often complementary.
Agreed. As I say in the post:
Of course cold calculated reasoning has its place, and many situations call for it. But there are many more in which being calculating is wrong.
I also mention that faking it til you make it (which relies on explicit S2 type processing) is also justified sometimes, but something one ideally dispenses with.
..."moral perception" or "virtues" ...is not magic, bit also just a computation runn
... consequentialism judges the act of visiting a friend in hospital to be (almost certainly) good since the outcome is (almost certainly) better than not doing it. That's it. No other considerations need apply. [...] whether there exist other possible acts that were also good are irrelevant.
I don't know of any consequentialist theory that looks like that. What is the general consequentialist principle you are deploying here? Your reasoning seems very one-off. Which is fine! That's exactly what I'm advocating for! But I think we're talking past each other ...
If our motivation is just to make our friend feel better is that okay?
Absolutely. Generally being mindful of the consequences of one's actions is not the issue: ethicists of every stripe regularly reference consequences when judging an action. Consequentialism differentiates itself by taking the evaluation of consequences to be explanatorily fundamental – that which forms the underlying principle for its unifying account of all/a broad range of normative judgments. The point that Stocker is trying to make there is (roughly) that being motivated purely by...
Here is my prediction:
...I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or the object at the center of their decision) in such scenarios.
More specifically I predict that, above a certain threshold of engagement with the community, increased eng
It’s better when we have our heart in it, and my point is that moral reasoning can help us do that.
My bad, I should have been clearer. I meant to say "isn't it better when we have our heart in it, and we can dispense with the reasoning or the rule consulting?"
I should note, you would be in good company if you answered "no." Kant believed that an action has no moral worth if it was not motivated by duty, a motivation that results from correctly reasoning about one's moral imperatives. He really did seem to think we should be reasoning about our duties all the time. I think he was mistaken.
Regarding moral deference:
I agree that moral deference as it currently stands is highly unreliable. But even if it were reliable, I actually don't think a world in which agents did a lot of moral deference would be ideal. The virtuous agent doesn't tell their friend "I deferred to the moral experts and they told me I should come see you."
I do emphasize the importance of having good moral authorities/exemplars help shape your character, especially when we're young and impressionable. That's not something we have much control over – when we're older, we can som...
Regarding feelings about disease far away:
I'm glad you have become concerned about these topics! I'm not sure virtue ethicists couldn't also motivate those concerns though. Random side-note: I absolutely think consequentialism is the way to go when judging public/corporate/non-profit policy. It makes no sense to judge the policy of those entities the same way we judge the actions of individual humans. The world would be a much better place if state departments, when determining where to send foreign aid, used consequentialist reasoning.
Regarding feelings t...
I agree that, among ethicists, being of one school or another probably isn't predictive of engaging more or less in "one thought too many." Ethicists are generally not moral paragons in that department. Overthinking ethical stuff is kind of their job though – maybe be thankful you don't have to do it?
That said, I do find that (at least in writing) virtue ethicists do a better job of highlighting this as something to avoid: they are better moral guides in this respect. I also think that they tend to muster a more coherent theoretical response to the problem of self-effacement: they more or less embrace it, while consequentialists try to dance around it.
Great question! Since I'm not a professional ethicist, I can't say: I don't follow this stuff closely enough. But if you want a concrete falsifiable claim from me, I proposed this to a commenter on the EA forum:
...I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one'
I agree with both of you that the question for consequentialists is to determine when and where an act-consequentialist decision procedure (reasoning about consequences), a deontological decision procedure (reasoning about standing duties/rules), or the decision procedure of the virtuous agent (guided by both emotions and reasoning) produces better outcomes.
But you're missing part of the overall point here: according to many philosophers (including sophisticated consequentialists) there is something wrong/ugly/harmful about relying too much on re...
lol. Fixed, thanks!
Agreed. Roberts, Kavanaugh, and Barrett are generally considered center-right.
In addition, Chief Justice Roberts has made it clear on multiple occasions that he is concerned with public confidence in the court. This would give the justices a chance to prove they are non-partisan, on an issue that literally pits the people against incumbent major parties. And as I point out in my case, allowing greater freedom of expression on the ballot, which should translate into more representative elected officials, ought to help with public trust in general.
I think that's a bit reductionist. There are a number of ideologies/theories regarding how law should be interpreted and what role courts are meant to play etc. Parties certainly pick justices who have legal ideologies that favor the outcomes parties want, regarding current political issues. But I think those legal ideologies are more stable in the justices than their tendency to rule the way desired by the party which appointed them.
I am painfully aware of this. I've been doubting myself throughout, and for a while just left the idea in the drawer precisely out of fear of its naïvety.
Ultimately I did write it up and post, for three reasons: (1) to avoid getting instantly dismissed, to get my idea properly assessed by a legal expert in the first place, I needed to lay things out clearly; (2) I think it's at least possible that our voting system has largely become invisible, and that many high-powered legal experts are focused on other things (of course there are die-hard voting re...
Trying to! Any guidance would be welcome. So far I've only sent it to the First Amendment Lawyers Association because it seemed like they would be receptive to it. Should I try the ACLU? Was also thinking of the Institute for Free Speech, though they seem to lean conservative, which might make them less receptive. I wonder if there is a high-powered libertarian-leaning firm that specializes in constitutional law... ideally we're looking for lawyers who are receptive to the case, but who also would not be looked upon by the Court as judicial activists...
Agreed, but that doesn't make for a legal case today. The Originalism many on today's Court subscribe to does not take into consideration the intent of lawmakers (in this case the framers), but instead simply asks: what would reasonable persons living at the time of its adoption have understood the ordinary meaning of the text to be? This is original meaning theory, in contrast with original intent theory.
Very true! I should get feedback from legal experts though before I sink any more time into this.
Yes, such sentences are a thing. Kendall Walton calls them "principles of generation" because, according to his analysis, they generate fictional truths (see his Mimesis as Make-Believe). Pointing at the sand and shouting "There is lava there!" we have said something fictionally true, in virtue of the game rule pronounced earlier. "Narrative syncing" sounds like a broader set of practices that generate and sustain such truths – I like it! (I must say "principles of generation" is a bit clunky anyway – but it's also more specific. Maybe "rule decreein...
I don't follow the reasoning. How do you get from "most people's moral behaviour is explainable in terms of them 'playing' a status game" to "solving (some versions of) the alignment problem probably won't be enough to ensure a future that's free from astronomical waste or astronomical suffering"?
More details:
Regarding the quote from The Status Game: I have not read the book, so I'm not sure what the intended message is, but this sounds like some sort of unwarranted pessimism about ppl's moral standing (something like the claim that "the vast majority of ppl ...
I'm not down or upvoting, but I will say, I hope you're not taking this exercise too seriously...
Are we really going to analyze one person's fiction (even if rationalist, it's still fiction) to gain insight into that person's attempt to model an entire society and its market predictions – and all of this in order to better judge the probability of certain futures under a number of counterfactual assumptions? Could be fun, but I wouldn't give its results much credence.
Don't forget Yudkowsky's own advice about not generalizing from...
Right, so you're worried about moral hazard generated by insurance (in the case where we have liability in place). For starters, the government arguably generates moral hazard for disasters of a certain size by default: it can't credibly commit ex ante to not bail out a critical economic sector or not provide relief to victims in the event of a major disaster: the government is always implicitly on the hook (see Moss, D. A. When All Else Fails: Government as the Ultimate Risk Manager. See the too-big-to-fail effect for an example). Charging a risk-pric...