All of momom2's Comments + Replies

momom220

My experience interacting with Chinese people is that they have to constantly mind the censorship in a way that I would find abhorrent and mentally taxing if I had to live in their system. That said, given the many benefits of living in China (mostly quality of life and personal safety), I'm unconvinced that I prefer my own government all things considered.

But for the purpose of developing AGI, there's a lot more variance in possible outcomes (a higher likelihood of both S-risk and a benevolent singleton) if the CCP gets a lead rather than the US.

momom220

There's a lot that I like in this essay - the basic cases for AI consciousness, AI suffering and slavery, in particular - but also a lot that I think needs to be amended.

First, although you hedge your bets at various points, the uncertainty about the premises and validity of the arguments is not reflected in the conclusion. The main conclusion that should be taken from the observations you present is that we can't be sure that AI does not suffer, that there's a lot of uncertainty about basic facts of critical moral importance, and a lot of similarities ... (read more)

1Odd anon
Thank you for your comments. :) I'm assuming we're using the same definition of slavery; that is, forced labour of someone who is property. Which part have I missed? To clarify: Do you think the recommendations in the Implementation section couldn't work, or that they couldn't become popular enough to be implemented? (I'm sorry that you felt cheated.) I've not come across this argument before, and I don't think I understand it well enough to write about it, sorry.
momom230

Since infant mortality rates were much higher in previous centuries, perhaps the FBOE would operate differently back then; for example, if interacting with older brothers is what makes you homosexual, you shouldn't expect higher rates of homosexuality for third sons whose second brother died as an infant than for second sons.

Have you taken that into account? Do you have records of who survived to age 20, and what happens if you only count those?

2rba
It doesn't look to me like non-surviving children are reported in this data, so no. However, the reported results don't change when you just look at it century by century.
momom221

But that argument would have worked just as well 50 years ago, when it would have been wrong to put less than a 50% chance on AGI being at least 50 years away. Like with LLMs today, early computer work solved things that could be considered high-difficulty blockers, such as proving a mathematical theorem.

momom276

Nice that someone has a database on the topic, but I don't see the point of this being a map?

1Remmelt
Yes, I was also wondering what ordering it by jurisdiction contributed. I guess it's nice for some folks to have it be more visual, even if the visual aspects don't contribute much?
4Mateusz Bagiński
Especially given how concentrated-sparse it is. It would be much better to have it as a google sheet.
momom210

I think what's going on is that large language models are trained to "sound smart" in a live conversation with users, and so they prefer to highlight possible problems instead of confirming that the code looks fine, just like human beings do when they want to sound smart.

This matches my experience, but I'd be interested in seeing proper evals of this specific point!

momom221

The advice in there sounds very conducive to a productive environment, but also very toxic. Definitely an interesting read, but I wouldn't model my own workflow based on this.

momom210

Honeypots should not be publicly mentioned here, since this post will potentially be part of a rogue AI's training data.
But it's helpful for people interested in this topic to look at existing honeypots (to learn how to make their own, evaluate effectiveness, get intuitions about how honeypots work, etc.), so what you should do is mention that you made a honeypot or know of one, but not say what or where. Interested people can contact you privately if they care to.

1Ozyrus
>this post will potentially be part of a rogue AI's training data I had that in mind while I was writing this, but I think overall it is good to post this. It hopefully gets more people thinking about honeypots and making them, and early rogue agents will also know we do and will be (hopefully overly) cautious, wasting resources. I probably should have emphasised more that this all is aimed more at early-stage rogue agents with potential to become something more dangerous because of autonomy, than at a runaway ASI. It is a very fascinating thing to consider, though, in general. We are essentially coordinating in the open right now; all our alignment, evaluation, and detection strategies from forums will definitely be in training. And certainly there are both detection and alignment strategies that will benefit from being covert. As well as some ideas, strategies, and theories that could benefit alignment from being overt (like acausal trade, publicly speaking about committing to certain things, et cetera).  A covert alignment org/forum is probably a really, really good idea. Hopefully, it already exists without my knowledge.
momom220

Thank you very much, this was very useful to me.

momom241
  • They're a summarization of a lot of vibes from the Sequences.
  • Artistic choice, I assume. It doesn't bear on the argument.
  • Yudkowsky explains all about the virtues in the Sequences.
    For studies, there are broad studies on cognitive science (especially relating to bias) but you'll be hard-pressed to match them precisely to one virtue or another. Mostly, Yudkowsky's opinions on these virtues are supported by academic literature, but I'm not aware of any work that showcases this clearly.
    For practical experience, you can look into the legacy of the Center Fo
... (read more)
momom220

Do you know what it feels like to feel pain?  Then congratulations, you know what it feels like to have qualia.  Pain is a qualia.  It's that simple.  If I told you that I was going to put you in intense pain for an hour, but I assured you there would be no physical damage or injury to you whatsoever, you would still be very much not ok with that.  You would want to avoid that experience.  Why?  Because pain hurts!  You're not afraid of the fact that you're going to have an "internal representation" of pain, nor are

... (read more)
momom230

We at the CeSIA have also had some success making AI safety videos, and we'd be happy to help by providing technical expertise, proofreading, and French translation.
Other channels you could reach out to:

momom230

The first thing that comes to mind is to ask what proportion of human-generated papers are more publication-worthy (since a lot of them are slop), but let's not forget that publication matters little for catastrophic risk; what would matter is actually getting results.
So I recommend not updating at all on AI risk based on Sakana's results (or updating downward if you expected R&D automation to come sooner, or expected that this might slow down human augmentation).

momom210

In that case, per my other comment, I think it's much more likely that superbabies will concern only a small fraction of the population and exacerbate inequality without bringing the massive benefits that a generally more capable population would.

Do you think superbabies would be put to work on alignment in a way that makes a difference, with geniuses driving the field? I'm having trouble understanding how, concretely, you think superbabies can lead to a significantly improved chance of solving alignment.

4kman
My guess is that peak intelligence is a lot more important than sheer numbers of geniuses for solving alignment. At the end of the day someone actually has to understand how to steer the outcome of ASI, which seems really hard and no one knows how to verify solutions. I think that really hard (and hard to verify) problem solving scales poorly with having more people thinking about it. Sheer numbers of geniuses would be one effect of raising the average, but I'm guessing the "massive benefits" you're referring to are things like coordination ability and quality of governance? I think those mainly help with alignment via buying time, but if we're already conditioning on enhanced people having time to grow up I'm less worried about time, and also think that sufficiently widespread adoption to reap those benefits would take substantially longer (decades?).
4GeneSmith
It's possible I'm misunderstanding your comment, so please correct me if I am, but there's no reason you couldn't do superbabies at scale even if you care about alignment. In fact, the more capable people we have the better. Kman may have his own views, but my take is pretty simple; there are a lot of very technically challenging problems in the field of alignment and it seems likely smarter humans would have a much higher chance of solving them.
momom2114

I'm having trouble understanding your theory of change in a future influenced by AI. What's the point of investigating this if it takes 20 years to become significant?

7GeneSmith
Kman and I probably differ somewhat here. I think it's >90% likely that if we continue along the current trajectory we'll get AGI before the superbabies grow up. This technology only starts to become really important if there's some kind of big AI disaster or a war that takes down most of the world's chip fabs. I think that's more likely than people are giving it credit for, and if it happens this will become the most important technology in the world. Gene editing research is much less centralized than chip manufacturing. Basically all of the research can be done in normal labs of the type seen all over the world. And the supply chain for reagents and other inputs is much less centralized than the supply chain for chip fabrication. You don't have a hundred billion dollar datacenter that can be bombed by hypersonic projectiles. The research can happen almost anywhere. So this stuff is just naturally a lot more robust than AI in the event of a big conflict.
6kman
I mostly think we need smarter people to have a shot at aligning ASI, and I'm not overwhelmingly confident ASI is coming within 20 years, so I think it makes sense for someone to have the ball on this.
momom220-4

I'm surprised that no one in the comments has reacted with "KILL IT WITH FIRE", so I'll be that guy and make a case for why this research should be stopped rather than pursued:

On the one hand, there is obviously enormous untapped potential in this technology. I don't have objections based on the natural order of life or some WW2 eugenics trauma. To my (unfamiliar with the subject) eyes, you propose a credible way to make everyone healthier, smarter, and happier, at low cost and within a generation, which is hard to argue against.

On the other hand, you spend no time ... (read more)

6Kaj_Sotala
There's also the option that even if this technology is initially funded by the wealthy, learning curves will then drive down its cost as they do for every technology, until it becomes affordable for governments to subsidize its availability for everyone.
momom296

There are three traders on this market; it means nothing at the moment. No need for virtue signalling to explain a result you might perceive as abnormal; the market just hasn't formed yet.

momom210

Thanks for writing this! I was unaware of the Chinese investment, which explains another recent piece of news that you did not include but which I think is significant: Nvidia's stock plummeted 18% today.

5Alice Blair
I saw that news as I was polishing up a final draft of this post. I don't think it's terribly relevant to AI safety strategy, I think it's just an instance of the market making a series of mistakes in understanding how AI capabilities work. I won't get into why I think this is such a layered mistake here, but it's another reminder that the world generally has no idea what's coming in AI. If you think that there's something interesting to be gleaned from this mistake, write a post about it! Very plausibly, nobody else will.
momom21-4

Five minutes of thought on how this could be used for capabilities:
- Use behavioral self-awareness to improve training data (e.g. training on this dataset increases self-awareness of code insecurity, so it probably contains insecure code that can be fixed before training on it).
- Self-critique for iterative improvement within a scaffolding (already exists, but this work validates the underlying principles and may provide further grounding).

It sure feels like behavioral self-awareness should work just as well for self capability assessments as for safety to... (read more)

7Martín Soto
Speaking for myself (not my coauthors), I don't agree with your two items, because: * if your models are good enough at code analysis to increase their insecurity self-awareness, you can use them in other more standard and efficient ways to improve the dataset * doing self-critique the usual way (look over your own output) seems much more fine-grained and thus efficient than asking the model whether it "generally uses too many try-excepts" More generally, I think behavioral self-awareness for capability evaluation is and will remain strictly worse than the obvious capability evaluation techniques. That said, I do agree systematic inclusion of considerations about negative externalities should be a norm, and thus we should have done so. I will shortly say now that a) behavioral self-awareness seems differentially more relevant to alignment than capabilities, and b) we expected lab employees to find out about this themselves (in part because this isn't surprising given out-of-context reasoning), and we in fact know that several lab employees did. Thus I'm pretty certain the positive externalities of building common knowledge and thinking about alignment applications are notably bigger.
momom230

(If you take time to think about this, feel free to pause reading and write your best solution in the comments!)

How about:
- Allocate energy everywhere to either twitching randomly or collecting nutrients. Assuming you are propelled by the twitching, this follows the gradient if there is one. (A toy simulation of this one is sketched after this list.)
- Try to grow in all directions. If there are no outside nutrients to fuel this growth, consume yourself. In this manner, regenerate yourself in the direction of the gradient.
- Try to grab nutrients from all directions. If there are nutrients, by reaction you will be p... (read more)
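A minimal sketch of that first strategy (my own illustration, not from the post; all parameters are arbitrary assumptions): a walker that twitches randomly when local nutrients are scarce and stays put to collect when they are plentiful ends up concentrated on the nutrient-rich side, even though no individual twitch is directed.

```python
import random

SIZE = 50  # positions -SIZE..SIZE; nutrients increase to the right


def nutrient(x: int) -> float:
    """Nutrient concentration: 0 at the far left, 1 at the far right."""
    return (x + SIZE) / (2 * SIZE)


def twitch_probability(x: int) -> float:
    """Twitch a lot when hungry, very little when well fed."""
    return 1.0 - 0.9 * nutrient(x)


def fraction_of_time_on_rich_side(steps: int = 200_000) -> float:
    x, time_on_rich_side = 0, 0
    for _ in range(steps):
        if random.random() < twitch_probability(x):
            x += random.choice([-1, 1])   # undirected twitch
            x = max(-SIZE, min(SIZE, x))  # reflecting walls
        if x > 0:
            time_on_rich_side += 1
    return time_on_rich_side / steps


if __name__ == "__main__":
    # Typically prints a value well above 0.5.
    print(f"time spent on the rich side: {fraction_of_time_on_rich_side():.2f}")
```

The effect here is purely a residence-time bias: the walker moves less where food is abundant, so it accumulates there without ever sensing a direction.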

2Malmesbury
Kudos for taking the challenge! If I understand correctly, your first point is actually pretty similar to how E. coli follows gradients of nutrients, even when the scale of the gradient is much larger than the size of a cell.
momom250

Contra 2:
ASI might provide a strategic advantage of a kind which doesn't negatively impact the losers of the race, e.g. it increases GDP by x10 and locks competitors out of having an ASI.
In that case, losing the race (not being the one to get ASI) would not pose an existential risk to the US.
I think it's quite likely this is what some policymakers have in mind: some sort of innovation which will make everything better for the country by providing a lot of cheap labor and generally improving productivity, the way we see AI applications do right now but on a bigger scale.... (read more)

2Mateusz Bagiński
  It does negatively impact the losers, to the extent that they're interested not only in absolute wealth but also relative wealth (which I expect to be the case, although I know ~nothing about SotA modeling of states as rational actors or whatever).
momom22-2

From the disagreement between the two of you, I infer there is yet debate as to what environmentalism means. The only way to be a true environmentalist then is to make things as reversible as possible until such time as an ASI can explain what the environmentalist course of action regarding the Sun should be.

momom210

The paradox arises because the action-optimal formula mixes world states and belief states. 
The [action-planning] formula essentially starts by summing up the contributions of the individual nodes as if you were an "outside" observer that knows where you are, but then calculates the probabilities at the nodes as if you were an absent-minded "inside" observer that merely believes to be there (to a degree). 

So the probabilities you're summing up are apples and oranges, so no wonder the result doesn't make any sense. As stated, the formula for actio... (read more)
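For reference, here is a sketch of the two formulas being contrasted, assuming the standard Piccione–Rubinstein setup (which the quoted passage does not spell out): exiting at the first intersection X pays 0, exiting at the second Y pays 4, driving past both pays 1, the driver continues with probability p, and α is his credence that he is currently at X.

```latex
% Planning-optimal ("outside") value, maximized at p = 2/3:
E_{\text{plan}}(p) = 4p(1-p) + p^2

% Action-optimal ("inside") value at an intersection, with credence \alpha of being at X:
E_{\text{act}}(p) = \alpha \bigl[ 4p(1-p) + p^2 \bigr] + (1-\alpha) \bigl[ 4(1-p) + p \bigr]

% Holding the self-consistent credence \alpha = 1/(1+p) fixed and maximizing
% E_act gives p = 4/9 rather than 2/3: the bracketed node payoffs are
% outside-view quantities, but they are weighted by inside-view beliefs --
% the apples-and-oranges mixing described above.
```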

momom221

Having read Planecrash, I do not think there is anything in this review that I would not have wanted to know before reading the work (which is the important part of what people consider "spoilers" for me).

momom210

Top of the head like when I'm trying to frown too hard

momom210

distraction had no effect on identifying true propositions (55% success for uninterrupted presentations, vs. 58% when interrupted); but did affect identifying false propositions (55% success when uninterrupted, vs. 35% when interrupted)

If you are confused by these numbers (why so close to 50%? why below 50%?), it's because participants could pick among four options (true, false, don't know, and never seen).
You can read the study; search for the keyword "The Identification Test".

momom210
  1. I don't see what you mean by the grandfather problem.
    1. I don't care about the specifics of who spawns the far-future generation; whether it's Alice or Bob, I am only considering numbers here.
    2. Saving lives now has consequences for the far future insofar as current people are irreplaceable: if they die, no one will make more children to compensate, resulting in a lower total far-future population. Some deaths are less impactful than others for the far future.
  2. That's an interesting way to think about it, but I'm not convinced; killing half the population does not
... (read more)
3AnthonyC
I think the grandfather idea is that if you kill 100 people now, and the average person who dies would have had 1 descendant, and the large loss would happen in 100 years (~4 more generations), then the difference in total lives lived between the two scenarios is ~500, not 900. If the number of descendants per person is above ~1.2, then burying the waste means population after the larger loss in 100 years is actually higher than if you processed it now. Obviously I'm also ignoring a whole lot of things here that I do think matter, as well. And of course, as you pointed out in your reply to my comment above, it's probably better to ignore the scenario description and just look at it as a pure choice along the lines of something like "Is it better to reduce total population by 900 if the deaths happen in 100 years instead of now?"
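Making the arithmetic explicit, under my own assumption (inferred from the "900" figure) that the delayed loss in the thought experiment is 1,000 lives, with exactly 1 descendant per person per generation:

```latex
% Lives never lived by year 100 if 100 people die now
% (the 100 themselves plus ~4 generations of foregone descendants):
100 \times (1 + 1 + 1 + 1 + 1) = 500

% Naive gap between the scenarios:            1000 - 100 = 900
% Gap once foregone descendants are counted:  1000 - 500 = 500
```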
momom230

Yes, that's the first thing that was talked about in my group's discussion on longtermism. For the sake of the argument, we were asked to assume that the waste processing/burial choice amounted to a trade in lives all things considered... but the fact that any realistic scenario resembling this thought experiment would not be framed like that is the central part of my first counterargument.

momom230

I enjoy reading any kind of cogent fiction on LW, but this one is a bit too undeveloped for my tastes. Perhaps be more explicit about what Myrkina sees in the discussion which relates to our world?
You don't always have to spell out earth-shattering revelations (in fact it's best to let the readers reach the correct conclusion by themselves, imo), but there needs to be enough narrative tension to make the conclusion inevitable; as it stands, it feels like I can just meh my way out of thinking for more than 30s about what the revelation might be, the same way Tralith does.

2Logan Zoellner
  I'm glad you found one of the characters sympathetic.  Personally I feel strongly both ways, which is why I wrote the story the way that I did.
momom230

Thanks, that does clarify things, both on separating the instantiation of an empathy mechanism in the human brain vs. in AI, and on considering the instantiation separately from the (evolutionary or training) process that leads to it.

momom230

I was under the impression that empathy was explained by evolutionary psychology as a result of the need to cooperate, combined with the fact that we already had all the apparatus to simulate other people (like Jan Kulveit's first proposition).
(This does not translate to machine empathy as far as I can tell.)

I notice that this impression is justified by basically nothing besides "everything is evolutionary psychology". Seeing that other people's intuitions about the topic are completely different is humbling; I guess emotions are not obvious.

So, I would appreciate if yo... (read more)

2Steven Byrnes
I definitely think that the human brain has innate evolved mechanisms related to social behavior in general, and to caring about (certain) other people’s welfare in particular. And I agree that the evolutionary pressure explaining why those mechanisms exist are generally related to the kinds of things that Robert Trivers and other evolutionary psychologists talk about. This post isn’t about that. Instead it’s about what those evolved mechanisms are, i.e. how they work in the brain. Does that help? …But I do want to push back against a strain of thought within evolutionary psychology where they say “there was an evolutionary pressure for the human brain to do X, and therefore the human brain does X”. I think this fails to appreciate the nature of the constraints that the brain operates under. There can be evolutionary pressure for the brain to do something, but there’s no way for the brain to do it, so it doesn’t happen, or the brain does something kinda like that but with incidental side-effects or whatever. As an example, imagine if I said: “Here’s the source code for training an image-classifier ConvNet from random initialization using uncontrolled external training data. Can you please edit this source code so that the trained model winds up confused about the shape of Toyota Camry tires specifically?” The answer is: “Nope. Sorry. There is no possible edit I can make to this PyTorch source code such that that will happen.” You see what I mean? I think this kind of thing happens in the brain a lot. I talk about it more specifically here. More of my opinions about evolutionary psychology here and here.
momom21512

I do not find this post reassuring about your approach.

  • Your plan is unsound; instead of a succession of events which all need to go your way, I think you should aim for incremental marginal gains. There is no cost-effectiveness analysis, and the implicit theory of change is full of gaps.
  • Your press release is unreadable (poor formatting) and sounds like a conspiracy theory (catchy punchlines, ALL CAPS DEMANDS, alarmist vocabulary and unsubstantiated claims); I think it's likely to discredit safety movements and raise attention in counterproductive ways.
  • The figures
... (read more)
5Remmelt
Thanks, as far as I can tell this is a mix of critiques of strategic approach (fair enough), of communication style (fair enough), and of partial misunderstandings of the technical arguments.   I agree that we should not get hung up on a succession of events going a certain way. IMO, we need to get good at simultaneously broadcasting our concerns in a way that’s relatable to other concerned communities, and opportunistically look for new collaborations there.   At the same time, local organisers often build up an activist movement by ratcheting up the number of people joining the events and the pressure they put on institutions, demanding they make changes. These are basic, cheap civil disobedience tactics that have worked for many movements (climate, civil rights, feminist, changing a ruling party, etc.). I prefer to go with what has worked, instead of trying to reinvent the wheel based on fragile cost-effectiveness estimates. But if you can think of concrete alternative activities that also have a track record of working, I’m curious to hear. I think this is broadly fair.  The turnaround time of this press release was short, and I think we should improve on the formatting and give more nuanced explanations next time. Keep in mind the text is not aimed at you but at people more broadly who are feeling concerned and whom we want to encourage to act. A press release is not a paper. Our press release is more like a call to action – there is a reason to add punchy lines here.     Let me recheck the AI Impacts paper. Maybe I was ditzy before, in which case, my bad.   As you saw from my commentary above, I was skeptical about using that range of figures in the first place.   Not sure what you see as the conflation?  AGI, as an autonomous system that would automate many jobs, would necessarily be self-modifying – even in the limited sense of adjusting its internal code/weights on the basis of new inputs.    The reasoning shared in the press release by my colleague was rather l
momom230

I agree with the broad idea, but I'm going to need a better implementation.
In particular, the 5 criteria you give are insufficient because the example you give scores well on them, and is still atrocious: if we decreed that "black people" was unacceptable and should be replaced by "black peoples", it would cause a lot of confusion on account of how similar the two terms are and how ineffective the change is.

The cascade happens for a specific reason, and the change aims at resolving that reason. For example, "Jap" is used as a slur, and not saying it... (read more)

momom231
  • Probability of existential catastrophe before 2032 assuming AGI arrives in that period and Harris wins[12] = 30%

  • Probability of existential catastrophe before 2032 assuming AGI arrives in that period and Trump wins[13] = 35%.

A lot of your AI-risk reason to support Harris seems to hinge on this, which I find very shaky. How wide are your confidence intervals here?
My own guesses are much fuzzier. According to your argument, if my intuition were .2 vs .5, then it would be an overwhelming case for Harris, but I'm unfamiliar enough with the topic that it cou... (read more)

momom221

Seems like you need to go beyond arguments from authority and stating your conclusions, and instead go down to the object-level disagreements. You could say instead "Your argument for ~X is invalid because blah blah", and if Jacob says "Your argument for the invalidity of my argument for ~X is invalid because blah blah", then it's better than before, because it's easier to evaluate argument validity than ground truth.
(And if that process continues ad infinitum, consider that someone who cannot evaluate the validity of the simplest arguments is not worth arguing with.)

momom230

It's thought-provoking.
Many people here identify as Bayesians, but are as confused as Saundra by the troll's questions, which indicates that they're missing something important.

momom210

It wasn't mine. I did grow up in a religious family, but becoming a rationalist came gradually, without a sharp break from my social network. I always figured people around me were making all sorts of logical mistakes, though, and I noticed deep flaws in what I was taught very early on.

momom231

It's not. The paper is hype; the authors don't actually show that this could replace MLPs.

momom221

This is very interesting!
I did not expect that Chinese respondents would be more optimistic about the benefits than worried about the risks, nor that they would rank AI so low as an existential risk.
This is in contrast with posts I see on social media and articles showcasing safety institutes and discussing doomer opinions, which gave me the impression that Chinese academia was generally more concerned about AI risk and especially existential risk than the US.

I'm not sure how to reconcile this survey's results with my previous model. Was I just wrong and updating too much on anecdotal evidence?
How representative of policymakers and of influential scientists do you think these results are?

6Nick Corvino
I think the crux is that the thoughts of the CCP and Chinese citizens don't necessarily have to have a strong correlation - in many ways they can be orthogonal, and sometimes even negatively correlated (like when the gov trades off personal freedoms for national security).   I think recent trends suggest the Chinese gov / Xi Jinping are taking risks (especially the tail risks) more seriously, and have done some promising AI safety stuff. Still mostly unclear, tho. Highly recommend checking out Concordia AI's The State of AI Safety in China Spring 2024 Report. 
momom210

About the Christians around me: it is not explicitly considered rude, but it is a signal that you want to challenge their worldview, and if you are going to predictably ask that kind of question often, you won't be welcome in open discussions.
(You could do it once or twice for anecdotal evidence, but if you actually want to know whether many Christians believe in a literal snake, you'll have to do a survey.)

momom2123

I disagree – I think that no such perturbations exist in general, rather than that we have simply not had any luck finding them.

I have seen one such perturbation. It was two images of two people, one of which was clearly male and the other female, though I wasn't able to tell any significant difference between the two images in 15 seconds of trying to find one, except for a slight difference in hue.
Unfortunately, I can't find this example again after a 10-minute search. It was shared on Discord; the people in the images were white and freckled. I'll save it if I find it again.

Canaletto229

https://x.com/jeffreycider/status/1648407808440778755

(I'm writing a post on cognitohazards, the perceptual inputs that hurt you, so I have this post conveniently referenced in my draft lol)

momom210

The pyramids in Mexico and the pyramids in Egypt are related via architectural constraints and human psychology.

momom220

In practice, when people say "one in a million" in that kind of context, it's much higher than that. I haven't watched Dumb and Dumber, but I'd be surprised if Lloyd did not, actually, have a decent chance of ending up with Mary.

On one hand, we claim [dumb stuff using made-up impossible numbers](https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument), and on the other hand, we dismiss those numbers and fall back on there's-a-chancism.
These two phenomena don't always perfectly compensate for one another (as examples in both posts show), but common sense is more reliable than it may seem at first. (That's not to say it's the correct approach, though.)

Answer by momom220

Epistemic status: amateur, personal intuitions.

If this were the case, it makes sense to hold dogs (rather than their owners, or their breeding) responsible for aggressive or violent behaviour.

I'd consider whether punishing the dog, changing the system that led to its breeding, providing incentives to the owner, or some combination of other actions would make the world better and be most effective.

Consequentialism is about considering the consequences of actions to judge them, but various people might wield this in various ways. 
Implicitl... (read more)

momom210

I can imagine plausible mechanisms for how the first four backlash examples were a consequence of perceived power-seeking from AI safetyists, but I don't see one for e/acc. Does someone have one?

Alternatively, what reason do I have to expect that there is a causal relationship between safetyist power-seeking and e/acc even if I can't see one?

e/acc has coalesced in defense of open-source, partly in response to AI safety attacks on open-source. This may well lead directly to a strongly anti-AI-regulation Trump White House, since there are significant links between e/acc and MAGA.

I think of this as a massive own goal for AI safety, caused by focusing too much on trying to get short-term "wins" (e.g. dunking on open-source people) that don't actually matter in the long term.

momom210

That's not interesting to read unless you say what your reasons are and how they differ from other critics'. You don't have to say it all in a comment, but at least link to a post.

momom210

Interestingly, I think that one of the examples of proving too much on Wikipedia can itself be demolished by a proving too much argument, but I’m not going to say which one it is because I want to see if other people independently come to the same conclusion.

For those interested in the puzzle, here is the page Scott was linking to at the time: https://en.wikipedia.org/w/index.php?title=Proving_too_much&oldid=542064614
The article was edited a few hours later, and subsequent conversation showed that Wikipedia editors came to the conclusion Scott hinted a... (read more)

momom210

Another way to avoid the mistake is to notice that the implication is false, regardless of the premises. 
In practice, people's beliefs are not deductively closed, and (in the context of a natural language argument) we treat propositional formulas as tools for computing truths rather than timeless statements.

momom240

it can double as a method for creating jelly donuts on demand

For those reading this years later, here's the comic that shows how to make ontologically necessary donuts.

momom281

I'd appreciate examples of the sticker shortcut fallacy with in-depth analysis of why they're wrong and how the information should have been communicated instead.

9ymeskhout
I wanted to include very basic examples first: I am planning yet another follow-up to outline more contentious examples. Basically, almost any dispute  that is based on a disguised query and hinges on specific categorization matches the fallacy. Some of the prominent examples that come to mind, with the sticker shortcut label italicized: * Was January 6th an insurrection? * Is Israel committing a genocide? * Are IQ tests a form of eugenics? All of these questions appear to be a disguised query into asking whether X is a "really bad thing". But instead of asking this directly, they try to sneak in the connotation through the label. Similarly, the whole debate over whether transwomen are women is a hodgepodge of disguised queries that try to sneak in a preferred answer through the acceptance of labels. In each of these examples, we're better served by discussing the thing directly rather than debating over labels. Does this help clarify?