Nobody has an internally consistent definition of anything, but it works out because people are usually talking about "typical X" not "edge case X." Bouba is probably thinking of pesticides or GMOs. So ask them why they think those things are harmful. By "not chemicals" they're probably thinking of water and apples. If you want their opinion of alcohol you can say "do you think alcohol is bad for you?" not "do you think alcohol counts as a chemical?"
You don't actually have to live your life turning every conversation into one about hammering down definitions.
I think they're saying opposite things.
Chipmonk: progress = bad days become sparse
Kaj: progress = have your first good day
Kaj really doesn't address what progress looks like after that "1%" is achieved. It's "the beginning of the point where you can make it progressively less" with "more time and work." Okay -- so should we look at progress like the sawtooth graph or like Chipmonk's polkadots calendar? Kaj doesn't answer that at all, so they can't really take credit for Chipmonk's piece.
You can view them as a complement but I think there's other flaws in Kaj's...
You do know the Nuremberg defense is bad and wrong and was generally rejected, right? Nazis are bad, even if their boss is yet another Nazi, who is also bad. If it's an "accountability sink" it's certainly one that was solved by holding all of them accountable. I don't share your "vague feeling of arbitrariness," nor did the Allies. Nazis pretended they were good people by building complicated contraptions to suppress their humanity; I'm aware, that's what makes it a defense, and we reject it.
If accountability sink lends credence to Nuremberg defense then it'...
Well not to dig in or anything but if I have a chance to automate something I'm going to think of it in terms of precision/recall/long tails, not in terms of the joy of being able to blame a single person when something goes wrong. There are definitely better coordination/optimization models than "accountability sinks." I don't love writing a riposte to a concept someone else found helpful but it really is on the edge between "sounds cool, means nothing" and "actively misleading" so I'm bringing it up.
The Nuremberg defense discussion is sketch. The author ...
I don't agree "the pursuit of happiness is dead" so I guess accountability sinks aren't that big of a problem? Like corporations are not constantly failing due to lack of accountability, for instance the blameless postmortem seems to be working just fine. Maybe we should introduce blameless PMs aka "occasionally accepting an apology" to other layers of society. The problem seems to be too few accountability sinks, not too many.
Some of these follow from the "central fallacy," e.g. just because penguins are birds doesn't mean they're typical birds, which typically can fly. I nicknamed this "semantic bounty" in a short post -- if you spend 45 minutes convincing somebody something is X (and X is probably gonna be something values-infused like "discriminatory" rather than an arbitrary-feeling label), you're more likely to win only the argument that the thing is technically X and therefore doesn't get a whole lot of the properties of X, when you were hoping to get all the properties of X as a bounty for your opponent conceding the is-ness.
This does sound OCD in that all the psychic energy is going into rationalizing doom slightly differently in the hopes that this time you'll get some missing piece of insight that will change your feelings. Like I think we can accrue a sort of moral OCD where, if too many people behave as if p(doom)=low, we must become the ascetic who doesn't just believe it's high but who has many mental rituals meant to enforce believing it's high as many hours a day as possible.
ERP (exposure/response prevention) is the gold standard for OCD, not ACT or DBT. I mean, there's ov...
I'm sorry but I was just so lost by the end of this article. I have no explanation for why Hreha doesn't know what a UX researcher is, or that they professionally do behavioral economics all day, including designing nudge-like interventions, for instance a tipping screen that nudges you to tip $1 on a $3 coffee.
You're allowed to ask your CEO "why'd you do X", you're allowed to ask a senior Go player "why'd you do X." They're more similar than different. As for "punching down," yeah, chess/go probably have a much more serious culture of public critical feedback; you could probably write a thinkpiece just on this, but factors include a low-ego culture and the objectivity of senior-to-junior feedback.
One difference with a CEO is that it's incomplete info. You may be surfacing something they don't know. Another difference is your interests are different. You're telling them they made ...
My only eyebrow-raise was at all the wealthier people who go to AA. There are a lot of broke-ass people at AA. If you can acquire alcohol you can get to an AA meeting.
Bigger question: is it a generally difficult kind of problem to analyze, like, "this thing claims to help, but only people who are motivated will do it, so we can't really tell if it further acts positively on motivated people or if it's a totally useless thing"? Like if I'm motivated to learn and I read a book and get smarter you wouldn't exactly say the book did nothing, even if the main thing is the motivation level of the learner, not the availability of the book.
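(If it helps, here's a toy simulation of exactly that confound; the logistic selection rule and all the numbers are made up, it just shows the naive readers-vs-nonreaders comparison can look great even when the "book" does literally nothing.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent motivation drives both who picks up the book and how much they learn.
motivation = rng.normal(size=n)
reads_book = rng.random(n) < 1 / (1 + np.exp(-2 * motivation))

book_effect = 0.0  # suppose the book itself does nothing at all
outcome = motivation + book_effect * reads_book + rng.normal(scale=0.5, size=n)

gap = outcome[reads_book].mean() - outcome[~reads_book].mean()
print(f"readers vs non-readers gap: {gap:.2f}")  # clearly positive despite zero book effect
```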
This is cool. It dovetails with meta-contrarianism. There's another concept I might write up that goes something like, "you don't always have to name the concept you're defending."
For example, in celebration of International Men's Day, I wanted to do a meetup on why gen z men are more religious.
Just think of how the two topics in that sentence interact. It's more likely I'm going to have a meetup on "whether it's okay to have a meetup about International Men's Day" than anything. And if I wanted to promote men in any broad sense, it seems like doing the topi...
oh ok you said "has obvious adhd" like you're inferring it from a few minutes' observation of her behavior, not that she told you she has adhd. in general no, you can't get an accurate diagnosis by observing someone; you need to differentially diagnose against hypomania, hyperthyroidism, autism, substance abuse, caffeine, sleep deprivation, or just enjoying her hobby, plus establish that whatever ADHD-like behavior there is happens across a variety of domains going back some time.
Furthermore these are pretty basic flaws by LW standards, like "map/territory," which is the first post in the first sequence. I don't think "discussing basic stuff" is wrong by itself, but doing so by shuttling in someone else's post is sketch, and when that post is also some sort of polemic countered by the first post in the first sequence on LW it starts getting actively annoying.
convenient fiction aka a model. Like they almost get this, they just think pointing it out should be done in a really polemical strawmanny "scientists worship their god-models" way.
It's telling they manage to avoid using the word "risk" or "risk-averse" because that's the most obvious example of a time when an economist would realize a simpler form of utility, money, isn't the best model for individual decisions. This isn't a forgivable error when you're convinced you have a more lucid understanding of the model/metaphor status of a science concept than scientists who use it, and it's accessible in econ 101 or even just to common sense.
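(To make the risk-aversion point concrete, here's the stock toy example; the log utility and the $100 stake are arbitrary illustration choices, not anything from the post.)

```python
import math

# 50/50 double-or-halve your $100, versus keeping it.
wealth = 100.0
outcomes = [200.0, 50.0]  # each with probability 0.5

expected_money = sum(outcomes) / 2                        # 125.0; "maximize money" says gamble
expected_utility = sum(math.log(w) for w in outcomes) / 2
utility_of_keeping = math.log(wealth)

# Both utilities are ~4.605: a log-utility (risk-averse) agent is indifferent,
# even though the gamble is worth +$25 in expected money.
print(expected_money, expected_utility, utility_of_keeping)
```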
More specifically, the correctness of the proof (at least in the triangles case) is common sense, coming up with the proof is not.
The integrals idea gets sketchy. Try it with e^(1/x): it's just a composition of functions, so reverse the chain rule and then deal with any extra terms that come up. Of course, it has no elementary antiderivative. There's not really any utility in overextending common sense to include things that might or might not work. And you're very close to implying "it's common sense" is a proof for things that sound obvious but aren't.
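(Spelling that out, since it's the whole point: the "reverse the chain rule, clean up the extra terms" recipe just regenerates terms of the same kind, and the honest antiderivative isn't elementary.)

```latex
% Naive guess from reversing the chain rule:
\[
F(x) = -x^{2} e^{1/x}
\quad\Rightarrow\quad
F'(x) = -2x\,e^{1/x} - x^{2}\,e^{1/x}\!\left(-\tfrac{1}{x^{2}}\right)
      = e^{1/x} - 2x\,e^{1/x},
\]
% so you're left needing \int 2x\,e^{1/x}\,dx, which is no easier.
% The actual antiderivative involves the non-elementary exponential integral Ei:
\[
\int e^{1/x}\,dx \;=\; x\,e^{1/x} - \operatorname{Ei}\!\left(\tfrac{1}{x}\right) + C .
\]
```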
Claude 3.7 is too balanced, too sycophantic, buries the lede
me: VA monitor v IPS monitor for coding, reducing eye strain
It wrote a balanced answer and said "IPS is generally better," but it kind of sounds like 60/40 here, and it misses the obvious fact that VA monitors are generally the curved ones. My older coworkers with more eye strain problems don't have curved monitors.
I hop on reddit/YT and the answer gets clear really fast. Claude's info was accurate yet missed the point, and I wound up only getting the answer on reddit/YT.
One I've noticed is pretty well-intentioned "woke" people are more "lived experiences" oriented and well-intentioned "rationalist" people are more "strong opinions weakly held." Honestly, if your only goal is truth seeking (admitting I'm rationalist-biased when I say this, and also this is simplified), the "woke" frame is better at breadth and the "rationalist" frame is better at depth. But ohmygosh these arguments can spiral. Neither realizes their meta has broken down. The rationalist thinks low-confidence opinions are humility; the woke thinks "I am ope...
Those are two really different directions. One option is just outright dismiss the other person. The other is cede the argument completely but claim Moloch completely dominates that argument too. Is this really how you want to argue stuff -- everything is either 0 or the next level up of infinity?
Well, conversely, do you have examples that don't involve one side trying to claim a moral high ground and trivialize other concerns? That's the main class of examples I can see as relevant to your posts, and for these I don't think the problem is an "any reason" phenomenon; it's breaking out of the terrain where the further reasons are presumed trivial.
I don't think the problem is forgetting there exists other arguments, it's confronting whether an argument like "perpetuates colonialism" dominates concerns like "usability." I'd like to know how you handle arguing for something like "usability" in the face of a morally urgent argument like "don't be eurocentric."
I would probably start with rejecting the premise that I have to listen to other people's arguments.
(This makes even more sense when we know that the people who loudly express their opinions are often just a tiny minority of users. However, it is perfectly possible to ignore the majority, too.)
I think this is a mistake that many intelligent people make, to believe that you need to win verbal fights. Perhaps identifying...
Well, a simple, useful, accurate model (non-learning-oriented, except to the extent that it's a known temporary state) is to turn all the red boxes into one more node in your mental map and average out accordingly. If they're an expert it's like "well what I've THOUGHT to this point is 0.3, but someone very important said 0.6, so it's probably closer to 0.6, but it's also possible we're talking about different situations without realizing it."
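(The dumbest possible version of "one more node, average accordingly," with made-up weights standing in for how much you trust each source:)

```python
def pooled_estimate(estimates):
    """Credibility-weighted average of probability estimates: [(estimate, weight), ...]."""
    total_weight = sum(w for _, w in estimates)
    return sum(p * w for p, w in estimates) / total_weight

# My running view is 0.3; the expert says 0.6 and I weight them twice as heavily as myself.
print(pooled_estimate([(0.3, 1.0), (0.6, 2.0)]))  # 0.5, i.e. "probably closer to 0.6"
```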
I thought it might be "look for things that might not even be there as hard as you would if they are there." Then the koan form takes it closer to "the thereness of something just has little relevance to how hard you look for it." But it needs to get closer to the "biological" part of your brain, where you're not faking it with all your mental and bodily systems, like when your blood pressure rises from "truly believing" a lion is around the corner but wouldn't if you "fake believe" it.
Neat. You can try asking it for a confidence score, and that will probably correlate with the hallucinations. Another idea is to run it against the top 1000 articles and see how accurate they are. I can't really do a back-of-envelope on whether it's cost effective to run this over all of wiki per-article.
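(Rough sketch of the "top 1000 articles" version. The pageviews endpoint is the Wikimedia REST API as I remember it, and ask_model is a stand-in for whatever LLM call and prompt you'd actually use, so treat both as assumptions to check.)

```python
import requests

def top_articles(year: str, month: str, day: str, limit: int = 1000):
    """Most-viewed English Wikipedia articles for one day (Wikimedia pageviews REST API)."""
    url = (f"https://wikimedia.org/api/rest_v1/metrics/pageviews/top/"
           f"en.wikipedia/all-access/{year}/{month}/{day}")
    data = requests.get(url, headers={"User-Agent": "wiki-error-check-sketch"}).json()
    return [a["article"] for a in data["items"][0]["articles"]][:limit]

def ask_model(prompt: str) -> str:
    """Placeholder for the LLM call; have it list suspected errors with a confidence score each."""
    return "(wire up your LLM of choice here)"

for title in top_articles("2025", "01", "01", limit=10):
    page = requests.get(f"https://en.wikipedia.org/wiki/{title}").text  # crude: raw HTML
    report = ask_model(f"List factual errors in this article, with a confidence score for each:\n{page}")
    print(title, report)
    # then spot-check a sample of flagged claims to see whether the confidence scores track reality
```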
Also I kind of just want this on reddit and stuff. I'm more concerned about casually ingested fake news than errors in high quality articles when it comes to propaganda/disinfo.
By "aren't catching" do you mean "can't" or do you mean "wikipedia company/editors haven't deployed an LLM to crawl wikipedia, read sources and edit the article for errors"?
The 161 is paywalled so I can't really test. My guess is Claude wouldn't find the math error from a "proofread this, here are its sources copy/pasted" type prompt, but you can try.
To me, those instructions are a little like OP's "understand an algorithm," and I would need to do all of them without needing any support from a teammate, in a predictable amount of time. The first 2 are 10-minute activities for some level of a rough draft; the 3rd I wrote specifically so it has a...
Allow grade skipping
I get you're spitballing here, and I'm going to admit this isn't the most data-driven argument, but here goes: you're saying take away the kid's friends, ratchet up the bullying, make sure they hit puberty at the wrong time, make sure they suck at sports, obliterate the chance of them having a successful romantic interaction, and the reward is one of two things: still being bored in classes with the same problems, or having "dumb kid" problems in the exact same classes that are harmful to dumb kids.
Again... total leaf-node in your...
Whenever I try to "learn what's going on with AI alignment" I wind up on some article about whether dogs know enough words to have thoughts or something. I don't really want to kill off the theoretical term (it can peek a little further into the future and function more independently of technology, basically), but it seems like kind of a poor way to answer stuff like: what's going on now, or, if all the AI companies allowed me to write their 6 month goals, what would I put on it.
Camus specifically criticized that Kierkegaard leap of faith in The Myth of Sisyphus. Would be curious if you've read it and if it makes more sense to you than to me lol. Camus basically thinks you don't need to make any ultimate philosophical leap of faith. I'm more motivated by the weaker but still useful half of his argument, which is just that nihilism doesn't imply unhappiness or depression, or say you shouldn't try to make sense of things. Those are all as wrong as leaping into faith in God.
It's cool to put this to paper. I tried writing down my most fundamental principles and noticed I thought they were tautological and also realized many people disagree with them. Like "If you believe something is right you believe others are wrong." Many, many people have a belief "everyone's entitled to their own opinion" that overrules this one.
Or "if something is wrong you shouldn't do it." Sounds... tautological. But again, many people don't think that's really true when it comes to abstractly reasoned "effective altruism" type stuff. It's just an ocea...
I've been making increasingly more genuine arguments about this regarding horoscopes. They're not "scientific," but neither are any of my hobbies, and they're only harmful when taken to the extreme but that's also true for all my hobbies, and they seem to have a bunch of low-grade benefits like "making you curious about your personality." So then I felt astrology done scientifically (where you make predictions but hedge them and are really humble about failure) is way better than science done shoddily (where you yell at people for not wearing a mask to you...
I know I'm writing in 2025 but this is the first Codex piece I didn't like. People don't know about or like AI experts, so they ignore them the way all of us rationalists ignore astrology experts. There's no fallacy. There's a crisis in expert trust; let's not try to conflate that with people's inability to distinguish between 1% and 5% chances.
Reminds me of the Tolkien cosmology including the inexplicable Tom Bombadil. Human intuition on your conjecture is varied. I vote it's false - seems like if the universe has enough chances to do something coincidental it'll get lucky eventually. I feel that force is stronger than the ability to find an even better contextualized explanation.
I almost think it's a problem you included the word "mainstream." It's a slippery word that winds up meaning "other people's news." It seems like realizing the point in your post is one step, and taking a more surgical dive into what news counts as obscure enough is another. If you're a doomscroller you're probably gravitating toward stuff many people have been hearing about, though.
The "semantic bounty" fallacy occurs when you argue semantics, and you think that if you win an argument that X counts as Y, your interlocutor automatically gives up all the properties of Y as a bounty.
What actually happens is: your interlocutor may yield that X technically counts as Y, but since it's a borderline example of Y, most of Y doesn't apply to it. Unfortunately, as the argument gets longer, you may feel you deserve a bigger bounty if you win, when really your interlocutor is revealing to you that their P(X is not Y) is quite high, and if they do yie...
So don't use the definition if it's useless. The object level conversation is very easy to access here. Say something like "do you mean GMOs?" and then ask them why they think GMOs are harmful. If their answer is "because GMOs are chemicals" then you say "why do you think chemicals are harmful?" and then you can continue conversing about whether GMOs are harmful.
Honestly I think it's net virtuous to track other people's definitions and let them modify them whenever they feel a need to. Aligning on definitions is expensive and always involves talking ...