All of danielechlin's Comments + Replies

This is cool. It ties in with meta-contrarianism. There's another concept I might write up that goes something like, "you don't always have to name the concept you're defending."

For example, I wanted to do a meetup on why Gen Z men are more religious, in celebration of International Men's Day.

Just think of how the two topics in that sentence interact. It's more likely I'm going to have a meetup on "whether it's okay to have a meetup about International Men's Day" than anything. And if I wanted to promote men in any broad sense, it seems like doing the topi... (read more)

Keep in mind their goal is to take money from gambling addicts, not predict the future.

-9Anders Lindström

oh ok, you said "has obvious adhd" like you're inferring it from a few minutes' observation of her behavior, not that she told you she has ADHD. in general no, you can't get an accurate diagnosis by observing someone; you need to do a differential diagnosis against hypomania, hyperthyroidism, autism, substance abuse, caffeine, sleep deprivation, or just enjoying her hobby, plus establish that whatever ADHD-like behavior there is shows up across a variety of domains going back some time.

ADHD is not legible just from being in the same room as someone.

2lsusr
It is when she acts like she has ADHD and tells you she has ADHD.

Furthermore, these are pretty basic flaws by LW standards, like "map/territory," which is the first post in the first sequence. I don't think "discussing basic stuff" is wrong by itself, but doing so by shuttling in someone else's post is sketch, and when that post is also some sort of polemic countered by the first post in the first sequence on LW, it starts getting actively annoying.

A convenient fiction, aka a model. They almost get this; they just think pointing it out should be done in a really polemical, strawmanny "scientists worship their god-models" way.

It's telling that they manage to avoid using the words "risk" or "risk-averse," because that's the most obvious example of a case where an economist would recognize that a simpler proxy for utility, money, isn't the best model for individual decisions. This isn't a forgivable error when you're convinced you have a more lucid understanding of the model/metaphor status of a science concept than the scientists who use it, and it's accessible in econ 101 or even just to common sense.

More specifically, the correctness of the proof (at least in the triangles case) is common sense, coming up with the proof is not.

The integrals idea gets sketchy. Try it with e^(1/x). It's just a composition of functions, so reverse the chain rule, then deal with any extra terms that come up. Of course, it has no elementary antiderivative. There's not really any utility in overextending common sense to include things that might or might not work. And you're very close to implying "it's common sense" is a proof for things that sound obvious but aren't.
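
(To pin down "no elementary antiderivative": the antiderivative exists, it just requires the non-elementary exponential integral Ei. The identity below can be verified by differentiating the right-hand side.)

```latex
\int e^{1/x}\,dx \;=\; x\,e^{1/x} \;-\; \operatorname{Ei}\!\left(\tfrac{1}{x}\right) \;+\; C,
\qquad
\operatorname{Ei}(u) \;=\; \mathrm{p.v.}\!\int_{-\infty}^{u}\frac{e^{t}}{t}\,dt .
```

Since Ei isn't elementary, "reverse the chain rule and clean up the extra terms" has nowhere to terminate here.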
 

2AnthonyC
Sure. And I'm of the opinion that it is only common sense after you've done quite a lot of the work of developing a level of intuition for mathematical objects that most people, including a significant proportion of high school math teachers, never got.

Claude 3.7 is too balanced, too sycophantic, buries the lede

me: VA monitor v IPS monitor for coding, reducing eye strain

It wrote a balanced answer and said "IPS is generally better," but it sounds more like 60/40, and it misses the obvious fact that VA monitors are generally the curved ones. My older coworkers with more eye strain problems don't have curved monitors.

I hop on reddit/YT and the answer gets clear really fast. Claude's info was accurate yet missed the point, and I wound up only getting the real answer on reddit/YT.

One I've noticed is that pretty well-intentioned "woke" people are more "lived experiences" oriented and well-intentioned "rationalist" people are more "strong opinions weakly held." Honestly, if your only goal is truth seeking (admitting I'm rationalist-biased when I say this, and also that this is simplified), the "woke" frame is better at breadth and the "rationalist" frame is better at depth. But ohmygosh these arguments can spiral. Neither realizes its meta has broken down. The rationalist thinks low-confidence opinions are humility; the woke thinks "I am ope... (read more)

I'm kind of interested in this idea of pretending you're talking to different people at different phases. Boss, partner, anyone else, ...

Hadn't thought of the unconscious->reward for noticing flow. Neat!

Those are two really different directions. One option is to just outright dismiss the other person. The other is to cede the argument completely but claim Moloch completely dominates that argument too. Is this really how you want to argue stuff -- everything is either 0 or the next level up of infinity?

3Deii
I do believe the "eurocentric" argument is a manifestation of Moloch. It is the new version of "x is the next Hitler" or "y was done by the Nazis": it can be used to dismiss any argument coming from the West and to justify almost anything. For example, it could be used by China or any Latin American country putting an AGI in the government, by saying: "AI safety is a eurocentric concept made to perpetuate Western hegemony." So as a rule of thumb, I refuse to give anyone saying that the benefit of the doubt. In my model, anyone using that argument has a hidden agenda behind it, and even if they don't, the false positives are not enough to change my mind. It's a net-positive personal policy, sorry not sorry.

Well, conversely, do you have examples that don't involve one side trying to claim a moral high ground and trivialize other concerns? That is the main class of examples I can see as relevant to your posts, and for those I don't think the problem is an "any reason" phenomenon; it's breaking out of the terrain where the further reasons are presumed trivial.

2silentbob
Some further examples:
  • Past me might have said: Apple products are "worse" because they are overpriced status symbols
  • Many claims in politics, say "we should raise the minimum wage because it helps workers"
  • We shouldn't use nuclear power because it's not really "renewable"
  • When AI lab CEOs warn of AI x risk we can dismiss that because they might just want to build hype
  • AI cannot be intelligent, or dangerous, because it's just matrix multiplications
  • One shouldn't own a cat because it's an unnatural way for a cat to live
  • Pretty much any any-benefit mindset that makes it into an argument rather than purely existing in a person's behavior

I don't think the problem is forgetting that other arguments exist; it's confronting whether an argument like "perpetuates colonialism" dominates concerns like "usability." I'd like to know how you handle arguing for something like "usability" in the face of a morally urgent argument like "don't be eurocentric."

Viliam102

I'd like to know how you handle arguing for something like "usability" in the face of a morally urgent argument like "don't be eurocentric."

I would probably start with rejecting the premise that I have to listen to other people's arguments.

(This makes even more sense when we know that the people who loudly express their opinions are often just a tiny minority of users. However, it is perfectly possible to ignore the majority, too.)

I think this is a mistake that many intelligent people make, to believe that you need to win verbal fights. Perhaps identifying... (read more)

2Deii
"Don't be eurocentric" is not an urgent problem at all, "Don't be needlessly inefficient just to virtual signal group affiliations" is an even bigger problem in the grand scheme of things, what if that user never gets to use the app because he never manages to understand the UI? also most developers aren't in a good enough position in the market where they can manage to lose users by such a trivialities
2silentbob
It certainly depends on who's arguing. I agree that some sources online see this trade-off and end up on the side of not using flags after some deliberation, and I think that's perfectly fine. But this describes only a subset of cases, and my impression is that very often (and certainly in the cases I experienced personally) it is not even acknowledged that usability, or anything else, may also be a concern that should inform the decision.  (I admit though that "perpetuates colonialism" is a spin that goes beyond "it's not a 1:1 mapping" and is more convincing to me)

Well, a simple, useful, accurate, non-learning-oriented model (except to the extent that it's a known temporary state) is to turn all the red boxes into one more node in your mental map and average out accordingly. If they're an expert, it's like "well, what I've THOUGHT to this point is 0.3, but someone very important said 0.6, so it's probably closer to 0.6, but it's also possible we're talking about different situations without realizing it."
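
(A toy version of that averaging, as a minimal sketch; the 2:1 weight toward the expert is an arbitrary illustration rather than a principled rule, and pooled_credence is a made-up name.)

```python
# Toy sketch: fold an expert's stated credence into your own as a weighted average.
def pooled_credence(mine: float, expert: float, expert_weight: float = 2.0) -> float:
    """Weight the expert's number more heavily than your own prior guess."""
    return (mine + expert_weight * expert) / (1.0 + expert_weight)

print(pooled_credence(0.3, 0.6))  # 0.5 -- lands closer to the expert's 0.6
```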

I thought it might be "look for things that might not even be there as hard as you would if they are there."  Then the koan form takes it closer to "the thereness of something just has little relevance on how hard you look for it." But it needs to get closer to the "biological" part of your brain, where you're not faking it with all your mental and bodily systems, like when your blood pressure rises from "truly believing" a lion is around the corner but wouldn't if you "fake believe" it.

3Lorxus
I imagine it's something like "look for things that are notably absent, when you would expect them to have been found if there"?

Neat. You can try asking it for a confidence interval, and that will probably correlate with which claims are hallucinations. Another idea is to run it against the top 1000 articles and see how accurate they are (rough sketch below). I can't give a back-of-envelope guess for whether it's cost-effective to run this over all of wiki per-article.

Also I kind of just want this on reddit and stuff. I'm more concerned about casually ingested fake news than errors in high quality articles when it comes to propaganda/disinfo.
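
(Rough shape of that top-1000 idea, as a minimal sketch assuming the anthropic Python client; the model id, prompt wording, and helper names are placeholder assumptions, and the "confidence" numbers are just the model's self-report.)

```python
# Minimal sketch: ask a model to proofread a Wikipedia article and self-report
# a 0-100 confidence score for each suspected error. Placeholder names throughout.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

PROMPT = (
    "Proofread the following Wikipedia article text. List suspected factual errors "
    "or typos, and give each a 0-100 confidence score.\n\n{article}"
)

def check_article(article_text: str) -> str:
    response = client.messages.create(
        model="claude-3-7-sonnet-latest",  # placeholder model id
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(article=article_text)}],
    )
    return response.content[0].text

# Batch version of the idea above: loop check_article over the top-N articles,
# then spot-check whether the low-confidence flags are mostly hallucinations.
```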

By "aren't catching" do you mean "can't" or do you mean "wikipedia company/editors haven't deployed an LLM to crawl wikipedia, read sources and edit the article for errors"?

The 161 is paywalled so I can't really test. My guess is Claude wouldn't find the math error off a "proofread this, here's its sources copy/pasted" type prompt, but you can try.

8Ben Wu
I was curious about this so decided to check. Both Claude 3.7 and GPT-4o were able to spot this error when I provided them just the Wikipedia page and instructed them to find any mistakes. They also spotted the arithmetic error when asked to proof-read the cited WSJ article. In all cases, their stated reasoning was that 200 million tons of rabbit meat was way too high, on the order of global meat production, so they didn't have to actually do any explicit arithmetic.[1]

Funnily enough, the LLMs found two other mistakes in the Rabbit Wikipedia page: the character Peter Warne was listed as Peter Wayne and doxycycline was misspelt as docycycline. So it does seem like, even without access to sources, current LLMs could do a good job at spotting typos and egregious errors in Wikipedia pages.

(caveat: both models also listed a bunch of other "mistakes" which I didn't check carefully but seemed like LLM hallucinations since the correction contradicted reputable sources)

1. ^ GPT-4o stumbles slightly when trying to do the arithmetic on the WSJ article. It compares the article's 420,000 tons with 60 million (200 million x 0.3) rather than the correct calculation of 42 million (200 million x 0.3 x 0.7). However, I gave the same prompt to o1 and it did the maths correctly.
2ozziegooen
Yep. My guess is that this would take some substantial prompt engineering, and potentially a fair bit of money.  I imagine they'll get to it eventually (as it becomes easier + cheaper), but it might be a while. 

You want to be tending your value system so that being good at your job also makes you happy. It sounds like a cop-out but that's really it, really important, and really the truth. Being angry you have to do your job the best way possible is not sustainable.

  • "Wrap that in a semaphore"
  • "Can you check if that will cause a diamond dependency"
  • "Can you try deflaking this test? Just add a retry if you need or silence it and we'll deal with it later"
  • "I'll refactor that so it's harder to call it with a string that contains PII"

To me, those instructions are a little like OP's "understand an algorithm," and I would need to do all of them without any support from a teammate in a predictable amount of time. The first two are 10-minute activities for some level of rough draft; the 3rd I wrote specifically so it has a... (read more)
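
(For a sense of scale on the first instruction, a minimal sketch of "wrap that in a semaphore" in Python; fetch_profile, MAX_CONCURRENT_CALLS, and the downstream call are made-up placeholders, not anything from the thread.)

```python
# Minimal sketch of "wrap that in a semaphore": cap how many threads hit a
# downstream service at once. All names here are illustrative placeholders.
import threading

MAX_CONCURRENT_CALLS = 4
_limiter = threading.Semaphore(MAX_CONCURRENT_CALLS)

def _call_downstream_service(user_id: str) -> dict:
    return {"user_id": user_id}  # stub so the sketch runs standalone

def fetch_profile(user_id: str) -> dict:
    with _limiter:  # at most MAX_CONCURRENT_CALLS threads enter this block at once
        return _call_downstream_service(user_id)

print(fetch_profile("u123"))
```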

Allow grade skipping

I get you're spitballing here, and I'm going to admit this isn't the most data-driven argument, but here goes: you're saying take away the kid's friends, ratchet up the bullying, make sure they hit puberty at the wrong time, make sure they suck at sports, obliterate the chance of them having a successful romantic interaction, and the reward is one of two things: still being bored in classes with the same problems, or having "dumb kid" problems in the exact same classes that are harmful to dumb kids.

Again...  total leaf-node in your... (read more)

You also have the Trump-era "RINO" slur and some similar left-on-liberal fighting: "first let me deal with this intra-tribe outgroup, before getting back to my normal outgroup."

Whenever I try to "learn what's going on with AI alignment" I wind up on some article about whether dogs know enough words to have thoughts or something. I don't really want to kill off the theoretical term (it can peek a little further into the future and function more independently of technology, basically), but it seems like kind of a poor way to answer stuff like: what's going on now, or, if all the AI companies let me write their 6-month goals, what would I put on them?

Camus specifically criticized the Kierkegaardian leap of faith in The Myth of Sisyphus. Would be curious if you've read it and if it makes more sense to you than it does to me lol. Camus basically thinks you don't need to make any ultimate philosophical leap of faith. I'm more motivated by the weaker but still useful half of his argument, which is just that nihilism doesn't imply unhappiness or depression, or say you shouldn't try to make sense of things. Those are all as wrong as leaping into God faith.

It's cool to put this to paper. I tried writing down my most fundamental principles and noticed I thought they were tautological and also realized many people disagree with them. Like "If you believe something is right you believe others are wrong." Many, many people have a belief "everyone's entitled to their own opinion" that overrules this one.

Or "if something is wrong you shouldn't do it." Sounds... tautological. But again, many people don't think that's really true when it comes to abstractly reasoned "effective altruism" type stuff. It's just an ocea... (read more)

This might just be a writing critique, but 1) I just skipped all the bricks stuff, and 2) I found the conclusion to be "shares aren't like bricks." Also, like, what should we use instead?

I've been making increasingly more genuine arguments about this regarding horoscopes. They're not "scientific," but neither are any of my hobbies, and they're only harmful when taken to the extreme but that's also true for all my hobbies, and they seem to have a bunch of low-grade benefits like "making you curious about your personality." So then I felt astrology done scientifically (where you make predictions but hedge them and are really humble about failure) is way better than science done shoddily (where you yell at people for not wearing a mask to you... (read more)

I know I'm writing in 2025, but this is the first Codex piece I didn't like. People don't know about or like AI experts, so they ignore them the way all of us rationalists ignore astrology experts. There's no fallacy. There's a crisis in expert trust; let's not conflate that with people's inability to distinguish between 1% and 5% chances.

Reminds me of the Tolkien cosmology including the inexplicable Tom Bombadil. Human intuition on your conjecture is varied. I vote it's false - seems like if the universe has enough chances to do something coincidental it'll get lucky eventually. I feel that force is stronger than the ability to find an even better contextualized explanation.

I almost think it's a problem you included the word "mainstream." It's a slippery word that winds up meaning "other people's news." It seems like realizing the point in your post is one step, and taking a more surgical dive into what news counts as obscure enough is another. If you're a doomscroller you're probably gravitating toward stuff many people have been hearing about, though.

The "semantic bounty" fallacy occurs when you argue semantics, and you think that if you win an argument that X counts as Y, your interlocutor automatically gives up all the properties of Y as a bounty.

What actually happens is: your interlocutor may yield that X technically counts as Y, but since it's a borderline example of Y, most of Y doesn't apply to it. Unfortunately, as the argument gets longer, you may feel you deserve a bigger bounty if you win, when really your interlocutor is revealing to you that their P(X is not Y) is quite high, and if they do yie... (read more)