Keep in mind their goal is to take money from gambling addicts, not predict the future.
Oh, OK, you said "has obvious ADHD" like you're inferring it from a few minutes' observation of her behavior, not that she told you she has ADHD. In general, no, you can't get an accurate diagnosis by observing someone. You'd need a differential diagnosis against hypomania, hyperthyroidism, autism, substance abuse, caffeine, sleep deprivation, or just enjoying her hobby, plus you'd need to establish that whatever behavior is ADHD-like happens across a variety of domains going back some time.
ADHD is not legible just from being in the same room as someone.
Furthermore, these are pretty basic flaws by LW standards, like the map/territory confusion, which is the subject of the first post in the first sequence. I don't think "discussing basic stuff" is wrong by itself, but doing so by shuttling in someone else's post is sketchy, and when that post is also some sort of polemic countered by the first post in the first sequence on LW, it starts getting actively annoying.
A convenient fiction, a.k.a. a model. Like, they almost get this; they just think pointing it out should be done in a really polemical, strawmanny "scientists worship their god-models" way.
It's telling that they manage to avoid the words "risk" or "risk-averse," because risk aversion is the most obvious example of a case where an economist would recognize that a simpler form of utility, money, isn't the best model for individual decisions. This isn't a forgivable error when you're convinced you have a more lucid understanding of the model/metaphor status of a scientific concept than the scientists who use it, and the concept is accessible in Econ 101 or even just common sense.
More specifically, the correctness of the proof (at least in the triangles case) is common sense; coming up with the proof is not.
The integrals idea gets sketchy. Try it with e^(1/x): it's just a composition of functions, so reverse the chain rule and then deal with any extra terms that come up. Except, of course, it has no elementary antiderivative. There's not really any utility in overextending common sense to include things that might or might not work. And you're very close to implying "it's common sense" is a proof for things that sound obvious but aren't.
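To make that concrete, here's a quick sketch (standard calculus, nothing beyond the example above) of where the naive reverse-chain-rule move breaks down:

```latex
% Since (e^{1/x})' = -e^{1/x}/x^2, the naive guess is F(x) = -x^2 e^{1/x}. But:
\frac{d}{dx}\left(-x^{2}e^{1/x}\right) = -2x\,e^{1/x} + e^{1/x}
% The "extra term" 2x e^{1/x} is worse than what we started with, and
% iterating the fix never terminates. The actual antiderivative requires
% the non-elementary exponential integral Ei:
\int e^{1/x}\,dx = x\,e^{1/x} - \operatorname{Ei}(1/x) + C
```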
Claude 3.7 is too balanced, too sycophantic, buries the lede
me: VA monitor v IPS monitor for coding, reducing eye strain
It wrote a balanced answer and said "IPS is generally better," but it sounded more like 60/40, and it missed the obvious fact that VA monitors are generally the curved ones. My older coworkers with more eye-strain problems don't have curved monitors.
I hop on Reddit/YT and the answer gets clear really fast. Claude's info was accurate yet missed the point, and I wound up only getting the answer on Reddit/YT.
One I've noticed is that pretty well-intentioned "woke" people are more "lived experiences" oriented and well-intentioned "rationalist" people are more "strong opinions, weakly held." Honestly, if your only goal is truth-seeking (admitting I'm rationalist-biased when I say this, and that this is simplified), the "woke" frame is better at breadth and the "rationalist" frame is better at depth. But oh my gosh, these arguments can spiral. Neither realizes their meta has broken down. The rationalist thinks low-confidence opinions are humility; the woke thinks "I am ope...
I'm kind of interested in this idea of pretending you're talking to different people at different phases. Boss, partner, anyone else, ...
Hadn't thought of the unconscious->reward for noticing flow. Neat!
Those are two really different directions. One option is to just outright dismiss the other person. The other is to cede the argument completely but claim Moloch completely dominates that argument too. Is this really how you want to argue stuff: everything is either 0 or the next level up of infinity?
Well, conversely, do you have examples that don't involve one side trying to claim a moral high ground and trivialize other concerns? That's the main class of examples I can see as relevant to your posts, and for these I don't think the problem is an "any reason" phenomenon; it's breaking out of the terrain where the further reasons are presumed trivial.
I don't think the problem is forgetting that other arguments exist; it's confronting whether an argument like "perpetuates colonialism" dominates concerns like "usability." I'd like to know how you handle arguing for something like "usability" in the face of a morally urgent argument like "don't be eurocentric."
> I'd like to know how you handle arguing for something like "usability" in the face of a morally urgent argument like "don't be eurocentric."
I would probably start with rejecting the premise that I have to listen to other people's arguments.
(This makes even more sense when we know that the people who loudly express their opinions are often just a tiny minority of users. However, it is perfectly possible to ignore the majority, too.)
I think this is a mistake that many intelligent people make: believing that you need to win verbal fights. Perhaps identifying...
Well, a simple, useful, accurate model (if non-learning-oriented, except to the extent that it's a known temporary state) is to turn all the red boxes into one more node in your mental map and average out accordingly. If they're an expert, it's like: "What I'd THOUGHT up to this point is 0.3, but someone very important said 0.6, so it's probably closer to 0.6, but it's also possible we're talking about different situations without realizing it."
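A minimal sketch of that averaging in code, assuming simple linear opinion pooling (the 2:1 trust ratio here is purely illustrative, not a claim about how much to defer):

```python
def pooled_estimate(mine: float, theirs: float,
                    my_weight: float, their_weight: float) -> float:
    """Weighted average of two probability estimates (linear opinion pooling)."""
    return (my_weight * mine + their_weight * theirs) / (my_weight + their_weight)

# My estimate so far is 0.3; the expert says 0.6. Trusting their
# evidence twice as much as mine pulls the pooled estimate toward theirs:
print(pooled_estimate(0.3, 0.6, my_weight=1.0, their_weight=2.0))  # 0.5
```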
I thought it might be "look for things that might not even be there as hard as you would if they were there." Then the koan form takes it closer to "the thereness of something has little relevance to how hard you look for it." But it needs to get closer to the "biological" part of your brain, where you're not faking it with all your mental and bodily systems, like when your blood pressure rises from "truly believing" a lion is around the corner but wouldn't if you "fake believed" it.
Neat. You could try asking it for a confidence interval, which will probably correlate with the hallucinations. Another idea is to run it against the top 1000 articles and see how accurate they are. I can't really guess back-of-envelope whether it's cost-effective to run this over all of wiki per article.
Also, I kind of just want this on reddit and stuff. I'm more concerned about casually ingested fake news than about errors in high-quality articles when it comes to propaganda/disinfo.
By "aren't catching" do you mean "can't" or do you mean "wikipedia company/editors haven't deployed an LLM to crawl wikipedia, read sources and edit the article for errors"?
The 161 is paywalled, so I can't really test. My guess is Claude wouldn't find the math error off a "proofread this, here are its sources copy/pasted" type prompt, but you can try.
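For anyone who wants to actually try it, a rough sketch of the experiment, where ask_llm() is a placeholder for whatever chat API you have access to (the prompt wording and scoring idea are my own guesses, not a tested recipe):

```python
from typing import List

def ask_llm(prompt: str) -> str:
    """Placeholder: wire up your chat API of choice here."""
    raise NotImplementedError

def proofread_article(article_text: str, source_texts: List[str]) -> str:
    """Ask the model to check an article against its copy/pasted sources."""
    prompt = (
        "Proofread this Wikipedia article against its sources. "
        "List any factual or math errors, each with a confidence score 0-1.\n\n"
        f"ARTICLE:\n{article_text}\n\nSOURCES:\n" + "\n---\n".join(source_texts)
    )
    return ask_llm(prompt)

# Loop this over e.g. the top 1000 articles, then check how well the model's
# confidence scores correlate with human-verified errors before scaling up.
```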
You want to be tending your value system so that being good at your job also makes you happy. It sounds like a cop-out, but that's really it, really important, and really the truth. Being angry that you have to do your job as well as possible is not sustainable.
To me, those instructions are a little like OP's "understand an algorithm": I would need to be able to do all of them, without needing any support from a teammate, in a predictable amount of time. The first two are 10-minute activities for some level of a rough draft; the third I wrote specifically so it has a...
> Allow grade skipping
I get you're spitballing here, and I'll admit this isn't the most data-driven argument, but here goes: you're saying take away the kid's friends, ratchet up the bullying, make sure they hit puberty at the wrong time, make sure they suck at sports, and obliterate their chance of a successful romantic interaction, and the reward is either still being bored in classes with the same problems, or having "dumb kid" problems in the exact same classes that are harmful to dumb kids.
Again... total leaf-node in your...
You also have the Trump-era "RINO" slur and some similar left-on-liberal fighting. The attitude is "first let me deal with this intra-tribe outgroup, before getting back to my normal outgroup."
Whenever I try to "learn what's going on with AI alignment" I wind up on some article about whether dogs know enough words to have thoughts, or something. I don't really want to kill off the theoretical term (it can peek a little further into the future and function more independently of technology, basically), but it seems like kind of a poor way to answer stuff like: what's going on now, or, if all the AI companies allowed me to write their 6-month goals, what would I put on the list?
Camus specifically criticized that Kierkegaardian leap of faith in The Myth of Sisyphus. I'd be curious whether you've read it and whether it makes more sense to you than it does to me, lol. Camus basically thinks you don't need to make any ultimate philosophical leap of faith. I'm more motivated by the weaker but still useful half of his argument, which is just that nihilism doesn't imply unhappiness or depression, and doesn't say you shouldn't try to make sense of things. Those are all as wrong as leaping into faith in God.
It's cool to put this to paper. I tried writing down my most fundamental principles and noticed I thought they were tautological, and also realized many people disagree with them. Like "if you believe something is right, you believe others are wrong." Many, many people hold a belief, "everyone's entitled to their own opinion," that overrules this one.
Or "if something is wrong, you shouldn't do it." Sounds... tautological. But again, many people don't think that's really true when it comes to abstractly reasoned "effective altruism" type stuff. It's just an ocea...
This might just be a writing critique, but 1) I just skipped all the bricks stuff, and 2) I found the conclusion was "shares aren't like bricks." Also, what should we use instead?
I've been making increasingly genuine arguments about this regarding horoscopes. They're not "scientific," but neither are any of my hobbies; they're only harmful when taken to the extreme, but that's also true of all my hobbies; and they seem to have a bunch of low-grade benefits like "making you curious about your personality." So I've come to feel that astrology done scientifically (where you make predictions but hedge them and are really humble about failure) is way better than science done shoddily (where you yell at people for not wearing a mask to you...
I know I'm writing in 2025, but this is the first Codex piece I didn't like. People don't know about or like AI experts, so they ignore them the way all of us rationalists ignore astrology experts. There's no fallacy. There's a crisis in expert trust; let's not conflate that with people's inability to distinguish between 1% and 5% chances.
Reminds me of the Tolkien cosmology including the inexplicable Tom Bombadil. Human intuition on your conjecture is varied. I vote it's false: it seems like if the universe has enough chances to do something coincidental, it'll get lucky eventually. I feel that force is stronger than our ability to find an even better contextualized explanation.
I almost think it's a problem you included the word "mainstream." It's a slippery word that winds up meaning "other people's news." It seems like realizing the point in your post is one step, and taking a more surgical dive into what news counts as obscure enough is another. If you're a doomscroller you're probably gravitating toward stuff many people have been hearing about, though.
The "semantic bounty" fallacy occurs when you argue semantics, and you think that if you win an argument that X counts as Y, your interlocutor automatically gives up all the properties of Y as a bounty.
What actually happens is: your interlocutor may yield that X technically counts as Y, but since it's a borderline example of Y, most of Y doesn't apply to it. Unfortunately, as the argument gets longer, you may feel you deserve a bigger bounty if you win, when really your interlocutor is revealing to your their P(X is not Y) is quite high, and if they do yie...
This is cool. It comes up alongside meta-contrarianism. There's another concept I might write up that goes something like, "you don't always have to name the concept you're defending."
For example, I wanted to do a meetup on why Gen Z men are more religious, in celebration of International Men's Day.
Just think of how the two topics in that sentence interact. It's more likely I'm going to have a meetup on "whether it's okay to have a meetup about International Men's Day" than anything. And if I wanted to promote men in any broad sense, it seems like doing the topi...