The Additional Questions Elephant (first image in article, "image credit: Planecrash") is definitely older than Planecrash; see e.g. https://knowyourmeme.com/photos/1036583-reaction-images for an instance from 2015.
They're present on the original for which this is a linkpost. I don't know what the mechanism was by which the text was imported here from the original, but presumably whatever it was it didn't preserve the images.
Yes, that sounds much more normal to me.
Though in the particular case here, something else seems off: when you write f(x) you would normally italicize both the "f" and the "x", as you can see in the rendering in this very paragraph. I can't think of any situation in actual mathematical writing where you would italicize one and not the other in order to make some distinction between function names and variable names.
For that matter, I'm not wild about making a distinction between "variables" and "functions". If you write f(x) and also...
(Brief self-review for LW 2023 review.)
Obviously there's nothing original in my writeup as opposed to the paper it's about. The paper still seems like an important one, though I haven't particularly followed the literature and wouldn't know if it's been refuted or built upon by other later work. In particular, in popular AI discourse one constantly hears things along the lines of "LLMs are just pushing symbols around and don't have any sort of model of the actual world in them", and this paper seems to me to be good evidence that transformer networks, even...
I'm confused by what you say about italics. Mathematical variables are almost always italicized, so how would italicizing something help to clarify that it isn't a variable?
It seems like a thing that literally[1] everyone does sometimes. "Let's all go out for dinner." "OK, where shall we go?" As soon as you ask that question you're "optimizing for group fun" in some sense. Presumably the question is intending to ask about some more-than-averagely explicit, or more-than-averagely sophisticated, or more-than-averagely effortful, "optimizing for group fun", but to me at least it wasn't very clear what sort of thing it was intending to point at.
[1] Almost literally.
Yeah, I do see the value of keeping things the same across multiple years, which is why I said "might be worth" rather than "would be a good idea" or anything of the sort.
To me, "anti-agathics" specifically suggests drugs or something of the kind. Not so strongly that it's obvious to me that the question isn't interested in other kinds of anti-aging measures, but strongly enough to make it not obvious whether it is or not.
There is arguably a discrepancy between the title of the question "P(Anti-Agathics)" and the actual text of the question; there might be ways of "reaching an age of 1000 years" that I at least wouldn't want to call "anti-agathics". Uploading into a purely virtual existence. Uploading into a robot whose parts can be repaired and replaced ad infinitum. Repeated transfer of consciousness into some sort of biological clones, so that you get a new body when the old one starts to wear out.
My sense is that the first of those is definitely not intended to be cover...
I have just noticed something that I think has been kinda unsatisfactory about the probability questions since for ever.
There's a question about the probability of "supernatural events (including God, ghosts, magic, etc.)" having occurred since the beginning of the universe. There's another question about the probability of there being a god.
I notice an inclination to make sure that the first probability is >= the second, for the obvious reason. But, depending on how the first question is interpreted, that may be wrong.
If the existence of a god is consi...
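To illustrate with entirely made-up numbers (not anyone's actual credences): if a god could exist without ever having actually intervened since the beginning of the universe, the "events" probability can coherently come out lower than the "god" probability.

```python
# Invented illustrative credences, just to show the inequality can fail:
p_god = 0.10                   # P(a god exists)
p_events_given_god = 0.40      # P(some supernatural event has occurred | god exists)
p_events_given_no_god = 0.001  # ghosts, magic, etc. in a godless universe

p_events = p_god * p_events_given_god + (1 - p_god) * p_events_given_no_god
print(p_events)  # 0.0409 -- below p_god, yet perfectly coherent
```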
"Trauma" meaning psychological as opposed to physical damage goes back to the late 19th century.
I agree that there's a widespread tendency to exaggerate the unpleasantness/harm done by mere words. (But I suggest there's an opposite temptation too, to say that obviously no one can be substantially harmed by mere words, that physical harm is different in kind from mere psychological upset, etc., and that this is also wrong.)
I agree that much of the trans community seems to have embraced what looks to me like a severely hyperbolic view of how much threat tran...
I don't think "deadname" is a ridiculous term just because no one died. The idea is that the name is dead: it's not being used any more. Latin is a "dead language" because (roughly speaking) no one speaks or writes in Latin. "James" is a "dead name" because (roughly speaking) no one calls that person "James" any more.
This all seems pretty obvious to me, and evidently it seems the opposite way to you, and both of us are very smart [citation needed], so probably at least one of us is being mindkilled a bit by feeling strongly about some aspect of the issue. I don't claim to know which of us it is :-).
I think you're using "memetic" to mean "of high memetic fitness", and I wish you wouldn't. No one uses "genetic" in that way.
An idea that gets itself copied a lot (either because of "actually good" qualities like internal consistency, doing well at explaining observations, etc., or because of "bad" (or at least irrelevant) ones like memorability, grabbing the emotions, etc.) has high memetic fitness. Similarly, a genetically transmissible trait that tends to lead to its bearers having more surviving offspring with the same trait has high genetic fitness. O...
Unless I misread, it said "mRNA" before.
Correction: the 2024 Nobel Prize in Medicine was for the discovery of microRNA, not mRNA which is also important but a different thing.
I think it's more "Hinton's concerns are evidence that worrying about AI x-risk isn't silly" than "Hinton's concerns are evidence that worrying about AI x-risk is correct". The most common negative response to AI x-risk concerns is (I think) dismissal, and it seems relevant to that to be able to point to someone who (1) clearly has some deep technical knowledge, (2) doesn't seem to be otherwise insane, (3) has no obvious personal stake in making people worry about x-risk, and (4) is very smart, and who thinks AI x-risk is a serious problem.
It's hard to squ...
Pedantic correction: you have some sizes where you've written e.g. 20' x 20' and I'm pretty sure you mean 20" x 20".
(Also, the final note saying pixel art is good for crisp upscaling and you should start with the lowest-resolution version seems very weird to me, though the way it's worded makes it unlikely that this is a mistake; another sentence or so elaborating on why this is a good idea would be interesting to me.)
So maybe e.g. the (not very auto-) autoformalization part produced a theorem-statement template with some sort of placeholder where the relevant constant value goes, and AlphaProof knew it needed to find a suitable value to put in the gap.
I'm pretty sure what's going on is:
The AlphaZero algorithm doesn't obviously not involve an LLM. It has a "policy network" to propose moves, and I don't know what that looks like in the case of AlphaProof. If I had to guess blindly I would guess it's an LLM, but maybe they've got something else instead.
I don't think this [sc. that AlphaProof uses an LLM to generate candidate next steps] is true, actually.
Hmm, maybe you're right. I thought I'd seen something that said it did that, but perhaps I hallucinated it. (What they've written isn't specific enough to make it clear that it doesn't do that either, at least to me. They say "AlphaProof generates solution candidates", but nothing about how it generates them. I get the impression that it's something at least kinda LLM-like, but could be wrong.)
Looks like this was posted 4 minutes before my https://www.lesswrong.com/posts/TyCdgpCfX7sfiobsH/ai-achieves-silver-medal-standard-solving-international but I'm not deleting mine because I think some of the links, comments, etc. in my version are useful.
Nothing you have said seems to make any sort of conspiracy theory around this more plausible than the alternative, namely that it's just chance. There are 336 half-hours per week; when two notable things happen in a week, about half a percent of the time one of them happens within half an hour of the other. The sort of conspiracies you're talking about seem to me more unlikely than that.
(Why a week? Arbitrary choice of timeframe. The point isn't a detailed probability calculation, it's that minor coincidences happen all the time.)
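For anyone who wants to check the arithmetic: assuming both events land uniformly at random within the week, the exact figure is about 2 × 0.5/168 ≈ 0.6%, and a quick Monte Carlo sketch (with the one-week window being the same arbitrary assumption as above) agrees:

```python
import random

TRIALS = 1_000_000
WEEK_HOURS = 168.0  # arbitrary one-week window, as above
WINDOW = 0.5        # "within half an hour"

hits = sum(
    abs(random.uniform(0, WEEK_HOURS) - random.uniform(0, WEEK_HOURS)) <= WINDOW
    for _ in range(TRIALS)
)
print(f"~{hits / TRIALS:.3%}")  # about 0.6%
```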
Given how much harm such an incident could do CrowdStrike, and given how much harm it could do an individual at CrowdStrike who turned out to have caused it on purpose, your second explanation seems wildly improbable.
The third one seems pretty improbable too. I'm trying to imagine a concrete sequence of events that matches your description, and I really don't think I can. Especially as Trump's formal acceptance of the GOP nomination can hardly have been any sort of news to anyone.
(Maybe I've misunderstood your tone and your comment is simply a joke, in whi...
Right: as I said upthread, the discussion is largely about whether terms like "spending" are misleading or helpful when we're talking about time rather than money. And, as you point out (or at least it seems clearly implied by what you say), whether a given term is helpful to a given person will depend on what other things are associated with that term in that person's mind, so it's not like there's even a definite answer to "is it helpful or misleading?".
(But, not that it matters all that much, I think you might possibly not have noticed that Ruby and Raemon are different people?)
In such a world we'd presumably already have vocabulary adapted to that situation :-). But yes, I would feel fine using the term "spending" (but then I also feel fine talking about "spending time") but wouldn't want to assume that all my intuitions from the present world still apply.
(E.g., in the actual world, for anyone who is neither very rich nor very poor, spending always has saving as an alternative[1], and how much you save can have a big impact on your future well-being. In the hypothetical spend-it-or-lose-it world, that isn't the case, and tha...
I am not convinced by the analogy. If you have $30 in your bank account and you spend it on a book, you are $30 poorer; you had the option of just not doing that, in which case you would still have the $30. If you have 60 minutes ahead of you in the day and you spend it with a friend, then indeed you're 60 minutes older = poorer at the end of that time; but you didn't have the option of not spending those 60 minutes; they were going to pass by one way or another whatever you did.
You might still have given up something valuable! If you'd have preferred to devot...
You would be in the same situation if you'd done something else during that hour. You're only "paying", in the sense of giving up something valuable to you, in so far as you would have preferred to do something else.
That's sometimes true of time spent with friends -- maybe your friend is moving house and you help them unload a lot of boxes or something -- but by and large we human beings tend to enjoy spending time with our friends. (Even when unloading boxes, actually.)
Agreed. I was going for "explain how he came to say the false thing", not "explain why it's actually true rather than false".
I took it as a kinda-joking backformation from "patisserie".
On the other hand, it seems like Adam is looking at breadmaking that uses a sourdough starter, and that does have both yeasts and bacteria in it. (And breadmaking that uses it is correspondingly more error-prone and in need of adjustment on the fly than most baking that just uses commercial yeast, though some of what makes it more error-prone isn't directly a consequence of the more complicated leavener.)
I tried them all. My notes are full of spoilers.
Industrial Revolution (_2):
I didn't catch the deliberately-introduced error; instead I thought the bit about a recession in the 1830s was likely false. I took "the Industrial Revolution" to include the "Second Industrial Revolution" (as, indeed, the article seems to) and it seems to me that electricity was important for the progress of that, so I'm not entirely convinced that what was introduced to the article was exactly a falsehood.
Price gouging:
It seemed utterly unbelievable to me that a gathering of economists...
There appear to be two edited versions of the Industrial Revolution article. Which one is recommended? (My guess: the _2 one because it's more recent.)
Let's suppose it's true, as Olli seems to find, that most not-inconsequential things in Wikipedia are more "brute facts" than things one could reasonably deduce from other things. Does this tell us anything interesting about the world?
For instance: maybe it suggests that reasoning is less important than we might think, that in practice most things we care about we have to remember rather than working out. It certainly seems plausible that that's true, though "reasoning is less important than we might think" feels like a slightly tendentious way of puttin...
Unfortunately, not being a NYT subscriber I think I can't see the specific video you mention. (The only video I can see before the nag message in which Biden is allegedly being led anywhere has his wife doing the alleged leading, and its point seems to have been not that someone was leading him somewhere but that he walked off instead of greeting veterans at a D-Day event. And nothing in the text I can see calls anything a "cheap fake".)
(Obviously my lack of an NYT subscription isn't your problem, but unless there's another source for what...
What in that article is misinformation?
Elsewhere on the internet, people are complaining vociferously that the NYT's more recent articles about Biden's age and alleged cognitive issues show that the NYT is secretly doing the bidding of billionaires who think a different candidate might tax them less. I mention this not because I think those people are right but as an illustration of the way that "such-and-such a media outlet is biased!" is a claim that often says more about the position of the person making the complaint than about the media outlet in question.
I gave it a few paragraphs from something I posted on Mastodon yesterday, and it identified me. I'm at least a couple of notches less internet-famous than Zvi or gwern, though again there's a fair bit of my writing on the internet and my style is fairly distinctive. I'm quite impressed.
(I then tried an obvious thing and fed it a couple of Bitcoin-white-paper paragraphs, but of course it knew that they were "Satoshi Nakamoto" and wasn't able to get past that. Someone sufficiently determined to identify Satoshi and with absurd resources could do worse than to train a big LLM on "everything except writings explicitly attributed to Satoshi Nakamoto" and then see what it thinks.)
If it's true that models are "starting to become barely capable of noticing that they are falling for this pattern" then I agree it's a good sign (assuming that we want the models to become capable of "general intelligence", of course, which we might not). I hadn't noticed any such change, but if you tell me you've seen it I'll believe you and accordingly reduce my level of belief that there's a really fundamental hole here.
I'm suggesting that the fact that things the model can't do produce this sort of whack-a-mole behaviour and that the shape of that behaviour hasn't really changed as the models have grown better at individual tasks may indicate something fundamental that's missing from all models in this class, and that might not go away until some new fundamental insight comes along: more "steps of scaling" might not do the trick.
Of course it might not matter, if the models become able to do more and more difficult things until they can do everything humans can do, in whi...
That seems reasonable.
My impression (which isn't based on extensive knowledge, so I'm happy to be corrected) is that the models have got better at lots of individual tasks but the shape of their behaviour when faced with a task that's a bit too hard for them hasn't changed much: they offer an answer some part of which is nonsense; you query this bit; they say "I'm sorry, I was wrong" and offer a new answer some different part of which is nonsense; you query this bit; they say "I'm sorry, I was wrong" and offer a new answer some different part of which is n...
It's pretty good. I tried it on a few mathematical questions.
First of all, a version of the standard AIW problem from the recent "Alice in Wonderland" paper. It got this right (not very surprisingly as other leading models also do, at least much of the time). Then a version of the "AIW+" problem which is much more confusing. Its answer was wrong, but its method (which it explained) was pretty much OK and I am not sure it was any wronger than I would be on average trying to answer that question in real time.
Then some more conceptual mathematical puzzles. I ...
Even though it would have broken the consistent pattern of the titling of these pieces, I find myself slightly regretting that this one isn't called "Grace Notes".
A nitpick: you say
fun story, I passed the C2 exam and then I realized I didn’t remember the word faucet when I went to the UK to visit a friend
but here in the UK I don't think I have ever once heard a native speaker use the word "faucet" in preference to "tap". I guess the story is actually funnier if immediately after passing your C2 exam you (1) thought "faucet" was the usual UK term and (2) couldn't remember it anyway...
(I liked the post a lot and although I am no polyglot all the advice seems sound to me.)
Please don't write comments all in boldface. It feels like you're trying to get people to pay more attention to your comment than to others, and it actually makes your comment a little harder to read as well as making the whole thread uglier.
It looks to me as if, of the four "root causes of social relationships becoming more of a lemon market" listed in the OP, only one is actually anything to do with lemon-market-ness as such.
The dynamic in a lemon market is that you have some initial fraction of lemons but it hardly matters what that is because the fraction of lemons quickly increases until there's nothing else, because buyers can't tell what they're getting. It's that last feature that makes the lemon market, not the initial fraction of lemons. And I think three of the four proposed "root c...
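Here's a toy version of the Akerlof dynamic I mean (made-up uniform valuations; the only point is that the collapse depends on buyers' inability to tell quality, not on the starting mix):

```python
# Toy lemon market: car qualities uniform on [0, 1]. Buyers can't inspect,
# so they offer the average quality of whatever is still for sale; any
# seller whose car is worth more than the offer withdraws; repeat.
qualities = [i / 1000 for i in range(1001)]
for step in range(8):
    price = sum(qualities) / len(qualities)
    qualities = [q for q in qualities if q <= price]
    print(f"step {step}: price {price:.3f}, cars remaining {len(qualities)}")
```

The price roughly halves each round, and in the limit only the very worst cars are left, whatever fraction of lemons you started with.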
I think this is oversimplified:
High decouplers will notice that, holding preferences constant, offering people an additional choice cannot make them worse off. People will only take the choice if it's better than any of their current options.
This is obviously true if somehow giving a person an additional choice is literally the only change being made, but you don't have to be a low-decoupler to notice that that's very very often not true. For a specific and very common example: often other people have some idea what choices you have (and, in particular, if ...
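A minimal sketch of that point, with all the numbers invented: if a counterparty tailors its behaviour to your option set, the extra option can cost you even though you choose freely at the end.

```python
# Invented payoffs: an employer makes its one offer based on what it knows
# you *can* accept. If weekend work is known to be an option for you, the
# weekend contract is the only offer you get.
def offer(can_work_weekends: bool) -> tuple[int, bool]:
    return (105, True) if can_work_weekends else (100, False)

def utility(pay: int, weekends: bool) -> int:
    return pay - (30 if weekends else 0)  # weekends cost you 30 in welfare

print(utility(*offer(True)))   # 75: the extra option made you worse off
print(utility(*offer(False)))  # 100
```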
Then it seems unfortunate that you illustrated it with a single example, in which A was a single (uniformly distributed) number between 0 and 1.
I think this claim is both key to OP's argument and importantly wrong:
But a wavefunction is just a way to embed any quantum system into a deterministic system
(the idea being that a wavefunction is just like a probability distribution, and treating the wavefunction as real is like treating the probability distribution of some perhaps-truly-stochastic thing as real).
The wavefunction in quantum mechanics is not like the probability distribution of (say) where a dart lands when you throw it at a dartboard. (In some but not all imaginable Truly Stochastic world...
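The standard textbook way to see the disanalogy: mixing probability distributions can only ever average probabilities, whereas adding amplitudes can cancel them. For an equal-weight combination of two alternatives,

```latex
\text{mixture:}\qquad P(x) \;=\; \tfrac{1}{2}P_1(x) + \tfrac{1}{2}P_2(x)
\qquad\text{(never below half of either)}

\text{superposition:}\quad P(x) \;=\; \Bigl|\tfrac{1}{\sqrt{2}}\psi_1(x) + \tfrac{1}{\sqrt{2}}\psi_2(x)\Bigr|^2
\;=\; \tfrac{1}{2}\lvert\psi_1(x)\rvert^2 + \tfrac{1}{2}\lvert\psi_2(x)\rvert^2 + \operatorname{Re}\,\psi_1^*(x)\,\psi_2(x)
```

The cross term can be negative, so the superposition can assign zero probability somewhere both branches separately assign plenty. No probability distribution over "which alternative really happened" can do that, and that's exactly what the dart-on-a-dartboard picture misses.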
I don't know exactly what the LW norms are around plagiarism and plagiarism-ish things, but I think that introducing that basically-copied material with
I learned this by observing how beginners and more experienced people approach improv comedy.
is outright dishonest. OP is claiming to have observed this phenomenon and gleaned insight from it, when in fact he read about it in someone else's book and copied it into his post.
I have strong-downvoted the post for this reason alone (though, full disclosure, I also find the one-sentence-per-paragraph style really...
One can't put a price on glory.
Pedantic note: there are many instances of "syncopathy" that I am fairly sure should be "sycophancy".
(It's an understandable mistake -- "syncopathy" is composed of familiar components, which could plausibly be put together to mean something like "the disease of agreeing too much" which is, at least in the context of AI, not far off what sycophancy in fact means. Whereas if you can parse "sycophancy" at all you might work out that it means "fig-showing" which obviously has nothing to do with anything. So far as I can tell, no one actually knows how "fig-showing" came to be the term for servile flattery.)