Said Achmiz

Answer by Said Achmiz

Tablespoons of butter.

A tablespoon is a unit of volume. Namely, it is one-sixteenth of a cup.

Now, there are two distinct units called “ounces” that are commonly used in the United States. One is the avoirdupois ounce, also known as the United States customary ounce, which is a unit of weight; it is one-sixteenth of a pound. The other is the U.S. customary fluid ounce, which is a unit of volume; it is one-eighth of a cup.

One-sixteenth of a cup is a tablespoon. One-eighth of a cup is an ounce (fluid). One-eighth of one-half of a cup is a tablespoon. These are all measures of volume.

Butter, however, is sold by weight:

The 16-oz. package of butter in the photo above says that it contains four sticks. This is one stick:

The stick is divided into eight “tablespoons”.

But the “tablespoons” of butter depicted above are not one tablespoon each in volume. And there is no such thing as a unit of weight called the “tablespoon”.

So what is this? Well, one stick of butter is 4 ounces in weight. 8 ounces in volume is one cup. By analogy, if we think of 8 ounces in weight as a “cup” in weight (which is not actually a real weight unit!), then one-sixteenth of that weight is a “tablespoon” in weight (by analogy with one-sixteenth of a cup in volume being a tablespoon in volume). Neither cups nor tablespoons are real weight units! But if we call an 8-oz. weight a “cup”, then we can call a 1/2 oz. weight a “tablespoon”.

Sticks of butter are divided into metaphorical tablespoons of butter.
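The arithmetic behind this analogy can be laid out explicitly. This is just a sketch of the reasoning above; the constant names are invented for illustration, and the unit values are the standard U.S. customary ones:

```typescript
// U.S. customary volume units, measured in fluid ounces
const CUP_FL_OZ = 8;
const TBSP_FL_OZ = CUP_FL_OZ / 16; // 1/16 cup = 0.5 fl oz

// Butter is sold by weight: one stick weighs 4 avoirdupois ounces,
// and its wrapper is marked off into 8 "tablespoons"
const STICK_WEIGHT_OZ = 4;
const TBSP_MARKS_PER_STICK = 8;

// Each marked "tablespoon" on the wrapper, in ounces of *weight*
const tbspWeightOz = STICK_WEIGHT_OZ / TBSP_MARKS_PER_STICK; // 0.5 oz

// The analogy: call an 8 oz. weight a "cup" of weight (not a real
// unit!); then 1/16 of that "cup" is a "tablespoon" of weight
const cupWeightOz = 8;
const analogTbspWeightOz = cupWeightOz / 16; // 0.5 oz

console.log(tbspWeightOz === analogTbspWeightOz); // true: both are 0.5 oz
```

So the wrapper’s “tablespoon” marks line up with the analogy exactly: half an ounce of weight each, just as a volume tablespoon is one-sixteenth of a volume cup.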

“A space Odyssey” is not watchable without discussing historical context

… why? I’ve watched this movie, and I… don’t think I’m aware of any special “historical context” that was relevant to it. (Or, at any rate, I don’t know what you mean by this.) It seemed to work out fine…

The main problem with your approach is not that it is counterintuitive (although it is, and more so than ours!), but that there is no way to return to “auto” mode via the site’s UI![1] Having clicked the mode selector, how do I go back to “no, just use my browser preference”? A two-state selector with a hidden, ephemeral third state, which cannot be retrieved once abandoned, is, I’m afraid, the worst approach…


  1. You can go into your browser’s dev tools and delete the localStorage item, or clear all your saved data via the browser’s preferences. (Well, on desktop, anyway; on mobile—who knows? Not the former, at least, and how many mobile users even know about the latter? And the latter is anyhow an undesirable method!) ↩︎
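For contrast, the core logic of a three-state selector, in which “auto” is always reachable from the UI, is quite simple. This is a minimal sketch under my own assumptions, not gwern.net’s or any other site’s actual implementation; in a browser, the saved setting would come from localStorage and the system preference from `matchMedia("(prefers-color-scheme: dark)")`:

```typescript
// The three explicit user choices; "auto" defers to the OS/browser preference.
type ModeSetting = "auto" | "light" | "dark";
type Mode = "light" | "dark";

// Resolve the effective mode from the saved setting (null = never set,
// which behaves like "auto") and the system-level dark-mode preference.
function effectiveMode(
  saved: ModeSetting | null,
  systemPrefersDark: boolean,
): Mode {
  if (saved === "light" || saved === "dark") return saved;
  return systemPrefersDark ? "dark" : "light";
}

// Clicking the selector cycles auto -> light -> dark -> auto, so the
// "just use my browser preference" state can always be re-selected.
function nextSetting(current: ModeSetting | null): ModeSetting {
  switch (current ?? "auto") {
    case "auto":
      return "light";
    case "light":
      return "dark";
    default:
      return "auto";
  }
}
```

The key property is that “auto” is an ordinary, persistent state in the cycle, not an ephemeral default that vanishes the moment the user touches the control.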

Do you think a 3-state dark mode selector is better than a 1-state (where “auto” is the only state)? My website is 1-state, on the assumption that auto will work for almost everyone and it lets me skip the UI clutter of having a lighting toggle that most people won’t use.

Gwern discusses this on his “Design Graveyard” page:

Auto-dark mode: a good idea but “readers are why we can’t have nice things”.

OSes/browsers have defined a ‘global dark mode’ toggle the reader can set if they want dark mode everywhere, and this is available to a web page; if you are implementing a dark mode for your website, it then seems natural to just make it a feature and turn on iff the toggle is on. There is no need for complicated UI-cluttering widgets with complicated implementations. And yet—if you do do that, readers will regularly complain about the website acting bizarre or being dark in the daytime, having apparently forgotten that they enabled it (or never understood what that setting meant).

A widget is necessary to give readers control, although even there it can be screwed up: many websites settle for a simple negation switch of the global toggle, but if you do that, someone who sets dark mode at day will be exposed to blinding white at night… Our widget works better than that. Mostly.

Is it possible that someday dark-mode will become so widespread, and users so educated, that we could quietly drop the widget? Yes, even by 2023 dark-mode had become quite popular, and I suspect that an auto-dark-mode would cause much less confusion in 2024 or 2025. However, we are stuck with the widget—once we had a widget, the temptation to stick in more controls (for reader-mode and then disabling/enabling popups) was impossible to resist, and who knows, it may yet accrete more features (site-wide fulltext search?), rendering removal impossible.

(The site-wide fulltext search feature has since been added, of course.)

Not bad at all! Needs some work on the details and some bug fixes, but—really not bad! The dropcaps, in particular, are well done; and the overall theme is elegant.

I’m just going to link the comment I wrote the last time you mentioned that Rethink Priorities report. That report continues to be of very little use in supporting such arguments as you present here.

I in fact don’t use Google very much these days, and don’t particularly recommend that anyone else do so, either.

(If by “google” you meant “search engines in general”, then that’s a bit different, of course. But then, the analogy here would be to something like “carefully select which LLM products you use, try to minimize their use, avoid the popular ones, and otherwise take all possible steps to ensure that LLMs affect what you see and do as little as possible”.)

The most important thing is “There is a small number of individuals who are paying attention, who you can argue with, and if you don’t like what they’re doing, I encourage you to write blogposts or comments complaining about it. And if your arguments make sense to me/us, we might change our mind. If they don’t make sense, but there seems to be some consensus that the arguments are true, we might lose the Mandate of Heaven or something.”

There’s not, like, anything necessarily wrong with this, on its own terms, but… this is definitely not what “being held accountable” is.

It happening at all already constitutes “going wrong”.

This particular sort of comment doesn’t particularly move me.

All this really means is that you’ll just do with this whatever you feel like doing. Which, again, is not necessarily “wrong”, and really it’s the default scenario for, like… websites, in general… I just really would like to emphasize that “being held accountable” has approximately nothing to do with anything that you’re describing.

As far as the specifics go… well, the bad effect here is that instead of the site being a way for me to read the ideas and commentary of people whose thoughts and writings I find interesting, it becomes just another purveyor of AI “extruded writing product”. I really don’t know why I’d want more of that than there already is, all over the internet. I mean… it’s a bad thing. Pretty straightforwardly. If you don’t think so then I don’t know what to tell you.

All I can say is that this sort of thing drastically reduces my interest in participating here. But then, my participation level has already been fairly low for a while, so… maybe that doesn’t matter very much, either. On the other hand, I don’t think that I’m the only one who has this opinion of LLM outputs.

Do you not use LLMs daily?

Not even once.

In general, I think Gwern’s suggested LLM policy seems roughly right to me.

First of all, even taking what Gwern says there at face value, how many of the posts here that are written “with AI involvement” would you say actually are checked, edited, etc., in the rigorous way which Gwern describes? Realistically?

Secondly, when Gwern says that he is “fine with use of AI in general to make us better writers and thinkers” and that he is “still excited about this”, you should understand that he is talking about stuff like this and this, and not about stuff like “instead of thinking about things, refining my ideas, and writing them down, I just asked a LLM to write a post for me”.

Approximately zero percent of the people who read Gwern’s comment will think of the former sort of idea (it takes a Gwern to think of such things, and those are in very limited supply), rather than the latter.

The policy of “encourage the use of AI for writing posts/comments here, and provide tools to easily generate more AI-written crap” doesn’t lead to more of the sort of thing that Gwern describes at the above links. It leads to a deluge of un-checked crap.

I welcome being held accountable for this going wrong in various ways.

It happening at all already constitutes “going wrong”.

Also: by what means can you be “held accountable”?
