I often talk to people who don't find LLMs useful. Paraphrasing:

Why would I want to ask questions to a thing that might give me the right answer but might also make something up that fools me into thinking it's right? I've played with them, but they don't seem like they'd save me any time.

My experience has been that LLMs are much better at some things than others, and you need to learn where they're suitable; overall, though, I get a lot out of working with them. I went back over my recent history in Claude, the one I use most, and here are cases where I found it helpful:

  • In yesterday's post I wanted to list a bunch of posts where I talked about failures, but my recall isn't that great. I pulled a list of titles from my last thousand posts and prompted:

    I've attached a list of titles of some of Jeff Kaufman's blog posts. Can you identify some where he probably talks about a mistake he's made?

    While some of the posts it highlighted were bad examples or not about failures at all, others were ones I would otherwise have had to read through pages of titles to find. And I know my own posts well enough that in looking over its answers I could immediately tell which were useful.

  • I help my dad with sysadmin work for his small B2B SaaS and have a long-running issue where my edits to /etc/systemd/system/multi-user.target.wants/tomcat9.service were getting overwritten by automatic updates. I asked how to fix this and got an answer (sudo systemctl edit tomcat9.service) that I could then Google and validate.

  • I had a set of locations in an organism's genome that had a certain property, and I wanted to plot this. It would plot better, however, as a series of runs instead of a set of discrete locations. If I'd thought about it I could have written a function to convert the data, but I was feeling a bit fuzzy and asking the LLM to do it was faster:

    I have a list of values in order with gaps: [1,2,3,4,5,6,10,11,12]

    Could you use run length encoding to convert it to a list of [start, length] pairs?

    Looking over the code it was easy to tell that it was doing what I wanted.

  • I've been working on a pedalboard and wasn't sure what methods people typically used to attach the pedals. Claude gave me an overview that was a good starting place for looking more deeply into each option.

  • I wanted a file format for human-writable machine-readable data that included key-value pairs and multiline strings, preserved formatting when round-tripping, and wasn't YAML (security, coercion, complexity). Claude helped me identify TOML with tomlkit as doing what I wanted.

  • I often write code that assembles some data and then want to graph it. Pasting the code into Claude along with a request for the kind of plot I want typically produces something close to what I had in mind, and a good starting place for tweaks. It can also be useful for tweaking existing plots: "In the following code the bars for the medians in the violin plots are blue. Could you make them black?", "Could you modify the following plot to lightly shade the period between 2023-12-01 and 2024-05-01, labeling it 'study period'?", "Could you extend the following code to add a trend line for each plot?", etc.

  • Claude is very strong verbally, so asking it to rephrase things can work well. I'm reasonably happy with my writing in informal contexts like this one, so this is mostly something I use when I'm tired and it feels like words aren't flowing well, or when I need to communicate in a voice that doesn't come naturally to me (contracting, military, academic, etc.).

  • I often ask questions like "what do you call it when ..." with a verbose description and then Google the technical terms it thinks I might be gesturing at.
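The systemctl edit answer in the list above works because it creates a drop-in override rather than modifying the unit file that the package manager owns: updates replace the unit file itself but leave drop-ins alone. Roughly (the Environment line is only a hypothetical example of a local change, not from the post):

```
# /etc/systemd/system/tomcat9.service.d/override.conf
# Created by: sudo systemctl edit tomcat9.service
# Settings here layer on top of the packaged unit file and
# survive automatic package updates.
[Service]
Environment=JAVA_OPTS=-Xmx1g
```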
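The run-length-encoding request above is a good example of output that's easy to verify by reading: the whole function fits on a screen. A minimal sketch of what such code might look like (the function name is my own, not from the post):

```python
def to_runs(values):
    """Convert a sorted list of ints with gaps into [start, length] pairs."""
    runs = []
    for v in values:
        if runs and v == runs[-1][0] + runs[-1][1]:
            runs[-1][1] += 1  # v continues the current run
        else:
            runs.append([v, 1])  # gap: start a new run
    return runs

print(to_runs([1, 2, 3, 4, 5, 6, 10, 11, 12]))  # [[1, 6], [10, 3]]
```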

There are still a lot of things it gets wrong, but it turns out there are many areas where I can pretty easily check the output and where something mostly-right is a good starting point.

Comment via: mastodon

2 comments:

LLMs are great at doing work that would take me a lot of time but that I can quickly evaluate for correctness; for example, I can run the generated code. As a side effect I sometimes learn new things: recently the generated code contained a language feature I was not aware of. Or I get a solution that is OK but different from how I would have done it, which gives me something to think about.

I did this web page in a weekend (it needs some more polishing, and then I am going to write an article about it -- the idea is that you click the button at the bottom and then use "print preview"), where "weekend" is a euphemism for two blocks of time, about 2 or 3 hours each, including doing and re-doing the graphics. If I had tried to do the same thing myself, it would have taken me at least that long just to figure out the necessary JavaScript and CSS features (because I don't use JS and CSS regularly).

The single most useful thing I use LLMs for is telling me how to do things in bash. I use bash all the time for one off tasks, but not quite enough to build familiarity with it + learn all the quirks of the commands + language.

90% of the time it gives me a working bash script on the first try, each time saving me between five minutes and half an hour.
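As a hypothetical illustration of the kind of quirk this covers (my example, not the commenter's): bash's parameter-expansion operators are easy to forget between occasional uses.

```shell
f="archive.tar.gz"
echo "${f%%.*}"   # %% strips the longest matching suffix: archive
echo "${f%.*}"    # %  strips the shortest: archive.tar
```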

Another thing LLMs are good at: taking a picture of, say, a screw, and asking what type of screw it is.

They're also great at converting data from one format to another: here's some JSON, convert it into YAML. Now prototext. I forgot to mention: use maps instead of nested structs, and use Pascal case. Also, the JSON is hand-written and not actually legal.

Similarly, they're good at fuzzy data-querying tasks: I received this giant error response, including a full stack trace and lots of irrelevant fields; where's the actual error, and what lines of which file should I look at?