Neil

I'm bumping into walls but hey now I know what the maze looks like.


A functionality I'd like to see on LessWrong: the ability to give quick feedback on a post the same way you can react to comments. When you strong-upvote or strong-downvote a post, a little popup menu appears offering some basic feedback options. The feedback is private and can only be seen by the author.

I've often found myself drowning in downvotes or upvotes without knowing why. Karma is a one-dimensional measure, and writing public comments is a trivial inconvenience: this is an attempt at a middle ground, and I expect it to make post reception clearer.

See my crude diagrams below.

See also: 

https://borretti.me/article/how-i-use-claude

I was planning on putting a whole list here but alas I am drawing a blank. 

There's a lot of dispersed wisdom in @Zvi's Substack too, but I can't remember any sufficiently distinctive keywords to find the relevant posts.

Can confirm. Half the LessWrong posts I've read in my life were read in the shower.

no one's getting a million dollars and an invitation to the beisutsukai

I like object-level posts that also aren't about AI. They're a minority on LW now, so they feel like high signals in a sea of noise. (That doesn't mean they're necessarily more signally, just that the rarity makes it seem that way to me.)

It felt odd to read that and think "this isn't directed toward me, I could skip it if I wanted to". I don't know how to articulate the feeling, but it's an odd "woah, text-not-for-humans is going to become more common, isn't it?" Just feels strange to be left behind.

Thank you for this. I feel like a general policy of "please at least disclose" would make me feel significantly less insane when reading certain posts. 

Have you tried iterating on this? Like, the "I don't care about the word 'prodrome'" issue sounds like the kind of thing you could include in your prompt and reiterate until everything you don't like about the LLM's responses is solved or you run out of ideas.

Also fyi ChatGPT Deep Research uses the "o3" model, not 4o, even if it says 4o at the top left (you can try running Deep Research with any of the models selected in the top left and it will output the same kind of thing).

o3 was RLed (!) into being particularly good at web search (and tangential skills like avoiding suspicious links), and isn't released in a way that lets you just chat with it. The output isn't even raw o3, it's the o3-mini model summarizing o3's chain of thought (where o3 will think things, send a dozen tentacles out into the web, then continue thinking). 
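The loop described above (think, fan searches out into the web, think again, then have a smaller model summarize the chain of thought) can be sketched as a toy program. Every function name here is invented for illustration; this is not OpenAI's actual pipeline, just the shape of it as I understand it:

```python
# Toy sketch of an agentic "deep research" loop. All names are
# hypothetical stand-ins, not real API calls.

def reasoning_model(state: str) -> tuple[str, list[str]]:
    """Stub for the large reasoning model (o3 in the comment above).
    Returns its next thought plus any search queries it wants run."""
    if "results" not in state:
        return "I should search before answering.", ["example query"]
    return "Enough evidence gathered; drafting an answer.", []

def web_search(query: str) -> str:
    """Stub for the search tool the model can call."""
    return f"results for {query!r}"

def summarizer_model(chain_of_thought: list[str]) -> str:
    """Stub for the smaller model (o3-mini in the comment above) that
    condenses the raw chain of thought into the report the user sees."""
    return " / ".join(chain_of_thought)

def deep_research(question: str, max_rounds: int = 3) -> str:
    state = question
    chain_of_thought = []
    for _ in range(max_rounds):
        thought, queries = reasoning_model(state)
        chain_of_thought.append(thought)
        if not queries:
            break
        # Fan the searches out ("a dozen tentacles into the web"),
        # then fold the results back into the working state.
        state += " " + " ".join(web_search(q) for q in queries)
    return summarizer_model(chain_of_thought)

print(deep_research("What model powers Deep Research?"))
```

The point of the sketch is just the structure: the user never sees the reasoning model's raw output, only the summarizer's digest of it.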

I learned this when I asked Deep Research to reverse-engineer itself; it linked the model card, which in retrospect I should have read first and was foolish not to.


Anyway I mention this because afaik all the other deep research frameworks are a lot less specialized than OpenAI's, and more like "we took an LLM and gave it access to the internet and let it think and search for a really long time". I expect OpenAI to continue being SOTA here for a while. 

Though I do enjoy using Grok's "DeepSearch" and "DeeperSearch" functions sometimes; they're free and fun to watch (but terrible at understanding user intent, which I attribute to how inflexible they are: they won't listen to suggestions on where to look first or how to structure the research, relying instead on whatever system prompt they were given). You might want to check it out and update this post.
