Supposedly intelligence is some kind of superpower.  And they're now selling intelligence for pennies/million tokens.  Logically, it seems like I should be spending way more of my income than I currently am on intelligence.  But what should I spend it on?

For context, I currently spend ~$50/month on AI:

  • ChatGPT $20/month
  • GitHub Copilot $10/month
  • Various AI art apps ~$20/month

Suppose I wanted to spend much more on intelligence (~$1000/month), what should I spend it on?

One idea might be: buy a pair of smart glasses, record everything I do, dump it into a database, and then have the smartest LLM I can find constantly suggest things to me based on what it sees.
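A minimal sketch of what that pipeline might look like, assuming the glasses can dump transcripts or captions to disk and with `call_llm` standing in for whatever chat-completion API you use (the schema and the function are illustrative assumptions, not any particular product's interface):

```python
import sqlite3
import time

def call_llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion API you use (hypothetical)."""
    return "(suggestion from the model)"

# Store whatever the glasses capture (transcripts, captions, OCR text) with a timestamp.
db = sqlite3.connect("lifelog.db")
db.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, text TEXT)")

def log_event(text: str) -> None:
    db.execute("INSERT INTO events VALUES (?, ?)", (time.time(), text))
    db.commit()

def suggest(window_minutes: int = 30) -> str:
    """Ask the model for suggestions based on the last half hour of context."""
    cutoff = time.time() - window_minutes * 60
    rows = db.execute(
        "SELECT text FROM events WHERE ts > ? ORDER BY ts", (cutoff,)
    ).fetchall()
    context = "\n".join(text for (text,) in rows)
    return call_llm(f"Here is what I did recently:\n{context}\n\nWhat should I do next?")
```

The main cost driver would be how often you call `suggest` and how much context you stuff into each call, which is also the main knob for getting anywhere near $1000/month.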

Is this the best thing I could do?

Would it be anywhere near worth $1000/month? (Assume spending this money will not impact my welfare in any way, and that I would otherwise dump it into an S&P 500 index fund.)
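For scale, a rough opportunity-cost comparison (the 7% annual return is an assumption for illustration, not a claim about future index performance):

```python
# Roughly what $1000/month becomes if invested instead of spent on AI.
monthly, annual_return, years = 1000, 0.07, 10    # assumed figures
r, n = annual_return / 12, years * 12
future_value = monthly * ((1 + r) ** n - 1) / r   # future value of a monthly annuity
print(f"${future_value:,.0f}")                    # ~$173,000, vs. $120,000 contributed
```

So the question is really whether the AI spend reliably beats that kind of compounding.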


5 Answers

gwern


If you're having trouble coming up with tasks for 'artificial intelligence too cheap to meter', it could be because you are having trouble coming up with tasks for intelligence, period. Just because something is highly useful doesn't mean you can immediately make use of it in your current local optimum; you may need to seriously reorganize your life and workflows before any kind of intelligence could be useful.

There is a good post on the front page right now about exactly this: https://www.lesswrong.com/posts/7L8ZwMJkhLXjSa7tD/the-great-data-integration-schlep Most of the examples in it do not actually depend on the details of 'AI' vs employee vs contractor vs API vs... - the organization is organized to defeat the improvement. It doesn't matter whether it's a data scientist or an AI reading the data if there is some employee whose career depends on that data not being read and who is sabotaging it, or some department defending its fief. (I usually call this concept "automation as colonization wave": many major technologies of undoubted enormous value, such as steam or the Internet or teleconferencing/remote-working, take a long time to have massive effects because you have everyone stuck in local optima and potentially outright sabotaging any integration of the Big New Thing, and potentially have to create entirely new organizations and painfully liquidate the old ones through decades of bleeding.) There are few valuable "AI-shaped holes" because we've organized everything to minimize the damage from lacking AI to fill those holes, as it were: if there were some sort of organization which had naturally large LLM-shaped holes where filling them would massively increase the organization's output... it would've gone extinct long ago and been replaced by ones with human-shaped holes instead, because humans were all you could get. (This is why LLM uses are pretty ridiculous right now as a % of GDP - oh wow, it can do a slightly better job of spellchecking my emails? I can have it write some code for me? Not exactly a new regime of hyperbolic global economic growth.)

So one thing you could try, if you are struggling to spend $1000/month usefully on artificial intelligence, is to instead experiment by committing to spend $1000/month on natural intelligence. That is, look into hiring a remote worker / assistant / secretary, an intern, or something else of that ilk. They are, by definition, a flexible multimodal generally-intelligent human-level neural net capable of tool use and agency, an 'ANI' if you will. (And if you mentally ignore that $1000/month because it's an experiment, you can treat it as 'natural intelligence too cheap to meter', just regarding it as a sunk cost.) An outsourced human fills a very similar hole as an AI could, so it removes the distracting factor of AI and simply asks, 'are there any large, valuable, genuinely-moving-the-needle outsourced-human-shaped holes in your life?' There probably are not! Then it's no surprise if you can't plug the holes which don't exist with any AI, present or future.

(If this is still too confusing, you can try treating yourself as a remote worker and roleplay as them by sending yourself emails and trying to pretend you have amnesia as you write a reply, avoiding doing anything a remote worker could not do (like editing files on your computer), and charging yourself an appropriate hourly rate, terminating at $1000 cumulative.)

If you find you cannot make good use of your hired natural intelligent neural net, then that fully explains your difficulty in coming up with compelling use cases for artificially intelligent neural nets too. And if you do, you now have a clean set of things you can meaningfully try to do with AI services.

For what workflows/tasks does this 'AI delegation paradigm' actually work, though, aside from research/experimentation with AI itself? Like Janus's apparent experiments with running an AI discord I'm sure cost a lot, but the object-level work there is AI research. If AI agents could be trusted to generate a better signal/noise ratio by delegation than by working alongside the AI (where the bottleneck is the human)... isn't that the singularity? They'd be self-sustaining.

Thus having 'can you delegate this to a human' be a prerequisite test of whether o... (read more)

gwern

> For what workflows/tasks does this 'AI delegation paradigm' actually work, though, aside from research/experimentation with AI itself? Like Janus's apparent experiments with running an AI discord I'm sure cost a lot, but the object-level work there is AI research. If AI agents could be trusted to generate a better signal/noise ratio by delegation than by working alongside the AI (where the bottleneck is the human)... isn't that the singularity? They'd be self-sustaining.

I'm not following your point here. You seem to have a much more elaborate idea of outsourcing than I do. Personally cost-effective outsourcing is actually quite difficult, for all the reasons I discuss under the 'colonization wave' rubric. A Go AI is inarguably superhuman; nevertheless, no matter how incredible Go AIs become, I would pay exactly $0 for one (or for hours of Go consultation from Lee Sedol, for that matter), because I don't care about Go or play it. If society were reorganized to make Go-playing a key skill of any refined person, closer to the Heian era than right now, my dollar value would very abruptly change. But right now? $0.

What I'm suggesting is finding things like, "email your current blog post dra... (read more)

Casey B.
I largely don't think we're disagreeing? My point didn't depend on a distinction between 'raw' capabilities vs 'possible right now with enough arranging' capabilities, and was mostly: "I don't see what you could actually delegate right now, as opposed to operating in the normal paradigm of AI co-work the OP is already saying they do (chat, copilot, imagegen)", and then your personal example is detailing why you couldn't currently delegate a task. Sounds like agreement.

Also, I didn't really consider your example of:

> "email your current blog post draft to the assistant for copyediting"

to be outside the paradigm of AI co-work the OP is already doing, even if it saves them time. Scaling up this kind of work to the point of $1k would seem pretty difficult and also outside what I took to be their question, since this amounts to "just work a lot more yourself, and thus the proportion of work you currently use AI for will go up till you hit $1k". That's a lot of API credits for such normal personal use.

But back to your example: I do question just how much of a leap of insight/connection would be necessary to write the standard Gwern mini article. Maybe in this exact case you know there is enough latent insight/connection in your clippings/writings, and the LLM corpus, and possibly some rudimentary wikipedia/tool use, such that your prompt providing the cherry-on-top connecting idea ('spontaneous biting is prey drive!') could actually produce a Gwern-approved mini-essay. You'd know the level of insight-leap for such articles better than I, but do you really think there'd be many such things within reach for very long? I'd argue an agent that could do this semi-indefinitely, rather than just clearing your backlog of maybe like 20 such ideas, would be much more capable than we currently see, in terms of necessary 'raw' capability. But maybe I'm wrong and you regularly have ideas that sufficiently fit this pattern, where the bar to pass isn't "be even close
Sheikh Abdur Raheem Ali
I enjoyed reading this; highlights were the part on reorganization of the entire workflow, as well as the linked mini-essay on cats biting due to prey drive.
eggsyntax
They can't typically (currently) do better on their own than working alongside a human, but a) a human can delegate a lot more tasks than they can collaborate on (and can delegate more cheaply to an AI than to another human), and b) though they're not as good on their own they're sometimes good enough. Consider call centers as a central case here. Companies are finding it a profitable tradeoff to replace human call-center workers with AI even if the AI makes more mistakes, as long as it doesn't make too many mistakes.

You can post on a subreddit and get replies from real people interested in that topic, for free, in less than a day.

Is that valuable? Sometimes it is, but...not usually. How much is the median comment on reddit or facebook or youtube worth? Nothing?

In the current economy, the "average-human-level intelligence" part of employees is only valuable when you're talking about specialists in the issue at hand, even when that issue is being a general personal assistant for an executive rather than a technical engineering problem.

jacquesthibs


Here's what I'm currently using and how much I am paying:

  • Superwhisper (or other new speech-to-text apps that use an LLM to rewrite the transcript). Under $8.49 per month. You can use different STT models (different speed and accuracy for each) and different LLMs for rewriting the transcript based on a prompt you give the model. You can also have different "modes": the model can take your transcript and write code instructions in a pre-defined format when you are in an IDE, turn a transcript into a report when writing in Google Docs, etc. (a minimal sketch of this rewrite-by-mode pattern follows the list). There is also an iOS app.
  • Cursor Pro ($20-30/month). Switch to API credits when the slow responses take too long. (You can try Zed (an IDE) too if you want. I've only used it a little bit, but Anthropic apparently uses it and there's an exclusive "fast-edit" feature with the Anthropic models.)
  • Claude.ai Pro ($20/month). You could consider getting two accounts or a Team account to worry less about hitting the token limit.
  • Chatgpt.com Pro account ($20/month). Again, can get a second account to have more o1-preview responses from the chat.
  • Aider (~$10/month max in API credits if used with Cursor Pro).
  • Google Colab Pro subscription ($9.99/month). You could get the Pro+ plan for $49.99/month.
  • Google One 2TB AI Premium plan ($20/month). This comes with Gemini chat and other AI features. I also sign up to get the latest features earlier, like Notebook LM and Illuminate.
  • v0 chat ($20/month). Used for creating Next.js websites quickly.
  • jointakeoff.com ($22.99/month) for courses on using AI for development.
  • I still have GitHub Copilot (along with Cursor's Copilot++) because I bought a long-term subscription.
  • Grammarly ($12/month).
  • Reader by ElevenLabs (Free, for now). Best quality TTS app out there right now.
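
As flagged in the first bullet, a minimal sketch of the transcript-rewriting pattern such apps use, where each "mode" is just a different rewriting prompt (the prompts and the `call_llm` stub are illustrative assumptions, not Superwhisper's actual internals):

```python
def call_llm(system_prompt: str, text: str) -> str:
    """Placeholder for whatever chat-completion API you use (hypothetical)."""
    return f"[{system_prompt}] {text}"

# Each "mode" is just a different rewriting prompt applied to the raw transcript.
MODES = {
    "ide": "Rewrite this dictation as terse, imperative code-change instructions.",
    "docs": "Rewrite this dictation as a polished paragraph for a report.",
    "email": "Rewrite this dictation as a short, friendly email.",
}

def rewrite_transcript(raw_transcript: str, mode: str = "docs") -> str:
    """Turn a raw speech-to-text transcript into mode-appropriate text."""
    return call_llm(MODES[mode], raw_transcript)
```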

Other things I'm considering paying for:

  • Perplexity AI ($20/month).
  • Other AI-focused courses that help me best use AI for productivity (web dev or coding in general).
  • Suno AI ($8/month). I might want to make music with it.

Apps others may be willing to pay for:

  • Warp, an LLM-enabled terminal ($20/month). I don't use the free version enough to upgrade to the paid version.

There are ways to optimize how much I'm paying to save a bit of cash for sure. But I'm currently paying roughly $168/month.

That said, I am also utilizing research credits from Anthropic, which could range from $500 to $2000 depending on the month. In addition, I'm working on an "alignment research assistant" which will leverage LLMs, agents, API calls to various websites, and more. If successful, I could see this project absorbing hundreds of thousands in inference costs.

Note: I am a technical alignment researcher who also works on trying to augment alignment researchers and eventually automate more and more of alignment research, so I'm biasing myself toward overspending on products in order to make sure I'm aware of the bleeding-edge setup.

Thank you for writing this up! I've purchased some subscriptions and plan to block out time to play around with some of these services and get familiar with them.

Viliam


One possible approach could be to have the AI make something useful, and then sell it. That way, you could get a part of the $1000 back. Possibly all of it. Possibly make some extra money, which would allow you to spend even more money on AI the next month.

So we need a product that is purely digital, like a book or a computer program. Sell the book using some online shop that will print it on demand; sell the computer game on Steam. Keep producing one book after another, and one game after another.

Many people are probably already doing this, so you need something to separate yourself from the crowd. I assume that most people using this strategy produce complete crap, so you only need to be slightly better. For example, read what the computer generated, and click "retry" if it is awful. Create a website for your book, offering a free chapter. Publish an online novel, chapter by chapter. Make a blog for your game; let the AI also generate articles describing your (fictional) creative process, the obstacles you met and the lessons you learned. Basically, the AI will make the product, but you need to do the marketing.
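A minimal sketch of that "retry if it is awful" loop, with you as the quality filter (the `call_llm` stub is a stand-in for whatever generation API you use; the acceptance prompt is just an assumption about the workflow):

```python
def call_llm(prompt: str) -> str:
    """Placeholder for whatever text-generation API you use (hypothetical)."""
    return "(draft chapter text)"

def generate_until_acceptable(prompt: str, max_tries: int = 5) -> str:
    """Generate, read, and retry until a draft clears your personal bar."""
    draft = ""
    for attempt in range(max_tries):
        draft = call_llm(prompt)
        print(f"--- attempt {attempt + 1} ---\n{draft}\n")
        if input("Keep this draft? [y/N] ").strip().lower() == "y":
            break
    return draft  # falls back to the last attempt if nothing was accepted
```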

A more complex project would be to think about an online project and let the AI build it. But you need a good idea first. However, depending on how cheap intelligence is, the idea doesn't have to be too good; you only need enough users to pay for the costs of development and hosting, plus some profit. So basically, read random web pages; when you find people complaining about (the lack of) something, build it and send them a link.

Nathan Helm-Burger


Personally, I have some long lists of ideas for things I haven't got time for, including: research projects in AI, research projects in other subjects which could be advanced entirely by work on a computer (e.g. collecting and summarizing relevant facts from papers, running physical simulations of potential designs, etc.), games, books, productivity tools, etc.

I've tried some of the current AI agent stuff, and nothing I've tried is quite good enough with the current set of models to automate enough of the work of actualizing my ideas to make it worth my time. I'm prioritizing saving the lives of everyone on Earth, including everyone I love, by attempting to reduce the risk of AI catastrophe. Maybe next year, the critical point will be reached where spending a lot on inference to make many tries at each necessary step will become effective. If I could just dump a couple thousand dollars a month into AI agent inference working on my ideas, and get a handful of mostly complete projects out, then I'd be making tons of money even if my success rate for the ideas taking off were 1 in 1000.

If you aren't the sort of person who does have lists of potentially valuable projects sitting around waiting for intelligent workers to breathe life into them... I dunno. Maybe the next generation of models will be good enough to also help you with the ideation phase?

> Maybe next year, the critical point will be reached where spending a lot on inference to make many tries at each necessary step will become effective.

That raises an excellent point that hasn't been otherwise brought up -- it's clear that there are at least some cases already where you can get much better performance by doing best-of-n with large n. I'm thinking especially of Ryan Greenblatt's approach to ARC-AGI, where that was pretty successful (n = 8000). And as Ryan points out, that's the approach that AlphaCode uses as well (n = some enormous num... (read more)
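
A minimal sketch of that best-of-n pattern, assuming you have some automatic way to score candidates (the `score` stub here is hypothetical; in Greenblatt's ARC-AGI setup the scoring came from checking candidate programs against the task's training examples):

```python
import random

def call_llm(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder for whatever sampling API you use (hypothetical)."""
    return f"candidate-{random.randint(0, 10_000)}"

def score(candidate: str) -> float:
    """Hypothetical verifier, e.g. the fraction of known examples a candidate passes."""
    return random.random()

def best_of_n(prompt: str, n: int = 1000) -> str:
    """Sample n candidates at high temperature and keep the best-scoring one."""
    candidates = [call_llm(prompt, temperature=1.0) for _ in range(n)]
    return max(candidates, key=score)
```

The economics only work when the verifier is much cheaper than generation and reliably correlated with what you actually want.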

If the idea is "you should use AI to work on your personal projects", the problem is I'm already doing that (notice the $50/month).  I'm looking for ways to spend 20x more on AI without spending 20x more of my time (which is physically impossible).

Michael Roe


Things I might spend more money on, if there were better AIs to spend it on:


1. I am currently having a lot of blood tests done, with a genuine qualified medical doctor interpreting the results. Just for fun, I can see if AI gives a similar interpretation of the test results (it's not bad).

Suppose we had AI that was actually better than human doctors, and cheaper. (Sounds like that might be here real soon, to be honest.) I would probably pay money for that.


2. Some work things I am doing involve formally proving the correctness of software. AI is not quite there yet. If it were, I could probably get DARPA to pay the license fee for it, assuming the cost isn't absolutely astronomical.


Etc.


On the other hand, this would imply that most doctors and mathematicians are out of work.

Probably, if some AI were to recommend additional blood testing, I could manage to persuade the actual medical professionals to do it. A recent conversation went something like this:


Me: “Can I have my thyroid levels checked, please? And the consultant endocrinologist said he’d like to see a liver function test done next time I give a blood sample.”

Nurse (taking my blood sample and pulling my medical record up in the computer) “you take carbimazole right?”

Me: “yes”

Nurse (ticking boxes on a form on the computer) “… and full blood panel, and electrolytes…”

Probably wouldn’t be hard to get suggestions from an AI added to the list.

Michael Roe
If I were going to play this game with an AI, I’d also feed it my genomic data, which would reveal I have a version of the HLA genes that makes me more likely to develop autoimmune diseases.