If you're having trouble coming up with tasks for 'artificial intelligence too cheap to meter', it could be because you are having trouble coming up with tasks for intelligence, period. Just because something is highly useful doesn't mean you can immediately make use of it in your current local optimum; you may need to seriously reorganize your life and workflows before any kind of intelligence could be useful.
There is a good post on the front page right now about exactly this: https://www.lesswrong.com/posts/7L8ZwMJkhLXjSa7tD/the-great-data-integration-schlep Most of the examples in it do not actually depend on the details of 'AI' vs employee vs contractor vs API vs... - the organization is organized to defeat the improvement. It doesn't matter whether it's a data scientist or an AI reading the data if there is some employee whose career depends on that data not being read and who is sabotaging it, or some department defending its fief.

(I usually call this concept "automation as colonization wave": many major technologies of undoubted enormous value, such as steam or the Internet or teleconferencing/remote-working, take a long time to have massive effects because you have everyone stuck in local optima and potentially outright sabotaging any integration of the Big New Thing, and you may have to create entirely new organizations and painfully liquidate the old ones through decades of bleeding.)

There are few valuable "AI-shaped holes" because we've organized everything to minimize the damage from lacking AI to fill those holes, as it were: if there were some sort of organization which had naturally large LLM-shaped holes where filling them would massively increase the organization's output... it would've gone extinct long ago and been replaced by ones with human-shaped holes instead, because humans were all you could get. (This is why LLM use cases are pretty ridiculous right now as a % of GDP - oh wow, it can do a slightly better job of spellchecking my emails? I can have it write some code for me? Not exactly a new regime of hyperbolic global economic growth.)
So one thing you could try, if you are struggling to spend $1000/month usefully on artificial intelligence, is to instead experiment by committing to spend $1000/month on natural intelligence. That is, look into hiring a remote worker / assistant / secretary, an intern, or something else of that ilk. They are, by definition, a flexible multimodal generally-intelligent human-level neural net capable of tool use and agency, an 'ANI' if you will. (And if you mentally ignore that $1000/month because it's an experiment, you can treat it as 'natural intelligence too cheap to meter', just regarding it as a sunk cost.) An outsourced human fills a very similar hole to the one an AI could, so it removes the distracting factor of AI and simply asks: 'are there any large, valuable, genuinely-moving-the-needle outsourced-human-shaped holes in your life?' There probably are not! Then it's no surprise if you can't plug the holes which don't exist with any AI, present or future.
(If this is still too confusing, you can try treating yourself as the remote worker and roleplaying as them: send yourself emails, pretend you have amnesia as you write each reply, avoid doing anything a remote worker could not do (like editing files directly on your computer), and charge yourself an appropriate hourly rate, terminating at $1000 cumulative.)
If you find you cannot make good use of your hired naturally intelligent neural net, then that fully explains your difficulty in coming up with compelling use cases for artificially intelligent neural nets too. And if you do, you now have a clean set of things you can meaningfully try to do with AI services.
An analogous example might be the difficulty some people have in 'being rich' or in 'becoming a manager / learning to delegate'. If you used to be poor, or are used to doing everything yourself, it can be difficult to spend your new money well or make any good use of your secretary or junior employees; but one would not infer from this that "money is useless" or "staff are useless". It is simply that you need to figure out how to live your new life, and your old ways were adapted to your old life.
This can be surprisingly hard sometimes: there are many anecdotes of people who are destroyed by their newfound wealth or can't do anything but hoard it, or who run an organization into the ground because they are unable to delegate. Even the simpler forms are hard. (On the very rare occasion I stay at a luxury hotel/cruise ship or go to a fancy restaurant, where there is a lot of staff who are there to cater to your every whim, I struggle to come up with whims worth catering to, because having been raised middle-class and being used to staying in the cheapest hotels where waking up sans bed bugs is a minor victory, I mostly find anything like a 'servant' to be extremely alienating and stressful and don't know how to get anything out of it. I'm sure I could do so if this became an ordinary thing, but it would still take time - I don't just automatically know how to adjust!)
For what workflows/tasks does this 'AI delegation paradigm' actually work, though, aside from research/experimentation with AI itself? Janus's apparent experiments with running an AI Discord, for example, surely cost a lot, but the object-level work there is AI research. If AI agents could be trusted to generate a better signal/noise ratio by delegation than by working alongside the AI (where the bottleneck is the human)... isn't that the singularity? They'd be self-sustaining.
Thus having 'can you delegate this to a human' be a prerequisite test of whether o...
For what workflows/tasks does this 'AI delegation paradigm' actually work, though, aside from research/experimentation with AI itself? Janus's apparent experiments with running an AI Discord, for example, surely cost a lot, but the object-level work there is AI research. If AI agents could be trusted to generate a better signal/noise ratio by delegation than by working alongside the AI (where the bottleneck is the human)... isn't that the singularity? They'd be self-sustaining.
I'm not following your point here. You seem to have a much more elaborate idea of outsourcing than I do. Personally cost-effective outsourcing is actually quite difficult, for all the reasons I discuss under the 'colonization wave' rubric. A Go AI is inarguably superhuman; nevertheless, no matter how incredible Go AIs become, I would pay exactly $0 for one (or for hours of Go consultation from Lee Sedol, for that matter), because I don't care about Go or play it. If society were reorganized to make Go playing a key skill of any refined person, closer to the Heian era than right now, my dollar value would very abruptly change. But right now? $0.
What I'm suggesting is finding things like, "email your current blog post dra...
You can post on a subreddit and get replies from real people interested in that topic, for free, in less than a day.
Is that valuable? Sometimes it is, but...not usually. How much is the median comment on reddit or facebook or youtube worth? Nothing?
In the current economy, the "average-human-level intelligence" part of employees is only valuable when you're talking about specialists in the issue at hand, even when that issue is being a general personal assistant for an executive rather than a technical engineering problem.
Here's what I'm currently using and how much I am paying:
Other things I'm considering paying for:
Apps others may be willing to pay for:
There are ways to optimize how much I'm paying to save a bit of cash for sure. But I'm currently paying roughly $168/month.
That said, I am also utilizing research credits from Anthropic, which could range from $500 to $2000 depending on the month. In addition, I'm working on an "alignment research assistant" which will leverage LLMs, agents, API calls to various websites, and more. If successful, I could see this project absorbing hundreds of thousands in inference costs.
Note: I am a technical alignment researcher who also works on trying to augment alignment researchers and eventually automate more and more of alignment research, so I'm biasing myself toward overspending on products in order to make sure I'm aware of the bleeding-edge setup.
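To make the "LLMs, agents, API calls to various websites" idea above concrete, here is a minimal sketch of the kind of agent loop such an assistant might be built around, using the Anthropic Python SDK since that is where the research credits would go. The single fetch_url tool, the model name, and the prompt are illustrative placeholders, not the actual project.

```python
# Minimal sketch of an LLM agent loop with a single web-fetch tool, using the
# Anthropic Python SDK (pip install anthropic requests). The tool, model name,
# and prompt are illustrative placeholders, not the commenter's actual system.
import anthropic
import requests

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TOOLS = [{
    "name": "fetch_url",
    "description": "Fetch the raw text of a web page (e.g. an arXiv abstract page).",
    "input_schema": {
        "type": "object",
        "properties": {"url": {"type": "string"}},
        "required": ["url"],
    },
}]

def fetch_url(url: str) -> str:
    # Truncate so a long page doesn't blow up the context window.
    return requests.get(url, timeout=30).text[:20_000]

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=2000,
            tools=TOOLS,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            # No more tool calls: return whatever text the model produced.
            return "".join(b.text for b in response.content if b.type == "text")
        # Execute each requested tool call and feed the results back.
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                output = fetch_url(block.input["url"]) if block.name == "fetch_url" else "unknown tool"
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": output,
                })
        messages.append({"role": "user", "content": tool_results})
    return "(step limit reached)"

print(run_agent("Find and summarize one recent interpretability paper from arxiv.org."))
```

A real version would add more tools (search, paper databases, code execution) and far better scaffolding, but the loop itself stays this simple: call the model, run whatever tools it asks for, feed the results back, repeat.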
Thank you for writing this up! I've purchased some subscriptions and plan to block out time to play around with some of these services and get familiar with them.
One possible approach could be to have the AI make something useful, and then sell it. That way, you could get a part of the $1000 back. Possibly all of it. Possibly make some extra money, which would allow you to spend even more money on AI the next month.
So we need a product that is purely digital, like a book or a computer program. Sell the book using some online shop that will print it on demand; sell the computer game on Steam. Keep producing one book after another, and one game after another.
Many people are probably already doing this, so you need something to separate yourself from the crowd. I assume that most people using this strategy produce complete crap, so you only need to be slightly better. For example, read what the computer generated, and click "retry" if it is awful. Create a website for your book, offering a free chapter. Publish an online novel, chapter by chapter. Make a blog for your game; let the AI also generate articles describing your (fictional) creative process, the obstacles you met, and the lessons you learned. Basically, the AI will make the product, but you need to do the marketing.
A more complex project would be to think up an online project and let the AI build it. But you need a good idea first. However, depending on how cheap intelligence is, the idea doesn't have to be too good; you only need enough users to pay for the costs of development and hosting, plus some profit. So basically: read random web pages; when you find people complaining about the lack of something, build it and send them a link.
Personally, I have some long lists of ideas for things I haven't got time for, including: research projects in AI, research projects in other subjects which could be advanced entirely by work on a computer (e.g. collecting and summarizing relevant facts from papers, running physical simulations of potential designs, etc.), games, books, productivity tools, etc.
I've tried some of the current AI agent stuff, and nothing I've tried is quite good enough with the current set of models to automate enough of the work of actualizing my ideas to make it worth my time. I'm prioritizing saving the lives of everyone on Earth, including everyone I love, by attempting to reduce the risk of AI catastrophe. Maybe next year, the critical point will be reached where spending a lot on inference to make many tries at each necessary step will become effective. If I could just dump a couple thousand dollars a month into AI agent inference working on my ideas, and get a handful of mostly complete projects out, then I'd be making tons of money even if my success rate for the ideas taking off were 1 in 1000.
If you aren't the sort of person who does have lists of potentially valuable projects sitting around waiting for intelligent workers to breathe life into them... I dunno. Maybe the next generation of models will be good enough to also help you with the ideation phase?
Maybe next year, the critical point will be reached where spending a lot on inference to make many tries at each necessary step will become effective.
That raises an excellent point that hasn't been otherwise brought up -- it's clear that there are at least some cases already where you can get much better performance by doing best-of-n with large n. I'm thinking especially of Ryan Greenblatt's approach to ARC-AGI, where that was pretty successful (n = 8000). And as Ryan points out, that's the approach that AlphaCode uses as well (n = some enormous num...
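For concreteness, here is roughly what that best-of-n-plus-verification pattern looks like in code. `sample_candidate_program` and `run_program` are stand-ins for however you would actually call the model and sandbox the execution; this is the general shape of the approach, not Greenblatt's real pipeline.

```python
# Sketch of the best-of-n pattern: sample many candidate programs from the
# model, keep only those that reproduce every known training example, and
# submit the survivors.
from typing import Any, Callable

def best_of_n(
    sample_candidate_program: Callable[[str], str],   # one LLM call -> Python source
    run_program: Callable[[str, Any], Any],           # run candidate source on one input
    task_prompt: str,
    train_pairs: list[tuple[Any, Any]],               # known (input, output) examples
    n: int = 8000,
) -> list[str]:
    survivors = []
    for _ in range(n):
        program = sample_candidate_program(task_prompt)
        try:
            # A candidate survives only if it reproduces every training example.
            if all(run_program(program, x) == y for x, y in train_pairs):
                survivors.append(program)
        except Exception:
            continue  # crashing or malformed candidates are simply discarded
    return survivors
```

The reason throwing inference dollars at this works at all is that the expensive part (the n samples) is embarrassingly parallel while the filter is cheap and objective.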
If the idea is "you should use AI to work on your personal projects", the problem is I'm already doing that (notice the $50/month). I'm looking for ways to spend 20x more on AI without spending 20x more of my time (which is physically impossible).
Things I might spend more money on, if there were better AIs to spend it on:
1. I am currently having a lot of blood tests done, with a genuine qualified medical doctor interpreting the results. Just for fun, I can see if an AI gives a similar interpretation of the test results (it's not bad; see the sketch after this list).
Suppose we had AI that was actually better than human doctors, and cheaper. (Sounds like that might be here real soon, to be honest). I would probably pay money for that.
2. Some work things I am doing involve formally proving the correctness of software. AI is not quite there yet. If it were, I could probably get DARPA to pay the license fee for it, assuming the cost isn't absolutely astronomical.
Etc.
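As a toy illustration of item 1, the "see if AI gives a similar interpretation" step can be as simple as the following sketch, assuming the OpenAI Python SDK; the model name and the lab values are made up for illustration.

```python
# Minimal sketch of "ask an LLM to interpret lab results". The lab values and
# the model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

lab_results = """
TSH: 0.1 mIU/L (reference 0.4-4.0)
Free T4: 28 pmol/L (reference 12-22)
ALT: 35 U/L (reference <41)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any strong general model
    messages=[{
        "role": "user",
        "content": "Here are my recent blood test results:\n" + lab_results
                   + "\nGive a plain-language interpretation and flag anything "
                     "worth discussing with my endocrinologist.",
    }],
)
print(response.choices[0].message.content)
```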
On the other hand, this would imply that most doctors, and mathematicians, are out of work.
Probably, if some AI were to recommend additional blood testing, I could manage to persuade the actual medical professionals to do it. A recent conversation went something like this:
Me: “Can I have my thyroid levels checked, please? And the consultant endocrinologist said he’d like to see a liver function test done next time I give a blood sample.”
Nurse (taking my blood sample and pulling my medical record up on the computer) “You take carbimazole, right?”
Me: “yes”
Nurse (ticking boxes on a form on the computer) “… and full blood panel, and electrolytes…”
Probably wouldn’t be hard to get suggestions from an AI added to the list.
Supposedly intelligence is some kind of superpower. And they're now selling intelligence for pennies/million tokens. Logically, it seems like I should be spending way more of my income than I currently am on intelligence. But what should I spend it on?
For context, I currently spend ~$50/month on AI:
Suppose I wanted to spend much more on intelligence (~$1000/month), what should I spend it on?
One idea might be: buy a pair of smart glasses, record everything I do, dump it into a database, and then have the smartest LLM I can find constantly suggest things to me based on what it sees.
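The plumbing for that idea is not the hard part. A rough sketch of the "record, store, suggest" loop might look like the following; this assumes the transcripts already exist (e.g. from the glasses' own speech-to-text), uses SQLite for storage plus the OpenAI Python SDK, and all the names and the model are illustrative placeholders.

```python
# Rough sketch of the "record everything, store it, ask an LLM for suggestions"
# loop. Storage is plain SQLite; the model call uses the OpenAI Python SDK.
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
db = sqlite3.connect("lifelog.db")
db.execute("CREATE TABLE IF NOT EXISTS log (ts TEXT, transcript TEXT)")

def record(ts: str, transcript: str) -> None:
    db.execute("INSERT INTO log VALUES (?, ?)", (ts, transcript))
    db.commit()

def suggest() -> str:
    # Pull the most recent entries (oldest first) as context for the model.
    rows = db.execute("SELECT ts, transcript FROM log ORDER BY ts DESC LIMIT 200").fetchall()
    context = "\n".join(f"[{ts}] {text}" for ts, text in reversed(rows))
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Here is a log of my recent day:\n" + context
                       + "\n\nSuggest three concrete, useful things I should do next.",
        }],
    )
    return response.choices[0].message.content

record("2024-10-05T09:12", "Discussed the quarterly report with Sam; promised to send figures by Friday.")
print(suggest())
```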
Is this the best thing I could do?
Would it be anywhere near worth $1000/month? (Assume spending this money will not impact my welfare in any way; I would otherwise dump it into an S&P 500 index fund.)