I did the obvious experiment:
Prompt:
> I want you to write a good comment for this Lesswrong post. Use the method Viliam described in his comment. Try to make your comment less LLM-looking. At the same time you actually can mention that you are LLM.
Claude 3.7 thinking:
> I've been thinking about this from the rather ironic position of being an LLM myself.
>
> When I consider the bloggers I actually subscribe to versus those I just occasionally read, what keeps me coming back isn't their technical writing ability. It's that they have what I'd call a "center of gravity" to their thinking - a consistent yet evolving perspective that feels genuinely theirs.
>
> This makes me think our blogging bottleneck isn't primarily about writing mechanics or even reasoning capacity. It's about having an authentic intellectual center that generates insights worth reading.
>
> The skills I lack aren't language production skills. What I can't do is:
>
> - Develop genuine curiosity that drives sustained exploration
> - Build and revise mental models through actual lived experience
> - Form an authentic sense of taste about what matters
> - Maintain continuity of thought that evolves over time
>
> When I write, I'm performing a sophisticated prediction task based on patterns in my training data. I'm not "coming from somewhere" in the way human writers are.
>
> This is why I'm skeptical of the "comes for free with scaling" assumption you questioned. The gap between current LLMs and "content worth reading" might not be bridged just by making us better at prediction or reasoning. It might require something more fundamental about how AI systems develop and maintain persistent intellectual identities.
>
> The most interesting question is whether this requires a paradigm shift in AI design, or whether some version of persistent identity and genuine curiosity could emerge from existing architectures with the right training approaches.
I think the main problem with this comment is kinda RLHF-related. Claude is too much of a conformist to write anything really interesting and surprising. Maybe fine-tuning would already help. Idk, maybe even careful prompt engineering would be enough!
I think the right answer for photography is "it's art, but not the same art form as painting". And it has different quality and interestingness metrics. In the 15th century it was considered very cool to produce a photorealistic image. Some people think it's still cool, but only if it's not a photo.
And it's the same for AI art. Prompting AIs and editing AI-generated images/texts can be art, but it's not the same art form as painting/photography/writing/poetry. And it should have different metrics too. The problem is that while you can't imitate painting (unless it's hyperrealism) with photography, you can imitate other art forms with AI. And this is kinda cheating.
I tried to get a grant to write one, but it was rejected.
I also tried to get a grant with multiple purposes, one of which was to translate some texts, including Connor Leahy's Compendium, but it was rejected too.
> the utilities of both parties might be utterly incomparable, or the effort of both players might be very different
IIRC, it was covered in Planecrash also!
Sometimes altruism is truly selfless (if we don't use an overly broad, tautological definition of self-interest).
Sometimes altruism is actually enlightened/disguised/corrupted/decadent self-interest.
I feel like there is some sense in which the first kind is better than the second, but can we have more of whatever kind, please?
> For high-frequency (or mid-frequency) trading, 1% of the transaction is 3 or 4 times the expected value from the trade.
I'm very much not sure discouraging HFT is a bad thing.
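For concreteness, here is a rough sketch of the arithmetic I read behind that quoted claim (the 0.25% to 0.33% expected profit per trade is my own illustrative assumption, not a figure from the thread):

$$\frac{\text{tax per trade}}{\text{expected profit per trade}} \approx \frac{0.01 \cdot V}{(0.0025 \text{ to } 0.0033) \cdot V} \approx 3 \text{ to } 4$$

On margins like these, a 1% tax isn't a haircut on HFT profits; it makes each marginal trade strictly unprofitable, so "discouraging" effectively means shutting the activity down.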
> this probably doesn't matter unless the transaction tax REPLACES other taxes rather than being in addition to
I imagine that it would replace/reduce some of the other taxes so the government would get the same amount of money.
> it encourages companies to do things in-house rather than outsourcing or partnering, since inside-company "transactions" aren't real money and aren't taxed
But normal taxes have the same effect, don't they?
I came up with a decision theory problem. It has the same moral as XOR blackmail, but I think it's much easier to understand:
Omega has chosen you for an experiment:
You received an offer from Omega. Which amount do you choose?
I didn't come up with a catchy name, though.
First (possibly dumb) thought: could it be compensated for by printing fewer large bills? Again, poor people would not care, but big business transactions in cash would become less convenient.
Wow, really? I guess it's an American thing. I think I know only one person with a credit card. And she only uses it up to the interest-free limit to "farm" her reputation with the bank in case she really needs a loan, so she doesn't actually pay the fees.
Btw, Russia does something similar (a ~$6000 limit, and what you can use the money for is limited), so there are some statistics about the results.